1
Chen J, Wang X, Huang C, Hu X, Shen X, Zhang D. A Large Finer-grained Affective Computing EEG Dataset. Sci Data 2023; 10:740. [PMID: 37880266] [PMCID: PMC10600242] [DOI: 10.1038/s41597-023-02650-w]
Abstract
Affective computing based on electroencephalogram (EEG) has gained increasing attention for its objectivity in measuring emotional states. While positive emotions play a crucial role in various real-world applications, such as human-computer interactions, the state-of-the-art EEG datasets have primarily focused on negative emotions, with less consideration given to positive emotions. Meanwhile, these datasets usually have a relatively small sample size, limiting exploration of the important issue of cross-subject affective computing. The proposed Finer-grained Affective Computing EEG Dataset (FACED) aimed to address these issues by recording 32-channel EEG signals from 123 subjects. During the experiment, subjects watched 28 emotion-elicitation video clips covering nine emotion categories (amusement, inspiration, joy, tenderness; anger, fear, disgust, sadness, and neutral emotion), providing a fine-grained and balanced categorization on both the positive and negative sides of emotion. The validation results show that emotion categories can be effectively recognized based on EEG signals at both the intra-subject and the cross-subject levels. The FACED dataset is expected to contribute to developing EEG-based affective computing algorithms for real-world applications.
Affiliation(s)
- Jingjing Chen
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Xiaobin Wang
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Chen Huang
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Xin Hu
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Dept. of Psychiatry, School of Medicine, University of Pittsburgh, Pittsburgh, USA
- Xinke Shen
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Dept. of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Dan Zhang
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
2
Neverlien ECS, Lu R, Kumar M, Molinas M. Decoding Emotions From EEG Responses Elicited by Videos Using Machine Learning Techniques on Two Datasets. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083098] [DOI: 10.1109/embc40787.2023.10341106]
Abstract
In recent times, we have seen extensive research in the field of EEG-based emotion identification. The majority of solutions suggested by the current literature use sophisticated deep learning techniques for the identification of human emotions. These models are very complex and need huge resources to implement. Hence, in this work, a method for human emotion recognition is proposed which is based on a much simpler architecture. For that, two publicly available datasets, SEED and DEAP, are used to perform experiments. First, the EEG signals of the two datasets are segmented into epochs of 1-second duration. The epochs are also decomposed into different brain rhythms. Feature computation is performed in two ways: directly from the epochs, and from the brain rhythms obtained after decomposition of the epochs. Several features and their combinations are examined with different classifiers. For the DEAP dataset, baseline features are also utilised. It is observed that the support vector machine (SVM) shows the best performance for the DEAP dataset when baseline feature correction and epoch decomposition are implemented together. The best achieved average accuracies are 96.50% and 96.71% for high versus low valence classes and high versus low arousal classes, respectively. For the SEED dataset, the best average accuracy of 86.89% is achieved using a multilayer perceptron (MLP) with 2 hidden layers. Clinical relevance: This work can be further explored to develop an automated mental health monitor which can assist doctors in their primary screening.
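The pipeline this abstract describes, segmenting EEG into 1-second epochs and computing per-rhythm features, can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the 128 Hz sampling rate, the band edges, and the FFT-periodogram band-power feature are assumptions for the sketch:

```python
import numpy as np

FS = 128  # assumed sampling rate (DEAP's preprocessed signals use 128 Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def epoch(signal, fs=FS, win_s=1.0):
    """Split a 1-D EEG channel into non-overlapping epochs of win_s seconds."""
    n = int(fs * win_s)
    n_epochs = len(signal) // n
    return signal[: n_epochs * n].reshape(n_epochs, n)

def band_powers(ep, fs=FS):
    """Mean spectral power of one epoch in each rhythm band (FFT periodogram)."""
    freqs = np.fft.rfftfreq(ep.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(ep)) ** 2 / ep.size
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

# 60 s of synthetic single-channel EEG -> 60 epochs x 4 band-power features,
# a feature matrix that could then feed an SVM or MLP classifier.
rng = np.random.default_rng(0)
sig = rng.standard_normal(60 * FS)
features = np.vstack([band_powers(e) for e in epoch(sig)])
```

The resulting `features` matrix (one row per 1-s epoch) is the kind of input the abstract's SVM and MLP classifiers would consume.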
3
Mazzacane S, Coccagna M, Manzella F, Pagliarini G, Sironi VA, Gatti A, Caselli E, Sciavicco G. Towards an objective theory of subjective liking: A first step in understanding the sense of beauty. PLoS One 2023; 18:e0287513. [PMID: 37352316] [PMCID: PMC10289447] [DOI: 10.1371/journal.pone.0287513]
Abstract
The study of the electroencephalogram signals recorded from subjects during an experience is a way to understand the brain processes that underlie their physical and emotional involvement. Such signals have the form of time series, and their analysis could benefit from applying techniques that are specific to this kind of data. Neuroaesthetics, as defined by Zeki in 1999, is the scientific approach to the study of aesthetic perceptions of art, music, or any other experience that can give rise to aesthetic judgments, such as liking or disliking a painting. Starting from a proprietary dataset of 248 trials from 16 subjects exposed to art paintings in a real ecological context, this paper analyses the application of a novel symbolic machine learning technique, specifically designed to extract information from unstructured data and to express it in the form of logical rules. Our purpose is to extract qualitative and quantitative logical rules that relate the voltage at specific frequencies and electrodes and that, within the limits of the experiment, may help to understand the brain process that drives liking or disliking experiences in human subjects.
Affiliation(s)
- S. Mazzacane
- CIAS Interdepartmental Research Center (Dept. of Architecture, Dept. of Chemical, Pharmaceutical and Agricultural Sciences), University of Ferrara, Ferrara, Italy
- M. Coccagna
- CIAS Interdepartmental Research Center (Dept. of Architecture, Dept. of Chemical, Pharmaceutical and Agricultural Sciences), University of Ferrara, Ferrara, Italy
- F. Manzella
- Dept. of Mathematics and Computer Science, University of Ferrara, Ferrara, Italy
- G. Pagliarini
- Dept. of Mathematics and Computer Science, University of Ferrara, Ferrara, Italy
- V. A. Sironi
- CESPEB Research Center, Neuroaesthetic Laboratory, University Bicocca, Milan, Italy
- A. Gatti
- Dept. of Humanistic Studies, University of Ferrara, Ferrara, Italy
- E. Caselli
- CIAS Interdepartmental Research Center (Dept. of Architecture, Dept. of Chemical, Pharmaceutical and Agricultural Sciences), University of Ferrara, Ferrara, Italy
- G. Sciavicco
- Dept. of Mathematics and Computer Science, University of Ferrara, Ferrara, Italy
4
Cui G, Li X, Touyama H. Emotion recognition based on group phase locking value using convolutional neural network. Sci Rep 2023; 13:3769. [PMID: 36882447] [PMCID: PMC9992377] [DOI: 10.1038/s41598-023-30458-6]
Abstract
Electroencephalography (EEG)-based emotion recognition is an important technology for human-computer interactions. In the field of neuromarketing, emotion recognition based on group EEG can be used to analyze the emotional states of multiple users. Previous emotion recognition experiments have been based on individual EEGs; therefore, it is difficult to use them for estimating the emotional states of multiple users. The purpose of this study is to find a data processing method that can improve the efficiency of emotion recognition. In this study, the DEAP dataset was used, which comprises EEG signals of 32 participants that were recorded as they watched 40 videos with different emotional themes. This study compared emotion recognition accuracy based on individual and group EEGs using the proposed convolutional neural network model. The study shows that differences in phase locking value (PLV) exist across EEG frequency bands when subjects are in different emotional states. The results showed that an emotion recognition accuracy of up to 85% can be obtained for group EEG data by using the proposed model, indicating that group EEG data can effectively improve the efficiency of emotion recognition. Moreover, the significant emotion recognition accuracy for multiple users achieved in this study can contribute to research on handling group human emotional states.
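The phase locking value the abstract builds on has a compact standard definition: the magnitude of the mean phase-difference phasor between two signals. A minimal sketch (not the paper's group-PLV-plus-CNN pipeline; the synthetic 10 Hz signals are illustrative only):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value between two equal-length 1-D signals:
    |mean(exp(i * (phase_x - phase_y)))|, ranging from 0 to 1."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * dphi).mean())

fs = 128
t = np.arange(0, 2, 1 / fs)
a = np.sin(2 * np.pi * 10 * t)               # 10 Hz oscillation
b = np.sin(2 * np.pi * 10 * t + np.pi / 4)   # same frequency, fixed phase lag
noise = np.random.default_rng(1).standard_normal(t.size)

locked = plv(a, b)        # near 1.0: the phase difference is constant
unlocked = plv(a, noise)  # much lower: phases drift relative to each other
```

Computing `plv` per frequency band (after band-pass filtering) and per channel pair yields the kind of connectivity features the study feeds to its CNN.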
Affiliation(s)
- Gaochao Cui
- Graduate School of Engineering, Toyama Prefectural University, Imizu, 9390398, Japan
- Xueyuan Li
- Graduate School of Engineering, Toyama Prefectural University, Imizu, 9390398, Japan
- Hideaki Touyama
- Graduate School of Engineering, Toyama Prefectural University, Imizu, 9390398, Japan
5
Yuvaraj R, Thagavel P, Thomas J, Fogarty J, Ali F. Comprehensive Analysis of Feature Extraction Methods for Emotion Recognition from Multichannel EEG Recordings. Sensors (Basel) 2023; 23:915. [PMID: 36679710] [PMCID: PMC9867328] [DOI: 10.3390/s23020915]
Abstract
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed this gap by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states, in terms of valence and arousal. The classification accuracy of five EEG feature sets was investigated, including statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and those derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five different datasets also indicates that FD features derived from EEG data are reliable for emotion recognition. The results may lead to the possible development of an online feature extraction framework, thereby enabling the development of an EEG-based emotion recognition system in real time.
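Of the feature families compared above, fractal dimension is the simplest to illustrate. The sketch below uses the Petrosian estimator, one common FD variant; the abstract does not say which FD estimator the authors used, so treat this as a representative example rather than their method:

```python
import numpy as np

def petrosian_fd(x):
    """Petrosian fractal dimension of a 1-D signal.
    n_delta counts sign changes in the first difference, i.e. how often
    the signal changes direction; rougher signals change direction more."""
    diff = np.diff(x)
    n_delta = np.count_nonzero(np.diff(np.sign(diff)) != 0)
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

rng = np.random.default_rng(0)
smooth = np.sin(np.linspace(0, 4 * np.pi, 512))   # few direction changes
rough = rng.standard_normal(512)                  # many direction changes
# More complex (rougher) signals yield a higher FD value.
assert petrosian_fd(rough) > petrosian_fd(smooth)
```

One FD value per channel per epoch gives a compact feature vector, which helps explain why FD paired well with a lightweight classifier like CART in the study's comparison.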
Affiliation(s)
- Rajamanickam Yuvaraj
- National Institute of Education, Nanyang Technological University, Singapore 637616, Singapore
- Prasanth Thagavel
- Interdisciplinary Graduate School, Nanyang Technological University, Singapore 639798, Singapore
- John Thomas
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Jack Fogarty
- National Institute of Education, Nanyang Technological University, Singapore 637616, Singapore
- Farhan Ali
- National Institute of Education, Nanyang Technological University, Singapore 637616, Singapore
6
Wei Y, Liu Y, Li C, Cheng J, Song R, Chen X. TC-Net: A Transformer Capsule Network for EEG-based emotion recognition. Comput Biol Med 2023; 152:106463. [PMID: 36571938] [DOI: 10.1016/j.compbiomed.2022.106463]
Abstract
Deep learning has recently achieved remarkable success in emotion recognition based on Electroencephalogram (EEG), in which convolutional neural networks (CNNs) are the most widely used models. However, due to their local feature learning mechanism, CNNs have difficulty in capturing global contextual information across the temporal domain, the frequency domain, and intra- and inter-channel relationships. In this paper, we propose a Transformer Capsule Network (TC-Net), which mainly contains an EEG Transformer module to extract EEG features and an Emotion Capsule module to refine the features and classify the emotion states. In the EEG Transformer module, EEG signals are partitioned into non-overlapping windows. A Transformer block is adopted to capture global features among different windows, and we propose a novel patch merging strategy named EEG-PatchMerging (EEG-PM) to better extract local features. In the Emotion Capsule module, each channel of the EEG feature maps is encoded into a capsule to better characterize the spatial relationships among multiple features. Experimental results on two popular datasets (i.e., DEAP and DREAMER) demonstrate that the proposed method achieves state-of-the-art performance in the subject-dependent scenario. Specifically, on DEAP (DREAMER), our TC-Net achieves average accuracies of 98.76% (98.59%), 98.81% (98.61%) and 98.82% (98.67%) on the valence, arousal and dominance dimensions, respectively. Moreover, the proposed TC-Net also shows high effectiveness in multi-state emotion recognition tasks using the popular VA and VAD models. The main limitation of the proposed model is that it tends to obtain relatively low performance in the cross-subject recognition task, which is worthy of further study in the future.
Affiliation(s)
- Yi Wei
- Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China
- Yu Liu
- Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China; Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, Hefei University of Technology, Hefei 230009, China
- Chang Li
- Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China
- Juan Cheng
- Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China
- Rencheng Song
- Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China
- Xun Chen
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
7
Ahmed MZI, Sinha N, Ghaderpour E, Phadikar S, Ghosh R. A Novel Baseline Removal Paradigm for Subject-Independent Features in Emotion Classification Using EEG. Bioengineering (Basel) 2023; 10:54. [PMID: 36671626] [PMCID: PMC9854727] [DOI: 10.3390/bioengineering10010054]
Abstract
Emotion plays a vital role in understanding the affective state of mind of an individual. In recent years, emotion classification using electroencephalogram (EEG) has emerged as a key element of affective computing. Many researchers have prepared datasets, such as DEAP and SEED, containing EEG signals captured by the elicitation of emotion using audio-visual stimuli, and many studies have been conducted to classify emotions using these datasets. However, baseline power removal is still considered one of the trivial aspects of preprocessing in feature extraction. The most common technique that prevails is subtracting the baseline power from the trial EEG power. In this paper, a novel method called the InvBase method is proposed for removing baseline power before extracting features that remain invariant irrespective of the subject. The features extracted from the baseline-removed EEG data are then used for classification of two classes of emotion, i.e., valence and arousal. The proposed scheme is compared with subtractive and no-baseline-correction methods. In terms of classification accuracy, it outperforms the existing state-of-the-art methods in both valence and arousal classification. The InvBase method plus multilayer perceptron shows an improvement of 29% over the no-baseline-correction method and 15% over the subtractive method.
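The "subtractive" comparison method the abstract mentions, subtracting baseline-period power from trial-period power, can be sketched as below. This is not the proposed InvBase method (whose details are not in the abstract); the 128 Hz rate, 1-s windows, and mean-square power feature are illustrative assumptions:

```python
import numpy as np

def band_power(seg):
    """Mean power of a segment (a placeholder for any band-power feature)."""
    return np.mean(seg ** 2)

def subtractive_correction(trial, baseline, win=128):
    """Classic baseline removal: subtract the mean power of the pre-stimulus
    baseline epochs from the power of each trial epoch (DEAP, for instance,
    provides a 3-s pre-trial baseline per channel)."""
    base_eps = baseline[: len(baseline) // win * win].reshape(-1, win)
    base_p = np.mean([band_power(e) for e in base_eps])
    trial_eps = trial[: len(trial) // win * win].reshape(-1, win)
    return np.array([band_power(e) - base_p for e in trial_eps])

rng = np.random.default_rng(2)
baseline = rng.standard_normal(3 * 128)        # 3 s resting baseline
trial = rng.standard_normal(10 * 128) * 1.5    # 10 s stimulus period, higher power
feats = subtractive_correction(trial, baseline)
```

Because `trial` was synthesized with higher power than `baseline`, the corrected features come out positive; on real data the correction removes the subject-specific resting offset that InvBase targets differently.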
Affiliation(s)
- Md. Zaved Iqubal Ahmed
- Department of Computer Science & Engineering, National Institute of Technology, Silchar 788010, India
- Correspondence: (M.Z.I.A.); (E.G.)
- Nidul Sinha
- Department of Electrical Engineering, National Institute of Technology, Silchar 788010, India
- Ebrahim Ghaderpour
- Department of Earth Sciences and CERI Research Center, Sapienza University of Rome, Piazzale Aldo Moro, 5, 00185 Rome, Italy
- Correspondence: (M.Z.I.A.); (E.G.)
- Souvik Phadikar
- Neurology Department, University of Wisconsin-Madison, Madison, WI 53705, USA
- Rajdeep Ghosh
- School of Computing Science and Engineering, VIT Bhopal University, Bhopal 466114, India
8
Weerasinghe M, Quigley A, Pucihar KC, Toniolo A, Miguel A, Kljun M. Arigatō: Effects of Adaptive Guidance On Engagement and Performance in Augmented Reality Learning Environments. IEEE Trans Vis Comput Graph 2022; PP:3737-3747. [PMID: 36048999] [DOI: 10.1109/tvcg.2022.3203088]
Abstract
Experiential learning (ExL) is the process of learning through experience or, more specifically, "learning through reflection on doing". In this paper, we propose a simulation of these experiences, in Augmented Reality (AR), addressing the problem of language learning. Such systems provide an excellent setting to support "adaptive guidance", in a digital form, within a real environment. Adaptive guidance allows the instructions and learning content to be customised for the individual learner, thus creating a unique learning experience. We developed an adaptive guidance AR system for language learning, which we call Arigatō (Augmented Reality Instructional Guidance & Tailored Omniverse), which offers immediate assistance, resources specific to the learner's needs, manipulation of these resources, and relevant feedback. Considering guidance, we employ this prototype to investigate the effect of the amount of guidance (fixed vs. adaptive-amount) and the type of guidance (fixed vs. adaptive-associations) on the engagement and consequently the learning outcomes of language learning in an AR environment. The results for the amount of guidance show that, compared to the adaptive-amount group, the fixed-amount of guidance group scored better in the immediate and delayed (after 7 days) recall tests. However, this group also invested a significantly higher mental effort to complete the task. The results for the type of guidance show that the adaptive-associations group outperforms the fixed-associations group in the immediate and delayed (after 7 days) recall tests and in learning efficiency. The adaptive-associations group also showed significantly lower mental effort and spent less time completing the task.
9
The Effect of Time Window Length on EEG-Based Emotion Recognition. Sensors (Basel) 2022; 22:4939. [PMID: 35808434] [PMCID: PMC9269830] [DOI: 10.3390/s22134939]
Abstract
Various lengths of time window have been used in feature extraction for electroencephalogram (EEG) signal processing in previous studies. However, the effect of time window length on feature extraction for downstream tasks such as emotion recognition has not been well examined. To this end, we investigate the effect of different time window (TW) lengths on human emotion recognition to find the optimal TW length for extracting EEG emotion signals. Both power spectral density (PSD) features and differential entropy (DE) features are used to evaluate the effectiveness of different TW lengths based on the SJTU emotion EEG dataset (SEED). Different lengths of TW are then processed with an EEG feature-processing approach, namely experiment-level batch normalization (ELBN). The processed features are used to perform emotion recognition tasks in six classifiers, the results of which are then compared with the results without ELBN. The recognition accuracies indicate that a 2-s TW length has the best performance on emotion recognition and is the most suitable to be used in EEG feature extraction for emotion recognition. The deployment of ELBN in the 2-s TW can further improve the emotion recognition performances by 21.63% and 5.04% when using an SVM based on PSD and DE features, respectively. These results provide a solid reference for the selection of TW length in analyzing EEG signals for applications in intelligent systems.
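The DE feature evaluated per time window has a closed form under a Gaussian assumption: DE = 0.5 ln(2πe σ²). A minimal sketch of windowed DE extraction (not the paper's code; SEED's 200 Hz sampling rate is assumed, and the signal is synthetic):

```python
import numpy as np

def differential_entropy(window):
    """DE of a (band-filtered) EEG window under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(window))

def de_features(signal, fs=200, tw_s=2.0):
    """One DE value per non-overlapping time window of tw_s seconds."""
    n = int(fs * tw_s)
    wins = signal[: len(signal) // n * n].reshape(-1, n)
    return np.array([differential_entropy(w) for w in wins])

sig = np.random.default_rng(3).standard_normal(60 * 200)  # 60 s, one channel
f2 = de_features(sig, tw_s=2.0)   # 30 features with the reported optimal 2-s TW
f4 = de_features(sig, tw_s=4.0)   # 15 features with a 4-s TW, for comparison
```

Shorter windows trade per-window variance-estimate stability for more training samples, which is exactly the trade-off the TW-length comparison probes.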
10
EEG emotion recognition based on enhanced SPD matrix and manifold dimensionality reduction. Comput Biol Med 2022; 146:105606. [PMID: 35588679] [DOI: 10.1016/j.compbiomed.2022.105606]
Abstract
Recently, Riemannian geometry-based pattern recognition has been widely applied in brain-computer interface (BCI) research, providing new ideas for emotion recognition based on electroencephalogram (EEG) signals. Although the symmetric positive definite (SPD) matrix manifold constructed from the traditional covariance matrix contains a large amount of spatial information, these methods do not perform well in classifying and recognizing emotions, and the high-dimensionality problem remains unsolved. Therefore, this paper proposes a new strategy for EEG emotion recognition utilizing Riemannian geometry with the aim of achieving better classification performance. The emotional EEG signals of 32 healthy subjects were taken from an open-source dataset (DEAP). Wavelet packets were first applied to extract the time-frequency features of the EEG signals, and the features were then used to construct the enhanced SPD matrix. A supervised dimensionality reduction algorithm was then designed on the Riemannian manifold to reduce the high dimensionality of the SPD matrices, gather samples with the same labels together, and separate samples with different labels as much as possible. Finally, the samples were mapped to the tangent space, and the K-nearest neighbors (KNN), Random Forest (RF) and Support Vector Machine (SVM) methods were employed for classification. The proposed method achieved average accuracies of 91.86% and 91.84% on the valence and arousal recognition tasks, respectively. Furthermore, we also obtained a superior accuracy of 86.71% on the four-class recognition task, demonstrating superiority over state-of-the-art emotion recognition methods.
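The tangent-space mapping step at the core of this pipeline has a standard form: map each SPD matrix C to logm(C_ref^(-1/2) C C_ref^(-1/2)) at a reference point C_ref, then vectorize. The sketch below is a simplified illustration (plain covariance rather than the paper's wavelet-enhanced SPD matrix, and a Euclidean rather than Riemannian mean as the reference):

```python
import numpy as np

def spd_cov(trial):
    """Sample covariance of a (channels x samples) trial, lightly
    regularized so it is safely positive definite."""
    c = np.cov(trial)
    return c + 1e-6 * np.eye(c.shape[0])

def spd_logm(c):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def tangent_vector(c, c_ref):
    """Tangent-space mapping at c_ref: logm(c_ref^-1/2 @ c @ c_ref^-1/2),
    vectorized via the upper triangle (the matrix is symmetric)."""
    w, v = np.linalg.eigh(c_ref)
    inv_sqrt = (v * (1.0 / np.sqrt(w))) @ v.T
    s = spd_logm(inv_sqrt @ c @ inv_sqrt)
    return s[np.triu_indices(s.shape[0])]

rng = np.random.default_rng(4)
trials = [rng.standard_normal((8, 512)) for _ in range(5)]  # 8-channel trials
covs = [spd_cov(t) for t in trials]
c_ref = sum(covs) / len(covs)       # simple Euclidean mean as reference point
X = np.array([tangent_vector(c, c_ref) for c in covs])
# X: 5 trials x 36 tangent-space features, ready for KNN / RF / SVM.
```

Mapping the reference point itself yields the zero vector, which is a handy sanity check that the mapping is centred correctly.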
11
Rahman MM, Sarkar AK, Hossain MA, Hossain MS, Islam MR, Hossain MB, Quinn JMW, Moni MA. Recognition of human emotions using EEG signals: A review. Comput Biol Med 2021; 136:104696. [PMID: 34388471] [DOI: 10.1016/j.compbiomed.2021.104696]
Abstract
Assessment of the cognitive functions and state of clinical subjects is an important aspect of e-health care delivery and of the development of novel human-machine interfaces. A subject can display a range of emotions that significantly influence cognition, and emotion classification through the analysis of physiological signals is a key means of detecting emotion. Electroencephalography (EEG) signals have become a common focus of such development compared to other physiological signals because EEG employs simple and subject-acceptable methods for obtaining data that can be used for emotion analysis. We have therefore reviewed published studies that have used EEG signal data to identify possible interconnections between emotion and brain activity. We then describe the theoretical conceptualization of basic emotions and interpret the prevailing techniques that have been adopted for feature extraction, selection, and classification. Finally, we compare the outcomes of these recent studies and discuss the likely future directions and main challenges for researchers developing EEG-based emotion analysis methods.
Affiliation(s)
- Md Mustafizur Rahman
- Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Ajay Krishno Sarkar
- Department of Electrical and Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, 6204, Bangladesh
- Md Amzad Hossain
- Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Md Selim Hossain
- Department of Electrical and Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, 6204, Bangladesh
- Md Rabiul Islam
- Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh
- Md Biplob Hossain
- Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Julian M W Quinn
- Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia
- Mohammad Ali Moni
- Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia; School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St Lucia, QLD 4072, Australia
12
Differences first in asymmetric brain: A bi-hemisphere discrepancy convolutional neural network for EEG emotion recognition. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.105]
13
Chen J, Li H, Ma L, Bo H, Soong F, Shi Y. Dual-Threshold-Based Microstate Analysis on Characterizing Temporal Dynamics of Affective Process and Emotion Recognition From EEG Signals. Front Neurosci 2021; 15:689791. [PMID: 34335165] [PMCID: PMC8318040] [DOI: 10.3389/fnins.2021.689791]
Abstract
Recently, emotion classification from electroencephalogram (EEG) data has attracted much attention. As EEG is an unsteady and rapidly changing voltage signal, the features extracted from EEG usually change dramatically, whereas emotion states change gradually. Most existing feature extraction approaches do not consider these differences between EEG and emotion. Microstate analysis can capture important spatio-temporal properties of EEG signals while reducing the fast-changing EEG signals to a sequence of prototypical topographical maps. While microstate analysis has been widely used to study brain function, few studies have used this method to analyze how the brain responds to emotional auditory stimuli. In this study, the authors proposed a novel feature extraction method based on EEG microstates for emotion recognition. Determining the optimal number of microstates automatically is a challenge for applying microstate analysis to emotion. This research proposed dual-threshold-based atomize and agglomerate hierarchical clustering (DTAAHC) to determine the optimal number of microstate classes automatically. By using the proposed method to model the temporal dynamics of the auditory emotion process, we extracted microstate characteristics as novel temporospatial features to improve the performance of emotion recognition from EEG signals. We evaluated the proposed method on two datasets. For the public music-evoked EEG Dataset for Emotion Analysis using Physiological signals, the microstate analysis identified 10 microstates which together explained around 86% of the data in global field power peaks. The accuracy of emotion recognition achieved 75.8% in valence and 77.1% in arousal using microstate sequence characteristics as features. Compared to previous studies, the proposed method outperformed the current feature sets. For the speech-evoked EEG dataset, the microstate analysis identified nine microstates which together explained around 85% of the data. The accuracy of emotion recognition achieved 74.2% in valence and 72.3% in arousal using microstate sequence characteristics as features. The experimental results indicated that microstate characteristics can effectively improve the performance of emotion recognition from EEG signals.
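The first two stages of any microstate pipeline, finding global field power (GFP) peaks and clustering the topographies at those peaks, can be sketched as follows. Plain k-means stands in here for the paper's DTAAHC algorithm (which additionally selects the number of classes automatically), and the data are synthetic:

```python
import numpy as np

def gfp(eeg):
    """Global field power: spatial std across channels at each time point."""
    return eeg.std(axis=0)

def gfp_peaks(eeg):
    """Indices of local GFP maxima, where topographies are most stable."""
    g = gfp(eeg)
    return np.where((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]))[0] + 1

def kmeans_maps(maps, k, iters=20, seed=0):
    """Plain k-means over unit-normalized topographies, using cosine
    similarity; a stand-in for the paper's DTAAHC clustering."""
    maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    centers = maps[rng.choice(len(maps), k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(maps @ centers.T, axis=1)
        for j in range(k):
            if np.any(labels == j):
                m = maps[labels == j].mean(axis=0)
                centers[j] = m / np.linalg.norm(m)
    return centers, labels

rng = np.random.default_rng(5)
eeg = rng.standard_normal((32, 2000))          # 32 channels x time points
peaks = gfp_peaks(eeg)
centers, labels = kmeans_maps(eeg[:, peaks].T, k=4)
```

Back-fitting the resulting template maps (`centers`) to every time point turns the EEG into a label sequence, from which the microstate sequence features used for classification are derived.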
Affiliation(s)
- Jing Chen
- School of Computer Science and Technology, Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Haifeng Li
- School of Computer Science and Technology, Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Lin Ma
- School of Computer Science and Technology, Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Hongjian Bo
- Shenzhen Academy of Aerospace Technology, Shenzhen, China
- Frank Soong
- Speech Group, Microsoft Research Asia, Beijing, China
- Yaohui Shi
- Heilongjiang Provincial Hospital, Harbin, China
14
Balconi M, Fronda G. How to Induce and Recognize Facial Expression of Emotions by Using Past Emotional Memories: A Multimodal Neuroscientific Algorithm. Front Psychol 2021; 12:619590. [PMID: 34040557] [PMCID: PMC8141597] [DOI: 10.3389/fpsyg.2021.619590]
Affiliation(s)
- Michela Balconi
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Catholic University of the Sacred Heart, Milan, Italy
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
| | - Giulia Fronda
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Catholic University of the Sacred Heart, Milan, Italy
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
| |
|
15
|
Maheshwari D, Ghosh SK, Tripathy RK, Sharma M, Acharya UR. Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals. Comput Biol Med 2021; 134:104428. [PMID: 33984749 DOI: 10.1016/j.compbiomed.2021.104428] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 04/15/2021] [Accepted: 04/19/2021] [Indexed: 10/21/2022]
Abstract
Emotion is interpreted as a psycho-physiological process associated with the personality, behavior, motivation, and character of a person. The objective of affective computing is to recognize different types of emotions for human-computer interaction (HCI) applications. The spatiotemporal electrical activity of the brain is measured using multi-channel electroencephalogram (EEG) signals, and automated emotion recognition from these signals is an active research topic in cognitive neuroscience and affective computing. This paper proposes a rhythm-specific multi-channel convolutional neural network (CNN) approach for automated emotion recognition using multi-channel EEG signals. The delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) rhythms of each channel's EEG signal are evaluated using band-pass filters. The EEG rhythms from the selected channels, coupled with a deep CNN, are used for the emotion classification tasks low-valence (LV) vs. high-valence (HV), low-arousal (LA) vs. high-arousal (HA), and low-dominance (LD) vs. high-dominance (HD). The deep CNN architecture comprises eight convolution, three average-pooling, four batch-normalization, three spatial-dropout, two dropout, one global-average-pooling, and three dense layers. We validated the developed model using three publicly available databases: DEAP, DREAMER, and DASPS. The results reveal that the proposed multivariate deep CNN approach coupled with the β-rhythm obtained accuracies of 98.91%, 98.45%, and 98.69% for the LV vs. HV, LA vs. HA, and LD vs. HD classification tasks, respectively, on the DEAP database with a 10-fold cross-validation (CV) scheme. Similarly, accuracies of 98.56%, 98.82%, and 98.99% were obtained for the LV vs. HV, LA vs. HA, and LD vs. HD schemes, respectively, using the deep CNN and the θ-rhythm. The proposed rhythm-specific deep CNN model obtained an average accuracy of 57.14% using the α-rhythm and trial-specific CV on the DASPS database. Moreover, for the 8-quadrant emotion classification strategy, the deep CNN classifier obtained an overall accuracy of 24.37% using the γ-rhythms of multi-channel EEG signals. The developed deep CNN model can be used for real-time automated emotion recognition applications.
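The rhythm extraction step above can be sketched as follows. The paper evaluates the rhythms with band-pass filters; this hypothetical helper instead isolates a band by zeroing FFT bins, and the band edges used are the conventional definitions, not necessarily the paper's.

```python
import numpy as np

# Conventional EEG rhythm bands in Hz (an assumption, not the paper's exact edges)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def extract_rhythm(x, fs, band):
    """Keep only the FFT bins falling inside the named band."""
    lo, hi = BANDS[band]
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

# toy signal: 6 Hz (theta) plus 20 Hz (beta) components
fs = 128
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 20 * t)
theta = extract_rhythm(x, fs, "theta")   # recovers the 6 Hz component
```

In practice a proper band-pass filter (e.g. Butterworth) is preferred over hard spectral masking, which can ring on short windows; the masking version just makes the band decomposition explicit.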
Affiliation(s)
- Daksh Maheshwari
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
| | - S K Ghosh
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
| | - R K Tripathy
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India.
| | - Manish Sharma
- Department of Electrical and Computer Science Engineering, IITRAM, Ahmedabad, India
| | - U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan; International Research Organization for Advanced Science and Technology, Kumamoto University, Kumamoto, Japan
| |
|
16
|
Emotion Recognition: An Evaluation of ERP Features Acquired from Frontal EEG Electrodes. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11094131] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Developing an affective brain-computer interface requires understanding emotions psychologically and physiologically as well as analytically. To make the analysis and classification of emotions possible, emotions have been represented in a two-dimensional space spanned by the arousal and valence domains, or a three-dimensional space spanned by the arousal, valence, and dominance domains. This paper presents the classification of emotions into four classes in the arousal–valence plane, exploiting the orthogonal nature of these dimensions. Average event-related potential (ERP) attributes and differentials of average ERPs acquired from the frontal region of 24 subjects were used to classify emotions into the four classes. The attributes acquired from the frontal electrodes Fp1, Fp2, F3, F4, F8, and Fz were used to develop a classifier. Four-class subject-independent classification accuracies in the range of 67–83% were obtained, and a mid-range accuracy of 85% was achieved using three classifiers, which is considerably better than existing ERP-based studies.
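The four-class arousal–valence labelling implied above can be written as a small mapping from (valence, arousal) ratings to quadrant labels. The function name and the midpoint threshold are assumptions for illustration (many datasets use 1–9 rating scales with 5 as the midpoint).

```python
def quadrant(valence, arousal, midpoint=5.0):
    """Map a (valence, arousal) rating pair to one of four quadrant labels."""
    if valence >= midpoint:
        return "HVHA" if arousal >= midpoint else "HVLA"
    return "LVHA" if arousal >= midpoint else "LVLA"

# one rating pair per quadrant
labels = [quadrant(7, 8), quadrant(7, 2), quadrant(3, 8), quadrant(3, 2)]
```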
|
17
|
Yin Y, Zheng X, Hu B, Zhang Y, Cui X. EEG emotion recognition using fusion model of graph convolutional neural networks and LSTM. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2020.106954] [Citation(s) in RCA: 85] [Impact Index Per Article: 28.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
18
|
Shen F, Peng Y, Kong W, Dai G. Multi-Scale Frequency Bands Ensemble Learning for EEG-Based Emotion Recognition. SENSORS 2021; 21:s21041262. [PMID: 33578835 PMCID: PMC7916620 DOI: 10.3390/s21041262] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 02/05/2021] [Accepted: 02/06/2021] [Indexed: 11/16/2022]
Abstract
Emotion recognition has a wide range of potential applications in the real world. Among the data sources for emotion recognition, electroencephalography (EEG) signals record neural activity across the human brain, providing a reliable way to recognize emotional states. Most existing EEG-based emotion recognition studies directly concatenate the features extracted from all EEG frequency bands for emotion classification. This implicitly assumes that all frequency bands are equally important, which does not always yield optimal performance. In this paper, we present a novel multi-scale frequency bands ensemble learning (MSFBEL) method for emotion recognition from EEG signals. Concretely, we first re-organize all frequency bands into several local scales and one global scale. We then train a base classifier on each scale, and finally fuse the results of all scales with an adaptive weight learning method that automatically assigns larger weights to more important scales to further improve performance. The proposed method is validated on two public data sets. On the “SEED IV” data set, MSFBEL achieves average accuracies of 82.75%, 87.87%, and 78.27% on the three sessions under the within-session experimental paradigm. On the “DEAP” data set, it obtains an average accuracy of 74.22% for four-category classification under 5-fold cross-validation. The experimental results demonstrate that the scale of the frequency bands influences the emotion recognition rate, and that the global scale, which directly concatenates all frequency bands, does not always guarantee the best performance. Different scales provide complementary information, and the proposed adaptive weight learning method can effectively fuse them to further enhance performance.
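The fusion idea above can be sketched as a weighted average of per-scale class probabilities. Here the weights are derived from each scale's validation accuracy via a softmax, which is only an illustrative stand-in for the paper's adaptive weight learning; all names are hypothetical.

```python
import math

def fuse(prob_per_scale, val_acc):
    """Softmax-weight each scale by its validation accuracy, then average."""
    w = [math.exp(a) for a in val_acc]
    total = sum(w)
    w = [x / total for x in w]
    n_classes = len(prob_per_scale[0])
    return [sum(w[k] * prob_per_scale[k][c] for k in range(len(w)))
            for c in range(n_classes)]

# two scales, three emotion classes; the better-validated scale dominates
fused = fuse([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]], val_acc=[0.8, 0.6])
pred = fused.index(max(fused))
```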
Affiliation(s)
- Fangyao Shen
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China; (F.S.); (Y.P.); (W.K.)
| | - Yong Peng
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China; (F.S.); (Y.P.); (W.K.)
- MoE Key Laboratory of Advanced Perception and Intelligent Control of High-End Equipment, Anhui Polytechnic University, Wuhu 241000, China
| | - Wanzeng Kong
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China; (F.S.); (Y.P.); (W.K.)
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou 310018, China
| | - Guojun Dai
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China; (F.S.); (Y.P.); (W.K.)
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou 310018, China
- Correspondence:
| |
|
19
|
Cheng J, Chen M, Li C, Liu Y, Song R, Liu A, Chen X. Emotion Recognition From Multi-Channel EEG via Deep Forest. IEEE J Biomed Health Inform 2021; 25:453-464. [PMID: 32750905 DOI: 10.1109/jbhi.2020.2995767] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Recently, deep neural networks (DNNs) have been applied to electroencephalography (EEG)-based emotion recognition and have achieved better performance than traditional algorithms. However, DNNs have the disadvantages of many hyperparameters and a need for large amounts of training data. To overcome these shortcomings, this article proposes a method for multi-channel EEG-based emotion recognition using deep forest. First, the raw artifact-eliminated EEG signal is preprocessed with baseline removal to account for the effect of the baseline signal. Second, 2D frame sequences are constructed by taking the spatial position relationships across channels into account. Finally, the 2D frame sequences are input into a classification model built on deep forest, which can mine the spatial and temporal information of EEG signals to classify emotions. The proposed method eliminates the need for the feature extraction of traditional methods, and the classification model is insensitive to hyperparameter settings, greatly reducing the complexity of emotion recognition. To verify the feasibility of the proposed model, experiments were conducted on the two public databases DEAP and DREAMER. On the DEAP database, the average accuracies reach 97.69% and 97.53% for valence and arousal, respectively; on the DREAMER database, the average accuracies reach 89.03%, 90.41%, and 89.89% for valence, arousal, and dominance, respectively. These results show that the proposed method achieves higher accuracy than state-of-the-art methods.
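The 2D frame construction can be sketched by placing each channel's value on a grid according to its scalp position. The 3x3 layout and channel subset below are illustrative, not the paper's montage.

```python
# Hypothetical scalp layout: (row, col) grid position per channel
LAYOUT = {"Fp1": (0, 0), "Fp2": (0, 2),
          "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
          "O1": (2, 0), "O2": (2, 2)}

def to_frame(sample, rows=3, cols=3):
    """Place one time-point's channel values onto a 2D grid; gaps stay 0."""
    frame = [[0.0] * cols for _ in range(rows)]
    for ch, value in sample.items():
        r, c = LAYOUT[ch]
        frame[r][c] = value
    return frame

frame = to_frame({"Fp1": 1.5, "Cz": -0.5, "O2": 2.0})
```

A sequence of such frames over time is what gives a model access to both the spatial arrangement of electrodes and the temporal evolution of the signal.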
|
20
|
Wosiak A, Dura A. Hybrid Method of Automated EEG Signals' Selection Using Reversed Correlation Algorithm for Improved Classification of Emotions. SENSORS 2020; 20:s20247083. [PMID: 33321895 PMCID: PMC7764031 DOI: 10.3390/s20247083] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Revised: 12/07/2020] [Accepted: 12/08/2020] [Indexed: 11/16/2022]
Abstract
Given the growing interest in encephalography to enhance human-computer interaction (HCI) and to develop brain-computer interfaces (BCIs) for control and monitoring applications, efficient information retrieval from EEG sensors is of great importance. It is difficult due to noise from internal and external artifacts and physiological interference. EEG-based emotion recognition can be enhanced by selecting the features that should be taken into account in further analysis; automatic feature selection for EEG signals is therefore an important research area. We propose a multistep hybrid approach incorporating the Reversed Correlation Algorithm for automated selection of frequency band-electrode combinations. Our method is simple to use and significantly reduces the number of sensors, to only three channels. The proposed method has been verified by experiments on the DEAP dataset, evaluated with respect to the accuracy of two emotion dimensions: valence and arousal. Compared with other research studies, our method achieved classification results that were 4.20-8.44% higher. Moreover, as it belongs to the unsupervised methods, it can be perceived as a universal EEG signal classification technique.
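A generic stand-in for the selection step: rank each frequency band-electrode feature by the absolute Pearson correlation of its values with the emotion labels and keep the strongest ones. The actual Reversed Correlation Algorithm is a multistep hybrid that this sketch does not reproduce; names and data are illustrative.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def select_features(features, labels, k=1):
    """Keep the k band-electrode features most correlated with the labels."""
    ranked = sorted(features,
                    key=lambda name: -abs(pearson(features[name], labels)))
    return ranked[:k]

features = {"alpha_F3": [1.0, 2.0, 3.0, 4.0],   # tracks the label
            "beta_O2": [0.5, 0.4, 0.6, 0.5]}    # roughly flat
best = select_features(features, labels=[0, 0, 1, 1], k=1)
```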
|
21
|
Maruyama Y, Ogata Y, Martínez-Tejada LA, Koike Y, Yoshimura N. Independent Components of EEG Activity Correlating with Emotional State. Brain Sci 2020; 10:brainsci10100669. [PMID: 32992779 PMCID: PMC7600548 DOI: 10.3390/brainsci10100669] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Revised: 09/17/2020] [Accepted: 09/23/2020] [Indexed: 12/28/2022] Open
Abstract
Among brain-computer interface studies, electroencephalography (EEG)-based emotion recognition is receiving attention and some studies have performed regression analyses to recognize small-scale emotional changes; however, effective brain regions in emotion regression analyses have not been identified yet. Accordingly, this study sought to identify neural activities correlating with emotional states in the source space. We employed independent component analysis, followed by a source localization method, to obtain distinct neural activities from EEG signals. After the identification of seven independent component (IC) clusters in a k-means clustering analysis, group-level regression analyses using frequency band power of the ICs were performed based on Russell's valence-arousal model. As a result, in the regression of the valence level, an IC cluster located in the cuneus predicted both high- and low-valence states and two other IC clusters located in the left precentral gyrus and the precuneus predicted the low-valence state. In the regression of the arousal level, the IC cluster located in the cuneus predicted both high- and low-arousal states and two posterior IC clusters located in the cingulate gyrus and the precuneus predicted the high-arousal state. In this proof-of-concept study, we revealed neural activities correlating with specific emotional states across participants, despite individual differences in emotional processing.
Affiliation(s)
- Yasuhisa Maruyama
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa 226-8503, Japan; (Y.M.); (Y.O.); (L.A.M.-T.); (Y.K.)
| | - Yousuke Ogata
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa 226-8503, Japan; (Y.M.); (Y.O.); (L.A.M.-T.); (Y.K.)
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Kodaira, Tokyo 187-8551, Japan
| | - Laura A. Martínez-Tejada
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa 226-8503, Japan; (Y.M.); (Y.O.); (L.A.M.-T.); (Y.K.)
| | - Yasuharu Koike
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa 226-8503, Japan; (Y.M.); (Y.O.); (L.A.M.-T.); (Y.K.)
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Kodaira, Tokyo 187-8551, Japan
| | - Natsue Yoshimura
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa 226-8503, Japan; (Y.M.); (Y.O.); (L.A.M.-T.); (Y.K.)
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Kodaira, Tokyo 187-8551, Japan
- PRESTO, JST, Kawaguchi, Saitama 332-0012, Japan
- Neural Information Analysis Laboratories, ATR, Kyoto 619-0288, Japan
- Correspondence:
| |
|
22
|
EEG-based emotion recognition using 4D convolutional recurrent neural network. Cogn Neurodyn 2020; 14:815-828. [PMID: 33101533 DOI: 10.1007/s11571-020-09634-1] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Revised: 08/25/2020] [Accepted: 09/04/2020] [Indexed: 01/22/2023] Open
Abstract
In this paper, we present a novel method, called the four-dimensional convolutional recurrent neural network, which explicitly integrates the frequency, spatial, and temporal information of multichannel EEG signals to improve EEG-based emotion recognition accuracy. First, to preserve these three kinds of information, we transform the differential entropy features from different channels into 4D structures to train the deep model. Then, we introduce the CRNN model, which combines a convolutional neural network (CNN) with a recurrent neural network using long short-term memory (LSTM) cells. The CNN learns frequency and spatial information from each temporal slice of the 4D input, and the LSTM extracts temporal dependencies from the CNN outputs; the output of the last LSTM node performs the classification. Our model achieves state-of-the-art performance on both the SEED and DEAP datasets under intra-subject splitting. The experimental results demonstrate the effectiveness of integrating the frequency, spatial, and temporal information of EEG for emotion recognition.
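The differential entropy (DE) features mentioned above are commonly computed under a Gaussian assumption, where DE = 0.5 * ln(2*pi*e*variance) of a band-limited segment; the 4D stacking itself is omitted in this minimal sketch, and the helper name is illustrative.

```python
import math

def differential_entropy(segment):
    """DE of a segment assumed Gaussian: 0.5 * ln(2*pi*e*variance)."""
    mean = sum(segment) / len(segment)
    var = sum((v - mean) ** 2 for v in segment) / len(segment)
    return 0.5 * math.log(2 * math.pi * math.e * var)

# a toy segment with zero mean and unit variance
de = differential_entropy([1.0, -1.0, 1.0, -1.0])
```

One DE value per channel and per frequency band, stacked over successive time windows, yields exactly the kind of (band, space, time) structure the 4D input encodes.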
|
23
|
Baghizadeh M, Maghooli K, Farokhi F, Dabanloo NJ. A new emotion detection algorithm using extracted features of the different time-series generated from ST intervals Poincaré map. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101902] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
24
|
Sharma R, Pachori RB, Sircar P. Automated emotion recognition based on higher order statistics and deep learning algorithm. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101867] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
25
|
Wang Y, Qiu S, Li J, Ma X, Liang Z, Li H, He H. EEG-Based Emotion Recognition with Similarity Learning Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:1209-1212. [PMID: 31946110 DOI: 10.1109/embc.2019.8857499] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Emotion recognition is an important field of research in affective computing (AC), and the EEG signal is one of the most useful signals for detecting and evaluating emotion. With the development of deep learning, neural networks are widely used to construct EEG-based emotion recognition models. In this paper, we propose an effective similarity learning network built on a bidirectional long short-term memory (BLSTM) network. Combined with the traditional supervised classification loss function, the pairwise constraint loss helps learn a more discriminative embedding feature space. The experimental results demonstrate that the pairwise constraint loss significantly improves emotion classification performance. In addition, our method outperforms state-of-the-art emotion classification approaches on the benchmark SEED EEG emotion dataset, achieving a mean accuracy of 94.62%.
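The pairwise constraint can be sketched as a plain contrastive loss that pulls same-emotion embeddings together and pushes different-emotion embeddings apart up to a margin. The BLSTM and the supervised loss it is combined with are omitted here; this is a generic formulation, not necessarily the paper's exact loss.

```python
import math

def pairwise_loss(a, b, same, margin=1.0):
    """Contrastive loss on a pair of embedding vectors a, b."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    if same:
        return d ** 2                       # pull same-class pairs together
    return max(0.0, margin - d) ** 2        # push others apart up to the margin

same_pair = pairwise_loss([0.0, 0.0], [0.3, 0.4], same=True)    # penalized distance
diff_pair = pairwise_loss([0.0, 0.0], [0.3, 0.4], same=False)   # inside the margin
diff_far = pairwise_loss([0.0, 0.0], [3.0, 4.0], same=False)    # beyond the margin
```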
|
26
|
Wang Y, Qiu S, Zhao C, Yang W, Li J, Ma X, He H. EEG-Based Emotion Recognition with Prototype-Based Data Representation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:684-689. [PMID: 31945990 DOI: 10.1109/embc.2019.8857340] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Emotions play an important role in human communication, and EEG signals are widely used for emotion recognition. Despite extensive EEG research in recent years, interpreting EEG signals effectively remains challenging due to the heavy noise they contain. In this paper, we propose an effective emotion recognition framework with two main parts: a representation network and a prototype selection algorithm. Through the proposed representation network, samples from the same emotional state lie closer to each other in the high-level representation; prototypes are then selected from the clusters in the feature space and matched against subsequent test samples. This method takes advantage of the powerful representation ability of deep learning and learns a more descriptive feature space rather than learning the classifier explicitly. Experiments on the SEED dataset achieve a high accuracy of 93.29% and outperform a set of baseline methods as well as recent deep learning emotion classification approaches. These results demonstrate the effectiveness of the proposed emotion recognition framework.
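A simplified view of prototype-based matching: use each emotion's centroid in the embedding space as its prototype and label a test sample by the nearest prototype. The paper selects prototypes from a clustering of the feature space; the centroid choice and all names here are assumptions for illustration.

```python
def centroid(points):
    """Elementwise mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_prototype(x, prototypes):
    """Label x by the prototype with the smallest squared distance."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(prototypes, key=lambda label: d2(x, prototypes[label]))

prototypes = {"positive": centroid([[1.0, 1.0], [1.2, 0.8]]),
              "negative": centroid([[-1.0, -1.0], [-0.8, -1.2]])}
label = nearest_prototype([0.9, 1.1], prototypes)
```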
|
27
|
Abstract
The emerging field of affective computing focuses on enhancing computers’ ability to understand and appropriately respond to people’s affective states in human-computer interactions, and has revealed significant potential for a wide spectrum of applications. Recently, electroencephalography (EEG)-based affective computing has gained increasing interest for its good balance between mechanistic exploration and real-world practical application. The present work reviews ten theoretical and operational challenges for existing affective computing research from an interdisciplinary perspective spanning information technology, psychology, and neuroscience. On the theoretical side, we suggest that researchers should be well aware of the limitations of the commonly used emotion models, and be cautious about the widely accepted assumptions on EEG-emotion relationships as well as the transferability of findings across research paradigms. On the practical side, we propose several operational recommendations for the challenges of data collection, feature extraction, model implementation, and online system design, as well as the potential ethical issues. The present review is expected to contribute to an improved understanding of EEG-based affective computing and promote further applications.
Affiliation(s)
- Xin Hu
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- These authors contributed equally to this work
| | - Jingjing Chen
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- These authors contributed equally to this work
| | - Fei Wang
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
| | - Dan Zhang
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
| |
|
28
|
von Düring F, Ristow I, Li M, Denzel D, Colic L, Demenescu LR, Li S, Borchardt V, Liebe T, Vogel M, Walter M. Glutamate in Salience Network Predicts BOLD Response in Default Mode Network During Salience Processing. Front Behav Neurosci 2019; 13:232. [PMID: 31632250 PMCID: PMC6783560 DOI: 10.3389/fnbeh.2019.00232] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2019] [Accepted: 09/17/2019] [Indexed: 01/09/2023] Open
Abstract
Background: Brain imaging investigations have identified a salience network (SN) comprising the dorsal anterior cingulate cortex (dACC) and the anterior insula (AI). Magnetic resonance spectroscopy (MRS) studies revealed a link between the glutamate concentration in the ACC and alterations in attentional scope. Hence, we investigated whether the glutamate concentration in the dACC modulates the brain response during salience processing. Methods: Twenty-seven healthy subjects (12♀, 15♂) underwent both STEAM MRS at 7T, measuring glutamate concentrations in the dACC, and a functional magnetic resonance imaging (fMRI) task studying the influence on content-related salience processing and expectedness. Salience was modulated for both sexual and non-sexual emotional photos in either expected or unexpected situations. The correlation between MRS and task fMRI was investigated with regression analyses controlling for age, gender, and gray-matter partial volume. Results/Conclusion: During picture processing, the extent of deactivation in the posterior cingulate cortex (PCC) was attenuated by two different salience attributions: sexual content and unexpectedness of emotional content. Our results indicate that stimulus-inherent salience induces an attenuation of the deactivation in the PCC, which is in turn balanced by higher levels of glutamate in the dACC.
Affiliation(s)
- Felicia von Düring
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Inka Ristow
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Meng Li
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Dominik Denzel
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Lejla Colic
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Department of Psychiatry, Yale School of Medicine, New Haven, CT, United States
| | - Liliana Ramona Demenescu
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Shijia Li
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
- School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| | - Viola Borchardt
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Thomas Liebe
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Matthias Vogel
- Department of Psychosomatic Medicine and Psychotherapy, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Martin Walter
- Clinical Affective Neuroimaging Laboratory, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Department of Psychiatry and Psychotherapy, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Department of Psychiatry and Psychotherapy, University of Jena, Jena, Germany
| |
|
29
|
Ramzan M, Dawn S. Learning-based classification of valence emotion from electroencephalography. Int J Neurosci 2019; 129:1085-1093. [PMID: 31215829 DOI: 10.1080/00207454.2019.1634070] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Neuroimaging research has been revolutionized by the ability to study human cognitive functions without invasive access to brain pathways, and electroencephalography (EEG)-based measures play an important role in such systems. In this study, the publicly available Database for Emotion Analysis using Physiological Signals was used to identify human emotions such as valence (positive/negative) from recorded EEG signals. By identifying such emotions, the feeling of goodness or badness associated with an individual's experience of a situation can be inferred from his or her brain signals. Machine learning classifiers including random forest, decision trees, K-nearest neighbors, support vector machines, naive Bayes, and neural networks were used to identify and evaluate these emotions. Compared with previous work by other authors on the same dataset using various quantitative approaches, the approaches used in this study yield higher accuracy rates with the random forest and decision tree classifiers. The effectiveness of each classifier was evaluated in terms of statistical measures such as accuracy and F-score. The random forest classifier performed best, with an accuracy of 98%, closely followed by the decision tree at 94%, making these the most effective classifiers for the valence emotions of the EEG data from six subjects.
Affiliation(s)
- Munaza Ramzan
- Department of Computer Science & Engineering, Jaypee Institute of Information Technology, Noida, India
| | - Suma Dawn
- Department of Computer Science & Engineering, Jaypee Institute of Information Technology, Noida, India
| |
|
30
|
Gao Z, Cui X, Wan W, Gu Z. Recognition of Emotional States Using Multiscale Information Analysis of High Frequency EEG Oscillations. ENTROPY (BASEL, SWITZERLAND) 2019; 21:E609. [PMID: 33267323 PMCID: PMC7515095 DOI: 10.3390/e21060609] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Revised: 06/18/2019] [Accepted: 06/18/2019] [Indexed: 11/18/2022]
Abstract
Exploring the manifestation of emotion in electroencephalogram (EEG) signals is helpful for improving the accuracy of emotion recognition. This paper introduces novel features based on multiscale information analysis (MIA) of EEG signals for distinguishing emotional states in four dimensions based on Russell's circumplex model. The algorithms were applied to extract features from the DEAP database, including a multiscale EEG complexity index in the time domain, and ensemble empirical mode decomposition enhanced energy and fuzzy entropy in the frequency domain. A support vector machine and cross-validation were applied to assess classification accuracy. The classification performance of the MIA methods (accuracy = 62.01%, precision = 62.03%, recall/sensitivity = 60.51%, and specificity = 82.80%) was much higher than that of classical methods (accuracy = 43.98%, precision = 43.81%, recall/sensitivity = 41.86%, and specificity = 70.50%), which extract features of similar energy based on a discrete wavelet transform, fractal dimension, and sample entropy. We found that emotion recognition is more associated with high-frequency oscillations (51-100 Hz) of EEG signals than with low-frequency oscillations (0.3-49 Hz), and that the frontal and temporal regions are more significant than other regions. Such information has predictive power and may provide more insight into analyzing the multiscale information of high-frequency oscillations in EEG signals.
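Sample entropy, one of the classical complexity measures the MIA features are compared against, can be written compactly. In practice the tolerance r is usually scaled to the signal's standard deviation (commonly 0.2 * std); the fixed default here is an illustrative simplification.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): B counts template pairs of length m within
    tolerance r (Chebyshev distance), A counts pairs of length m + 1."""
    def count_matches(k):
        templates = [x[i:i + k] for i in range(len(x) - k + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# a perfectly regular alternating signal has very low sample entropy
se = sample_entropy([0, 1] * 10)
```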
Affiliation(s)
- Zhilin Gao
- Key Laboratory of Child Development and Learning Science, Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing 210000, China
- Xingran Cui
- Key Laboratory of Child Development and Learning Science, Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing 210000, China
- Institute of Biomedical Devices (Suzhou), Southeast University, Suzhou 215000, China
- Wang Wan
- Key Laboratory of Child Development and Learning Science, Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing 210000, China
- Zhongze Gu
- Key Laboratory of Child Development and Learning Science, Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing 210000, China
31
Chao H, Dong L, Liu Y, Lu B. Emotion Recognition from Multiband EEG Signals Using CapsNet. Sensors (Basel) 2019; 19:E2212. [PMID: 31086110; PMCID: PMC6540345; DOI: 10.3390/s19092212]
Abstract
Emotion recognition based on multi-channel electroencephalogram (EEG) signals is becoming increasingly attractive. However, conventional methods ignore the spatial characteristics of EEG signals, which also contain salient information related to emotional states. In this paper, a deep learning framework based on a multiband feature matrix (MFM) and a capsule network (CapsNet) is proposed. In the framework, the frequency-domain, spatial, and frequency-band characteristics of the multi-channel EEG signals are combined to construct the MFM. Then, the CapsNet model is introduced to recognize emotional states from the input MFM. Experiments conducted on the dataset for emotion analysis using EEG, physiological, and video signals (DEAP) indicate that the proposed method outperforms most of the common models. The experimental results demonstrate that the three characteristics contained in the MFM are complementary and that the capsule network is well suited to mining and utilizing these correlated characteristics.
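As a rough illustration of how a multiband feature matrix might be assembled from multi-channel EEG (a simplified sketch; the paper's MFM additionally encodes the 2-D electrode layout, and the band edges and function names here are assumptions):

```python
import numpy as np

def multiband_feature_matrix(eeg, fs, bands=None):
    """Stack mean band power per channel into a (bands x channels) matrix,
    a simplified stand-in for the paper's MFM (which also maps channels
    onto a 2-D electrode grid). Band edges below are assumptions."""
    if bands is None:
        bands = {"theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 45)}
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2   # crude periodogram
    mfm = np.empty((len(bands), eeg.shape[0]))
    for i, (lo, hi) in enumerate(bands.values()):
        band = (freqs >= lo) & (freqs < hi)
        mfm[i] = psd[:, band].mean(axis=1)        # mean power per channel
    return mfm

rng = np.random.default_rng(1)
fake_eeg = rng.standard_normal((32, 512))  # 32 channels, 4 s at 128 Hz
print(multiband_feature_matrix(fake_eeg, fs=128).shape)  # (4, 32)
```

Such a matrix is image-like, which is what makes convolutional or capsule classifiers a natural fit downstream.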
Affiliation(s)
- Hao Chao
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China.
- Liang Dong
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China.
- Yongli Liu
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China.
- Baoyun Lu
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China.
32
Taran S, Bajaj V. Emotion recognition from single-channel EEG signals using a two-stage correlation and instantaneous frequency-based filtering method. Comput Methods Programs Biomed 2019; 173:157-165. [PMID: 31046991; DOI: 10.1016/j.cmpb.2019.03.015]
Abstract
BACKGROUND AND OBJECTIVE The recognition of emotional states is a crucial step in the development of a brain-computer interface (BCI) system. Emotion recognition systems find applications in medical science for impaired and disabled people. Electroencephalography assesses the neurophysiology of the brain for the recognition of different emotional states. METHODS An audio-video stimulus-based experimental setup was arranged for electroencephalogram (EEG) recordings of happy, fear, sad, and relax emotions, and a two-stage filtering method is proposed for the recognition of emotion EEG signals. At the first stage, a correlation criterion is suggested for removing noisy intrinsic mode functions (IMFs) from the IMFs obtained by applying empirical mode decomposition to the raw EEG signal. The noise-free IMFs are used to reconstruct a denoised EEG signal with improved stationarity characteristics. The denoised EEG signal is further decomposed into modes using the variational mode decomposition (VMD). At the second stage, instantaneous-frequency-based filtering of the VMD modes is performed, and the filtered modes are retained to reconstruct a denoised EEG signal in the desired frequency range. After the two-stage filtering, non-linear measures of the filtered EEG signals are used as input features to a multi-class least squares support vector machine (MC-LS-SVM) classifier for emotion recognition. RESULTS Different kernel functions were tested in the MC-LS-SVM classifier. The Morlet wavelet (MW) kernel function provides the best individual classification accuracies for the happy, fear, sad, and relax emotions: 92.79%, 87.62%, 88.98%, and 93.13%, respectively. The MW kernel function also obtained the best overall accuracy of 90.63%, an F1-score of 0.9064, and a kappa value of 0.8751. CONCLUSIONS An audio-video stimulus-based emotion EEG dataset was recorded, and a new filtering method is proposed for EEG signals.
The proposed method provides better emotion recognition performance than state-of-the-art methods and classifies emotions using a single bipolar EEG channel, which can greatly reduce the complexity of emotion-recognition-based BCI systems.
Affiliation(s)
- Sachin Taran
- Discipline of Electronics and Communication Engineering, PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 452005, India.
- Varun Bajaj
- Discipline of Electronics and Communication Engineering, PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 452005, India.
33
Hu X, Chen J, Wang F, Zhang D. Ten challenges for EEG-based affective computing. Brain Science Advances 2019. [DOI: 10.26599/bsa.2019.9050005]
34
Zilidou VI, Frantzidis CA, Romanopoulou ED, Paraskevopoulos E, Douka S, Bamidis PD. Functional Re-organization of Cortical Networks of Senior Citizens After a 24-Week Traditional Dance Program. Front Aging Neurosci 2018; 10:422. [PMID: 30618727; PMCID: PMC6308125; DOI: 10.3389/fnagi.2018.00422]
Abstract
Neuroscience is developing rapidly, providing a variety of modern tools for analyzing the functional interactions of the brain and detecting pathological deviations due to neurodegeneration. The present study argues that inducing neuroplasticity in the mature human brain contributes to the prevention of dementia. Dance programs seem to be a promising solution because they combine cognitive and physical activity in a pleasant way. We therefore investigated whether traditional Greek dances can improve the cognitive, physical, and functional status of the elderly, with the aim of promoting active and healthy aging. Forty-four participants were randomly assigned equally to a training group and an active control group. The duration of the program was 6 months. The participants were also evaluated for their physical status and through an electroencephalographic (EEG) examination at rest (eyes-closed condition). The EEG testing was performed 1-14 days before (pre) and after (post) the training. Cortical network analysis was applied by modeling the cortex through a generic anatomical model of 20,000 fixed dipoles. These were grouped into 512 cortical regions of interest (ROIs). High-quality, artifact-free data resulting from an elaborate pre-processing pipeline were segmented into multiple continuous epochs of 30 s. Functional connectivity among the ROIs was then computed for each epoch through the relative wavelet entropy (RWE). Synchronization matrices were computed and thresholded to provide binary, directed cortical networks over a range of densities. The results showed that dance training improved optimal network performance as estimated by the small-world property. Further analysis demonstrated local network changes as well, resulting in better information flow and functional re-organization of the network nodes.
These results suggest dance training as a possible non-pharmacological intervention for promoting the mental and physical well-being of senior citizens. Our results were also compared with a combination of computerized cognitive and physical training that has already been demonstrated to induce neuroplasticity (LLM Care).
Affiliation(s)
- Vasiliki I Zilidou
- Laboratory of Medical Physics, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece; Department of Physical Activity and Recreation, School of Physical Education and Sport Science, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Christos A Frantzidis
- Laboratory of Medical Physics, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Evangelia D Romanopoulou
- Laboratory of Medical Physics, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Evangelos Paraskevopoulos
- Laboratory of Medical Physics, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Styliani Douka
- Department of Physical Activity and Recreation, School of Physical Education and Sport Science, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Panagiotis D Bamidis
- Laboratory of Medical Physics, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
35
Shi F, Dey N, Ashour AS, Sifaki-Pistolla D, Sherratt RS. Meta-KANSEI Modeling with Valence-Arousal fMRI Dataset of Brain. Cognit Comput 2018. [DOI: 10.1007/s12559-018-9614-5]
36
Recognition of Emotions Using Multichannel EEG Data and DBN-GC-Based Ensemble Deep Learning Framework. Comput Intell Neurosci 2018; 2018:9750904. [PMID: 30647727; PMCID: PMC6311795; DOI: 10.1155/2018/9750904]
Abstract
Fusing multichannel neurophysiological signals to recognize human emotional states is becoming increasingly attractive. Conventional methods ignore the complementarity between the time-domain, frequency-domain, and time-frequency characteristics of electroencephalogram (EEG) signals and cannot fully capture the correlation information between different channels. In this paper, an integrated deep learning framework based on improved deep belief networks with glia chains (DBN-GCs) is proposed. In the framework, member DBN-GCs are employed to extract intermediate representations of raw EEG features from multiple domains separately, as well as to mine inter-channel correlation information via the glia chains. Then, higher-level features describing the time-domain, frequency-domain, and time-frequency characteristics are fused by a discriminative restricted Boltzmann machine (RBM) to implement the emotion recognition task. Experiments conducted on the DEAP benchmark dataset achieve averaged accuracies of 75.92% and 76.83% for arousal and valence classification, respectively. The results show that the proposed framework outperforms most competing deep classifiers, demonstrating its potential.
37
Teixeira AR, Santos IM, Tomé AM. Identifying evoked potential response patterns using independent component analysis and unsupervised learning. Biomed Phys Eng Express 2018. [DOI: 10.1088/2057-1976/aaeeed]
38
Zhang Y, Qin F, Liu B, Qi X, Zhao Y, Zhang D. Wearable Neurophysiological Recordings in Middle-School Classroom Correlate With Students' Academic Performance. Front Hum Neurosci 2018; 12:457. [PMID: 30483086; PMCID: PMC6240591; DOI: 10.3389/fnhum.2018.00457]
Abstract
The rapid development of wearable bio-sensing techniques has made it possible to continuously record neurophysiological signals in naturalistic scenarios such as the classroom. The present study aims to explore the neurophysiological correlates of middle-school students' academic performance. Electrodermal activity (EDA) signals and heart rates (HRs) were collected via wristband from 100 Grade 7 students during their daily Chinese and math classes for 10 days over 2 weeks. Significant correlations were found between academic performance, as reflected by the students' final exam scores, and the EDA responses. Further regression analyses revealed significant prediction of academic performance mainly by the transient EDA responses (R2 = 0.083, p < 0.05, with Chinese classes only; R2 = 0.030, p < 0.05, with both Chinese and math classes included). By combining the self-report data about session-based general statuses with the neurophysiological data, the explanatory power of the regression models was further improved (R2 = 0.095, p < 0.05, with Chinese classes only; R2 = 0.057, p < 0.05, with both Chinese and math classes included), and the neurophysiological data were shown to make independent contributions to the regression models. In addition, the regression models became non-significant when the academic performances of the Chinese and math classes were exchanged as the dependent variables, suggesting at least partly distinct neurophysiological responses for the two types of classes. Our findings provide evidence supporting the feasibility of predicting educational outcomes from wearable neurophysiological recordings.
Affiliation(s)
- Yu Zhang
- Institute of Education, Tsinghua University, Beijing, China
- Fei Qin
- Institute of Education, Tsinghua University, Beijing, China
- Bo Liu
- Institute of Education, Tsinghua University, Beijing, China
- Xuan Qi
- Institute of Education, Tsinghua University, Beijing, China
- Yingying Zhao
- Institute of Education, Tsinghua University, Beijing, China
- Dan Zhang
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
39
Bajaj V, Taran S, Sengur A. Emotion classification using flexible analytic wavelet transform for electroencephalogram signals. Health Inf Sci Syst 2018; 6:12. [PMID: 30279982; DOI: 10.1007/s13755-018-0048-y]
Abstract
Emotion-based brain-computer systems find applications in enabling impaired people to communicate with their surroundings. In this paper, an electroencephalogram (EEG) database of four emotions (happy, fear, sad, and relax) is recorded and the flexible analytic wavelet transform (FAWT) is proposed for emotion classification. FAWT decomposes the EEG signal into sub-bands, and statistical measures are computed from the sub-bands to extract emotion-specific information. The emotion classification performance of the sub-band-wise extracted features is examined over variants of the k-nearest-neighbor (KNN) classifier. The weighted KNN provides the best emotion classification performance, 86.1%, compared to the other KNN variants. The proposed method shows better emotion classification performance than other existing four-emotion classification methods.
Affiliation(s)
- Varun Bajaj
- PDPM Indian Institute of Information Technology, Design and Manufacturing Jabalpur, Jabalpur 452005, India
- Sachin Taran
- PDPM Indian Institute of Information Technology, Design and Manufacturing Jabalpur, Jabalpur 452005, India
- Abdulkadir Sengur
- Electrical and Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
40
Bagha S, Tripathy RK, Nanda P, Preetam C, Das DP. Understanding perception of active noise control system through multichannel EEG analysis. Healthc Technol Lett 2018; 5:101-106. [PMID: 29923552; PMCID: PMC5998761; DOI: 10.1049/htl.2017.0016]
Abstract
In this Letter, a method is proposed to investigate the effect of noise, with and without active noise control (ANC), on multichannel electroencephalogram (EEG) signals. The multichannel EEG signal is recorded under different listening conditions: silent, music, noise, ANC with background noise, and ANC with both background noise and music. Multiscale analysis of each EEG channel is performed using the discrete wavelet transform. Multivariate multiscale matrices are formulated from the sub-band signals of each EEG channel, and the singular value decomposition is applied to these matrices at significant scales. The singular value features at significant scales and an extreme learning machine classifier with three different activation functions are used for classification of the multichannel EEG signal. The experimental results demonstrate that, for the ANC-with-noise and ANC-with-noise-and-music classes, the proposed method has sensitivity values of 75.831% (p < 0.001) and 99.31% (p < 0.001), respectively. The method has an accuracy of 83.22% for classifying EEG signals recorded with music versus ANC with music as stimuli. The important finding of this study is that, with the introduction of ANC, music can be better perceived by the human brain.
Affiliation(s)
- Sangeeta Bagha
- Department of Process Modelling and Instrumentation, CSIR-Institute of Minerals and Materials Technology, Bhubaneswar, India; Academy of Scientific and Innovative Research (AcSIR), India; Silicon Institute of Technology, Bhubaneswar, India
- R K Tripathy
- Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan, Bhubaneswar, India
- Pranati Nanda
- Department of Physiology, All India Institute of Medical Sciences (AIIMS), Bhubaneswar, India
- C Preetam
- Department of ENT, All India Institute of Medical Sciences (AIIMS), Bhubaneswar, India
- Debi Prasad Das
- Department of Process Modelling and Instrumentation, CSIR-Institute of Minerals and Materials Technology, Bhubaneswar, India; Academy of Scientific and Innovative Research (AcSIR), India
41
Yang Y, Wu QMJ, Zheng WL, Lu BL. EEG-Based Emotion Recognition Using Hierarchical Network With Subnetwork Nodes. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2685338]
42
Simultaneous EEG Analysis and Feature Extraction Selection Based on Unsupervised Learning. Brain Inform 2018. [DOI: 10.1007/978-3-030-05587-5_25]
43
Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review. Appl Sci (Basel) 2017. [DOI: 10.3390/app7121239]
44
Human Emotion Recognition with Electroencephalographic Multidimensional Features by Hybrid Deep Neural Networks. Appl Sci (Basel) 2017. [DOI: 10.3390/app7101060]
45
Lin YP, Jao PK, Yang YH. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis. Front Comput Neurosci 2017; 11:64. [PMID: 28769778; PMCID: PMC5515900; DOI: 10.3389/fncom.2017.00064]
Abstract
Constructing a robust emotion-aware analytical framework using non-invasively recorded electroencephalogram (EEG) signals has gained intensive attention. However, when deploying a laboratory-oriented proof-of-concept study in real-world applications, researchers face the ecological challenge that EEG patterns recorded in real life change substantially across days (i.e., day-to-day variability), arguably making a pre-defined predictive model vulnerable to EEG signals from a different day. The present work addressed how to mitigate the inter-day EEG variability of emotional responses in order to facilitate cross-day emotion classification, an issue that has received little attention in the literature. This study proposed a robust principal component analysis (RPCA)-based signal filtering strategy and validated its neurophysiological validity and machine-learning practicability on a binary emotion classification task (happiness vs. sadness) using a five-day EEG dataset of 12 subjects who participated in a music-listening task. The empirical results showed that the RPCA-decomposed sparse signals (RPCA-S) enabled filtering out the background EEG activity that contributed more to inter-day variability, and predominately captured the EEG oscillations of emotional responses that behaved relatively consistently across days. Applying a realistic add-day-in classification validation scheme, the RPCA-S progressively exploited more informative features (from 12.67 ± 5.99 to 20.83 ± 7.18) and improved the cross-day binary emotion-classification accuracy (from 58.31 ± 12.33% to 64.03 ± 8.40%) when trained on EEG signals from one to four recording days and tested against one unseen subsequent day. The original EEG features (prior to RPCA processing) neither achieved cross-day classification (accuracy was around chance level) nor replicated this encouraging improvement, owing to inter-day EEG variability.
This result demonstrates the effectiveness of the proposed method and may shed light on developing a realistic emotion-classification framework that alleviates day-to-day variability.
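The RPCA decomposition underlying this filtering strategy can be sketched with a textbook inexact augmented Lagrange multiplier (IALM) routine for principal component pursuit (an illustration under standard assumptions, not the authors' code):

```python
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S via inexact ALM.
    A textbook sketch of principal component pursuit."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M)
    spec = np.linalg.norm(M, 2)                    # spectral norm
    mu = mu if mu is not None else 1.25 / spec
    mu_bar = mu * 1e7
    Y = M / max(spec, np.abs(M).max() / lam)       # dual variable init
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Sparse update: entrywise soft-thresholding
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Y = Y + mu * (M - L - S)
        mu = min(mu * 1.5, mu_bar)
        if np.linalg.norm(M - L - S) < tol * norm_M:
            break
    return L, S

# Synthetic check: a rank-5 matrix plus sparse spikes
rng = np.random.default_rng(2)
low_rank = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))
spikes = np.zeros((40, 40))
spikes.flat[rng.choice(1600, size=80, replace=False)] = 10.0
M = low_rank + spikes
L, S = rpca(M)
```

In the paper's setting, the sparse component S plays the role of the emotion-related oscillations, while the low-rank part captures background activity shared across days.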
Affiliation(s)
- Yuan-Pin Lin
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung, Taiwan; Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Ping-Keng Jao
- Center for Neuroprosthetics, School of Engineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Research Center for IT Innovation, Academia Sinica, Taipei, Taiwan
- Yi-Hsuan Yang
- Research Center for IT Innovation, Academia Sinica, Taipei, Taiwan
46
Lin YP, Jung TP. Improving EEG-Based Emotion Classification Using Conditional Transfer Learning. Front Hum Neurosci 2017; 11:334. [PMID: 28701938; PMCID: PMC5486154; DOI: 10.3389/fnhum.2017.00334]
Abstract
To overcome individual differences, an accurate electroencephalogram (EEG)-based emotion-classification system requires a considerable amount of ecological calibration data for each individual, which is labor-intensive and time-consuming. Transfer learning (TL) has drawn increasing attention in EEG signal mining in recent years. TL leverages existing data collected from other people to build a model for a new individual with little calibration data. However, brute-force transfer to an individual (i.e., blindly leveraging the labeled data from others) may lead to a negative transfer that degrades performance rather than improving it. This study thus proposed a conditional TL (cTL) framework to facilitate a positive transfer (improving subject-specific performance without increasing the labeled data) for each individual. The cTL first assesses an individual's transferability for positive transfer and then selectively leverages data from others with comparable feature spaces. The empirical results showed that among 26 individuals, the proposed cTL framework identified 16 and 14 transferable individuals who could benefit from the data of others for emotion valence and arousal classification, respectively. These transferable individuals could then leverage the data of 18 and 12 individuals with similar EEG signatures to attain maximal TL improvements in valence- and arousal-classification accuracy. The cTL improved the overall classification performance of the 26 individuals by ~15% for valence categorization and ~12% for arousal categorization, compared to their default performance based solely on subject-specific data. This study clearly demonstrated the feasibility of the proposed cTL framework for improving an individual's default emotion-classification performance given a data repository.
The cTL framework may shed light on the development of a robust emotion-classification model using fewer labeled subject-specific data toward a real-life affective brain-computer interface (ABCI).
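The idea of selectively leveraging sources with comparable feature spaces can be caricatured as keeping only source subjects whose feature statistics resemble the target's (a hypothetical similarity criterion for illustration only; the paper's actual transferability assessment differs):

```python
import numpy as np

def select_transferable_sources(target_feats, source_feats_by_subj, threshold=0.5):
    """Keep source subjects whose mean feature vector correlates with the
    target's mean above `threshold`. A hypothetical stand-in for the
    paper's transferability assessment, for illustration only."""
    target_mean = target_feats.mean(axis=0)
    selected = []
    for subj, feats in source_feats_by_subj.items():
        r = np.corrcoef(target_mean, feats.mean(axis=0))[0, 1]
        if r >= threshold:
            selected.append(subj)
    return selected

# Synthetic demo: one source shares the target's feature profile, one does not
rng = np.random.default_rng(3)
base = rng.standard_normal(16)
target = base + 0.1 * rng.standard_normal((20, 16))
sources = {
    "similar": base + 0.1 * rng.standard_normal((20, 16)),
    "dissimilar": -base + 0.1 * rng.standard_normal((20, 16)),
}
selected = select_transferable_sources(target, sources)
print(selected)  # -> ['similar']
```

Filtering sources this way is what guards against the negative transfer the abstract warns about: dissimilar subjects simply never enter the training pool.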
Affiliation(s)
- Yuan-Pin Lin
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung, Taiwan; Institute for Neural Computation, University of California San Diego, San Diego, CA, United States
- Tzyy-Ping Jung
- Institute for Neural Computation, University of California San Diego, San Diego, CA, United States
47
Islam MN, Seera M, Loo CK. A robust incremental clustering-based facial feature tracking. Appl Soft Comput 2017. [DOI: 10.1016/j.asoc.2016.12.033]
48
Yin Z, Zhao M, Wang Y, Yang J, Zhang J. Recognition of emotions using multimodal physiological signals and an ensemble deep learning model. Comput Methods Programs Biomed 2017; 140:93-110. [PMID: 28254094; DOI: 10.1016/j.cmpb.2016.12.005]
Abstract
BACKGROUND AND OBJECTIVE Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers may suffer from a lack of expertise in determining model structure and from oversimplified combination of multimodal feature abstractions. METHODS In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified via a physiological-data-driven approach. Each SAE consists of three hidden layers that filter unwanted noise from the physiological features and derive stable feature representations. An additional deep model is used to build the SAE ensemble. The physiological features are split into several subsets according to different feature extraction approaches, with each subset separately encoded by an SAE. The derived SAE abstractions are combined according to physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. RESULTS The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%. CONCLUSIONS The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers has been demonstrated under different sizes of available physiological instances.
Affiliation(s)
- Zhong Yin
- Engineering Research Center of Optical Instrument and System, Ministry of Education, Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, Shanghai, 200093, PR China
- Mengyuan Zhao
- School of Social Sciences, University of Shanghai for Science and Technology, Shanghai, 200093, PR China
- Yongxiong Wang
- Engineering Research Center of Optical Instrument and System, Ministry of Education, Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, Shanghai, 200093, PR China
- Jingdong Yang
- Engineering Research Center of Optical Instrument and System, Ministry of Education, Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, Shanghai, 200093, PR China
- Jianhua Zhang
- Department of Automation, East China University of Science and Technology, Shanghai 200237, PR China
49
Mehmood RM, Lee HJ. Towards Building a Computer Aided Education System for Special Students Using Wearable Sensor Technologies. Sensors (Basel) 2017; 17:317. [PMID: 28208734; PMCID: PMC5335943; DOI: 10.3390/s17020317]
Abstract
Human-computer interaction is a growing field concerned with helping people improve their daily lives. In particular, people with a disability may need an interface that is more appropriate for and compatible with their needs. Our research focuses on problems of this kind, such as students with a mental disorder or mood disruption problems. To improve their learning process, an intelligent emotion recognition system is essential, one able to recognize the current emotional state of the brain. Nowadays, instructors in special schools commonly use conventional methods for managing special students for educational purposes. In this paper, we propose a novel computer-aided method with which instructors at special schools can teach special students, supported by our system using wearable technologies.
Affiliation(s)
- Raja Majid Mehmood
- Division of Computer Science and Engineering, Chonbuk National University, Jeonju 54896, Korea.
- Hyo Jong Lee
- Division of Computer Science and Engineering, Chonbuk National University, Jeonju 54896, Korea.
- Center for Advanced Image and Information Technology, Chonbuk National University, Jeonju 54896, Korea.
50