1. Goizueta S, Maza A, Sierra A, Navarro MD, Noé E, Ferri J, Llorens R. Heart rate variability responses to personalized and non-personalized affective videos. A study on healthy subjects and patients with disorders of consciousness. Front Psychol 2025; 16:1560496. PMID: 40248829; PMCID: PMC12004283; DOI: 10.3389/fpsyg.2025.1560496.
Abstract
Introduction: The diagnosis of patients with disorders of consciousness (DOC), including those in a minimally conscious state (MCS) and those with unresponsive wakefulness syndrome (UWS), remains a significant clinical challenge. Neurobehavioral assessment relies primarily on motor responses to commands, which are often difficult to interpret due to impaired comprehension and cognitive-motor dissociation, resulting in a high rate of misdiagnosis. While electrical, hemodynamic, and metabolic brain responses, combined with personalized stimuli, have shown promise in improving diagnosis, the role of cardiac activity, which is less intrusive and more time-efficient to record, remains underexplored.
Methods: This study investigated heart rate variability (HRV) responses to personalized videos of acquaintances versus non-personalized videos of strangers. The study included 17 healthy subjects and 11 patients with DOC. Cardiac responses were recorded and analyzed to compare responses to the different stimuli and to examine differences between the two groups.
Results: Healthy subjects exhibited significant differences in several HRV measures between personalized and non-personalized stimuli, whereas patients with DOC did not demonstrate similar differences. Additionally, significant differences in HRV measures were observed between healthy subjects and patients with DOC.
Conclusion: These findings suggest impaired emotional processing in patients with DOC. Further exploration of these differences may enhance diagnostic approaches for this patient population, particularly through the integration of HRV-based measures.
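To make the HRV comparison above concrete, here is a minimal sketch (not the authors' pipeline) that computes the standard SDNN and RMSSD measures from RR-interval series under two stimulus conditions and compares them across subjects with a Wilcoxon signed-rank test; the synthetic RR data and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wilcoxon

def hrv_features(rr_ms):
    """Time-domain HRV from an RR-interval series in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat variability
    return sdnn, rmssd

# Hypothetical per-subject RR series for two conditions (ms).
rng = np.random.default_rng(0)
personalized = [800 + 50 * rng.standard_normal(300) for _ in range(17)]
non_personalized = [800 + 40 * rng.standard_normal(300) for _ in range(17)]

rmssd_p = [hrv_features(rr)[1] for rr in personalized]
rmssd_n = [hrv_features(rr)[1] for rr in non_personalized]

stat, p = wilcoxon(rmssd_p, rmssd_n)  # paired comparison across subjects
print(f"RMSSD personalized vs non-personalized: p = {p:.3f}")
```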
Affiliation(s)
- Sandra Goizueta, Anny Maza, Ana Sierra, Roberto Llorens: Neurorehabilitation and Brain Research Group, Institute for Human-Centered Technology Research, Universitat Politècnica de València, València, Spain
- María Dolores Navarro, Enrique Noé, Joan Ferri: IRENEA, Instituto de Rehabilitación Neurológica, Fundación Hospitales Vithas, València, Spain
2. Zhu Z, Wang X, Xu Y, Chen W, Zheng J, Chen S, Chen H. An emotion recognition method based on frequency-domain features of PPG. Front Physiol 2025; 16:1486763. PMID: 40070463; PMCID: PMC11893849; DOI: 10.3389/fphys.2025.1486763.
Abstract
Objective: This study employs physiological model simulation to systematically analyze the frequency-domain components of PPG signals and extract their key features, and investigates how effectively these frequency-domain features distinguish emotional states.
Methods: A dual windkessel model was employed to analyze PPG signal frequency components and extract distinctive features. Experimental data collection encompassed both physiological (PPG) and psychological measurements, with subsequent analysis involving distribution patterns and statistical testing (U-tests) to examine feature-emotion relationships. Support vector machine (SVM) classification was implemented to evaluate feature effectiveness, complemented by comparative analysis using pulse rate variability (PRV) features, morphological features, and the DEAP dataset.
Results: The results demonstrate significant differentiation in PPG frequency-domain feature responses to arousal and valence variations, achieving classification accuracies of 87.5% and 81.4%, respectively. Validation on the DEAP dataset yielded consistent patterns, with accuracies of 73.5% (arousal) and 71.5% (valence). Feature fusion incorporating the proposed frequency-domain features enhanced classification performance, surpassing 90% accuracy.
Conclusion: This study uses physiological modeling to analyze PPG signal frequency components and extract key features. We evaluate their effectiveness in emotion recognition and reveal relationships among physiological parameters, frequency features, and emotional states.
Significance: These findings advance understanding of emotion recognition mechanisms and provide a foundation for future research.
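A hedged sketch of the general recipe this abstract describes, frequency-domain PPG features fed to an SVM: the band-power features below (computed with a Welch periodogram) are a generic stand-in, not the windkessel-derived features of the paper, and the sampling rate, band edges, and synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 100  # assumed PPG sampling rate (Hz)

def ppg_band_powers(ppg, fs=FS, bands=((0.5, 1.5), (1.5, 3.0), (3.0, 5.0))):
    """Relative spectral power of a PPG segment in a few frequency bands."""
    f, pxx = welch(ppg, fs=fs, nperseg=fs * 8)
    total = np.trapz(pxx, f)
    return [np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)]) / total
            for lo, hi in bands]

# Hypothetical data: 30 s PPG segments with binary arousal labels.
rng = np.random.default_rng(1)
X = np.array([ppg_band_powers(rng.standard_normal(30 * FS)) for _ in range(120)])
y = rng.integers(0, 2, size=120)

clf = SVC(kernel="rbf", C=1.0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```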
Affiliation(s)
- Zhibin Zhu, Yifei Xu, Jing Zheng: College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Xuanyi Wang, Wanlin Chen, Shulin Chen: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Hang Chen: College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China; Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Hangzhou, China; Connected Healthcare Big Data Research Center, Zhejiang Lab, Hangzhou, China
3. Zieni B, Ritchie MA, Mandalari AM, Boem F. An Interdisciplinary Overview on Ambient Assisted Living Systems for Health Monitoring at Home: Trade-Offs and Challenges. Sensors (Basel) 2025; 25:853. PMID: 39943492; PMCID: PMC11819874; DOI: 10.3390/s25030853.
Abstract
The integration of IoT and Ambient Assisted Living (AAL) enables discreet real-time health monitoring in home environments, offering significant potential for personalized and preventative care. However, challenges persist in balancing privacy, cost, usability, and system reliability. This paper provides an overview of recent advancements in sensor and IoT technologies for assisted living, with a focus on elderly individuals living independently. It categorizes sensor types and technologies that enhance healthcare delivery and explores an interdisciplinary framework encompassing sensing, communication, and decision-making systems. Through this analysis, this paper highlights current applications, identifies emerging challenges, and pinpoints critical areas for future research. This paper aims to inform ongoing discourse and advocate for interdisciplinary approaches in system design to address existing trade-offs and optimize performance.
Affiliation(s)
- Baraa Zieni, Matthew A. Ritchie, A. M. Mandalari, F. Boem: Department of Electronic and Electrical Engineering, University College London, London WC1E 7JE, UK
4. Choi GY, Shin JG, Lee JY, Lee JS, Heo IS, Yoon HY, Lim W, Jeong JW, Kim SH, Hwang HJ. EEG Dataset for the Recognition of Different Emotions Induced in Voice-User Interaction. Sci Data 2024; 11:1084. PMID: 39362909; PMCID: PMC11449991; DOI: 10.1038/s41597-024-03887-9.
Abstract
Electroencephalography (EEG)-based open-access datasets are available for emotion recognition studies, where external auditory/visual stimuli are used to artificially evoke pre-defined emotions. In this study, we provide a novel EEG dataset containing the emotional information induced during a realistic human-computer interaction (HCI) using a voice user interface system that mimics natural human-to-human communication. To validate our dataset via neurophysiological investigation and binary emotion classification, we applied a series of signal processing and machine learning methods to the EEG data. The maximum classification accuracy ranged from 43.3% to 90.8% over 38 subjects, and the classification features could be interpreted neurophysiologically. Our EEG data could be used to develop a reliable HCI system because they were acquired in a natural HCI environment. In addition, auxiliary physiological data measured simultaneously with the EEG, namely electrocardiogram, photoplethysmogram, galvanic skin response, and facial images, also showed plausible results and could be utilized for automatic emotion discrimination either independently from, or fused with, the EEG data.
Affiliation(s)
- Ga-Young Choi: Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea
- Jong-Gyu Shin, In-Seok Heo, Sang-Ho Kim: Department of Industrial Engineering, Kumoh National Institute of Technology, Gumi 39177, Republic of Korea
- Ji-Yoon Lee, Jun-Seok Lee, Han-Jeong Hwang: Department of Electronics and Information Engineering, and Interdisciplinary Graduate Program for Artificial Intelligence Smart Convergence Technology, Korea University, Sejong 30019, Republic of Korea
- Ha-Yeong Yoon, Jin-Woo Jeong: Department of Data Science, Seoul National University of Science and Technology, Seoul 01811, Republic of Korea
- Wansu Lim: School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
5. Li Y, Tan R, Lin T, Liu Q, Wang CD, Chen M. ER-GET: Emotion Recognition Based on Global ECG Trajectory. IEEE J Biomed Health Inform 2024; 28:5201-5213. PMID: 38814766; DOI: 10.1109/jbhi.2024.3403188.
Abstract
In recent years, the recognition of human emotions based on electrocardiogram (ECG) signals has been considered a novel area of study among researchers. Despite the challenge of extracting latent emotion information from ECG signals, existing methods are able to recognize emotions by calculating heart rate variability (HRV) features. However, such local features do not provide a comprehensive description of ECG signals, leading to suboptimal recognition performance. For the first time, we propose a new strategy to extract hidden emotional information from the global ECG trajectory for emotion recognition. Specifically, a period of ECG signal is decomposed into sub-signals of different frequency bands through ensemble empirical mode decomposition (EEMD), and a series of multi-sequence trajectory graphs is constructed by orthogonally combining these sub-signals to extract latent emotional information. To better utilize these graph features, a network has been designed that combines self-supervised graph representation learning with ensemble learning for classification. This approach surpasses recent notable works, achieving an accuracy of 95.08% in arousal and 95.90% in valence detection. Finally, this global feature is compared and discussed in relation to HRV features, with the intention of providing inspiration for subsequent research.
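The EEMD step described above can be sketched as follows, assuming the third-party PyEMD package (distributed as EMD-signal); the toy signal, ensemble size, and noise width are illustrative, and the trajectory-graph construction itself is not reproduced.

```python
import numpy as np
from PyEMD import EEMD  # from the EMD-signal package

fs = 250                                  # assumed ECG sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8 * t)  # toy signal

eemd = EEMD(trials=50, noise_width=0.2)   # ensemble size and added-noise scale
imfs = eemd.eemd(ecg)                     # rows: IMFs from high to low frequency

print(f"{imfs.shape[0]} IMFs of length {imfs.shape[1]}")
# Pairs of IMFs could then be combined orthogonally to form the
# multi-sequence trajectory graphs described above (not reproduced here).
```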
6. Zhang Y, Kang Y, Guo X, Li P, He H. The effect analysis of shape design of different charging piles based on human physiological characteristics using the MF-DFA. Sci Rep 2024; 14:8345. PMID: 38594451; PMCID: PMC11004129; DOI: 10.1038/s41598-024-59147-8.
Abstract
With the rapid development of new energy vehicles, users have an increasing demand for charging piles. Charging piles are generally regarded as purely practical products that only need to perform the charging function. However, as with any product, the shape design of a charging pile directly affects the user experience and, in turn, product sales. Therefore, in the face of increasingly fierce market competition, the shape of charging piles should be evaluated not only with traditional evaluation methods but also against human physiological cognitive characteristics, so that the evaluation is more objective. From the user's point of view, using the user's electroencephalogram (EEG) and the multifractal detrended fluctuation analysis (MF-DFA) method, this paper comprehensively analyzes the differences in emotional cognitive characteristics between two kinds of charging piles: one with a curved appearance design and one with a square appearance design. The results show significant differences in human physiological cognitive characteristics between the two shapes, with different shapes eliciting different physiological cognitive responses in users. When designing charging pile product shapes, designers can therefore evaluate the shape design objectively according to the physiological cognitive differences of users, so as to optimize the product's shape design.
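For reference, a compact MF-DFA implementation in the standard formulation (profile, segment-wise linear detrending, q-th order fluctuation functions, generalized Hurst exponents h(q)); the scales, q values, and the stand-in white-noise "EEG" are assumptions, not the paper's settings.

```python
import numpy as np

def mfdfa(x, scales, qs):
    """Minimal MF-DFA: returns generalized Hurst exponents h(q)."""
    profile = np.cumsum(x - np.mean(x))
    hq = []
    for q in qs:
        F = []
        for s in scales:
            n = len(profile) // s
            segs = profile[: n * s].reshape(n, s)
            t = np.arange(s)
            # variance of residuals around a linear fit in each segment
            f2 = np.asarray([np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                             for seg in segs])
            if q == 0:
                F.append(np.exp(0.5 * np.mean(np.log(f2))))
            else:
                F.append(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
        hq.append(np.polyfit(np.log(scales), np.log(F), 1)[0])  # slope = h(q)
    return np.array(hq)

rng = np.random.default_rng(2)
eeg = rng.standard_normal(4096)                    # stand-in EEG channel
h = mfdfa(eeg, scales=[16, 32, 64, 128, 256], qs=[-3, -1, 0, 1, 3])
print("h(q):", np.round(h, 3))  # a spread of h(q) values indicates multifractality
```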
Affiliation(s)
- Yusheng Zhang, Yaoyuan Kang, Pan Li, Hanqing He: Electric Power Research Institute of State Grid Shaanxi Electric Power Company, Xi'an 710003, China
- Xin Guo: State Grid Electric Auto Service Co., Ltd, Xi'an 710003, China
7. Wang D, Lian J, Cheng H, Zhou Y. Music-evoked emotions classification using vision transformer in EEG signals. Front Psychol 2024; 15:1275142. PMID: 38638516; PMCID: PMC11024288; DOI: 10.3389/fpsyg.2024.1275142.
Abstract
Introduction: The field of electroencephalogram (EEG)-based emotion identification has received significant attention and has been widely utilized in both human-computer interaction and therapeutic settings. Manually analyzing EEG signals requires a significant investment of time and effort, and while machine learning methods have shown promising results in classifying emotions from EEG data, extracting distinct characteristics from these signals still poses a considerable difficulty.
Methods: In this study, we provide a deep learning model that incorporates an attention mechanism to effectively extract spatial and temporal information from emotional EEG recordings, addressing an existing gap in the field. Emotion EEG classification is implemented with a global average pooling layer and a fully connected layer, which leverage the discernible characteristics.
Experiments: To assess the effectiveness of the suggested methodology, we first gathered a dataset of EEG recordings related to music-induced emotions and ran comparative tests between state-of-the-art algorithms and the proposed method on this proprietary dataset; a publicly accessible dataset was included in subsequent comparative trials.
Discussion: The experimental findings provide evidence that the suggested methodology outperforms existing approaches in the categorization of emotional EEG signals, in both binary (positive and negative) and ternary (positive, negative, and neutral) scenarios.
Affiliation(s)
- Dong Wang: School of Information Science and Electrical Engineering, Shandong Jiaotong University, Jinan, China; School of Intelligence Engineering, Shandong Management University, Jinan, China
- Jian Lian, Hebin Cheng: School of Intelligence Engineering, Shandong Management University, Jinan, China
- Yanan Zhou: School of Arts, Beijing Foreign Studies University, Beijing, China
8. Lee JP, Jang H, Jang Y, Song H, Lee S, Lee PS, Kim J. Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface. Nat Commun 2024; 15:530. PMID: 38225246; PMCID: PMC10789773; DOI: 10.1038/s41467-023-44673-2.
Abstract
Human affects such as emotions, moods, and feelings are increasingly being considered key parameters for enhancing the interaction of humans with diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information. Here, we develop a multi-modal human emotion recognition system that can efficiently utilize comprehensive emotional information by combining verbal and non-verbal expression data. The system is built around a personalized skin-integrated facial interface (PSiFI) that is self-powered, facile, stretchable, and transparent, featuring a first-of-its-kind bidirectional triboelectric strain and vibration sensor that enables verbal and non-verbal expression data to be sensed and combined. It is fully integrated with a data processing circuit for wireless data transfer, allowing real-time emotion recognition. With the help of machine learning, various human emotion recognition tasks are performed accurately in real time, even while the user wears a mask, and a digital concierge application is demonstrated in a VR environment.
Affiliation(s)
- Jin Pyo Lee: School of Material Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea; School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
- Hanhyeok Jang, Yeonwoo Jang, Hyeonseo Song, Suwoo Lee: School of Material Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
- Pooi See Lee: School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
- Jiyun Kim: School of Material Science and Engineering, and Center for Multidimensional Programmable Matter, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
9. Bazargani M, Tahmasebi A, Yazdchi M, Baharlouei Z. An Emotion Recognition Embedded System using a Lightweight Deep Learning Model. J Med Signals Sens 2023; 13:272-279. PMID: 37809016; PMCID: PMC10559299; DOI: 10.4103/jmss.jmss_59_22.
Abstract
Background: Diagnosing emotional states would make human-computer interaction (HCI) systems more effective in practice. Correlations between electroencephalography (EEG) signals and emotions have been shown in various studies; EEG signal-based methods are therefore among the most accurate and informative.
Methods: In this study, three convolutional neural network (CNN) models appropriate for processing EEG signals, EEGNet, ShallowConvNet, and DeepConvNet, are applied to diagnose emotions. We use baseline-removal preprocessing to improve classification accuracy. Each network is assessed in two settings: subject-dependent and subject-independent. We adapt the selected CNN model to be lightweight and implementable on a Raspberry Pi processor. Emotional states are recognized for every three-second epoch of the received signals on the embedded system, enabling real-time use in practice.
Results: Average classification accuracies of 99.10% (valence) and 99.20% (arousal) in the subject-dependent setting, and 90.76% (valence) and 90.94% (arousal) in the subject-independent setting, were achieved on the well-known DEAP dataset.
Conclusion: Comparison of the results with related works shows that a highly accurate and implementable model has been achieved for practical use.
Affiliation(s)
- Mehdi Bazargani, Amir Tahmasebi, Mohammadreza Yazdchi: Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Zahra Baharlouei: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
10. Jeong DK, Kim HG, Kim JY. Emotion Recognition Using Hierarchical Spatiotemporal Electroencephalogram Information from Local to Global Brain Regions. Bioengineering (Basel) 2023; 10:1040. PMID: 37760143; PMCID: PMC10525488; DOI: 10.3390/bioengineering10091040.
Abstract
To understand human emotional states, both local activity in various regions of the cerebral cortex and the interactions among different brain regions must be considered. This paper proposes a hierarchical emotional context feature learning model that improves multichannel electroencephalography (EEG)-based emotion recognition by learning spatiotemporal EEG features from the local brain-region level up to the global brain level. The proposed method comprises a regional brain-level encoding module, a global brain-level encoding module, and a classifier. First, multichannel EEG signals grouped into nine regions based on the functional role of the brain are input into the regional brain-level encoding module to learn local spatiotemporal information. Subsequently, the global brain-level encoding module improves emotional classification performance by integrating the local spatiotemporal information from the various brain regions to learn global context features of brain regions related to emotions. We apply a two-layer bidirectional gated recurrent unit (BGRU) with self-attention in the regional brain-level module and a one-layer BGRU with self-attention in the global brain-level module. Experiments on three datasets show that the proposed method achieves superior performance, reflecting the characteristics of multichannel EEG signals better than state-of-the-art methods.
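A minimal sketch of one regional encoder of the kind described, a bidirectional GRU over time followed by additive self-attention pooling, written in PyTorch; the layer sizes, region/channel counts, and pooling choice are assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class RegionEncoder(nn.Module):
    """Bidirectional GRU over time followed by additive self-attention pooling."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.bgru = nn.GRU(in_dim, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, x):            # x: (batch, time, channels_in_region)
        h, _ = self.bgru(x)          # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        return (w * h).sum(dim=1)    # (batch, 2*hidden) region embedding

# Hypothetical shapes: 9 brain regions, each with 4 channels and 128 time steps.
regions = [torch.randn(8, 128, 4) for _ in range(9)]
enc = RegionEncoder(in_dim=4)
global_ctx = torch.stack([enc(r) for r in regions], dim=1)  # (8, 9, 64)
print(global_ctx.shape)  # a global-level module would then encode this sequence
```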
Affiliation(s)
- Dong-Ki Jeong, Hyoung-Gook Kim: Department of Electronic Convergence Engineering, Kwangwoon University, 20 Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea
- Jin-Young Kim: Department of ICT Convergence System Engineering, Chonnam National University, 77 Yongbong-ro, Buk-gu, Gwangju 61186, Republic of Korea
11. Nandini D, Yadav J, Rani A, Singh V. Design of subject independent 3D VAD emotion detection system using EEG signals and machine learning algorithms. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104894.
12. Collins ML, Davies TC. Emotion differentiation through features of eye-tracking and pupil diameter for monitoring well-being. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083457; DOI: 10.1109/embc40787.2023.10340178.
Abstract
Emotions are an important contributor to human self-expression and well-being. However, many populations express their emotions differently from what is considered "typical". Previous literature has indicated a possible relationship between emotion and eye movement. The objective of this paper is to further explore this proposed relationship by identifying specific features of eye movement that relate to six emotion categories: joy, surprise, indifference, disgust, sadness, and fear. Features of eye movement are extracted from measurements of pupil diameter, saccades, and fixations. These measurements are collected as participants view images from the International Affective Picture System, a validated image deck used to evoke known levels of pleasure, arousal, and dominance. Example features of eye-movement measurements such as pupil diameter include maximum and minimum values, means, and standard deviations. Statistical analyses indicate that the extracted eye-tracking features can identify fear and sadness with relative accuracy, while more work is needed to differentiate among joy, indifference, disgust, and surprise. Future work aims to understand differences between typically developing populations, such as the individuals included in this analysis, and clinical populations, such as individuals with cerebral palsy.
Clinical relevance: This pilot study suggests a link between emotion and features of eye movement. The information will later be used to develop assistive communication devices that better meet the self-expression needs of individuals with motor and communication challenges.
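The feature extraction described here reduces to simple summary statistics per trial; a minimal sketch follows, where the 60 Hz pupil-diameter trace is a hypothetical stand-in for eye-tracker output.

```python
import numpy as np

def pupil_features(diameter):
    """Summary features of a pupil-diameter trace for one image viewing."""
    d = np.asarray(diameter, dtype=float)
    return {
        "max": d.max(),
        "min": d.min(),
        "mean": d.mean(),
        "std": d.std(ddof=1),
        "range": d.max() - d.min(),
    }

rng = np.random.default_rng(3)
trace = 3.5 + 0.2 * rng.standard_normal(600)  # hypothetical 60 Hz, 10 s trace (mm)
print(pupil_features(trace))
```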
13. Zong J, Xiong X, Zhou J, Ji Y, Zhou D, Zhang Q. FCAN-XGBoost: A Novel Hybrid Model for EEG Emotion Recognition. Sensors (Basel) 2023; 23:5680. PMID: 37420845; DOI: 10.3390/s23125680.
Abstract
In recent years, artificial intelligence (AI) technology has promoted the development of electroencephalogram (EEG) emotion recognition. However, existing methods often overlook the computational cost of EEG emotion recognition, and there is still room for improvement in accuracy. In this study, we propose a novel EEG emotion recognition algorithm called FCAN-XGBoost, a fusion of two components: FCAN and XGBoost. The FCAN module is a newly proposed feature attention network (FANet) that processes the differential entropy (DE) and power spectral density (PSD) features extracted from four frequency bands of the EEG signal and performs feature fusion and deep feature extraction. The deep features are then fed into the eXtreme Gradient Boosting (XGBoost) algorithm to classify four emotions. We evaluated the proposed method on the DEAP and DREAMER datasets and achieved four-category emotion recognition accuracies of 95.26% and 94.05%, respectively. Additionally, our method reduces the computational cost of EEG emotion recognition by at least 75.45% in computation time and 67.51% in memory occupation. FCAN-XGBoost outperforms state-of-the-art four-category models and reduces computational costs without sacrificing classification performance.
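A hedged sketch of the DE + PSD feature stage feeding a gradient-boosted classifier: differential entropy is computed under the usual Gaussian assumption, DE = 0.5·log(2πeσ²), per band; the xgboost package is assumed to be installed, and the band edges, sampling rate, and synthetic data are illustrative. The FCAN module itself is not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def de_psd_features(eeg, fs=128):
    """Per-band differential entropy (Gaussian assumption) and mean PSD."""
    f, pxx = welch(eeg, fs=fs, nperseg=fs * 2)
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        x = filtfilt(b, a, eeg)                                   # band-limit
        feats.append(0.5 * np.log(2 * np.pi * np.e * np.var(x)))  # DE
        feats.append(pxx[(f >= lo) & (f < hi)].mean())            # mean PSD
    return feats

rng = np.random.default_rng(4)
X = np.array([de_psd_features(rng.standard_normal(8 * 128)) for _ in range(200)])
y = rng.integers(0, 4, size=200)  # four hypothetical emotion classes

# xgboost is a third-party dependency; any gradient-boosting classifier would do.
from xgboost import XGBClassifier
clf = XGBClassifier(n_estimators=100).fit(X, y)
print("train accuracy:", (clf.predict(X) == y).mean())
```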
Affiliation(s)
- Jing Zong, Xin Xiong, Jianhua Zhou, Diao Zhou, Qi Zhang: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Ying Ji: Graduate School, Kunming Medical University, Kunming 650500, China
14. Alam A, Urooj S, Ansari AQ. Design and Development of a Non-Contact ECG-Based Human Emotion Recognition System Using SVM and RF Classifiers. Diagnostics (Basel) 2023; 13:2097. PMID: 37370991; DOI: 10.3390/diagnostics13122097.
Abstract
Emotion recognition has become an important aspect of the development of human-machine interaction (HMI) systems. Positive emotions impact our lives positively, whereas negative emotions may reduce productivity. Emotionally intelligent systems such as chatbots and artificially intelligent assistant modules help make our daily routines effortless. Moreover, a system capable of assessing the human emotional state would be very helpful for assessing a person's mental state, so that preventive care could be offered before problems develop into mental illness or depression. Researchers have long been curious whether a machine could assess human emotions precisely. In this work, a unimodal emotion classification system based on a single physiological signal, the electrocardiogram (ECG), is proposed. The ECG signal was acquired using a capacitive-sensor-based non-contact ECG belt system. The machine-learning-based classifiers developed in this work are SVM and random forest with 10-fold cross-validation, applied to three different sets of ECG data acquired from 45 subjects (15 subjects in each age group). The minimum classification accuracies achieved with the SVM and RF emotion classifier models are 86.6% and 98.2%, respectively.
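The evaluation protocol named in the abstract, SVM and random forest with 10-fold cross-validation, can be sketched directly with scikit-learn; the feature matrix and labels below are synthetic placeholders, not the paper's ECG features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: HRV-style features per ECG recording,
# with one emotion label per recording.
rng = np.random.default_rng(5)
X = rng.standard_normal((135, 12))   # e.g. 45 subjects x 3 recordings
y = rng.integers(0, 3, size=135)     # three hypothetical emotion classes

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("RF", RandomForestClassifier(n_estimators=200))]:
    acc = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```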
Affiliation(s)
- Aftab Alam: Department of Electrical Engineering, Jamia Millia Islamia, Delhi 110025, India
- Shabana Urooj: Department of Electrical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
15. Xie X, Cai J, Fang H, Wang B, He H, Zhou Y, Xiao Y, Yamanaka T, Li X. Affective Impressions Recognition under Different Colored Lights Based on Physiological Signals and Subjective Evaluation Method. Sensors (Basel) 2023; 23:5322. PMID: 37300049; DOI: 10.3390/s23115322.
Abstract
The design of the light environment plays a critical role in the interaction between people and visual objects in a space, and adjusting a space's lighting to regulate emotional experience is of practical value to observers. Although lighting plays a vital role in spatial design, the effects of colored lights on individuals' emotional experiences remain unclear. This study combined physiological signal measurements (galvanic skin response (GSR) and electrocardiography (ECG)) with subjective assessments to detect changes in the mood states of observers under four lighting conditions (green, blue, red, and yellow). At the same time, two sets of images, abstract and realistic, were designed to examine the relationship between light and visual objects and their influence on individuals' impressions. The results showed that different light colors significantly affected mood, with red light producing the strongest emotional arousal, followed by blue and green. In addition, GSR and ECG measurements correlated significantly with the subjective impression evaluations of interest, comprehension, imagination, and feelings. This study thus demonstrates the feasibility of combining GSR and ECG measurements with subjective evaluations as an experimental method for studying light, mood, and impressions, providing empirical evidence for regulating individuals' emotional experiences.
Affiliation(s)
- Xing Xie, Hai Fang: School of Art and Design, Guangdong University of Technology, Guangzhou 510000, China
- Jun Cai: School of Art and Design, Guangdong University of Technology, Guangzhou 510000, China; Academy of Arts and Design, Tsinghua University, Beijing 100086, China
- Beibei Wang, Huan He, Yuanzhi Zhou, Xinming Li: Guangdong Provincial Key Laboratory of Nanophotonic Functional Materials and Devices, School of Information and Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510006, China
- Yang Xiao: School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China
16. Fan T, Qiu S, Wang Z, Zhao H, Jiang J, Wang Y, Xu J, Sun T, Jiang N. A new deep convolutional neural network incorporating attentional mechanisms for ECG emotion recognition. Comput Biol Med 2023; 159:106938. PMID: 37119553; DOI: 10.1016/j.compbiomed.2023.106938.
Abstract
Using ECG signals captured by wearable devices for emotion recognition is a feasible solution. We propose a deep convolutional neural network incorporating attentional mechanisms for ECG emotion recognition. To address the problem of individual differences in emotion recognition tasks, we incorporate an improved Convolutional Block Attention Module (CBAM) into the proposed network. The deep convolutional network captures ECG features; channel attention in the CBAM weights the ECG features of different channels, and spatial attention weights the features of different regions within each channel. We used three publicly available datasets, WESAD, DREAMER, and ASCERTAIN, for the ECG emotion recognition task, setting new state-of-the-art results on all three, including tri-class results on WESAD and two-category results on ASCERTAIN. Extensive experiments provide an analysis of the design of the convolutional structure parameters and the role of the attention mechanism. We propose using large convolutional kernels to improve the model's effective receptive field and thus fully capture ECG signal features, which achieves better performance than the commonly used small kernels. In addition, channel attention and spatial attention were added to the deep convolutional model separately to explore their respective contributions; we found that, in most cases, channel attention contributed more to the model than spatial attention.
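For orientation, a minimal 1-D CBAM block in PyTorch in its standard form (channel attention from pooled descriptors through a shared MLP, then spatial attention over per-position channel statistics); the paper's improved CBAM variant is not reproduced, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class CBAM1d(nn.Module):
    """Channel attention followed by spatial attention for 1-D feature maps."""
    def __init__(self, channels, reduction=8, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (batch, channels, length)
        # Channel attention: pooled descriptors -> shared MLP -> sigmoid gate.
        avg = self.mlp(x.mean(dim=2))
        mx = self.mlp(x.amax(dim=2))
        x = x * torch.sigmoid(avg + mx).unsqueeze(2)
        # Spatial attention: per-position statistics across channels.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(4, 64, 250)   # hypothetical ECG feature map
print(CBAM1d(64)(feat).shape)    # torch.Size([4, 64, 250])
```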
Affiliation(s)
- Tianqi Fan, Sen Qiu, Zhelong Wang, Hongyu Zhao: Key Laboratory of Intelligent Control and Optimization for Industrial Equipment of Ministry of Education, Dalian University of Technology, Dalian, China
- Junhan Jiang: First Affiliated Hospital of China Medical University, Shenyang, China
- Junnan Xu, Tao Sun: Department of Medical Oncology, Cancer Hospital of Dalian University of Technology, Shenyang, China
- Nan Jiang: College of Information Engineering, East China Jiaotong University, Nanchang, China
17. Sun C, Li H, Ma L. Speech emotion recognition based on improved masking EMD and convolutional recurrent neural network. Front Psychol 2023; 13:1075624. PMID: 36698559; PMCID: PMC9869168; DOI: 10.3389/fpsyg.2022.1075624.
Abstract
Speech emotion recognition (SER) is the key to human-computer emotion interaction. However, the nonlinear characteristics of speech emotion are variable, complex, and subtly changing, so accurate recognition of emotions from speech remains a challenge. Empirical mode decomposition (EMD), an effective decomposition method for nonlinear non-stationary signals, has been successfully used to analyze emotional speech signals, but its mode-mixing problem affects the performance of EMD-based methods for SER. Various improved versions of EMD have been proposed to alleviate mode mixing, yet they still suffer from mode mixing, residual noise, and long computation times, and their main parameters cannot be set adaptively. To overcome these problems, we propose a novel SER framework, named IMEMD-CRNN, based on the combination of an improved version of the masking-signal-based EMD (IMEMD) and a convolutional recurrent neural network (CRNN). First, IMEMD is proposed to decompose speech. IMEMD is a novel disturbance-assisted EMD method that can determine the parameters of the masking signals according to the nature of the signals. Second, we extract 43-dimensional time-frequency features that characterize emotion from the intrinsic mode functions (IMFs) obtained by IMEMD. Finally, we input these features into a CRNN to recognize emotions. In the CRNN, 2D convolutional neural network (CNN) layers capture nonlinear local temporal and frequency information of the emotional speech, and bidirectional gated recurrent unit (BiGRU) layers further learn temporal context. Experiments on the publicly available TESS and Emo-DB datasets demonstrate the effectiveness of the proposed framework. The TESS dataset consists of 2,800 utterances covering seven emotions recorded by two native English speakers; the Emo-DB dataset consists of 535 utterances covering seven emotions recorded by ten native German speakers. The proposed IMEMD-CRNN framework achieves a state-of-the-art overall accuracy of 100% on TESS and 93.54% on Emo-DB over seven emotions. IMEMD alleviates mode mixing and yields IMFs with less noise and more physical meaning at significantly improved efficiency, and the IMEMD-CRNN framework significantly improves emotion recognition performance.
18. Ji Y, Dong SY. Deep learning-based self-induced emotion recognition using EEG. Front Neurosci 2022; 16:985709. PMID: 36188460; PMCID: PMC9523358; DOI: 10.3389/fnins.2022.985709.
Abstract
Emotion recognition from electroencephalogram (EEG) signals requires accurate and efficient signal processing and feature extraction. Deep learning technology has enabled the automatic extraction of raw EEG signal features that contribute to classifying emotions more accurately. Despite such advances, the classification of emotions from EEG signals, especially those recorded while recalling specific memories or imagining emotional situations, has not yet been investigated. In addition, high-density EEG signal classification with deep neural networks faces challenges such as high computational complexity, redundant channels, and low accuracy. To address these problems, we evaluate the effects of a simple channel selection method for classifying self-induced emotions based on deep learning. The experiments demonstrate that selecting key channels based on signal statistics can reduce the computational complexity by 89% without decreasing the classification accuracy. The channel selection method with the highest accuracy was the kurtosis-based method, which achieved accuracies of 79.03% and 79.36% for the valence and arousal scales, respectively. The proposed framework outperforms conventional methods even though it uses fewer channels, and can be beneficial for the effective use of EEG signals in practical applications.
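The kurtosis-based channel selection evaluated here can be sketched in a few lines; the channel count, segment length, and number of retained channels below are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def select_channels(eeg, k=8):
    """Rank EEG channels by the kurtosis of their signals and keep the top k.
    eeg: array of shape (channels, samples)."""
    scores = kurtosis(eeg, axis=1)          # one statistic per channel
    keep = np.argsort(scores)[::-1][:k]     # highest-kurtosis channels
    return np.sort(keep)

rng = np.random.default_rng(6)
eeg = rng.standard_normal((64, 2048))       # hypothetical 64-channel recording
print("selected channels:", select_channels(eeg, k=8))
```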
19. Kim S, Kim TS, Lee WH. Accelerating 3D Convolutional Neural Network with Channel Bottleneck Module for EEG-Based Emotion Recognition. Sensors (Basel) 2022; 22:6813. PMID: 36146160; PMCID: PMC9500982; DOI: 10.3390/s22186813.
Abstract
Deep learning-based emotion recognition using EEG has received increasing attention in recent years. Existing studies on emotion recognition show great variability in their methods, including the choice of deep learning approach and the type of input features. Although deep learning models for EEG-based emotion recognition can deliver superior accuracy, this comes at the cost of high computational complexity. Here, we propose a novel 3D convolutional neural network with a channel bottleneck module (CNN-BN) for EEG-based emotion recognition, with the aim of accelerating CNN computation without a significant loss in classification accuracy. To this end, we constructed a 3D spatiotemporal representation of EEG signals as the input of our model. The CNN-BN model extracts spatiotemporal EEG features that effectively utilize the spatial and temporal information in EEG. We evaluated its performance in the valence and arousal classification tasks. Our CNN-BN model achieved average accuracies of 99.1% and 99.5% for valence and arousal, respectively, on the DEAP dataset, while reducing the number of parameters by 93.08% and FLOPs by 94.94%. The CNN-BN model with fewer parameters, based on the 3D EEG spatiotemporal representation, outperforms the state-of-the-art models, and its improved parameter efficiency gives it excellent potential for accelerating CNN-based emotion recognition without losing classification performance.
Affiliation(s)
- Sungkyu Kim, Won Hee Lee: Department of Software Convergence, Kyung Hee University, Yongin 17104, Korea
- Tae-Seong Kim: Department of Biomedical Engineering, Kyung Hee University, Yongin 17104, Korea
20. Koorathota S, Khan Z, Lapborisuth P, Sajda P. Multimodal Neurophysiological Transformer for Emotion Recognition. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3563-3567. PMID: 36086657; DOI: 10.1109/embc48229.2022.9871421.
Abstract
Understanding neural function often requires multiple modalities of data, including electrophysiological data, imaging techniques, and demographic surveys. In this paper, we introduce a novel neurophysiological model to tackle major challenges in modeling multimodal data. First, we avoid non-alignment issues between raw signals and extracted frequency-domain features by addressing the issue of variable sampling rates. Second, we encode modalities through "cross-attention" with the other modalities. Lastly, we utilize properties of our parent transformer architecture to model long-range dependencies between segments across modalities and assess intermediary weights to better understand how source signals affect prediction. We apply our Multimodal Neurophysiological Transformer (MNT) to predict valence and arousal in an existing open-source dataset. Experiments on non-aligned multimodal time series show that our model performs similarly to, and in some cases outperforms, existing methods in classification tasks. In addition, qualitative analysis suggests that MNT is able to model neural influences on autonomic activity when predicting arousal. Our architecture has the potential to be fine-tuned for a variety of downstream tasks, including BCI systems.
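The "cross-attention" encoding between modalities can be illustrated with PyTorch's built-in multi-head attention, where queries come from one modality and keys/values from another; the embedding size, segment counts, and modality pairing below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# One modality (e.g., EEG segments) attends to another (e.g., heart-rate
# segments) after each is projected to a shared embedding size.
embed, heads = 64, 4
cross_attn = nn.MultiheadAttention(embed, heads, batch_first=True)

eeg = torch.randn(2, 50, embed)    # (batch, eeg_segments, embed)
hr = torch.randn(2, 20, embed)     # (batch, hr_segments, embed)

# Query comes from one modality; keys and values come from the other.
fused, weights = cross_attn(query=eeg, key=hr, value=hr)
print(fused.shape, weights.shape)  # (2, 50, 64) and (2, 50, 20)
```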
21. Vowles CJ, Van Engelen SN, Noyek SE, Fayed N, Davies TC. The Use of Conductive Lycra Fabric in the Prototype Design of a Wearable Device to Monitor Physiological Signals. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:922-925. PMID: 36085829; DOI: 10.1109/embc48229.2022.9871042.
Abstract
Wearable technology has become commonplace for measuring heart rate, counting steps, and monitoring exercise regimes. However, wearables can also be used to enable or enhance the lives of persons living with disabilities. This paper discusses the design of a wearable device that aims to facilitate the assessment of physiological signals using conductive Lycra fabric. The device is intended for daily use in diverse contexts, including the evaluation of the emotional experiences of children with severe motor and communication impairment and the detection of obstructive sleep apnea in children with Down syndrome. The Lycra fabric sensors acquire electrocardiographic signals, galvanic skin response, and respiratory signals. Articulated design requirements include constraints related to fitting children of all sizes and meeting medical device standards and biocompatibility, as well as criteria related to low cost, comfort, and maintainability. Upon prototyping and preliminary testing, the device was found to offer an affordable, comfortable, and accessible solution for monitoring physiological signals.
Clinical relevance: This research provides initial knowledge and momentum towards an affordable wearable device using conductive Lycra to effectively monitor and assess physiological signals in children with disabilities.
22. A Recognition Method of Athletes' Mental State in Sports Training Based on Support Vector Machine Model. J Electr Comput Eng 2022. DOI: 10.1155/2022/1566664.
Abstract
Athletes participate in competitive events with the ultimate goal of displaying their personal competitive level and defeating their opponents. In most types of competition, decisive moments are instantaneous and opportunities are fleeting; this instantaneous nature and the fierceness of competition require athletes to have high psychological quality. The quality of an athlete's mental state therefore directly determines performance in training and competition. If real-time changes in athletes' mental states can be obtained during exercise, more targeted and effective training or competition strategies can be formulated according to those states; likewise, analyzing an opponent's psychological state during exercise allows game strategy to be adjusted in real time, improving the probability of winning. Against this background, this paper proposes to use a support vector machine (SVM) to identify the mental state of athletes during exercise. We first collect data on athletes' body movements and facial expressions during training or competition, train an SVM model on this multimodal data, and output the athletes' emotional states at different stages for the test data. To verify the applicability of the method to athlete subjects, several comparative models were used in the experiments. The experimental results show that the accuracy of emotion recognition obtained by this method is above 80%, indicating that the research has practical application value.
23. Goshvarpour A, Goshvarpour A. Innovative Poincaré's plot asymmetry descriptors for EEG emotion recognition. Cogn Neurodyn 2022; 16:545-559. PMID: 35603058; PMCID: PMC9120274; DOI: 10.1007/s11571-021-09735-5.
Abstract
Given the importance of emotion recognition in both medical and non-medical applications, designing an automatic system has captured the attention of several scholars. EEG-based emotion recognition currently holds a special position, but has not yet reached the desired accuracy rates. This experiment aimed to provide novel EEG asymmetry measures to improve emotion recognition rates. Four emotional states were classified using the k-nearest neighbor (kNN), support vector machine, and naïve Bayes classifiers. Feature selection was performed, and the effect of employing different numbers of top-ranked features on emotion recognition rates was assessed. To validate the efficiency of the proposed scheme, two public databases were evaluated: the SJTU Emotion EEG Dataset-IV (SEED-IV) and the Database for Emotion Analysis using Physiological signals (DEAP). The experimental results indicated that kNN outperformed the other classifiers, with maximum accuracies of 95.49% and 98.63% on the SEED-IV and DEAP datasets, respectively. In conclusion, the proposed novel EEG asymmetry measures make the framework superior to state-of-the-art EEG emotion recognition approaches.
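As background, the classical Poincaré plot descriptors SD1 and SD2 (spread across and along the identity line of the lag-one plot) can be computed as below; the paper's novel asymmetry descriptors extend beyond these and are not reproduced, and the stand-in signal is synthetic.

```python
import numpy as np

def poincare_sd1_sd2(x):
    """SD1/SD2 of the Poincaré plot of a series (x[n] vs x[n+1])."""
    a, b = np.asarray(x[:-1], float), np.asarray(x[1:], float)
    sd1 = np.std((b - a) / np.sqrt(2), ddof=1)  # spread across the identity line
    sd2 = np.std((b + a) / np.sqrt(2), ddof=1)  # spread along the identity line
    return sd1, sd2

rng = np.random.default_rng(7)
eeg = rng.standard_normal(1024)                 # stand-in EEG channel
sd1, sd2 = poincare_sd1_sd2(eeg)
print(f"SD1={sd1:.3f}, SD2={sd2:.3f}, ratio={sd1/sd2:.3f}")
```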
Affiliation(s)
- Atefeh Goshvarpour: Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour: Department of Biomedical Engineering, Imam Reza International University, Rezvan Campus, Phalestine Sq., Mashhad, Razavi Khorasan, Iran
24
|
A new data augmentation convolutional neural network for human emotion recognition based on ECG signals. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103580] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
25
Branco LRF, Ehteshami A, Azgomi HF, Faghih RT. Closed-Loop Tracking and Regulation of Emotional Valence State From Facial Electromyogram Measurements. Front Comput Neurosci 2022; 16:747735. [PMID: 35399915 PMCID: PMC8990324 DOI: 10.3389/fncom.2022.747735] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Accepted: 02/21/2022] [Indexed: 11/25/2022] Open
Abstract
Affective studies provide essential insights to address emotion recognition and tracking. In traditional open-loop structures, a lack of knowledge about the internal emotional state makes the system incapable of adjusting stimuli parameters and automatically responding to changes in the brain. To address this issue, we propose to use facial electromyogram measurements as biomarkers to infer the internal hidden brain state as feedback to close the loop. In this research, we develop a systematic way to track and control emotional valence, which codes emotions as being pleasant or obstructive. Hence, we conduct a simulation study by modeling and tracking the subject's emotional valence dynamics using state-space approaches. We employ Bayesian filtering to estimate the person-specific model parameters along with the hidden valence state, using continuous and binary features extracted from experimental electromyogram measurements. Moreover, we utilize a mixed-filter estimator to infer the secluded brain state in a real-time simulation environment. We close the loop with a fuzzy logic controller in two categories of regulation: inhibition and excitation. By designing a control action, we aim to automatically reflect any required adjustments within the simulation and reach the desired emotional state levels. Final results demonstrate that, by making use of physiological data, the proposed controller could effectively regulate the estimated valence state. Ultimately, we envision future outcomes of this research to support alternative forms of self-therapy by using wearable machine interface architectures capable of mitigating periods of pervasive emotions and maintaining daily well-being and welfare.
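The paper's mixed continuous/binary Bayesian filter is considerably richer, but the core loop (recursively estimating a hidden valence state from a noisy feature stream) can be illustrated with a scalar linear-Gaussian filter; the random-walk model and noise variances below are assumptions made for the sketch:

```python
import numpy as np

def track_valence(z, q=1e-3, r=0.1):
    """Minimal linear-Gaussian state-space tracker, a stand-in for the
    paper's mixed continuous/binary Bayesian filter.

    State model:  x[k] = x[k-1] + w,  w ~ N(0, q)   (random-walk valence)
    Observation:  z[k] = x[k] + v,    v ~ N(0, r)   (continuous EMG feature)
    """
    x, p = 0.0, 1.0
    estimates = []
    for zk in z:
        p += q                      # predict: uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (zk - x)           # update with the new measurement
        p *= (1.0 - k)
        estimates.append(x)
    return np.asarray(estimates)

rng = np.random.default_rng(2)
true_valence = np.cumsum(rng.normal(scale=0.03, size=300))
observed = true_valence + rng.normal(scale=0.3, size=300)
est = track_valence(observed)
print("tracking RMSE:", np.sqrt(np.mean((est - true_valence) ** 2)))
```

A controller (fuzzy or otherwise) would then read the estimated state and adjust stimulus parameters to steer it toward a target level.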
Affiliation(s)
- Luciano R. F. Branco
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, United States
- Arian Ehteshami
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, United States
- Hamid Fekri Azgomi
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, United States
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
- Rose T. Faghih
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, United States
- Department of Biomedical Engineering, New York University, New York, NY, United States
26
Maithri M, Raghavendra U, Gudigar A, Samanth J, Murugappan M, Chakole Y, Acharya UR. Automated emotion recognition: Current trends and future perspectives. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106646. [PMID: 35093645 DOI: 10.1016/j.cmpb.2022.106646] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Revised: 12/25/2021] [Accepted: 01/16/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Human emotions greatly affect a person's actions. Automated emotion recognition has applications in multiple domains such as health care, e-learning, and surveillance. The development of computer-aided diagnosis (CAD) tools has enabled the automated recognition of human emotions. OBJECTIVE This review provides an insight into the various methods employed using electroencephalogram (EEG), facial, and speech signals, coupled with multi-modal emotion recognition techniques. In this work, we have reviewed most of the state-of-the-art papers published on this topic. METHOD This study considered the various emotion recognition (ER) models proposed between 2016 and 2021. The papers were analysed based on the methods employed, the classifiers used, and the performance obtained. RESULTS There is a significant rise in the application of deep learning techniques for ER. They have been widely applied to EEG, speech, facial expression, and multimodal features to develop accurate ER models. CONCLUSION Our study reveals that most of the proposed machine and deep learning-based systems yield good performance for automated ER in controlled environments. However, achieving high ER performance in uncontrolled environments remains an open challenge.
Affiliation(s)
- M Maithri
- Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Murugappan Murugappan
- Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, 13133, Kuwait
- Yashas Chakole
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Clementi 599489, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
27
Ma F, Li Y, Ni S, Huang SL, Zhang L. Data Augmentation for Audio-Visual Emotion Recognition with an Efficient Multimodal Conditional GAN. APPLIED SCIENCES 2022; 12:527. [DOI: 10.3390/app12010527] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Audio-visual emotion recognition is the task of identifying human emotional states by combining the audio and visual modalities simultaneously, and it plays an important role in intelligent human-machine interaction. With the help of deep learning, previous works have made great progress in audio-visual emotion recognition. However, these deep learning methods often require a large amount of data for training. In reality, data acquisition is difficult and expensive, especially for multimodal data with different modalities. As a result, the available training data may lie in the low-data regime, where deep learning cannot be used effectively. In addition, class imbalance may occur in emotional data, which can further degrade the performance of audio-visual emotion recognition. To address these problems, we propose an efficient data augmentation framework by designing a multimodal conditional generative adversarial network (GAN) for audio-visual emotion recognition. Specifically, we design generators and discriminators for the audio and visual modalities, with the category information used as their shared input so that our GAN can generate fake data of different categories. In addition, the high dependence between the audio and visual modalities in the generated multimodal data is modeled based on Hirschfeld-Gebelein-Rényi (HGR) maximal correlation; in this way, the modalities in the generated data are related so as to approximate the real data. The generated data are then used to augment our data manifold, and we further apply our approach to deal with the problem of class imbalance. To the best of our knowledge, this is the first work to propose a data augmentation strategy with a multimodal conditional GAN for audio-visual emotion recognition. We conduct a series of experiments on three public multimodal datasets, including eNTERFACE'05, RAVDESS, and CMEW. The results indicate that our multimodal conditional GAN is highly effective for data augmentation in audio-visual emotion recognition.
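As a hedged sketch of label-conditioned adversarial training for one modality (the paper couples an audio pair with a visual pair and adds an HGR maximal-correlation term, both omitted here), a minimal PyTorch skeleton might look like this; class counts, dimensions, and learning rates are illustrative:

```python
import torch
import torch.nn as nn

# Minimal label-conditioned generator/discriminator pair for one modality
# (audio-feature vectors). The paper trains two such pairs jointly and ties
# them with an HGR maximal-correlation objective, which this sketch omits.
N_CLASSES, LATENT, FEAT = 6, 64, 128

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, FEAT))
    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(FEAT + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))
    def forward(self, x, y):
        return self.net(torch.cat([x, self.embed(y)], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, FEAT)              # stand-in real feature vectors
y = torch.randint(0, N_CLASSES, (32,))    # emotion labels
z = torch.randn(32, LATENT)

# Discriminator step: real vs. generated, both conditioned on labels.
d_loss = bce(D(real, y), torch.ones(32, 1)) + \
         bce(D(G(z, y).detach(), y), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator under the same labels.
g_loss = bce(D(G(z, y), y), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```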
Affiliation(s)
- Fei Ma
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen 518055, China
- Yang Li
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen 518055, China
- Shiguang Ni
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Shao-Lun Huang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen 518055, China
- Lin Zhang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen 518055, China
28
A Distributed Ensemble Machine Learning Technique for Emotion Classification from Vocal Cues. BIG DATA ANALYTICS 2022. [DOI: 10.1007/978-3-031-24094-2_9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023] Open
29
Cai J, Xiao R, Cui W, Zhang S, Liu G. Application of Electroencephalography-Based Machine Learning in Emotion Recognition: A Review. Front Syst Neurosci 2021; 15:729707. [PMID: 34887732 PMCID: PMC8649925 DOI: 10.3389/fnsys.2021.729707] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Accepted: 11/08/2021] [Indexed: 11/13/2022] Open
Abstract
Emotion recognition has become increasingly prominent in the medical field and human-computer interaction. When people's emotions change under external stimuli, various physiological signals of the human body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to judge a subject's emotional changes through EEG signals. Meanwhile, machine learning algorithms, which excel at extracting data features from a statistical perspective and making judgments, have developed by leaps and bounds. Therefore, using machine learning to extract feature vectors related to emotional states from EEG signals and constructing a classifier to separate emotions into discrete states has broad development prospects. Following the progress of EEG-based machine learning algorithms for emotion recognition, this paper introduces the acquisition, preprocessing, feature extraction, and classification of EEG signals in sequence, and may help beginners in this area understand the current state of the field. The journals surveyed were all retrieved from the Web of Science platform, and the publication dates of most of the selected articles fall between 2016 and 2021.
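A toy version of the pipeline this review walks through (preprocessing, band-power feature extraction, classification) might look like the following; the sampling rate, band edges, and classifier choice are assumptions, not prescriptions from the review:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.linear_model import LogisticRegression

FS = 128  # assumed sampling rate (Hz)

def band_power(epoch, lo, hi):
    """Average PSD in [lo, hi) Hz via Welch's method."""
    f, pxx = welch(epoch, fs=FS, nperseg=FS * 2)
    return pxx[(f >= lo) & (f < hi)].mean()

def features(epoch):
    # Classic EEG rhythm bands often used in emotion studies.
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, gamma
    b, a = butter(4, [1, 45], btype="band", fs=FS)
    clean = filtfilt(b, a, epoch)                  # basic preprocessing
    return [band_power(clean, lo, hi) for lo, hi in bands]

rng = np.random.default_rng(3)
epochs = rng.normal(size=(120, FS * 4))            # 120 four-second epochs
X = np.array([features(e) for e in epochs])
y = rng.integers(0, 2, size=120)                   # binary emotion labels
print(LogisticRegression(max_iter=1000).fit(X, y).score(X, y))
```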
Affiliation(s)
- Jing Cai
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Ruolan Xiao
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Wenjie Cui
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Shang Zhang
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Guangda Liu
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
30
Goshvarpour A, Goshvarpour A. Human Emotion Recognition using Polar-Based Lagged Poincaré Plot Indices of Eye-Blinking Data. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE AND APPLICATIONS 2021. [DOI: 10.1142/s1469026821500231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Emotion recognition using bio-signals is currently a hot and challenging topic in human-computer interfaces, robotics, and affective computing. A broad range of literature has been published analyzing the internal and external behaviors of subjects confronting emotional events/stimuli. Eye movements, as an external behavior, are frequently used in multi-modal emotion recognition systems. On the other hand, classic statistical features of the signal have generally been assessed, while the evaluation of its dynamics has been neglected so far. For the first time, the dynamics of single-modal eye-blinking data are characterized here. Novel polar-based indices of the lagged Poincaré plots were introduced. The optimum lag was estimated using mutual information. After reconstruction of the plot, the polar measures of all points were characterized using statistical measures. A support vector machine (SVM), decision tree, and Naïve Bayes were implemented to complete the classification process. The highest accuracy of 100%, with an average accuracy of 84.17%, was achieved for fear/sad discrimination using the SVM. The suggested framework provided outstanding performance in terms of recognition rates, simplicity of methodology, and low computational cost. Our results also show that eye-blinking data possess potential for emotion recognition, especially in classifying fear.
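A minimal sketch of the two steps named here, choosing the lag by mutual information and then summarizing the lagged Poincaré cloud in polar coordinates, is shown below; the first-local-minimum heuristic and the specific polar statistics are illustrative, not the paper's exact descriptors:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def optimal_lag(x, max_lag=50):
    """Pick the first local minimum of mutual information between
    x[n] and x[n+lag], a common heuristic for Poincaré/embedding lags."""
    mi = []
    for lag in range(1, max_lag + 1):
        mi.append(mutual_info_regression(
            x[:-lag].reshape(-1, 1), x[lag:], random_state=0)[0])
    for i in range(1, len(mi) - 1):
        if mi[i] < mi[i - 1] and mi[i] < mi[i + 1]:
            return i + 1
    return int(np.argmin(mi)) + 1

def polar_indices(x, lag):
    """Polar summary of the lagged Poincaré cloud (illustrative only)."""
    a, b = x[:-lag], x[lag:]
    r = np.hypot(a, b)              # radius of each plot point
    theta = np.arctan2(b, a)        # angle of each plot point
    return r.mean(), r.std(), theta.mean(), theta.std()

rng = np.random.default_rng(4)
blink = np.sin(np.linspace(0, 30, 1500)) + rng.normal(scale=0.2, size=1500)
lag = optimal_lag(blink)
print(lag, polar_indices(blink, lag))
```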
Affiliation(s)
- Atefeh Goshvarpour
- Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour
- Department of Biomedical Engineering, Imam Reza International University, Mashhad, Razavi Khorasan, Iran
31
Jiang J, Meng Q, Ji J. Combining Music and Indoor Spatial Factors Helps to Improve College Students' Emotion During Communication. Front Psychol 2021; 12:703908. [PMID: 34594267 PMCID: PMC8476911 DOI: 10.3389/fpsyg.2021.703908] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Accepted: 08/10/2021] [Indexed: 11/13/2022] Open
Abstract
Against the background of weakening face-to-face social interaction, the mental health of college students deserves attention. There are few existing studies on the impact of audiovisual interaction on interactive behavior, especially emotional perception in specific spaces. This study aims to indicate whether the perception of one's music environment has an influence on college students' emotions during communication in different indoor conditions, including spatial function, visual and sound atmospheres, and interior furnishings. The three-dimensional pleasure-arousal-dominance (PAD) emotional model was used to evaluate the changes in emotions before and after communication. An acoustic environmental measurement was performed, and the evaluations of emotion during communication were investigated by a questionnaire survey with 331 participants at six experimental sites, including a classroom (CR), a learning corridor (LC), a coffee shop (CS), a fast food restaurant (FFR), a dormitory (DT), and a living room (LR). The following results were found. Firstly, the results in different functional spaces showed no significant effect of music on communication or emotional states during communication. Secondly, the average score of the musical evaluation was 1.09 higher in the warm-toned space compared to the cold-toned space. Thirdly, the differences in the effects of music on emotion during communication in different sound environments were significant, and pleasure, arousal, and dominance could be efficiently enhanced by music in the quiet space. Fourthly, dominance was 0.63 higher in the minimally furnished space. Finally, we also investigated the influence of social characteristics (intimacy level, gender combination, and group size) on the effect of music on communication in different indoor spaces; for instance, when there were more than two communicators in the dining space, pleasure and arousal could be efficiently enhanced by music. This study shows that combining the sound environment with spatial factors (for example, the visual and sound atmosphere) and interior furnishings can be an effective design strategy for promoting social interaction in indoor spaces.
Affiliation(s)
- Jiani Jiang
- Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China
- Qi Meng
- Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China
- Jingtao Ji
- Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China
32
Rahman MM, Sarkar AK, Hossain MA, Hossain MS, Islam MR, Hossain MB, Quinn JMW, Moni MA. Recognition of human emotions using EEG signals: A review. Comput Biol Med 2021; 136:104696. [PMID: 34388471 DOI: 10.1016/j.compbiomed.2021.104696] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 07/23/2021] [Accepted: 07/23/2021] [Indexed: 10/20/2022]
Abstract
Assessment of the cognitive functions and state of clinical subjects is an important aspect of e-health care delivery, and in the development of novel human-machine interfaces. A subject can display a range of emotions that significantly influence cognition, and emotion classification through the analysis of physiological signals is a key means of detecting emotion. Electroencephalography (EEG) signals have become a common focus of such development compared to other physiological signals because EEG employs simple and subject-acceptable methods for obtaining data that can be used for emotion analysis. We have therefore reviewed published studies that have used EEG signal data to identify possible interconnections between emotion and brain activity. We then describe theoretical conceptualization of basic emotions, and interpret the prevailing techniques that have been adopted for feature extraction, selection, and classification. Finally, we have compared the outcomes of these recent studies and discussed the likely future directions and main challenges for researchers developing EEG-based emotion analysis methods.
Affiliation(s)
- Md Mustafizur Rahman
- Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Ajay Krishno Sarkar
- Department of Electrical and Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, 6204, Bangladesh
- Md Amzad Hossain
- Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Md Selim Hossain
- Department of Electrical and Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, 6204, Bangladesh
- Md Rabiul Islam
- Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh
- Md Biplob Hossain
- Department of Electrical and Electronic Engineering, Jashore University of Science & Technology, Jashore, 7408, Bangladesh
- Julian M W Quinn
- Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia
- Mohammad Ali Moni
- Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia; School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland St Lucia, QLD 4072, Australia
33
Panahi F, Rashidi S, Sheikhani A. Application of fractional Fourier transform in feature extraction from ELECTROCARDIOGRAM and GALVANIC SKIN RESPONSE for emotion recognition. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102863] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
34
Mai ND, Lee BG, Chung WY. Affective Computing on Machine Learning-Based Emotion Recognition Using a Self-Made EEG Device. SENSORS (BASEL, SWITZERLAND) 2021; 21:5135. [PMID: 34372370 PMCID: PMC8348417 DOI: 10.3390/s21155135] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 07/24/2021] [Accepted: 07/27/2021] [Indexed: 11/16/2022]
Abstract
In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a wearable electroencephalography (EEG) custom-designed device. The system collects EEG signals using an eight-electrode placement on the scalp; two of these electrodes were placed in the frontal lobe, and the other six electrodes were placed in the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed for extracting suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers: a support vector machine (SVM), multi-layer perceptron (MLP), and one-dimensional convolutional neural network (1D-CNN) for emotion classification; both subject-dependent and subject-independent strategies were used. Our experiment results showed that the highest average accuracies achieved in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; these accuracies were achieved using a combination of the sample entropy measure and 1D-CNN. Moreover, our study investigates the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification through electrode selection. Our results prove the feasibility and efficiency of our proposed EEG-based affective computing method for emotion recognition in real-world applications.
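Of the six entropy measures the authors tried, sample entropy paired best with the 1D-CNN; a plain-NumPy sketch of sample entropy on a single EEG window is given below (the embedding dimension m, tolerance fraction, and window length are conventional defaults, not the paper's settings):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Minimal sample entropy: negative log ratio of template matches of
    length m+1 to matches of length m, self-matches excluded. O(n^2),
    which is fine for short EEG windows."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()            # tolerance as a fraction of the SD

    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return np.sum(d <= r) - len(templ)   # drop self-matches

    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(5)
print(sample_entropy(rng.normal(size=500)))              # irregular: high
print(sample_entropy(np.sin(np.linspace(0, 20, 500))))   # regular: low
```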
Affiliation(s)
- Ngoc-Dau Mai
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Korea
- Boon-Giin Lee
- School of Computer Science, The University of Nottingham Ningbo China, Ningbo 315100, China
- Wan-Young Chung
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Korea
35
Hasnul MA, Aziz NAA, Alelyani S, Mohana M, Aziz AA. Electrocardiogram-Based Emotion Recognition Systems and Their Applications in Healthcare-A Review. SENSORS 2021; 21:s21155015. [PMID: 34372252 PMCID: PMC8348698 DOI: 10.3390/s21155015] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 07/15/2021] [Accepted: 07/19/2021] [Indexed: 11/30/2022]
Abstract
Affective computing is a field of study that integrates human affects and emotions with artificial intelligence into systems or devices. A system or device with affective computing is beneficial for the mental health and wellbeing of individuals that are stressed, anguished, or depressed. Emotion recognition systems are an important technology that enables affective computing. Currently, there are a lot of ways to build an emotion recognition system using various techniques and algorithms. This review paper focuses on emotion recognition research that adopted electrocardiograms (ECGs) as a unimodal approach as well as part of a multimodal approach for emotion recognition systems. Critical observations of data collection, pre-processing, feature extraction, feature selection and dimensionality reduction, classification, and validation are conducted. This paper also highlights the architectures with accuracy of above 90%. The available ECG-inclusive affective databases are also reviewed, and a popularity analysis is presented. Additionally, the benefit of emotion recognition systems towards healthcare systems is also reviewed here. Based on the literature reviewed, a thorough discussion on the subject matter and future works is suggested and concluded. The findings presented here are beneficial for prospective researchers to look into the summary of previous works conducted in the field of ECG-based emotion recognition systems, and for identifying gaps in the area, as well as in developing and designing future applications of emotion recognition systems, especially in improving healthcare.
Affiliation(s)
- Muhammad Anas Hasnul
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Nor Azlina Ab. Aziz
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Salem Alelyani
- Center for Artificial Intelligence (CAI), King Khalid University, Abha 61421, Saudi Arabia
- College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
- Mohamed Mohana
- Center for Artificial Intelligence (CAI), King Khalid University, Abha 61421, Saudi Arabia
- Azlan Abd. Aziz
- Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
36
Romaniszyn-Kania P, Pollak A, Bugdol MD, Bugdol MN, Kania D, Mańka A, Danch-Wierzchowska M, Mitas AW. Affective State during Physiotherapy and Its Analysis Using Machine Learning Methods. SENSORS 2021; 21:s21144853. [PMID: 34300591 PMCID: PMC8309702 DOI: 10.3390/s21144853] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 07/11/2021] [Accepted: 07/12/2021] [Indexed: 12/12/2022]
Abstract
Invasive or uncomfortable procedures, especially during healthcare, trigger emotions. Technological development of equipment and systems for monitoring and recording psychophysiological functions enables continuous observation of an individual's changing response to a situation. The presented study focused on the analysis of the individual's affective state, with the results reflecting the excitation expressed in the subjects' statements collected with psychological questionnaires. The research group consisted of 49 participants (22 women and 25 men). The measurement protocol included acquiring the electrodermal activity signal, cardiac signals, and accelerometric signals in three axes. Subjective measurements were acquired for affective state using the JAWS questionnaire, for cognitive skills using the DST, and for verbal fluency using the VFT. The physiological and psychological data were subjected to statistical analysis and then to a machine learning process using different feature selection methods (JMI or PCA). The highest accuracy of the kNN classifier was achieved in combination with the JMI method (81.63%) with respect to the division complying with the JAWS test results. The classification sensitivity and specificity were 85.71% and 71.43%, respectively.
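scikit-learn does not ship JMI proper, so the sketch below substitutes univariate mutual-information ranking (SelectKBest) ahead of a kNN classifier as a simplified stand-in for the paper's JMI + kNN combination; all shapes and labels are synthetic placeholders:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.normal(size=(49, 40))     # 49 participants, 40 EDA/ECG/ACC features
y = rng.integers(0, 2, size=49)   # illustrative JAWS-based group split

# Univariate mutual-information ranking as a simplified stand-in for JMI
# (true JMI also penalizes redundancy among already-selected features).
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),
    KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=5).mean())
```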
Affiliation(s)
- Patrycja Romaniszyn-Kania
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Anita Pollak
- Institute of Psychology, University of Silesia in Katowice, Bankowa 12, 40-007 Katowice, Poland
- Marcin D. Bugdol
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Monika N. Bugdol
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Damian Kania
- Institute of Physiotherapy and Health Sciences, The Jerzy Kukuczka Academy of Physical Education in Katowice, Mikołowska 72A, 40-065 Katowice, Poland
- Anna Mańka
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Marta Danch-Wierzchowska
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Andrzej W. Mitas
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
37
Emotion Recognition from ECG Signals Using Wavelet Scattering and Machine Learning. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11114945] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Affect detection combined with a system that dynamically responds to a person’s emotional state allows an improved user experience with computers, systems, and environments and has a wide range of applications, including entertainment and health care. Previous studies on this topic have used a variety of machine learning algorithms and inputs such as audial, visual, or physiological signals. Recently, a lot of interest has been focused on the last, as speech or video recording is impractical for some applications. Therefore, there is a need to create Human–Computer Interface Systems capable of recognizing emotional states from noninvasive and nonintrusive physiological signals. Typically, the recognition task is carried out from electroencephalogram (EEG) signals, obtaining good accuracy. However, EEGs are difficult to register without interfering with daily activities, and recent studies have shown that it is possible to use electrocardiogram (ECG) signals for this purpose. This work improves the performance of emotion recognition from ECG signals using wavelet transform for signal analysis. Features of the ECG signal are extracted from the AMIGOS database using a wavelet scattering algorithm that allows obtaining features of the signal at different time scales, which are then used as inputs for different classifiers to evaluate their performance. The results show that the proposed algorithm for extracting features and classifying the signals obtains an accuracy of 88.8% in the valence dimension, 90.2% in arousal, and 95.3% in a two-dimensional classification, which is better than the performance reported in previous studies. This algorithm is expected to be useful for classifying emotions using wearable devices.
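A wavelet scattering front end of this kind can be sketched with the kymatio library (an assumed dependency, not named in the paper), time-averaging each scattering path into a fixed-length feature vector for a standard classifier; segment length, J, Q, and the labels are illustrative:

```python
import numpy as np
from kymatio.numpy import Scattering1D  # assumed dependency
from sklearn.svm import SVC

T = 2 ** 13                          # samples per ECG segment (illustrative)
scattering = Scattering1D(J=8, shape=(T,), Q=8)

rng = np.random.default_rng(7)
segments = rng.normal(size=(40, T))      # stand-in ECG segments
labels = rng.integers(0, 2, size=40)     # e.g., high/low valence

# Time-average each scattering path to get a fixed-length feature vector.
X = np.array([scattering(s).mean(axis=-1) for s in segments])
clf = SVC(kernel="rbf").fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```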
38
Electronic Devices for Stress Detection in Academic Contexts during Confinement Because of the COVID-19 Pandemic. ELECTRONICS 2021. [DOI: 10.3390/electronics10030301] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
This article studies the development and implementation of different electronic devices for measuring signals during stress situations, specifically in academic contexts, in a student group of the Engineering Department at the University of Pamplona (Colombia). For the research, devices were used to measure physiological signals: Galvanic Skin Response (GSR), the electrical response of the heart by electrocardiogram (ECG), the electrical activity produced by the upper trapezius muscle (EMG), and an electronic nose system (E-nose) developed as a pilot study for detecting and identifying the profiles of Volatile Organic Compounds emitted by the skin. Data were gathered during an online test (during the COVID-19 pandemic), with the aim of measuring the students' stress state, and then during the relaxation state after the exam period. Two algorithms, Linear Discriminant Analysis and Support Vector Machine, were applied in Python for classification and differentiation of the assessments, achieving classification rates of 100% with GSR, 90% with the proposed E-nose system, 90% with the EMG system, and 88% with ECG, respectively.
39
Tonacci A, Billeci L, Di Mambro I, Marangoni R, Sanmartin C, Venturi F. Wearable Sensors for Assessing the Role of Olfactory Training on the Autonomic Response to Olfactory Stimulation. SENSORS 2021; 21:s21030770. [PMID: 33498830 PMCID: PMC7865293 DOI: 10.3390/s21030770] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 01/20/2021] [Accepted: 01/21/2021] [Indexed: 12/13/2022]
Abstract
Wearable sensors are nowadays largely employed to assess physiological signals derived from the human body without representing a burden in terms of obtrusiveness. One of the most intriguing fields of application for such systems include the assessment of physiological responses to sensory stimuli. In this specific regard, it is not yet known which are the main psychophysiological drivers of olfactory-related pleasantness, as the current literature has demonstrated the relationship between odor familiarity and odor valence, but has not clarified the consequentiality between the two domains. Here, we enrolled a group of university students to whom olfactory training lasting 3 months was administered. Thanks to the analysis of electrocardiogram (ECG) and galvanic skin response (GSR) signals at the beginning and at the end of the training period, we observed different autonomic responses, with higher parasympathetically-mediated response at the end of the period with respect to the first evaluation. This possibly suggests that an increased familiarity to the proposed stimuli would lead to a higher tendency towards relaxation. Such results could suggest potential applications to other domains, including personalized treatments based on odors and foods in neuropsychiatric and eating disorders.
Affiliation(s)
- Alessandro Tonacci
- Institute of Clinical Physiology, National Research Council of Italy (IFC-CNR), 56124 Pisa, Italy
- Lucia Billeci
- Institute of Clinical Physiology, National Research Council of Italy (IFC-CNR), 56124 Pisa, Italy
- Irene Di Mambro
- School of Engineering, University of Pisa, 56122 Pisa, Italy
- Roberto Marangoni
- Department of Biology, University of Pisa, 56127 Pisa, Italy
- Institute of Biophysics, National Research Council of Italy (IBF-CNR), Via Moruzzi 1, 56124 Pisa, Italy
- Chiara Sanmartin
- Department of Agriculture, Food and Environment, University of Pisa, 56124 Pisa, Italy
- Francesca Venturi
- Department of Agriculture, Food and Environment, University of Pisa, 56124 Pisa, Italy
- NexFood Srl, 57121 Livorno, Italy
40
Eye-Tracking Analysis for Emotion Recognition. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2020; 2020:2909267. [PMID: 32963512 PMCID: PMC7492682 DOI: 10.1155/2020/2909267] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Revised: 07/23/2020] [Accepted: 08/03/2020] [Indexed: 11/18/2022]
Abstract
This article reports the results of the study related to emotion recognition by using eye-tracking. Emotions were evoked by presenting a dynamic movie material in the form of 21 video fragments. Eye-tracking signals recorded from 30 participants were used to calculate 18 features associated with eye movements (fixations and saccades) and pupil diameter. To ensure that the features were related to emotions, we investigated the influence of luminance and the dynamics of the presented movies. Three classes of emotions were considered: high arousal and low valence, low arousal and moderate valence, and high arousal and high valence. A maximum of 80% classification accuracy was obtained using the support vector machine (SVM) classifier and leave-one-subject-out validation method.
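Evaluation in such studies often hinges on subject-independent validation; a leave-one-subject-out loop over the 30 participants can be expressed with scikit-learn's LeaveOneGroupOut, as in this sketch with synthetic stand-ins for the 18 eye-movement features:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)
n = 30 * 21                             # 30 participants x 21 video fragments
X = rng.normal(size=(n, 18))            # 18 eye-movement/pupil features
y = rng.integers(0, 3, size=n)          # three emotion classes
groups = np.repeat(np.arange(30), 21)   # subject ID for each sample

# Leave-one-subject-out: every participant serves as the held-out test set
# once, so the score reflects generalization to unseen subjects.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print("mean LOSO accuracy:", scores.mean())
```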
41
Differences in Driving Intention Transitions Caused by Driver's Emotion Evolutions. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17196962. [PMID: 32977577 PMCID: PMC7578958 DOI: 10.3390/ijerph17196962] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 09/14/2020] [Accepted: 09/21/2020] [Indexed: 11/17/2022]
Abstract
Joining worldwide efforts to understand the relationship between driving emotion and behavior, the current study aimed at examining the influence of emotions on driving intention transition. In Study 1, taking a car-following scene as an example, we designed driving experiments to obtain driving data in drivers' natural states, and a driving intention prediction model was constructed based on the HMM. Then, we analyzed the probability distribution and transition probability of driving intentions. In Study 2, we designed a series of emotion-induction experiments for eight typical driving emotions, and drivers with induced emotions participated in driving experiments similar to Study 1. We thereby obtained driving data for drivers in eight typical emotional states, and driving intention prediction models adapted to the drivers' different emotional states were constructed based on the HMM. Finally, we analyzed the probabilistic differences in driving intention between drivers' natural states and the different emotional states; the findings show how emotion evolution changes the probability distribution and transition probabilities of driving intentions. The findings of this study can promote the development of driving behavior prediction technology and active safety early warning systems.
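A minimal HMM-based intention decoder in the spirit of this design can be sketched with the hmmlearn package (an assumed dependency; the paper's state definitions, features, and training data differ):

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency

# Hidden states stand in for driving intentions (e.g., follow / close in /
# fall back); observations are per-timestep driving features.
rng = np.random.default_rng(9)
obs = rng.normal(size=(500, 3))   # speed, headway distance, pedal position

model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(obs)

intentions = model.predict(obs)   # most likely intention sequence (Viterbi)
print("transition matrix:\n", np.round(model.transmat_, 2))
```

Retraining one such model per induced emotional state, as the study does, then exposes how the learned transition matrices shift with emotion.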
42
Dar MN, Akram MU, Khawaja SG, Pujari AN. CNN and LSTM-Based Emotion Charting Using Physiological Signals. SENSORS (BASEL, SWITZERLAND) 2020; 20:E4551. [PMID: 32823807 PMCID: PMC7472085 DOI: 10.3390/s20164551] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2020] [Revised: 07/30/2020] [Accepted: 08/04/2020] [Indexed: 02/07/2023]
Abstract
Novel trends in affective computing are based on reliable sources of physiological signals such as Electroencephalogram (EEG), Electrocardiogram (ECG), and Galvanic Skin Response (GSR). The use of these signals provides challenges of performance improvement within a broader set of emotion classes in a less constrained real-world environment. To overcome these challenges, we propose a computational framework of 2D Convolutional Neural Network (CNN) architecture for the arrangement of 14 channels of EEG, and a combination of Long Short-Term Memory (LSTM) and 1D-CNN architecture for ECG and GSR. Our approach is subject-independent and incorporates two publicly available datasets of DREAMER and AMIGOS with low-cost, wearable sensors to extract physiological signals suitable for real-world environments. The results outperform state-of-the-art approaches for classification into four classes, namely High Valence-High Arousal, High Valence-Low Arousal, Low Valence-High Arousal, and Low Valence-Low Arousal. Emotion elicitation average accuracy of 98.73% is achieved with ECG right-channel modality, 76.65% with EEG modality, and 63.67% with GSR modality for AMIGOS. The overall highest accuracy of 99.0% for the AMIGOS dataset and 90.8% for the DREAMER dataset is achieved with multi-modal fusion. A strong correlation between spectral- and hidden-layer feature analysis with classification performance suggests the efficacy of the proposed method for significant feature extraction and higher emotion elicitation performance to a broader context for less constrained environments.
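The exact architectures are detailed in the paper; the sketch below only illustrates the general shape of a 1D-CNN front end feeding an LSTM for a single physiological channel, with all layer sizes chosen arbitrarily:

```python
import torch
import torch.nn as nn

class CnnLstm(nn.Module):
    """Sketch of a 1D-CNN front end feeding an LSTM, in the spirit of the
    paper's ECG/GSR branch (layer sizes here are illustrative)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4))
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, 1, samples)
        z = self.cnn(x)              # (batch, 32, samples / 16)
        z = z.transpose(1, 2)        # LSTM expects (batch, time, features)
        _, (h, _) = self.lstm(z)
        return self.head(h[-1])      # logits for the 4 valence/arousal classes

model = CnnLstm()
ecg = torch.randn(8, 1, 1024)        # batch of 8 one-channel ECG windows
print(model(ecg).shape)              # torch.Size([8, 4])
```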
Affiliation(s)
- Muhammad Najam Dar
- Department of Computer and Software Engineering, College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Muhammad Usman Akram
- Department of Computer and Software Engineering, College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Sajid Gul Khawaja
- Department of Computer and Software Engineering, College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Amit N. Pujari
- School of Engineering and Technology, University of Hertfordshire, Hatfield AL10 9AB, England, UK
- School of Engineering, University of Aberdeen, Aberdeen AB24 3UE, Scotland, UK
43
Evaluation of Novel Entropy-Based Complex Wavelet Sub-bands Measures of PPG in an Emotion Recognition System. J Med Biol Eng 2020. [DOI: 10.1007/s40846-020-00526-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
44
Meng Q, Jiang J, Liu F, Xu X. Effects of the Musical Sound Environment on Communicating Emotion. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:E2499. [PMID: 32268523 PMCID: PMC7177471 DOI: 10.3390/ijerph17072499] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Revised: 03/30/2020] [Accepted: 04/03/2020] [Indexed: 11/18/2022]
Abstract
The acoustic environment is one of the factors influencing emotion; however, existing research has mainly focused on the effects of noise on emotion and on music therapy, while the acoustic and psychological effects of music on interactive behaviour have been neglected. Therefore, this study aimed to investigate the effects of music on communicating emotion, including evaluation of music and d-values of pleasure, arousal, and dominance (PAD), in terms of sound pressure level (SPL), musical emotion, and tempo. Based on acoustic environment measurement and a questionnaire survey with 52 participants in a normal classroom in Harbin, China, the following results were found. First, SPL was significantly correlated with the musical evaluation of communication: average musical evaluation scores decreased sharply from 1.31 to -2.13 when SPL rose from 50 dBA to 60 dBA, while they fluctuated between 0.88 and 1.31 from 40 dBA to 50 dBA; arousal increased with musical SPL in the negative evaluation group. Second, musical emotions had significant effects on the musical evaluation of communication, among which the effect of joyful-sounding music was the strongest; in general, joyful- and stirring-sounding music efficiently enhanced pleasure and arousal. Third, musical tempo had a significant effect on musical evaluation and communicating emotion; faster music efficiently enhanced arousal and pleasure. Finally, in terms of social characteristics, familiarity, gender combination, and the number of participants affected communicating emotion; for instance, in the positive evaluation group, dominance was much higher in single-gender groups. This study shows that music factors such as SPL, musical emotion, and tempo can be used to enhance communicating emotion.
Affiliation(s)
- Qi Meng
- Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, 66 West Dazhi Street, Nan Gang District, Harbin 150001, China
- Jiani Jiang
- Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, 66 West Dazhi Street, Nan Gang District, Harbin 150001, China
- Fangfang Liu
- Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, 66 West Dazhi Street, Nan Gang District, Harbin 150001, China
- Xiaoduo Xu
- UCL The Bartlett School of Architecture, University College London (UCL), London WC1H 0QB, UK
45
Asgharzadeh-Bonab A, Amirani MC, Mehri A. Spectral entropy and deep convolutional neural network for ECG beat classification. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.02.004] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
46
Abstract
In this paper, we present in depth the hardware components of a low-cost cognitive assistant. The aim is to detect the performance and the emotional state that elderly people present when performing exercises. Physical and cognitive exercises are a proven way of keeping elderly people active, healthy, and happy. Our goal is to bring to people that are at their homes (or in unsupervised places) an assistant that motivates them to perform exercises and, concurrently, monitor them, observing their physical and emotional responses. We focus on the hardware parts and the deep learning models so that they can be reproduced by others. The platform is being tested at an elderly people care facility, and validation is in process.
47
Sorinas J, Ferrández JM, Fernandez E. Brain and Body Emotional Responses: Multimodal Approximation for Valence Classification. SENSORS 2020; 20:s20010313. [PMID: 31935909 PMCID: PMC6982758 DOI: 10.3390/s20010313] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/28/2019] [Revised: 01/02/2020] [Accepted: 01/03/2020] [Indexed: 11/16/2022]
Abstract
In order to develop more precise and functional affective applications, it is necessary to achieve a balance between the psychology and the engineering applied to emotions. Signals from the central and peripheral nervous systems have been used for emotion recognition purposes; however, their operation and the relationship between them remain unknown. In this context, the present work approaches the study of the psychobiology of both systems in order to generate a computational model for the recognition of emotions in the valence dimension. To this end, the electroencephalography (EEG) signal, electrocardiography (ECG) signal, and skin temperature of 24 subjects were studied. Each methodology was evaluated individually, finding characteristic patterns of positive and negative emotions in each of them. After feature selection for each methodology, the classification results showed that, although the classification of emotions is possible at both central and peripheral levels, the multimodal approach did not improve on the results obtained through the EEG alone. In addition, differences were observed between cerebral and peripheral responses in the processing of emotions when separating the sample by sex, though the differences between men and women were only notable at the peripheral nervous system level.
Affiliation(s)
- Jennifer Sorinas
- The Institute of Bioengineering, University Miguel Hernandez, 03202 Elche, Spain
- Department of Electronics and Computer Technology, University of Cartagena, 30202 Cartagena, Spain
- Jose Manuel Ferrández
- Department of Electronics and Computer Technology, University of Cartagena, 30202 Cartagena, Spain
- Eduardo Fernandez
- The Institute of Bioengineering, University Miguel Hernandez, 03202 Elche, Spain
48
Goshvarpour A, Goshvarpour A. Schizophrenia diagnosis using innovative EEG feature-level fusion schemes. AUSTRALASIAN PHYSICAL & ENGINEERING SCIENCES IN MEDICINE 2020; 43:10.1007/s13246-019-00839-1. [PMID: 31898243 DOI: 10.1007/s13246-019-00839-1] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/28/2019] [Accepted: 12/21/2019] [Indexed: 11/25/2022]
Abstract
Electroencephalogram (EEG) has become a practical tool for monitoring and diagnosing pathological/psychological brain states. To date, an increasing number of investigations have considered differences between the brain dynamics of patients with schizophrenia and healthy controls. However, few studies have attempted to provide an intelligent and accurate system that detects schizophrenia using EEG signals. This paper addresses this issue by providing new feature-level fusion algorithms. Firstly, we analyze EEG dynamics using three well-known nonlinear measures: complexity (Cx), Higuchi fractal dimension (HFD), and Lyapunov exponents (Lya). Next, we propose some innovative feature-level fusion strategies to combine the information of these indices. We evaluate the effect of the classifier parameter (σ) adjustment and the cross-validation partitioning criteria on classification accuracy. The performance of EEG classification using combined features was compared with that of the non-combined attributes. Experimental results showed higher classification accuracy when feature-level fused features were utilized, compared to when each feature was used individually or all were fed to the classifier simultaneously. Using the proposed algorithm, the classification accuracy increased up to 100%. These results establish the suggested framework as superior to state-of-the-art EEG schizophrenia diagnosis tools.
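Of the three nonlinear measures fused here, the Higuchi fractal dimension is easy to sketch from its definition (log curve length versus log scale); k_max and the test signals below are illustrative choices, not the paper's settings:

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension of a 1-D signal: slope of log(curve
    length) vs. log(1/k) over k-decimated copies of the series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)          # every k-th sample from offset m
            d = np.abs(np.diff(x[idx])).sum()
            lk.append(d * (n - 1) / ((len(idx) - 1) * k * k))
        lengths.append(np.mean(lk))
    ks = np.arange(1, k_max + 1)
    return np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)[0]

rng = np.random.default_rng(10)
print(higuchi_fd(rng.normal(size=1000)))             # near 2 for white noise
print(higuchi_fd(np.sin(np.linspace(0, 10, 1000))))  # near 1 for a smooth curve
```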
Affiliation(s)
- Atefeh Goshvarpour
- Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour
- Department of Biomedical Engineering, Imam Reza International University, PO. BOX: 91735-553, Rezvan Campus (Female Students), Phalestine Sq., Mashhad, Razavi Khorasan, Iran
49
Bulagang AF, Weng NG, Mountstephens J, Teo J. A review of recent approaches for emotion classification using electrocardiography and electrodermography signals. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100363] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022] Open
50
Multimodal Approach for Emotion Recognition Based on Simulated Flight Experiments. SENSORS 2019; 19:s19245516. [PMID: 31847210 PMCID: PMC6960577 DOI: 10.3390/s19245516] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/18/2019] [Revised: 12/08/2019] [Accepted: 12/09/2019] [Indexed: 11/17/2022]
Abstract
The present work tries to fill part of the gap regarding pilots' emotions and their bio-reactions during flight procedures such as takeoff, climbing, cruising, descent, initial approach, final approach, and landing. A sensing architecture and a set of experiments were developed, associated with several simulated flights (N_flights = 13) using the Microsoft Flight Simulator Steam Edition (FSX-SE). The approach was carried out with eight beginner users on the flight simulator (N_pilots = 8). It is shown that it is possible to recognize emotions from different pilots in flight, combining their present and previous emotions. The cardiac system based on Heart Rate (HR), Galvanic Skin Response (GSR), and Electroencephalography (EEG) was used to extract emotions, as well as the intensities of emotions detected from the pilot's face. Five main emotions were considered: happy, sad, angry, surprised, and scared. The emotion recognition is based on Artificial Neural Networks and Deep Learning techniques. The Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) were the main methods used to measure the quality of the regression output models. Tests of the produced output models showed that the lowest recognition errors were reached when all data were considered or when the GSR datasets were omitted from model training. They also showed that the emotion surprised was the easiest to recognize, with a mean RMSE of 0.13 and mean MAE of 0.01, while the emotion sad was the hardest, with a mean RMSE of 0.82 and mean MAE of 0.08. When only the highest emotion intensities over time were considered, matching accuracies ranged between 55% and 100%.
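RMSE and MAE, the two error measures used here, are one-liners; a toy check on made-up emotion-intensity targets in [0, 1]:

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

# Hypothetical regression targets and predictions for one emotion channel.
y_true = np.array([0.1, 0.4, 0.8, 0.3])
y_pred = np.array([0.2, 0.35, 0.7, 0.4])
print(rmse(y_true, y_pred), mae(y_true, y_pred))
```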