1. Shao X, Chen Z, Yu J, Lu F, Chen S, Xu J, Yao Y, Liu B, Yang P, Jiang Q, Hu B. Ultralow-cost piezoelectric sensor constructed by thermal compression bonding for long-term biomechanical signal monitoring in chronic mental disorders. Nanoscale 2024;16:2974-2982. PMID: 38258372. DOI: 10.1039/d3nr06297j.
Abstract
Wearable bioelectronic devices, which circumvent issues related to the large size and high cost of clinical equipment, have emerged as powerful tools for the auxiliary diagnosis and long-term monitoring of chronic psychiatric diseases. Current systems often integrate multiple intricate and expensive components to ensure accurate diagnosis; however, their high cost and complexity hinder widespread clinical application and long-term user compliance. Herein, we developed an ultralow-cost poly(vinylidene fluoride)/zinc oxide nanofiber film-based piezoelectric sensor via a thermal compression bonding process. Our piezoelectric sensor exhibits remarkable sensitivity (13.4 mV N⁻¹), rapid response (8 ms), and exceptional stability over 2000 compression/release cycles, all at a negligibly low fabrication cost. We demonstrate that pulse wave, blink, and speech signals can be acquired by the sensor, establishing a single biomechanical modality for monitoring multiple physiological traits associated with bipolar disorder. This ultralow-cost, mass-producible piezoelectric sensor paves the way for extensive long-term monitoring and immediate feedback in bipolar disorder management.
Affiliation(s)
- Xiaodong Shao: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing 210029, China
- Zenan Chen: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Junxiao Yu: The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou Second People's Hospital, Changzhou Medical Center, Nanjing Medical University, Changzhou 213161, China
- Fangzhou Lu: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Shisheng Chen: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Jingfeng Xu: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Yihao Yao: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Bin Liu: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Ping Yang: School of Materials and Engineering, Nanjing Institute of Technology, Nanjing 211167, China
- Qin Jiang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing 210029, China
- Benhui Hu: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing 210029, China; School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China; Jiangsu Province Hospital, Nanjing Medical University First Affiliated Hospital, Nanjing 210029, China
2. Pradhan A, Srivastava S. Hierarchical extreme puzzle learning machine-based emotion recognition using multimodal physiological signals. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104624.
3. Lin W, Li C, Zhang Y. A System of Emotion Recognition and Judgment and Its Application in Adaptive Interactive Game. Sensors (Basel) 2023;23:3250. PMID: 36991961. PMCID: PMC10059653. DOI: 10.3390/s23063250.
Abstract
A system of emotion recognition and judgment (SERJ) based on a set of optimal signal features is established, and an emotion adaptive interactive game (EAIG) is designed. Changes in a player's emotion are detected with the SERJ while the game is being played. Ten subjects were selected to test the EAIG and SERJ. The results show that the SERJ and the designed EAIG are effective: the game adapts itself by judging the special events triggered by a player's emotion and, as a result, enhances the player's game experience. It was also found that players perceived their own emotional changes differently during play, and that a player's prior test experience affected the test results. A SERJ built on the set of optimal signal features performs better than one based on the conventional machine learning method.
Affiliation(s)
- Wenqian Lin: School of Media and Design, Hangzhou Dianzi University, Hangzhou 310018, China
- Chao Li: College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Yunjian Zhang: College of Control Science and Technology, Zhejiang University, Hangzhou 310027, China
4. Sánchez-Reolid R, López de la Rosa F, Sánchez-Reolid D, López MT, Fernández-Caballero A. Machine Learning Techniques for Arousal Classification from Electrodermal Activity: A Systematic Review. Sensors (Basel) 2022;22:8886. PMID: 36433482. PMCID: PMC9695360. DOI: 10.3390/s22228886.
Abstract
This article presents a systematic review of arousal classification based on electrodermal activity (EDA) and machine learning (ML). From an initial set of 284 articles retrieved from six scientific databases, fifty-nine were selected according to the established criteria. The review analyses all the steps to which the EDA signals are subjected: acquisition, pre-processing, processing, and feature extraction. Finally, all ML techniques applied to the features of these signals for arousal classification are studied. Support vector machines and artificial neural networks stand out among the supervised learning methods given their high performance, whereas unsupervised learning is absent from the detection of arousal through EDA. The review concludes that the use of EDA for arousal detection is widespread, with particularly good classification results for the ML methods found.
Affiliation(s)
- Roberto Sánchez-Reolid: Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; Neurocognition and Emotion Unit, Instituto de Investigación en Informática, 02071 Albacete, Spain
- Daniel Sánchez-Reolid: Neurocognition and Emotion Unit, Instituto de Investigación en Informática, 02071 Albacete, Spain
- María T. López: Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; Neurocognition and Emotion Unit, Instituto de Investigación en Informática, 02071 Albacete, Spain
- Antonio Fernández-Caballero: Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; Neurocognition and Emotion Unit, Instituto de Investigación en Informática, 02071 Albacete, Spain; CIBERSAM-ISCIII (Biomedical Research Networking Center in Mental Health, Instituto de Salud Carlos III), 28016 Madrid, Spain
5. Tavakkoli H, Motie Nasrabadi A. A Spherical Phase Space Partitioning Based Symbolic Time Series Analysis (SPSP-STSA) for Emotion Recognition Using EEG Signals. Front Hum Neurosci 2022;16:936393. PMID: 35845249. PMCID: PMC9276988. DOI: 10.3389/fnhum.2022.936393.
Abstract
Emotion recognition systems have been of interest to researchers for a long time, and progress in brain-computer interface systems currently makes EEG-based emotion recognition more attractive. Such systems aim to recognize emotions automatically, and many approaches exist owing to the variety of feature extraction methods for analyzing EEG signals. Still, since the brain is assumed to be a nonlinear dynamic system, nonlinear dynamic analysis tools may yield more suitable results. This study introduces a novel approach to Symbolic Time Series Analysis (STSA) for signal phase space partitioning and symbol sequence generation. Symbolic sequences are produced by spherical partitioning of the phase space and are then compared and classified based on the maximum value of a similarity index. Because emotion is strongly subject-dependent, building a subject-independent EEG-based emotion recognition system has long been a challenge; here we introduce a subject-independent protocol to address this generalization problem. To demonstrate the method's effectiveness, we used the DEAP dataset and reached an accuracy of 98.44% for classifying happiness versus sadness (two emotion groups), 93.75% for three groups (happiness, sadness, and joy), 89.06% for four groups (happiness, sadness, joy, and terrible), and 85% for five emotional groups (happiness, sadness, joy, terrible, and mellow). These results indicate that the proposed subject-independent method is more accurate than many other methods reported in the literature, most of which do not consider subject independence.
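The abstract describes the pipeline only at a high level, so the following Python sketch illustrates the general STSA idea it builds on: delay-embed an EEG epoch into a phase space, assign each point a symbol from a spherical (radius/angle) partition, and compare symbol sequences with a histogram-overlap similarity index. The embedding dimension, lag, bin counts, and the similarity index are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def delay_embed(x, dim=3, tau=4):
    """Time-delay embedding of a 1-D signal into a dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def spherical_symbols(x, n_r=4, n_theta=4, dim=3, tau=4):
    """Map each phase-space point to a symbol from a spherical partition
    (radius shells x azimuth sectors around the attractor centroid)."""
    pts = delay_embed(x, dim, tau)
    pts = pts - pts.mean(axis=0)                     # centre the attractor
    r = np.linalg.norm(pts, axis=1)
    theta = np.arctan2(pts[:, 1], pts[:, 0])         # azimuth in the first two coordinates
    r_edges = np.quantile(r, np.linspace(0, 1, n_r + 1))
    r_bin = np.clip(np.digitize(r, r_edges[1:-1]), 0, n_r - 1)
    t_bin = np.floor((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    return r_bin * n_theta + t_bin                   # integer symbol sequence

def histogram_similarity(s1, s2, n_symbols):
    """Similarity index between two symbol sequences: overlap of their symbol histograms."""
    h1 = np.bincount(s1, minlength=n_symbols) / len(s1)
    h2 = np.bincount(s2, minlength=n_symbols) / len(s2)
    return np.minimum(h1, h2).sum()

# toy usage on two surrogate "EEG" epochs
rng = np.random.default_rng(0)
a = np.sin(np.linspace(0, 40, 1000)) + 0.1 * rng.standard_normal(1000)
b = rng.standard_normal(1000)
sa, sb = spherical_symbols(a), spherical_symbols(b)
print(histogram_similarity(sa, sb, n_symbols=16))
```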
6. Model of Emotion Judgment Based on Features of Multiple Physiological Signals. Appl Sci (Basel) 2022. DOI: 10.3390/app12104998.
Abstract
A model of emotion judgment based on features of multiple physiological signals was investigated. In total, 40 volunteers participated in the experiment by playing a computer game while their physiological signals (skin electricity, electrocardiogram (ECG), pulse wave, and facial electromyogram (EMG)) were acquired. The volunteers completed an emotion questionnaire covering six typical events that appeared in the game, rating their own emotion when experiencing each event. Based on the analysis of game events, the signal data were cut into segments and the emotional trends were classified. The correlation between data segments and emotional trends was established using a statistical method combined with the questionnaire responses. The set of optimal signal features was obtained by processing the physiological signal data, extracting signal features, reducing the dimensionality of the features, and classifying the emotion based on the resulting data set. Finally, the model of emotion judgment was established by selecting the features significant at the 0.01 level from the correlation between the optimal signal features and the emotional trends.
7. Yan M, Deng Z, He B, Zou C, Wu J, Zhu Z. Emotion classification with multichannel physiological signals using hybrid feature and adaptive decision fusion. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103235.
8. Emotion Recognition and Regulation Based on Stacked Sparse Auto-Encoder Network and Personalized Reconfigurable Music. Mathematics 2021. DOI: 10.3390/math9060593.
Abstract
Music can regulate and improve the emotions of the brain. Traditional emotional regulation approaches often adopt complete pieces of music, which vary in pitch, volume, and other dynamics; an individual's emotions may likewise pass through multiple states, and music preference varies from person to person. Traditional music regulation methods therefore suffer from long duration, variable emotional states, and poor adaptability. In view of these problems, we use different music processing methods and stacked sparse auto-encoder neural networks to identify and regulate the emotional state of the brain. We construct a multi-channel EEG sensor network, divide the brainwave signals and the corresponding music separately, and build a personalized reconfigurable music-EEG library. Seventeen features of the EEG signal are extracted as joint features, and a stacked sparse auto-encoder neural network is used to classify the emotions in order to establish a music emotion evaluation index. According to the goal of emotional regulation, music fragments are selected from the personalized reconfigurable music-EEG library, then reconstructed and combined for emotional adjustment. The results show that, compared with complete music, the reconfigurable combined music was 76.29% less time-consuming for emotional regulation and reduced the number of irrelevant emotional states by 69.92%. In terms of adaptability to different participants, the reconfigurable music improved the recognition rate of emotional states by 31.32%.
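As a rough illustration of the classification stage, the sketch below builds a small stacked sparse auto-encoder in PyTorch: two encoder layers are pretrained with a reconstruction loss plus an L1 sparsity penalty on the code, then stacked under a softmax layer and fine-tuned. The 17-feature input size matches the abstract; everything else (layer widths, penalty weight, the toy data) is assumed for the example.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """One auto-encoder layer; an L1 penalty on the code encourages sparse activations."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

def pretrain(ae, data, epochs=50, beta=1e-3, lr=1e-2):
    """Greedy layer-wise pretraining: reconstruction loss + sparsity penalty."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, code = ae(data)
        loss = nn.functional.mse_loss(recon, data) + beta * code.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return ae.enc(data)              # codes become the next layer's input

# toy data: 200 trials x 17 EEG features, 4 emotion classes (shapes are illustrative)
torch.manual_seed(0)
X = torch.randn(200, 17)
y = torch.randint(0, 4, (200,))

ae1, ae2 = SparseAE(17, 12), SparseAE(12, 8)
h1 = pretrain(ae1, X)
h2 = pretrain(ae2, h1)

# stack the trained encoders, add a softmax classifier, and fine-tune end to end
model = nn.Sequential(ae1.enc, ae2.enc, nn.Linear(8, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.cross_entropy(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("training accuracy:", (model(X).argmax(1) == y).float().mean().item())
```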
9. Emotion recognition from EEG signals using empirical mode decomposition and second-order difference plot. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2020.102389.
10. Dzieżyc M, Gjoreski M, Kazienko P, Saganowski S, Gams M. Can We Ditch Feature Engineering? End-to-End Deep Learning for Affect Recognition from Physiological Sensor Data. Sensors (Basel) 2020;20:6535. PMID: 33207564. PMCID: PMC7697590. DOI: 10.3390/s20226535.
Abstract
To further extend the applicability of wearable sensors in domains such as mobile health systems and the automotive industry, new methods for accurately extracting subtle physiological information from wearable sensors are required. However, the extraction of valuable information from physiological signals is still challenging: smartphones can count steps and compute heart rate, but they cannot recognize emotions and related affective states. This study analyzes the possibility of using end-to-end multimodal deep learning (DL) methods for affect recognition. Ten end-to-end DL architectures are compared on four different datasets with diverse raw physiological signals used for affect recognition, including emotional and stress states. The DL architectures specialized for time-series classification were enhanced to simultaneously facilitate learning from multiple sensors, each with its own sampling frequency. To enable a fair comparison among the different DL architectures, Bayesian optimization was used for hyperparameter tuning. The experimental results showed that the performance of the models depends on the intensity of the physiological response induced by the affective stimuli, i.e., the DL models recognize stress induced by the Trier Social Stress Test more successfully than they recognize emotional changes induced by watching affective content such as funny videos. Additionally, the results showed that CNN-based architectures might be more suitable than LSTM-based architectures for affect recognition from physiological sensors.
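One way to let an end-to-end model ingest raw signals sampled at different rates, as described above, is to give each modality its own 1-D CNN branch with global pooling and fuse the pooled features before the classifier. The sketch below (PyTorch) shows that pattern; the branch design, channel counts, and sampling rates are illustrative and not taken from the paper's ten architectures.

```python
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """1-D CNN branch for one sensor; global pooling makes the output length-independent,
    so each modality can keep its own sampling frequency."""
    def __init__(self, n_channels, n_filters=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, n_filters, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # (batch, n_filters, 1) regardless of signal length
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

class MultimodalAffectNet(nn.Module):
    """Late-fusion end-to-end model: one CNN branch per physiological modality."""
    def __init__(self, channel_counts, n_classes=2, n_filters=16):
        super().__init__()
        self.branches = nn.ModuleList([ModalityBranch(c, n_filters) for c in channel_counts])
        self.head = nn.Linear(n_filters * len(channel_counts), n_classes)

    def forward(self, signals):           # signals: list of (batch, channels, time) tensors
        feats = [b(x) for b, x in zip(self.branches, signals)]
        return self.head(torch.cat(feats, dim=1))

# toy batch: EEG at 128 Hz (32 ch), EDA at 4 Hz (1 ch), ECG at 256 Hz (1 ch), 10-second windows
eeg, eda, ecg = torch.randn(8, 32, 1280), torch.randn(8, 1, 40), torch.randn(8, 1, 2560)
model = MultimodalAffectNet(channel_counts=[32, 1, 1], n_classes=2)
print(model([eeg, eda, ecg]).shape)       # -> torch.Size([8, 2])
```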
Affiliation(s)
- Maciej Dzieżyc: Department of Computational Intelligence, Wrocław University of Science and Technology, 50-370 Wrocław, Poland; Faculty of Computer Science and Management, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
- Martin Gjoreski: Jožef Stefan Institute, 1000 Ljubljana, Slovenia; Jožef Stefan Postgraduate School, 1000 Ljubljana, Slovenia
- Przemysław Kazienko: Department of Computational Intelligence, Wrocław University of Science and Technology, 50-370 Wrocław, Poland; Faculty of Computer Science and Management, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
- Stanisław Saganowski: Department of Computational Intelligence, Wrocław University of Science and Technology, 50-370 Wrocław, Poland; Faculty of Computer Science and Management, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
- Matjaž Gams: Jožef Stefan Institute, 1000 Ljubljana, Slovenia; Jožef Stefan Postgraduate School, 1000 Ljubljana, Slovenia
11. Selecting transferrable neurophysiological features for inter-individual emotion recognition via a shared-subspace feature elimination approach. Comput Biol Med 2020;123:103875. PMID: 32658790. DOI: 10.1016/j.compbiomed.2020.103875.
Abstract
The interplay between human emotions, personality, and motivation results in individual-specific neurophysiological data distributions for the same emotional category. To address this issue when building an emotion recognition system based on electroencephalogram (EEG) features, we propose a shared-subspace feature elimination (SSFE) approach to identify EEG variables with common characteristics across multiple individuals. In the SSFE framework, a low-dimensional space defined by a selected number of EEG features is created to represent the inter-emotion discriminant for different pairs of subjects, evaluated based on the interclass margin. Using two public databases, DEAP and MAHNOB-HCI, the performance of the SSFE is validated according to the leave-one-subject-out paradigm and compared with five other feature-selection methods. The effectiveness and computational cost of the SSFE are investigated across six machine learning models with their optimal hyperparameters. The resulting binary classification accuracies of the SSFE for arousal and valence recognition are 0.6521 and 0.6635, respectively, on DEAP, and 0.6520 and 0.6537, respectively, on MAHNOB-HCI.
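The evaluation protocol named here, leave-one-subject-out, is easy to reproduce with scikit-learn's grouped cross-validation; the sketch below shows that protocol with a linear SVM on synthetic data. The SSFE selection step itself is not reproduced, and the data shapes are illustrative.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# toy stand-ins: 32 subjects x 40 trials, 160 EEG features, binary arousal labels
rng = np.random.default_rng(0)
X = rng.standard_normal((32 * 40, 160))
y = rng.integers(0, 2, size=32 * 40)
subjects = np.repeat(np.arange(32), 40)     # group label = subject id

# leave-one-subject-out: train on 31 subjects, test on the held-out one
logo = LeaveOneGroupOut()
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accs = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf.fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
print("mean LOSO accuracy:", np.mean(accs))
```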
12. Systematic Analysis of a Military Wearable Device Based on a Multi-Level Fusion Framework: Research Directions. Sensors (Basel) 2019;19:2651. PMID: 31212742. PMCID: PMC6631929. DOI: 10.3390/s19122651.
Abstract
With the development of the Internet of Battlefield Things (IoBT), soldiers have become key nodes of information collection and resource control on the battlefield. It has become a trend to develop wearable devices with diverse functions for the military. However, although densely deployed wearable sensors provide a platform for comprehensively monitoring the status of soldiers, wearable technology based on multi-source fusion lacks a generalized research system to highlight the advantages of heterogeneous sensor networks and information fusion. Therefore, this paper proposes a multi-level fusion framework (MLFF) based on Body Sensor Networks (BSNs) of soldiers, and describes a model of the deployment of heterogeneous sensor networks. The proposed framework covers multiple types of information at a single node, including behaviors, physiology, emotions, fatigue, environments, and locations, so as to enable Soldier-BSNs to obtain sufficient evidence, decision-making ability, and information resilience under resource constraints. In addition, we systematically discuss the problems and solutions of each unit according to the frame structure to identify research directions for the development of wearable devices for the military.
13. Campbell E, Phinyomark A, Scheme E. Feature Extraction and Selection for Pain Recognition Using Peripheral Physiological Signals. Front Neurosci 2019;13:437. PMID: 31133782. PMCID: PMC6513974. DOI: 10.3389/fnins.2019.00437.
Abstract
In pattern recognition, the selection of appropriate features is paramount to both the performance and the robustness of the system. Over-reliance on machine learning-based feature selection methods can therefore be problematic, especially when conducted using small snapshots of data. The results of these methods, if adopted without proper interpretation, can lead to sub-optimal system design or, worse, the abandonment of otherwise viable and important features. In this work, a deep exploration of pain-based emotion classification was conducted to better understand differences in the results of the related literature. In total, 155 different time-domain and frequency-domain features were explored, derived from electromyogram (EMG), skin conductance level (SCL), and electrocardiogram (ECG) readings taken from 85 subjects in response to heat-induced pain. To address the inconsistency in the optimal feature sets found in related works, an exhaustive and interpretable feature selection protocol was followed to obtain a generalizable feature set. Associations between features were then visualized using a topologically informed chart of this physiological feature space, called Mapper, including synthesis and comparison of results from the previous literature. This topological feature chart identified key sources of information that led to the formation of five main functional feature groups: signal amplitude and power, frequency information, nonlinear complexity, unique, and connecting. These functional groupings were used to extract further insight into observable autonomic responses to pain through a complementary statistical interaction analysis. From this chart, it was observed that EMG- and SCL-derived features could functionally replace those obtained from ECG. These insights motivate future work on novel sensing modalities, feature design, deep learning approaches, and dimensionality reduction techniques.
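For readers unfamiliar with the kind of time- and frequency-domain features surveyed above, the sketch below computes a handful of common ones (mean absolute value, RMS, waveform length, variance, and mean and median frequency from a Welch spectrum) for a single window; the paper's specific 155-feature set is not reproduced, and the sampling rate and window length are illustrative.

```python
import numpy as np
from scipy.signal import welch

def time_domain_features(x):
    """A few classic time-domain features used for EMG/SCL/ECG windows."""
    return {
        "mav": np.mean(np.abs(x)),             # mean absolute value
        "rms": np.sqrt(np.mean(x ** 2)),       # root mean square (power proxy)
        "wl": np.sum(np.abs(np.diff(x))),      # waveform length
        "var": np.var(x),
    }

def frequency_domain_features(x, fs):
    """Spectral features from a Welch power spectral density estimate."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    p = pxx / pxx.sum()
    mean_freq = np.sum(f * p)
    median_freq = f[np.searchsorted(np.cumsum(p), 0.5)]
    return {"mnf": mean_freq, "mdf": median_freq}

# toy 5-second EMG window sampled at 1 kHz
rng = np.random.default_rng(0)
emg = rng.standard_normal(5000)
feats = {**time_domain_features(emg), **frequency_domain_features(emg, fs=1000)}
print(feats)
```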
Affiliation(s)
- Evan Campbell: Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB, Canada; Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
- Angkoon Phinyomark: Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
- Erik Scheme: Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB, Canada; Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
14. Panicker SS, Gayathri P. A survey of machine learning techniques in physiology based mental stress detection systems. Biocybern Biomed Eng 2019. DOI: 10.1016/j.bbe.2019.01.004.
15. Recognition of Emotions Using Multichannel EEG Data and DBN-GC-Based Ensemble Deep Learning Framework. Comput Intell Neurosci 2018;2018:9750904. PMID: 30647727. PMCID: PMC6311795. DOI: 10.1155/2018/9750904.
Abstract
Fusing multichannel neurophysiological signals to recognize human emotion states is becoming increasingly attractive. Conventional methods ignore the complementarity between the time-domain, frequency-domain, and time-frequency characteristics of electroencephalogram (EEG) signals and cannot fully capture the correlation information between different channels. In this paper, an integrated deep learning framework based on improved deep belief networks with glia chains (DBN-GCs) is proposed. In the framework, member DBN-GCs extract intermediate representations of EEG raw features from multiple domains separately and mine interchannel correlation information via the glia chains. Then, the higher-level features describing the time-domain, frequency-domain, and time-frequency characteristics are fused by a discriminative restricted Boltzmann machine (RBM) to implement the emotion recognition task. Experiments conducted on the DEAP benchmark dataset achieve average accuracies of 75.92% and 76.83% for arousal and valence classification, respectively, outperforming most of the compared deep classifiers and demonstrating the potential of the proposed framework.
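The glia-chain extension and the discriminative RBM fusion are specific to the paper, but the underlying deep-belief-network idea can be sketched with scikit-learn's BernoulliRBM: stack two RBMs as unsupervised feature extractors and put a simple classifier on top. All sizes, hyperparameters, and data below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline

# toy EEG feature matrix: 400 trials x 64 features, binary arousal labels
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 64))
y = rng.integers(0, 2, size=400)

# two stacked RBMs form a small deep belief network; a discriminative layer sits on top
dbn = Pipeline([
    ("scale", MinMaxScaler()),      # RBMs expect inputs scaled to [0, 1]
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print("training accuracy:", dbn.score(X, y))
```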
16. EEG-Based Emotion Recognition Using Quadratic Time-Frequency Distribution. Sensors (Basel) 2018;18:2739. PMID: 30127311. PMCID: PMC6111567. DOI: 10.3390/s18082739.
Abstract
Accurate recognition and understanding of human emotions is an essential skill that can improve the collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is considered an active research field with challenging issues regarding the analysis of nonstationary EEG signals and the extraction of salient features that can be used to achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we utilize the 2D arousal-valence plane to develop four emotion labeling schemes of the EEG signals, such that each scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine classifiers to classify the EEG signals of each subject into the emotion classes defined by each of the four labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset. Moreover, we design three performance evaluation analyses, namely channel-based analysis, feature-based analysis, and neutral-class-exclusion analysis, to quantify the effects of utilizing different groups of EEG channels covering various brain regions, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class on the capability of the proposed approach to discriminate between different emotion classes. The results demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes: the average classification accuracies obtained in differentiating between the emotion classes defined by the four labeling schemes are within the range of 73.8%-86.2%, and the accuracies achieved by our approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
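As a minimal illustration of a quadratic TFD, the sketch below computes a basic (unsmoothed) discrete Wigner-Ville distribution of a short signal via the analytic signal and an FFT over the instantaneous autocorrelation; the particular QTFD, smoothing kernel, and 13 extended features used in the paper are not reproduced, and the test signal is illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Basic discrete Wigner-Ville distribution (a quadratic TFD) of a real 1-D signal.
    Rows index time samples; columns index frequency bins spanning roughly 0 to fs/2."""
    z = hilbert(x)                      # analytic signal suppresses negative-frequency terms
    n = len(z)
    tfd = np.zeros((n, n))
    for t in range(n):
        lmax = min(t, n - 1 - t)        # largest admissible lag at this time instant
        k = np.arange(-lmax, lmax + 1)
        acf = np.zeros(n, dtype=complex)    # instantaneous autocorrelation, zero-padded
        acf[k % n] = z[t + k] * np.conj(z[t - k])
        tfd[t] = np.fft.fft(acf).real
    return tfd

# toy example: a 1-second linear chirp sampled at 128 Hz
fs = 128
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (8 * t + 10 * t ** 2))   # instantaneous frequency sweeps 8 -> 28 Hz
tfd = wigner_ville(x)
print(tfd.shape)                                # (128, 128) time x frequency map
```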
17. Emotion Recognition Based on Weighted Fusion Strategy of Multichannel Physiological Signals. Comput Intell Neurosci 2018;2018:5296523. PMID: 30073024. PMCID: PMC6057426. DOI: 10.1155/2018/5296523.
Abstract
Emotion recognition is an important pattern recognition problem that has inspired researchers in several areas. Various types of human data have been used for emotion recognition, including visual, audio, and physiological signals. This paper proposes a decision-level weighted fusion strategy for emotion recognition from multichannel physiological signals. Firstly, we selected four kinds of physiological signals, including electroencephalography (EEG), electrocardiogram (ECG), respiration amplitude (RA), and galvanic skin response (GSR), and used various analysis domains for physiological emotion feature extraction. Secondly, we adopt a feedback strategy for weight definition, according to the recognition rate of each emotion for each physiological signal, obtained independently with a Support Vector Machine (SVM) classifier. Finally, we introduce the weights at the decision level by linearly fusing the weight matrix with the classification result of each SVM classifier. Experiments on the MAHNOB-HCI database show that the proposed strategy achieves the highest accuracy. The results also provide evidence and suggest a way for further developing a more specialized emotion recognition system based on multichannel data using a weighted fusion strategy.
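A generic version of the decision-level weighted fusion described above can be sketched as follows: train one SVM per modality, derive per-class weights from each modality's recognition rates, and linearly combine the weighted class probabilities. The weight definition, feature sizes, and data here are illustrative stand-ins, not the paper's exact scheme.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_classes = 300, 4
# toy feature blocks for four modalities (EEG, ECG, respiration, GSR); sizes are illustrative
modalities = {"eeg": rng.standard_normal((n, 32)), "ecg": rng.standard_normal((n, 12)),
              "ra": rng.standard_normal((n, 8)), "gsr": rng.standard_normal((n, 6))}
y = rng.integers(0, n_classes, size=n)
idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0, stratify=y)

probas, weights = {}, {}
for name, X in modalities.items():
    clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X[idx_tr], y[idx_tr])
    probas[name] = clf.predict_proba(X[idx_te])
    # per-class recognition rate on the training data acts as the feedback weight
    pred_tr = clf.predict(X[idx_tr])
    weights[name] = np.array([np.mean(pred_tr[y[idx_tr] == c] == c) for c in range(n_classes)])

# decision-level fusion: weight each modality's class probabilities by its per-class reliability
fused = sum(weights[m][None, :] * probas[m] for m in modalities)
y_pred = fused.argmax(axis=1)
print("fused accuracy:", np.mean(y_pred == y[idx_te]))
```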
18. Coverage of Emotion Recognition for Common Wearable Biosensors. Biosensors (Basel) 2018;8:30. PMID: 29587375. PMCID: PMC6023004. DOI: 10.3390/bios8020030.
Abstract
The present research proposes a novel emotion recognition framework for the computer prediction of human emotions using common wearable biosensors. Emotional perception promotes specific patterns of biological responses in the human body, and these can be sensed and used to predict emotions using only biomedical measurements. Based on theoretical and empirical psychophysiological research, the foundation of autonomic specificity provides a strong background for recognising human emotions by applying machine learning to physiological patterning. However, a systematic way of choosing the physiological data covering the elicited emotional responses for recognising the target emotions is not obvious. The current study demonstrates, through experimental measurements, the coverage of emotion recognition achievable with common off-the-shelf wearable biosensors, based on the synchronisation between audiovisual stimuli and the corresponding physiological responses. The work forms a basis for validating the emotional-state-recognition hypothesis in the literature and shows that common wearable biosensors, coupled with a novel preprocessing algorithm, enable practical prediction of wearers' emotional states.
19. Applicability of Emotion Recognition and Induction Methods to Study the Behavior of Programmers. Appl Sci (Basel) 2018. DOI: 10.3390/app8030323.
20. Goshvarpour A, Abbasi A, Goshvarpour A. Fusion of heart rate variability and pulse rate variability for emotion recognition using lagged Poincare plots. Australas Phys Eng Sci Med 2017;40:617-629. PMID: 28717902. DOI: 10.1007/s13246-017-0571-1.
Abstract
Designing an efficient automatic emotion recognition system based on physiological signals has attracted great interest in research on human-machine interaction. This study aimed to classify emotional responses by means of a simple dynamic signal processing technique and fusion frameworks. The electrocardiogram and finger pulse activity of 35 participants were recorded during a rest condition and while the subjects listened to music intended to stimulate certain emotions. Four emotion categories were chosen: happiness, sadness, peacefulness, and fear. From the heart rate variability (HRV) and pulse rate variability (PRV), four Poincare indices were extracted at 10 lags. A support vector machine classifier was used for emotion classification, and both feature-level (FL) and decision-level (DL) fusion schemes were examined. Significant differences were observed between the lag-1 Poincare indices and the other lagged measures. Mean accuracies of 84.1, 82.9, 79.68, and 76.05% were obtained for the PRV, DL, FL, and HRV measures, respectively. However, DL outperformed the others in discriminating sadness and peacefulness, using SD1 and the total feature set, respectively; in both cases the classification rates improved up to 92% (with a sensitivity of 95% and a specificity of 83.33%). Overall, DL resulted in better performance than FL, and the impact of the fusion rules on classification performance was confirmed.
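The two standard Poincare descriptors, SD1 and SD2, at an arbitrary lag can be computed directly from an RR (or pulse-to-pulse) interval series, as in the sketch below; the study's full set of four indices per lag and its fusion step are not reproduced, and the toy RR series is synthetic.

```python
import numpy as np

def lagged_poincare_sd(rr, lag=1):
    """SD1/SD2 of the Poincare plot built from RR(n) versus RR(n + lag).
    SD1 captures short-term variability (spread perpendicular to the identity line),
    SD2 long-term variability (spread along it)."""
    x, y = rr[:-lag], rr[lag:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)
    return sd1, sd2

# toy RR series (seconds); in practice these come from ECG R-peak or pulse-peak detection
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * np.sin(np.arange(300) / 10) + 0.02 * rng.standard_normal(300)

for lag in (1, 5, 10):
    sd1, sd2 = lagged_poincare_sd(rr, lag)
    print(f"lag {lag:2d}: SD1={sd1:.4f} s, SD2={sd2:.4f} s, SD1/SD2={sd1 / sd2:.3f}")
```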
Affiliation(s)
- Atefeh Goshvarpour: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, New Sahand Town, P.O. Box 51335/1996, Tabriz, Iran
- Ataollah Abbasi: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, New Sahand Town, P.O. Box 51335/1996, Tabriz, Iran
- Ateke Goshvarpour: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, New Sahand Town, P.O. Box 51335/1996, Tabriz, Iran
21. Yin Z, Wang Y, Liu L, Zhang W, Zhang J. Cross-Subject EEG Feature Selection for Emotion Recognition Using Transfer Recursive Feature Elimination. Front Neurorobot 2017;11:19. PMID: 28443015. PMCID: PMC5385370. DOI: 10.3389/fnbot.2017.00019.
Abstract
Using machine-learning methodologies to analyze EEG signals is becoming increasingly attractive for recognizing human emotions because of the objectivity of physiological data and the capability of learning principles to model emotion classifiers from heterogeneous features. However, conventional subject-specific classifiers may impose an additional burden on each subject, who must provide multiple sessions of EEG data as training sets. To this end, we developed a new EEG feature selection approach, transfer recursive feature elimination (T-RFE), to determine a set of the most robust EEG indicators with a stable geometrical distribution across a group of training subjects and a specific testing subject. A validation set is introduced to independently determine the optimal hyperparameter and the feature ranking of the T-RFE model, with the aim of controlling overfitting. The effectiveness of the T-RFE algorithm for this cross-subject emotion classification paradigm has been validated on the DEAP database. With a linear least-squares support vector machine classifier implemented, the performance of the T-RFE is compared against several conventional feature selection schemes, and a statistically significant improvement is found. The classification rate and F-score reach 0.7867 and 0.7526 for the arousal dimension and 0.7875 and 0.8077 for the valence dimension, outperforming several recently reported works on the same database. Finally, the T-RFE-based classifier is compared against two subject-generic classifiers from the literature; the investigation of computational time for all classifiers indicates that the accuracy improvement of the T-RFE comes at the cost of longer training time.
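Plain recursive feature elimination with a linear SVM, the starting point that T-RFE extends, is available off the shelf in scikit-learn; the sketch below shows it on synthetic EEG-like features. The transfer component (the validation subject and the shared ranking across subjects) is not reproduced, and all shapes and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# toy EEG feature matrix: 320 trials x 160 features, binary valence labels
rng = np.random.default_rng(0)
X = rng.standard_normal((320, 160))
w = np.zeros(160); w[:10] = 2.0              # only the first 10 features are informative
y = (X @ w + rng.standard_normal(320) > 0).astype(int)

# recursive feature elimination with a linear SVM ranks features by |weight|
selector = RFE(LinearSVC(C=0.1, max_iter=5000), n_features_to_select=20, step=5).fit(X, y)
X_sel = selector.transform(X)
print("kept features:", np.flatnonzero(selector.support_)[:10], "...")

# note: selection and evaluation share data here only for brevity; a proper study
# would nest the selection inside the cross-validation loop
print("CV accuracy, all features:", cross_val_score(LinearSVC(C=0.1, max_iter=5000), X, y, cv=5).mean())
print("CV accuracy, RFE subset  :", cross_val_score(LinearSVC(C=0.1, max_iter=5000), X_sel, y, cv=5).mean())
```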
Affiliation(s)
- Zhong Yin: Shanghai Key Lab of Modern Optical System, Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai, China
- Yongxiong Wang: Shanghai Key Lab of Modern Optical System, Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai, China
- Li Liu: Shanghai Key Lab of Modern Optical System, Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai, China
- Wei Zhang: Shanghai Key Lab of Modern Optical System, Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai, China
- Jianhua Zhang: Department of Automation, East China University of Science and Technology, Shanghai, China
22. Yin Z, Zhao M, Wang Y, Yang J, Zhang J. Recognition of emotions using multimodal physiological signals and an ensemble deep learning model. Comput Methods Programs Biomed 2017;140:93-110. PMID: 28254094. DOI: 10.1016/j.cmpb.2016.12.005.
Abstract
BACKGROUND AND OBJECTIVE: Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers may suffer from the lack of expertise required to determine the model structure and from the oversimplification of combining multimodal feature abstractions. METHODS: In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified based on a physiological-data-driven approach. Each SAE consists of three hidden layers to filter the unwanted noise in the physiological features and derive stable feature representations. An additional deep model is used to achieve the SAE ensembles. The physiological features are split into several subsets according to different feature extraction approaches, with each subset separately encoded by an SAE. The derived SAE abstractions are combined according to the physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. RESULTS: The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%. CONCLUSIONS: The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers has been demonstrated under different sizes of the available physiological instances.
Affiliation(s)
- Zhong Yin: Engineering Research Center of Optical Instrument and System, Ministry of Education, Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, PR China
- Mengyuan Zhao: School of Social Sciences, University of Shanghai for Science and Technology, Shanghai 200093, PR China
- Yongxiong Wang: Engineering Research Center of Optical Instrument and System, Ministry of Education, Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, PR China
- Jingdong Yang: Engineering Research Center of Optical Instrument and System, Ministry of Education, Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, Shanghai 200093, PR China
- Jianhua Zhang: Department of Automation, East China University of Science and Technology, Shanghai 200237, PR China
23. Goshvarpour A, Abbasi A, Goshvarpour A. Indices from lagged Poincare plots of heart rate variability: an efficient nonlinear tool for emotion discrimination. Australas Phys Eng Sci Med 2017;40:277-287. PMID: 28210990. DOI: 10.1007/s13246-017-0530-x.
Abstract
Interest in human emotion recognition from physiological signals has recently risen. In this study, an efficient emotion recognition system based on the geometrical analysis of autonomic nervous system signals is presented. Electrocardiogram recordings of 47 college students were obtained during a rest condition and during affective visual stimuli. Pictures with four emotional contents, including happiness, peacefulness, sadness, and fear, were selected. Then, Poincare plots at ten lags were constructed for heart rate variability (HRV) segments, and five geometrical indices were extracted for each lag. These features were fed into an automatic classification system for the recognition of the four affective states and the rest condition. The results showed that the Poincare plots have different shapes for different lags as well as for different affective states. At higher lags, the greatest increase in SD1 and decrease in SD2 occurred during the happiness stimuli; in contrast, the minimum changes in the Poincare measures were observed during the fear inducements. Therefore, the HRV geometrical shapes and dynamics were altered by the positive and negative values of the valence dimension of emotion. Using a probabilistic neural network, a maximum recognition rate of 97.45% was attained. The proposed methodology based on lagged Poincare indices thus provides a valuable tool for discriminating emotional states.
Affiliation(s)
- Ateke Goshvarpour: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, New Sahand Town, P.O. Box 51335/1996, Tabriz, Iran
- Ataollah Abbasi: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, New Sahand Town, P.O. Box 51335/1996, Tabriz, Iran
- Atefeh Goshvarpour: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, New Sahand Town, P.O. Box 51335/1996, Tabriz, Iran
24. Goshvarpour A, Abbasi A, Goshvarpour A. Gender Differences in Response to Affective Audio and Visual Inductions: Examination of Nonlinear Dynamics of Autonomic Signals. Biomed Eng Appl Basis Commun 2016. DOI: 10.4015/s1016237216500241.
Abstract
Physiological reflections of emotions can be tracked through autonomic signals. Several studies have used autonomic signal processing to examine differences between men and women during exposure to affective stimuli. Emotional pictures and music are two commonly used methods to induce affect in an experimental setup, but biological changes have usually been monitored under a single induction protocol. This study aimed to examine two induction paradigms involving auditory and visual cues using nonlinear dynamical approaches. To this end, various nonlinear parameters of the galvanic skin response (GSR) and pulse signals of men and women were examined. The nonlinear analysis was performed using lagged Poincare parameters, detrended fluctuation analysis (DFA) indices, Lyapunov exponents (LEs), several entropy measures, and recurrence quantification analysis (RQA). The Wilcoxon rank-sum test was used to identify significant differences between the groups. The results indicate that, beyond the type of affect induction, physiological differences between men and women are notable for negative emotions (sadness and fear). Regardless of the inducement, lagged Poincare parameters of the pulse signals and DFA indices of the GSR showed significant differences in gender affective responses, whereas with pictorial stimuli the LEs were appropriate indicators for gender discrimination. It is also concluded that GSR dynamics are strongly affected by the kind of stimulus, while this is not the case for the pulse. These findings suggest that different emotional inductions evoke different autonomic responses in men and women, which can be appropriately monitored using nonlinear signal processing approaches.
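Of the nonlinear measures listed above, detrended fluctuation analysis is compact enough to sketch in a few lines: integrate the signal, fit and remove a linear trend in windows of increasing size, and read the scaling exponent alpha from the log-log slope of the residual fluctuations. The window sizes and the surrogate data below are illustrative, not the study's settings.

```python
import numpy as np

def dfa_alpha(x, scales=None):
    """Detrended fluctuation analysis: returns the scaling exponent alpha,
    the slope of log F(s) versus log s."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                  # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(np.log10(16), np.log10(len(x) // 4), 12).astype(int))
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # remove a linear trend in each window, then collect the residual variance
        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2) for seg in segs]
        fluct.append(np.sqrt(np.mean(res)))      # F(s): RMS fluctuation at scale s
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# sanity check on surrogates: white noise gives alpha ~ 0.5, its cumulative sum ~ 1.5
rng = np.random.default_rng(0)
white = rng.standard_normal(4000)
print("white noise alpha:", round(dfa_alpha(white), 2))
print("random walk alpha:", round(dfa_alpha(np.cumsum(white)), 2))
```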
Affiliation(s)
- Atefeh Goshvarpour: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ataollah Abbasi: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour: Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran