1. Mari T, Henderson J, Ali SH, Hewitt D, Brown C, Stancak A, Fallon N. Machine learning and EEG can classify passive viewing of discrete categories of visual stimuli but not the observation of pain. BMC Neurosci 2023; 24:50. PMID: 37715119; PMCID: PMC10504739; DOI: 10.1186/s12868-023-00819-y.
Abstract
Previous studies have demonstrated the potential of machine learning (ML) to classify physical pain from non-pain states using electroencephalographic (EEG) data. However, applying ML to EEG data to categorise the observation of pain versus non-pain images, whether of human facial expressions or of scenes depicting pain being inflicted, has not been explored. The present study aimed to address this by training Random Forest (RF) models on cortical event-related potentials (ERPs) recorded while participants passively viewed faces displaying either pain or neutral expressions, as well as action scenes depicting pain or matched non-pain (neutral) scenarios. Ninety-one participants were recruited across three samples, which included a model development group (n = 40) and a cross-subject validation group (n = 51). Additionally, 25 participants from the model development group completed a second experimental session, providing a within-subject temporal validation sample. The analysis of ERPs revealed an enhanced N170 component in response to faces compared to action scenes. Moreover, an increased late positive potential (LPP) was observed during the viewing of pain scenes compared to neutral scenes, and an enhanced P3 response was found when participants viewed faces displaying pain expressions compared to neutral expressions. Subsequently, three RF models were developed to classify images as faces versus scenes, neutral versus pain scenes, and neutral versus pain expressions. The face-versus-scene model achieved classification accuracies of 75%, 64%, and 69% for cross-validation, cross-subject, and within-subject classification, respectively, along with reasonably calibrated predictions. However, the RF models were unable to classify pain versus neutral stimuli above chance levels for images from either category. These results expand upon previous findings by externally validating the use of ML to classify ERPs elicited by different categories of visual images, namely faces and scenes. They also indicate the limitations of ML in distinguishing pain from non-pain connotations using ERP responses to the passive viewing of visually similar images.
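As an illustrative aside (not the authors' code), the classification pipeline described in this abstract can be sketched with scikit-learn: single-trial ERP epochs are flattened into feature vectors, a Random Forest is evaluated with stratified cross-validation on the development group, and unseen participants stand in for the cross-subject validation sample. The data shapes, hyperparameters, and synthetic arrays below are assumptions.

```python
# Sketch: RF classification of single-trial ERPs (e.g., faces vs. scenes).
# Synthetic data and assumed dimensions stand in for the study's EEG dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_samples = 400, 64, 200            # assumed dimensions
X = rng.normal(size=(n_trials, n_electrodes * n_samples))   # flattened ERP epochs
y = rng.integers(0, 2, size=n_trials)                       # 0 = scene, 1 = face

rf = RandomForestClassifier(n_estimators=500, random_state=0)

# Within-sample estimate: stratified k-fold cross-validation on the development group.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=cv, scoring="accuracy").mean())

# Cross-subject estimate: fit on the development group, score on unseen participants.
X_new = rng.normal(size=(150, n_electrodes * n_samples))
y_new = rng.integers(0, 2, size=150)
rf.fit(X, y)
print("Cross-subject accuracy:", rf.score(X_new, y_new))
```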
Affiliation(s)
- Tyler Mari, Jessica Henderson, S Hasan Ali, Danielle Hewitt, Christopher Brown, Andrej Stancak, Nicholas Fallon: Department of Psychology, Institute of Population Health, University of Liverpool, 2.21 Eleanor Rathbone Building, Bedford Street South, Liverpool, L69 7ZA, UK
2. Dahal K, Bogue-Jimenez B, Doblas A. Global Stress Detection Framework Combining a Reduced Set of HRV Features and Random Forest Model. Sensors (Basel) 2023; 23:5220. PMID: 37299947; DOI: 10.3390/s23115220.
Abstract
Approximately 65% of the worldwide adult population has experienced stress that affected their daily routine at least once in the past year. Stress becomes harmful when it lasts too long or is continuous (i.e., chronic), interfering with performance, attention, and concentration. Chronic high stress contributes to major health issues such as heart disease, high blood pressure, diabetes, depression, and anxiety. Several researchers have focused on detecting stress by combining many features with machine/deep learning models. Despite these efforts, our community has not agreed on the number of features needed to identify stress conditions using wearable devices. In addition, most reported studies have focused on person-specific training and testing. Thanks to the broad acceptance of wearable wristband devices, this work investigates a global stress detection model that combines eight HRV features with a random forest (RF) algorithm. Whereas the model's performance is evaluated for each individual, the training of the RF model contains instances from all subjects (i.e., global training). We validated the proposed global stress model using two open-access databases (the WESAD and SWELL databases) and their combination. The eight HRV features with the highest classifying power are selected using the minimum redundancy maximum relevance (mRMR) method, reducing the training time of the global stress platform. The proposed global stress monitoring model identifies person-specific stress events with an accuracy higher than 99% after the global training framework. Future work should focus on testing this global stress monitoring framework in real-world applications.
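As a rough sketch of this pipeline (not the paper's implementation), the snippet below uses a simple greedy mRMR-style criterion (F-test relevance penalised by mean absolute correlation with already-selected features) to pick eight HRV features, then trains a single "global" Random Forest on data pooled across subjects. The feature table and labels are synthetic placeholders rather than the WESAD/SWELL data.

```python
# Sketch: greedy mRMR-style selection of 8 HRV features, then a global RF model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_windows, n_features = 2000, 24                 # assumed pooled HRV windows/features
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 2, size=n_windows)           # 0 = baseline, 1 = stress

def greedy_mrmr(X, y, k=8):
    """Pick k features maximising relevance (F-score) minus redundancy (mean |corr|)."""
    relevance, _ = f_classif(X, y)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected

features = greedy_mrmr(X, y, k=8)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
acc = cross_val_score(rf, X[:, features], y, cv=5, scoring="accuracy").mean()
print("Selected feature indices:", features, "| CV accuracy:", acc)
```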
Affiliation(s)
- Kamana Dahal, Brian Bogue-Jimenez, Ana Doblas: Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN 38152, USA
3. Amin M, Ullah K, Asif M, Shah H, Mehmood A, Khan MA. Real-World Driver Stress Recognition and Diagnosis Based on Multimodal Deep Learning and Fuzzy EDAS Approaches. Diagnostics (Basel) 2023; 13:1897. PMID: 37296750; PMCID: PMC10252378; DOI: 10.3390/diagnostics13111897.
Abstract
Mental stress is known to be a prime factor in road crashes, which often result in damage to people, vehicles, and infrastructure. Likewise, persistent mental stress can lead to the development of mental, cardiovascular, and abdominal disorders. Previous research in this domain has mostly focused on feature engineering and conventional machine learning approaches, which recognize different levels of stress based on handcrafted features extracted from various modalities, including physiological, physical, and contextual data. Acquiring good-quality features from these modalities through feature engineering is often difficult. Recent developments in deep learning (DL) have eased this burden by automatically extracting and learning robust features. This paper proposes CNN- and CNN-LSTM-based fusion models using physiological signals (SRAD dataset) and multimodal data (AffectiveROAD dataset) to classify drivers' stress at two and three levels. The fuzzy EDAS (evaluation based on distance from average solution) approach is used to evaluate the performance of the proposed models on different classification metrics (accuracy, recall, precision, F-score, and specificity). Fuzzy EDAS performance estimation shows that the proposed CNN and hybrid CNN-LSTM models achieved the first ranks based on the fusion of BH, E4-Left (E4-L), and E4-Right (E4-R) data. The results demonstrate the value of multimodal data for designing an accurate and trustworthy stress recognition and diagnosis model for real-world driving conditions. The proposed model can also be used to diagnose a subject's stress level during other daily-life activities.
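To illustrate the ranking idea only, the sketch below implements the crisp EDAS method (the paper uses a fuzzy variant): each candidate model's classification metrics are scored by their positive and negative distances from the column-wise average solution. The model names, metric values, and equal criterion weights are invented for the example.

```python
# Sketch of crisp EDAS ranking of candidate stress-recognition models by their
# classification metrics. Values and weights are illustrative assumptions.
import numpy as np

models = ["CNN (BH+E4-L+E4-R)", "CNN-LSTM (BH+E4-L+E4-R)", "CNN (BH only)"]
# Columns: accuracy, recall, precision, F-score, specificity (all benefit criteria).
X = np.array([
    [0.92, 0.91, 0.93, 0.92, 0.90],
    [0.94, 0.93, 0.94, 0.93, 0.92],
    [0.86, 0.84, 0.87, 0.85, 0.83],
])
w = np.full(X.shape[1], 1 / X.shape[1])     # equal criterion weights (assumed)

av = X.mean(axis=0)                          # average solution per criterion
pda = np.maximum(0, X - av) / av             # positive distance from average
nda = np.maximum(0, av - X) / av             # negative distance from average
sp, sn = (pda * w).sum(axis=1), (nda * w).sum(axis=1)
nsp = sp / sp.max()
nsn = 1 - sn / sn.max()
appraisal = (nsp + nsn) / 2                  # higher = better alternative

for name, score in sorted(zip(models, appraisal), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```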
Affiliation(s)
- Muhammad Amin: Department of Electronics, University of Peshawar, Peshawar 25120, Pakistan; Department of Computer Science, Iqra National University, Peshawar 25000, Pakistan
- Khalil Ullah: Department of Software Engineering, University of Malakand, Dir Lower, Chakdara 23050, Pakistan
- Muhammad Asif: Department of Electronics, University of Peshawar, Peshawar 25120, Pakistan
- Habib Shah: Department of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
- Arshad Mehmood: Department of Mechanical Engineering, University of Engineering & Technology, Peshawar 25120, Pakistan
4. Cheema A, Singh M, Kumar M, Setia G. Combined empirical mode decomposition and phase space reconstruction based psychologically stressed and non-stressed state classification from cardiac sound signals. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104585.
5. Liu K, Jiao Y, Du C, Zhang X, Chen X, Xu F, Jiang C. Driver Stress Detection Using Ultra-Short-Term HRV Analysis under Real World Driving Conditions. Entropy (Basel) 2023; 25:194. PMID: 36832561; PMCID: PMC9955749; DOI: 10.3390/e25020194.
Abstract
Considering that driving stress is a major contributor to traffic accidents, detecting drivers' stress levels in time helps to ensure driving safety. This paper investigates the ability of ultra-short-term (30-s, 1-min, 2-min, and 3-min) HRV analysis to detect driver stress under real driving conditions. Specifically, the t-test was used to investigate whether there were significant differences in HRV features under different stress levels. Ultra-short-term HRV features were compared with the corresponding short-term (5-min) features during low-stress and high-stress phases using Spearman rank correlation and Bland-Altman plot analysis. Furthermore, four machine-learning classifiers, including a support vector machine (SVM), random forests (RFs), K-nearest neighbor (KNN), and AdaBoost, were evaluated for stress detection. The results show that HRV features extracted from ultra-short-term epochs were able to accurately detect binary levels of driver stress. In particular, although the capability of HRV features to detect driver stress varied between the different ultra-short-term epochs, MeanNN, SDNN, NN20, and MeanHR were identified as valid surrogates of short-term features for driver stress detection across the epochs. For the classification of drivers' stress levels, the best performance was achieved with the SVM classifier, which reached an accuracy of 85.3% using 3-min HRV features. This study contributes to building a robust and effective stress detection system based on ultra-short-term HRV features under actual driving conditions.
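As a minimal sketch of the kind of pipeline described here (not the authors' code), the snippet below computes the four surrogate features named in the abstract (MeanNN, SDNN, NN20, MeanHR) from RR intervals in milliseconds for short epochs and feeds them to an SVM. The RR data, epoch lengths, and labels are synthetic assumptions.

```python
# Sketch: ultra-short-term HRV features (MeanNN, SDNN, NN20, MeanHR) per epoch,
# followed by a binary SVM stress classifier. Data are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hrv_features(rr_ms: np.ndarray) -> np.ndarray:
    """Return [MeanNN, SDNN, NN20, MeanHR] for one epoch of RR intervals (ms)."""
    mean_nn = rr_ms.mean()
    sdnn = rr_ms.std(ddof=1)
    nn20 = np.sum(np.abs(np.diff(rr_ms)) > 20)   # successive differences > 20 ms
    mean_hr = 60000.0 / mean_nn                  # beats per minute
    return np.array([mean_nn, sdnn, nn20, mean_hr])

rng = np.random.default_rng(2)
# Roughly 30-s epochs of RR intervals around 800 ms (synthetic).
epochs = [rng.normal(800, 50, size=rng.integers(30, 45)) for _ in range(300)]
X = np.vstack([hrv_features(e) for e in epochs])
y = rng.integers(0, 2, size=len(epochs))         # 0 = low stress, 1 = high stress

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```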
Affiliation(s)
- Kun Liu, Yubo Jiao, Xiaoming Zhang, Xiaoyu Chen, Chaozhe Jiang: School of Transportation & Logistics, Southwest Jiaotong University, Chengdu 610097, China
- Congcong Du: School of Mines, China University of Mining and Technology, Xuzhou 221116, China; Department of Aeronautical and Aviation Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China
- Fang Xu: Department of Purchase Management, Sichuan Tourism University, Chengdu 610100, China
6. Mari T, Asgard O, Henderson J, Hewitt D, Brown C, Stancak A, Fallon N. External validation of binary machine learning models for pain intensity perception classification from EEG in healthy individuals. Sci Rep 2023; 13:242. PMID: 36604453; PMCID: PMC9816165; DOI: 10.1038/s41598-022-27298-1.
Abstract
Discrimination of pain intensity using machine learning (ML) and electroencephalography (EEG) has significant potential for clinical applications, especially in scenarios where self-report is unsuitable. However, existing research is limited by a lack of external validation (assessing performance on novel data). We therefore aimed to conduct the first external validation study of pain intensity classification with EEG. Pneumatic pressure stimuli were delivered to the fingernail bed at high and low pain intensities during two independent EEG experiments with healthy participants. Study one (n = 25) was used for training and cross-validation. Study two (n = 15) was used for external validation one (identical stimulation parameters to study one) and external validation two (new stimulation parameters). Time-frequency features of peri-stimulus EEG were computed on a single-trial basis for all electrodes. ML training and analysis were performed on a subset of features, identified through feature selection, which were distributed across scalp electrodes and included frontal, central, and parietal regions. The results demonstrated that ML models outperformed chance. The Random Forest (RF) achieved the greatest accuracies: 73.18%, 68.32%, and 60.42% for cross-validation, external validation one, and external validation two, respectively. Importantly, this research is the first to externally validate ML and EEG for the classification of intensity during experimental pain, demonstrating promising performance that generalises to novel samples and paradigms. These findings offer the most rigorous estimates to date of ML's clinical potential for pain classification.
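The sketch below is an illustrative outline of this kind of design rather than the authors' pipeline: per-trial time-frequency features (here, spectrogram band power per electrode) feed a Random Forest, evaluated with cross-validation on one study and external validation on a second, held-out study. The sampling rate, frequency bands, array shapes, and synthetic EEG are all assumptions.

```python
# Sketch: single-trial time-frequency features (band power per electrode) for a
# Random Forest, with cross-validation on study one and external validation on
# study two. Synthetic data and assumed parameters only.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 500                                           # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

def trial_features(epoch: np.ndarray) -> np.ndarray:
    """epoch: (n_electrodes, n_samples) -> band power per electrode, flattened."""
    feats = []
    for channel in epoch:
        f, _, Sxx = spectrogram(channel, fs=FS, nperseg=128)
        power = Sxx.mean(axis=1)                   # average power over time bins
        feats.extend(power[(f >= lo) & (f < hi)].mean() for lo, hi in BANDS.values())
    return np.array(feats)

rng = np.random.default_rng(3)
study1 = rng.normal(size=(200, 62, 500))           # trials x electrodes x samples
study2 = rng.normal(size=(120, 62, 500))
y1, y2 = rng.integers(0, 2, 200), rng.integers(0, 2, 120)  # low vs. high pain

X1 = np.vstack([trial_features(e) for e in study1])
X2 = np.vstack([trial_features(e) for e in study2])

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("Cross-validation accuracy:", cross_val_score(rf, X1, y1, cv=5).mean())
print("External validation accuracy:", rf.fit(X1, y1).score(X2, y2))
```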
Affiliation(s)
- Tyler Mari, Oda Asgard, Jessica Henderson, Danielle Hewitt, Christopher Brown, Andrej Stancak, Nicholas Fallon: Department of Psychology, Institute of Population Health, University of Liverpool, 2.21 Eleanor Rathbone Building, Bedford Street South, Liverpool, L69 7ZA, UK
7. Robles D, Benchekroun M, Zalc V, Istrate D, Taramasco C. Stress Detection from Surface Electromyography using Convolutional Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3235-3238. PMID: 36086008; DOI: 10.1109/embc48229.2022.9871860.
Abstract
The study of stress and its implications has been a focus of interest in various fields of science. Automated and semi-automated stress detection systems based on physiological markers have gained enormous popularity and importance in recent years. Such involuntary physiological features offer distinct advantages in terms of reliability and accuracy and, combined with machine learning techniques, provide a promising basis for stress identification and modelling. In this study, we explore the use of convolutional neural networks (CNNs) for stress detection from surface electromyography (sEMG) signals of the trapezius muscle. One of the main advantages of this model is that it uses the raw sEMG signal without computed features, in contrast to classical machine learning algorithms. The proposed model achieved good results, with a 73% F1-score for multi-class classification and 82% for binary classification.
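A minimal Keras sketch of this idea, assuming a window length and layer configuration not taken from the paper, is shown below: a small 1D CNN consumes raw sEMG windows (no handcrafted features) and outputs a binary stress prediction. Synthetic signals stand in for the recorded data.

```python
# Sketch: a small 1D CNN classifying raw sEMG windows as stress vs. no stress.
# Window length, architecture, and synthetic signals are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW = 2000                                     # assumed samples per sEMG window
rng = np.random.default_rng(4)
X = rng.normal(size=(512, WINDOW, 1)).astype("float32")   # (windows, samples, 1 channel)
y = rng.integers(0, 2, size=512)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),        # binary stress output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2)
```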
8. Li Y, Li K, Wang S, Chen X, Wen D. Pilot Behavior Recognition Based on Multi-Modality Fusion Technology Using Physiological Characteristics. Biosensors (Basel) 2022; 12:404. PMID: 35735552; PMCID: PMC9221330; DOI: 10.3390/bios12060404.
Abstract
With the development of autopilot systems, the main task of a pilot has changed from controlling the aircraft to supervising the autopilot and making critical decisions. The human-machine interaction system therefore needs to be improved accordingly, and a key step is improving its understanding of the pilot's status, including fatigue, stress, and workload. Monitoring pilots' status can effectively prevent human error and achieve optimal human-machine collaboration, so there is a need to recognize pilots' status and predict the behaviors responsible for changes of state. For this purpose, in this study, 14 Air Force cadets flew an F-35 Lightning II Joint Strike Fighter simulator through a series of maneuvers involving takeoff, level flight, turn and hover, roll, somersault, and stall. Electrocardiogram (ECG), electromyography (EMG), galvanic skin response (GSR), respiration (RESP), and skin temperature (SKT) measurements were collected with wearable physiological data acquisition devices, and the physiological indicators influenced by the pilots' behavioral status were objectively analyzed. Multi-modality fusion technology (MTF) was adopted to fuse these data in the feature layer, and four classifiers were integrated in the strategy layer to identify pilots' behaviors. The results indicate that MTF can help to recognize pilot behavior in a more comprehensive and precise way.
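As an illustrative sketch only, feature-layer fusion can be mimicked by concatenating per-modality feature blocks (ECG, EMG, GSR, RESP, SKT) and the strategy-layer integration by a majority vote over several classifiers. The specific classifiers, feature counts, and synthetic data below are assumptions, not the study's configuration.

```python
# Sketch: feature-level fusion of five physiological modalities followed by a
# majority-vote ensemble of four classifiers (stand-in for strategy-layer fusion).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_segments = 600
# Per-modality feature blocks; feature-layer fusion = simple concatenation.
modalities = {"ECG": 12, "EMG": 8, "GSR": 6, "RESP": 5, "SKT": 3}
X = np.hstack([rng.normal(size=(n_segments, d)) for d in modalities.values()])
y = rng.integers(0, 6, size=n_segments)            # six maneuver classes (assumed labels)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ],
    voting="hard",                                  # strategy-layer majority vote
)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```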
Affiliation(s)
- Ke Li, Dongsheng Wen: National key Laboratory of Human Machine and Environment Engineering, School of Aeronautical Science and Engineering, Beihang University, Beijing 100191, China