1. Yang J, Wang Q, Dong X, Shen T. Synergistic integration of brain networks and time-frequency multi-view feature for sleep stage classification. Health Inf Sci Syst 2025; 13:15. PMID: 39802081; PMCID: PMC11723870; DOI: 10.1007/s13755-024-00328-0.
Abstract
For diagnosing mental health conditions and assessing sleep quality, the classification of sleep stages is essential. Although deep learning-based methods are effective in this field, they often fail to capture sufficient features or adequately synthesize information from various sources. For the purpose of improving the accuracy of sleep stage classification, our methodology includes extracting a diverse array of features from polysomnography signals, along with their transformed graph and time-frequency representations. We have developed specific feature extraction modules tailored for each distinct view. To efficiently integrate and categorize the features derived from these different perspectives, we propose a cross-attention fusion mechanism. This mechanism is designed to adaptively merge complex sleep features, facilitating a more robust classification process. More specifically, our strategy includes the development of an efficient fusion network with multi-view features for classifying sleep stages that incorporates brain connectivity and combines both temporal and spectral elements for sleep stage analysis. This network employs a systematic approach to extract spatio-temporal-frequency features and uses cross-attention to merge features from different views effectively. In the experiments we conducted on the ISRUC public datasets, we found that our approach outperformed other proposed methods. In the ablation experiments, there was also a 2% improvement over the baseline model. Our research indicates that multi-view feature fusion methods with a cross-attention mechanism have strong potential in sleep stage classification.
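The cross-attention fusion step described above can be illustrated with a short PyTorch sketch. This is a hypothetical, minimal version (module names, token counts, and dimensions are assumptions, not the authors' implementation): each view queries the other view's tokens, and the pooled results are concatenated for five-class staging.

```python
# Hypothetical sketch of cross-attention fusion between two feature views
# (e.g., a time-frequency view and a graph/brain-network view).
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # each view attends to the other view's tokens
        self.attn_a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 5)  # 5 sleep stages (W, N1, N2, N3, REM)

    def forward(self, view_a, view_b):
        # view_a: (batch, tokens_a, dim), view_b: (batch, tokens_b, dim)
        a_enriched, _ = self.attn_a2b(query=view_a, key=view_b, value=view_b)
        b_enriched, _ = self.attn_b2a(query=view_b, key=view_a, value=view_a)
        fused = torch.cat([a_enriched.mean(dim=1), b_enriched.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# example: one 30-s epoch represented by 20 time-frequency tokens and 10 graph tokens
logits = CrossViewFusion()(torch.randn(8, 20, 128), torch.randn(8, 10, 128))
```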
Affiliation(s)
- Jun Yang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, No.727 Jingming South Road, Kunming, 650504 Yunnan China
- Qichen Wang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, No.727 Jingming South Road, Kunming, 650504 Yunnan China
- Xiaoxing Dong
- First People’s Hospital of Yunnan Province, No.157 Jinbi Road, Kunming, 650032 Yunnan China
- Tao Shen
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, No.727 Jingming South Road, Kunming, 650504 Yunnan China
2. Deng G, Niu M, Rao S, Luo Y, Zhang J, Xie J, Yu Z, Liu W, Zhang J, Zhao S, Pan G, Li X, Deng W, Guo W, Zhang Y, Li T, Jiang H. A Unified Flexible Large Polysomnography Model for Sleep Staging and Mental Disorder Diagnosis. medRxiv [Preprint] 2025:2024.12.11.24318815. PMID: 39711704; PMCID: PMC11661386; DOI: 10.1101/2024.12.11.24318815.
Abstract
Sleep disorders affect billions worldwide, yet clinical polysomnography (PSG) analysis remains hindered by labor-intensive manual scoring and limited generalizability of automated sleep staging tools across heterogeneous protocols. We present LPSGM, a large-scale PSG model designed to address two critical challenges in sleep medicine: cross-center generalization and adaptable diagnosis of neuropsychiatric disorders. Trained on 220,500 hours of multi-center PSG data (24,000 full-night recordings from 16 public datasets), LPSGM integrates domain-adaptive pre-training, flexible channel configurations, and a unified architecture to mitigate variability in equipment, montages, and populations during sleep staging while enabling downstream fine-tuning for mental disorder detection. In prospective validation, LPSGM achieves expert-level consensus in sleep staging (κ = 0.845 ± 0.066 vs. inter-expert κ = 0.850 ± 0.102) and matches the performance of fully supervised models on two independent private cohorts. When fine-tuned, it attains 88.01% accuracy in narcolepsy detection and 100% accuracy in identifying major depressive disorder (MDD), highlighting shared physiological biomarkers between sleep architecture and neuropsychiatric symptoms. By bridging automated sleep staging with real-world clinical deployment, LPSGM establishes a scalable, data-efficient framework for integrated sleep and mental health diagnostics. The code and pre-trained model are publicly available at https://github.com/Deng-GuiFeng/LPSGM to advance reproducibility and translational research in sleep medicine.
3. Ren Z, Ma J, Ding Y. FlexibleSleepNet: A Model for Automatic Sleep Stage Classification Based on Multi-Channel Polysomnography. IEEE J Biomed Health Inform 2025; 29:3488-3501. PMID: 40030855; DOI: 10.1109/jbhi.2025.3525626.
Abstract
In the task of automatic sleep stage classification, deep learning models often face the challenge of balancing temporal-spatial feature extraction with computational complexity. To address this issue, this study introduces FlexibleSleepNet, a lightweight convolutional neural network architecture designed around the Adaptive Feature Extraction (AFE) Module and Scale-Varying Compression (SVC) Module. Through multi-channel polysomnography data input and preprocessing, FlexibleSleepNet utilizes the AFE Module to capture intra-channel features and employs the SVC Module for channel feature compression and dimension expansion. The collaborative work of these modules enables the network to effectively capture temporal-spatial dependencies between channels. Additionally, the network extracts feature maps through four distinct stages, each from different receptive field scales, culminating in precise sleep stage classification via a classification module. This study conducted k-fold cross-validation on three different databases: SleepEDF-20, SleepEDF-78, and SHHS. Experimental results show that FlexibleSleepNet demonstrates superior classification performance, achieving classification accuracies of 86.9% and 87.6% on the SleepEDF-20 and SHHS datasets, respectively. It performs particularly well on the SleepEDF-78 dataset, where it reaches a classification accuracy of 87.0%. Additionally, it has significantly enhanced computational efficiency while maintaining low computational complexity.
4. Sirpal P, Sikora WA, Refai HH. Multimodal sleep signal tensor decomposition and hidden Markov Modeling for temazepam-induced anomalies across age groups. J Neurosci Methods 2025; 416:110375. PMID: 39875078; DOI: 10.1016/j.jneumeth.2025.110375.
Abstract
BACKGROUND: Recent advances in multimodal signal analysis enable the identification of subtle drug-induced anomalies in sleep that traditional methods often miss.
NEW METHOD: We develop and introduce the Dynamic Representation of Multimodal Activity and Markov States (DREAMS) framework, which embeds explainable artificial intelligence (XAI) techniques to model hidden state transitions during sleep using tensorized EEG, EMG, and EOG signals from 22 subjects across three age groups (18-29, 30-49, and 50-66 years). By combining Tucker decomposition with probabilistic Hidden Markov Modeling, we quantified age-specific, temazepam-induced hidden states and significant differences in transition probabilities.
RESULTS: Jensen-Shannon Divergence (JSD) was employed to assess variability in hidden state transitions, with older subjects (50-66 years) under temazepam displaying heightened transition variability and network instability, as indicated by a 48.57% increase in JSD (from 0.35 to 0.52) and reductions in network density by 12.5% (from 0.48 to 0.42) and modularity by 21.88% (from 0.32 to 0.25). These changes reflect temazepam's disruptive impact on sleep architecture in older adults, aligning with known age-related declines in sleep stability and pharmacological sensitivity. In contrast, younger subjects exhibited lower divergence and retained relatively stable, cyclical transition patterns. Anomaly scores further quantified deviations in state transitions, with older subjects showing increased transition uncertainty and marked deviations in REM-like to NREM state transitions.
COMPARISON WITH EXISTING METHODS: This XAI-driven framework provides transparent, age-specific insights into temazepam's impact on sleep dynamics, going beyond traditional methods by identifying subtle, pharmacologically induced changes in sleep stage transitions that would otherwise be missed.
CONCLUSIONS: DREAMS supports the development of personalized interventions based on sleep transition variability across age groups, offering a powerful tool to understand temazepam's age-dependent effects on sleep architecture.
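As a concrete reference for the divergence measure quoted above, the following is a minimal numpy sketch of the Jensen-Shannon divergence between two hidden-state transition distributions; the input vectors are toy values, not study data.

```python
# Jensen-Shannon divergence (log base 2, bounded in [0, 1]) between two
# discrete transition distributions, e.g. rows of an HMM transition matrix
# under baseline vs. temazepam conditions.
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

baseline  = [0.70, 0.20, 0.10]   # toy transition probabilities from one hidden state
temazepam = [0.45, 0.30, 0.25]
print(jensen_shannon_divergence(baseline, temazepam))
```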
Affiliation(s)
- Parikshat Sirpal
- School of Electrical and Computer Engineering, Gallogly College of Engineering, University of Oklahoma, Norman, OK 73019, USA.
- William A Sikora
- Stephenson School of Biomedical Engineering, University of Oklahoma, Tulsa, OK 74135, USA
- Hazem H Refai
- School of Electrical and Computer Engineering, Gallogly College of Engineering, University of Oklahoma, Norman, OK 73019, USA
5. Nateghi M, Rahbar Alam M, Amiri H, Nasiri S, Sameni R. Model-Based Electroencephalogram Instantaneous Frequency Tracking: Application in Automated Sleep-Wake Stage Classification. Sensors (Basel) 2024; 24:7881. PMID: 39771620; PMCID: PMC11678959; DOI: 10.3390/s24247881.
Abstract
Understanding sleep stages is crucial for diagnosing sleep disorders, developing treatments, and studying sleep's impact on overall health. With the growing availability of affordable brain monitoring devices, the volume of collected brain data has increased significantly. However, analyzing these data, particularly when using the gold standard multi-lead electroencephalogram (EEG), remains resource-intensive and time-consuming. To address this challenge, automated brain monitoring has emerged as a crucial solution for cost-effective and efficient EEG data analysis. A critical component of sleep analysis is detecting transitions between wakefulness and sleep states. These transitions offer valuable insights into sleep quality and quantity, essential for diagnosing sleep disorders, designing effective interventions, enhancing overall health and well-being, and studying sleep's effects on cognitive function, mood, and physical performance. This study presents a novel EEG feature extraction pipeline for the accurate classification of various wake and sleep stages. We propose a noise-robust model-based Kalman filtering (KF) approach to track changes in a time-varying auto-regressive model (TVAR) applied to EEG data during different wake and sleep stages. Our approach involves extracting features, including instantaneous frequency and instantaneous power from EEG, and implementing a two-step classifier for sleep staging. The first step classifies data into wake, REM, and non-REM categories, while the second step further classifies non-REM data into N1, N2, and N3 stages. Evaluation on the extended Sleep-EDF dataset (Sleep-EDFx), with 153 EEG recordings from 78 subjects, demonstrated compelling results with classifiers including Logistic Regression, Support Vector Machines, Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LGBM). The best performance was achieved with the LGBM and XGBoost classifiers, yielding an overall accuracy of over 77%, a macro-averaged F1 score of 0.69, and a Cohen's kappa of 0.68, highlighting the efficacy of the proposed method with a remarkably compact and interpretable feature set.
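A rough numpy sketch of the core idea, Kalman-filter tracking of time-varying AR (TVAR) coefficients with an instantaneous-frequency estimate taken from the dominant pole, is given below. The model order, noise levels, and sampling rate are assumed values for illustration; the paper's exact formulation may differ.

```python
# Kalman-filter tracking of TVAR coefficients (random-walk state model) and a
# per-sample instantaneous frequency from the dominant AR pole.
import numpy as np

def tvar_kalman(y, order=6, q=1e-4, r=1e-1, fs=100.0):
    a = np.zeros(order)              # AR coefficient state
    P = np.eye(order)                # state covariance
    inst_freq = np.zeros(len(y))
    for t in range(order, len(y)):
        phi = y[t - order:t][::-1]               # regressor: p most recent samples
        P_pred = P + q * np.eye(order)           # random-walk prediction step
        e = y[t] - phi @ a                       # innovation
        S = phi @ P_pred @ phi + r
        K = P_pred @ phi / S                     # Kalman gain
        a = a + K * e
        P = P_pred - np.outer(K, phi) @ P_pred
        poles = np.roots(np.concatenate(([1.0], -a)))
        dom = poles[np.argmax(np.abs(poles))]    # dominant pole
        inst_freq[t] = np.abs(np.angle(dom)) * fs / (2 * np.pi)
    return inst_freq

# toy example: a 10 Hz "alpha-like" oscillation sampled at 100 Hz
t = np.arange(0, 10, 1 / 100.0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
print(tvar_kalman(eeg).mean())
```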
Affiliation(s)
- Masoud Nateghi
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA; (H.A.); (S.N.)
- Hossein Amiri
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA; (H.A.); (S.N.)
- Samaneh Nasiri
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA; (H.A.); (S.N.)
- Reza Sameni
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA; (H.A.); (S.N.)
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
6. Liu Z, Zhang Q, Luo S, Qin M. FPJA-Net: A Lightweight End-to-End Network for Sleep Stage Prediction Based on Feature Pyramid and Joint Attention. Interdiscip Sci 2024; 16:769-780. PMID: 39155326; DOI: 10.1007/s12539-024-00636-9.
Abstract
Sleep staging is the most crucial work before diagnosing and treating sleep disorders. Traditional manual sleep staging is time-consuming and depends on the skill of experts. Nowadays, automatic sleep staging based on deep learning attracts more and more scientific researchers. As we know, the salient waves in sleep signals contain the most important information for automatic sleep staging. However, the key information is not fully utilized in existing deep learning methods, since most of them only use CNNs or RNNs, which cannot capture multi-scale features in salient waves effectively. To tackle this limitation, we propose a lightweight end-to-end network for sleep stage prediction based on a feature pyramid and joint attention. The feature pyramid module is designed to effectively extract multi-scale features in salient waves, and these features are then fed to the joint attention module to closely attend to the channel and location information of the salient waves. The proposed network has much fewer parameters and significant performance improvement, which is better than the state-of-the-art results. The overall accuracy and macro F1 score are 90.1% and 87.8% on the public Sleep-EDF39 dataset, 87.4% and 84.4% on Sleep-EDF153, and 86.9% and 83.9% on SHHS, respectively. Ablation experiments confirm the effectiveness of each module.
Affiliation(s)
- Zhi Liu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China.
- Qinhan Zhang
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Sixin Luo
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
- Meiqiao Qin
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
7. Ma S, Zhang D, Wang J, Xie J. A class alignment network based on self-attention for cross-subject EEG classification. Biomed Phys Eng Express 2024; 11:015013. PMID: 39527843; DOI: 10.1088/2057-1976/ad90e8.
Abstract
Due to the inherent variability in EEG signals across different individuals, domain adaptation and adversarial learning strategies are being progressively utilized to develop subject-specific classification models by leveraging data from other subjects. These approaches primarily focus on domain alignment and tend to overlook the critical task-specific class boundaries. This oversight can result in weak correlation between the extracted features and categories. To address these challenges, we propose a novel model that uses the known information from multiple subjects to bolster EEG classification for an individual subject through adversarial learning strategies. Our method begins by extracting both shallow and attention-driven deep features from EEG signals. Subsequently, we employ a class discriminator to encourage the same-class features from different domains to converge while ensuring that the different-class features diverge. This is achieved using our proposed discrimination loss function, which is designed to minimize the feature distance for samples of the same class across different domains while maximizing it for those from different classes. Additionally, our model incorporates two parallel classifiers that are harmonious yet distinct and jointly contribute to decision-making. Extensive testing on two publicly available EEG datasets has validated our model's efficacy and superiority.
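The discrimination loss described above can be sketched as a contrastive-style objective in PyTorch. The function below is an assumption based on the abstract (pull same-class source/target features together, push different-class features apart with a margin), not the paper's exact loss.

```python
# Contrastive-style class-alignment loss between source- and target-domain features.
import torch

def discrimination_loss(src_feat, src_y, tgt_feat, tgt_y, margin=1.0):
    # src_feat: (Ns, D), tgt_feat: (Nt, D); labels are integer class indices
    dist = torch.cdist(src_feat, tgt_feat)               # (Ns, Nt) pairwise distances
    same = (src_y.unsqueeze(1) == tgt_y.unsqueeze(0)).float()
    pull = (same * dist.pow(2)).sum() / same.sum().clamp(min=1)
    push = ((1 - same) * torch.clamp(margin - dist, min=0).pow(2)).sum() \
           / (1 - same).sum().clamp(min=1)
    return pull + push                                    # minimize same-class, separate different-class

# toy usage with random 64-d features for 4 classes
loss = discrimination_loss(torch.randn(32, 64), torch.randint(0, 4, (32,)),
                           torch.randn(32, 64), torch.randint(0, 4, (32,)))
```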
Affiliation(s)
- Sufan Ma
- School of Science, Jimei University, Xiamen, People's Republic of China
- Dongxiao Zhang
- School of Science, Jimei University, Xiamen, People's Republic of China
- Jiayi Wang
- School of Science, Jimei University, Xiamen, People's Republic of China
- Jialiang Xie
- School of Science, Jimei University, Xiamen, People's Republic of China
8. Ma S, Zhang D. A Cross-Attention-Based Class Alignment Network for Cross-Subject EEG Classification in a Heterogeneous Space. Sensors (Basel) 2024; 24:7080. PMID: 39517978; PMCID: PMC11548574; DOI: 10.3390/s24217080.
Abstract
BACKGROUND: Domain adaptation (DA) techniques have emerged as a pivotal strategy in addressing the challenges of cross-subject classification. However, traditional DA methods are inherently limited by the assumption of a homogeneous space, requiring that the source and target domains share identical feature dimensions and label sets, which is often impractical in real-world applications. Therefore, effectively addressing the challenge of EEG classification under heterogeneous spaces has emerged as a crucial research topic.
METHODS: We present a comprehensive framework that addresses the challenges of heterogeneous spaces by implementing a cross-domain class alignment strategy. We innovatively construct a cross-encoder to effectively capture the intricate dependencies between data across domains. We also introduce a tailored class discriminator accompanied by a corresponding loss function. By optimizing the loss function, we facilitate the aggregation of features with corresponding classes between the source and target domains, while ensuring that features from non-corresponding classes are dispersed.
RESULTS: Extensive experiments were conducted on two publicly available EEG datasets. Compared to advanced methods that combine label alignment with transfer learning, our method demonstrated superior performance across five heterogeneous space scenarios. Notably, in four heterogeneous label space scenarios, our method outperformed the advanced methods by an average of 7.8%. Moreover, in complex scenarios involving both heterogeneous label spaces and heterogeneous feature spaces, our method outperformed the state-of-the-art methods by an average of 4.1%.
CONCLUSIONS: This paper presents an efficient model for cross-subject EEG classification under heterogeneous spaces, which significantly addresses the challenges of EEG classification within heterogeneous spaces, thereby opening up new perspectives and avenues for research in related fields.
Affiliation(s)
- Dongxiao Zhang
- School of Science, Jimei University, Xiamen 361000, China;
9. Jirakittayakorn N, Wongsawat Y, Mitrirattanakul S. An enzyme-inspired specificity in deep learning model for sleep stage classification using multi-channel PSG signals input: Separating training approach and its performance on cross-dataset validation for generalizability. Comput Biol Med 2024; 182:109138. PMID: 39305732; DOI: 10.1016/j.compbiomed.2024.109138.
Abstract
Numerous automatic sleep stage classification systems have been developed, but none have become effective assistive tools for sleep technicians due to issues with generalization. Four key factors hinder the generalization of these models: instruments, montage of recording, subject type, and scoring manual. This study aimed to develop a deep learning model that addresses generalization problems by integrating enzyme-inspired specificity and employing a separating training approach. Subject type and scoring manual factors were controlled, while the focus was on the instrument and montage of recording factors. The proposed model consists of three sets of signal-specific models, including EEG-, EOG-, and EMG-specific models. The EEG-specific models further include three sets of channel-specific models. All signal-specific and channel-specific models were established with data manipulation and weighted loss strategies, resulting in three sets of data manipulation models and class-specific models, respectively. These models were CNNs. Additionally, BiLSTM models were applied to the EEG- and EOG-specific models to obtain temporal information. Finally, the sleep stage classification task was handled by 'the-last-dense' layer. The optimal sampling frequency for each physiological signal was identified and used during the training process. The proposed model was trained on the MGH dataset and evaluated using both within-dataset and cross-dataset validation. For the MGH dataset, an overall accuracy of 81.05 %, MF1 of 79.05 %, Kappa of 0.7408, and per-class F1-scores of W (84.98 %), N1 (58.06 %), N2 (84.82 %), N3 (79.20 %), and REM (88.17 %) can be achieved. Performances on cross-datasets are as follows: SHHS1 (200 records) reached 79.54 %, 70.56 %, and 0.7078; SHHS2 (200 records) achieved 76.77 %, 66.30 %, and 0.6632; Sleep-EDF (153 records) gained 78.52 %, 72.13 %, and 0.7031; and BCI-MU (local dataset, 94 records) achieved 83.57 %, 82.17 %, and 0.7769 for overall accuracy, MF1, and Kappa, respectively. Additionally, the proposed model has approximately 9.3 M trainable parameters and takes around 26 s to process one PSG record. The results indicate that the proposed model demonstrates generalizability in sleep stage classification and shows potential as a feasible tool for real-world applications. Additionally, enzyme-inspired specificity effectively addresses the challenges posed by varying montages of recording, while the identified optimal frequencies mitigate instrument-related issues.
Affiliation(s)
- Yodchanan Wongsawat
- Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Thailand.
10. Philip Mulamoottil S, Vigneswaran T. A double-layered fully automated insomnia identification model employing synthetic data generation using MCSA and CTGAN with single-channel EEG signals. Sci Rep 2024; 14:23427. PMID: 39379545; PMCID: PMC11461835; DOI: 10.1038/s41598-024-74706-9.
Abstract
Insomnia was diagnosed by analyzing sleep stages obtained during polysomnography (PSG) recording. The state-of-the-art insomnia detection models that used physiological signals in PSG were successful in classification. However, the sleep stages of unbalanced data in small-time intervals were fed for classification in previous studies. This can be avoided by analyzing the insomnia detection structure in different frequency bands with artificially generated data from the existing one at the preprocessing and post-processing stages. Hence, the paper proposes a double-layered augmentation model using Modified Conventional Signal Augmentation (MCSA) and a Conditional Tabular Generative Adversarial Network (CTGAN) to generate synthetic signals from raw EEG and synthetic data from extracted features, respectively, in creating training data. The presented work is independent of sleep stage scoring and provides double-layered data protection with the utility of augmentation methods. It is ideally suited for real-time detection using a single-channel EEG, which provides better mobility and comfort while recording. The work analyzes each augmentation layer's performance individually, and better accuracy was observed when merging both. It also evaluates the augmentation performance in various frequency bands, which are decomposed using the discrete wavelet transform, and observes that the alpha band contributes more to detection. The classification is performed using Decision Tree (DT), Ensembled Bagged Decision Tree (EBDT), Gradient Boosting (GB), Random Forest (RF), and Stacking classifier (SC), attaining the highest classification accuracy of 94% using RF, with a greater Area Under Curve (AUC) value of 0.97 compared to existing works, making it best suited for small datasets.
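The wavelet band-decomposition step can be illustrated with pywt. The sketch below assumes a 100 Hz single-channel EEG and a db4 mother wavelet (both assumptions; the paper's settings may differ); at that rate the cD3 detail level spans roughly 6.25-12.5 Hz, i.e. approximately the alpha band highlighted above.

```python
# Discrete wavelet decomposition of one 30-s EEG epoch into approximate frequency bands.
import numpy as np
import pywt

fs = 100.0
t = np.arange(0, 30, 1 / fs)                       # one 30-s epoch
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(len(t))

coeffs = pywt.wavedec(eeg, 'db4', level=5)         # [cA5, cD5, cD4, cD3, cD2, cD1]
labels = ['cA5 (~0-1.6 Hz)', 'cD5 (~1.6-3.1 Hz)', 'cD4 (~3.1-6.25 Hz)',
          'cD3 (~6.25-12.5 Hz, alpha-ish)', 'cD2 (~12.5-25 Hz)', 'cD1 (~25-50 Hz)']
for name, c in zip(labels, coeffs):
    print(name, 'energy:', np.sum(c ** 2))
```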
Affiliation(s)
- T Vigneswaran
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, 600127, India.
11. Cui J, Sun Y, Jing H, Chen Q, Huang Z, Qi X, Cui H. A Novel Continuous Sleep State Artificial Neural Network Model Based on Multi-Feature Fusion of Polysomnographic Data. Nat Sci Sleep 2024; 16:769-786. PMID: 38894976; PMCID: PMC11182880; DOI: 10.2147/nss.s463897.
Abstract
Purpose: Sleep structure is crucial in sleep research, characterized by its dynamic nature and temporal progression. Traditional 30-second epochs falter in capturing the intricate subtleties of various micro-sleep states. This paper introduces an innovative artificial neural network model to generate a continuous sleep depth value (SDV), utilizing a novel multi-feature fusion approach with EEG data, seamlessly integrating temporal consistency.
Methods: The study involved 50 normal and 100 obstructive sleep apnea-hypopnea syndrome (OSAHS) participants. After segmenting the sleep data into 3-second intervals, a diverse array of 38 feature values were meticulously extracted, including power, spectrum entropy, frequency band duration and so on. An ensemble random forest model calculated the timing fitness value for all the features, from which the top 7 time-correlated features were selected to create detailed sleep sample values ranging from 0 to 1. Subsequently, an artificial neural network (ANN) model was trained to delineate sleep continuity details and unravel concealed patterns, going far beyond the traditional 5-stage categorization (W, N1, N2, N3, and REM).
Results: The SDV changes from the wakeful stage (mean 0.7021, standard deviation 0.2702) to stage N3 (mean 0.0396, standard deviation 0.0969). During arousal epochs, the SDV increases from the range (0.1 to 0.3) to around 0.7, and then decreases back below 0.3. In deep sleep (≤0.1), the probability of arousal of normal individuals is less than 10%, while the average arousal probability of OSA patients is close to 30%.
Conclusion: A sleep continuity model is proposed based on multi-feature fusion, which generates an SDV ranging from 0 to 1 (representing deep sleep to wakefulness). It can capture the nuances of the traditional five stages and subtle differences in the microstates of sleep, and can be considered a complement or even an alternative to traditional sleep analysis.
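The feature-selection step (ranking the 38 per-segment features with a random forest and keeping the top 7) can be sketched with scikit-learn. The feature matrix and regression target below are synthetic placeholders, not the study's data.

```python
# Rank 38 per-segment features with a random forest and keep the 7 most informative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 38))                                  # 38 features per 3-s EEG segment
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=2000)  # stand-in continuous target

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
top7 = np.argsort(rf.feature_importances_)[::-1][:7]             # indices of the 7 top-ranked features
print("selected feature indices:", top7)
```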
Affiliation(s)
- Jian Cui
- Department of Big Data and Fundamental Sciences, Shandong Institute of Petroleum and Chemical Technology, Dongying, Shandong, 257061, People’s Republic of China
- Yunliang Sun
- Department of Respiratory and Sleep Medicine, Bin Zhou Medical University Hospital, Binzhou, Shandong, 256600, People’s Republic of China
- Haifeng Jing
- College of Software and Microelectronics, Peking University, Beijing, 100000, People’s Republic of China
- Qiang Chen
- Department of Big Data and Fundamental Sciences, Shandong Institute of Petroleum and Chemical Technology, Dongying, Shandong, 257061, People’s Republic of China
- Zhihao Huang
- Department of Big Data and Fundamental Sciences, Shandong Institute of Petroleum and Chemical Technology, Dongying, Shandong, 257061, People’s Republic of China
- Xin Qi
- Department of Big Data and Fundamental Sciences, Shandong Institute of Petroleum and Chemical Technology, Dongying, Shandong, 257061, People’s Republic of China
- Hao Cui
- Department of Big Data and Fundamental Sciences, Shandong Institute of Petroleum and Chemical Technology, Dongying, Shandong, 257061, People’s Republic of China
12. Liang Y, Zhang C, An S, Wang Z, Shi K, Peng T, Ma Y, Xie X, He J, Zheng K. FetchEEG: a hybrid approach combining feature extraction and temporal-channel joint attention for EEG-based emotion classification. J Neural Eng 2024; 21:036011. PMID: 38701773; DOI: 10.1088/1741-2552/ad4743.
Abstract
Objective: Electroencephalogram (EEG) analysis has always been an important tool in neural engineering, and the recognition and classification of human emotions are one of the important tasks in neural engineering. EEG data, obtained from electrodes placed on the scalp, represent a valuable resource of information for brain activity analysis and emotion recognition. Feature extraction methods have shown promising results, but recent trends have shifted toward end-to-end methods based on deep learning. However, these approaches often overlook channel representations, and their complex structures pose certain challenges to model fitting.
Approach: To address these challenges, this paper proposes a hybrid approach named FetchEEG that combines feature extraction and temporal-channel joint attention. Leveraging the advantages of both traditional feature extraction and deep learning, FetchEEG adopts a multi-head self-attention mechanism to extract representations between different time moments and channels simultaneously. The joint representations are then concatenated and classified using fully-connected layers for emotion recognition. The performance of FetchEEG is verified by comparison experiments on a self-developed dataset and two public datasets.
Main results: In both subject-dependent and subject-independent experiments, FetchEEG demonstrates better performance and stronger generalization ability than the state-of-the-art methods on all datasets. Moreover, the performance of FetchEEG is analyzed for different sliding window sizes and overlap rates in the feature extraction module. The sensitivity of emotion recognition is investigated for three- and five-frequency-band scenarios.
Significance: FetchEEG is a novel hybrid method based on EEG for emotion classification, which combines EEG feature extraction with Transformer neural networks. It has achieved state-of-the-art performance on both self-developed datasets and multiple public datasets, with significantly higher training efficiency compared to end-to-end methods, demonstrating its effectiveness and feasibility.
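A minimal PyTorch sketch of the temporal-channel joint attention idea is shown below: per-window, per-channel hand-crafted features are treated as tokens and related by one multi-head self-attention layer. Dimensions, the number of frequency-band features, and the pooling are assumptions for illustration.

```python
# Joint temporal-channel self-attention over (time window, channel) feature tokens.
import torch
import torch.nn as nn

class TemporalChannelAttention(nn.Module):
    def __init__(self, n_feat=5, dim=64, heads=4, n_classes=3):
        super().__init__()
        self.embed = nn.Linear(n_feat, dim)                     # per-token feature embedding
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):            # x: (batch, windows, channels, n_feat)
        b, w, c, f = x.shape
        tokens = self.embed(x.reshape(b, w * c, f))             # joint time-channel tokens
        out, _ = self.attn(tokens, tokens, tokens)              # multi-head self-attention
        return self.head(out.mean(dim=1))                       # pooled emotion logits

logits = TemporalChannelAttention()(torch.randn(8, 10, 32, 5))  # 10 windows x 32 channels x 5 band features
```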
Affiliation(s)
- Yu Liang
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Chenlong Zhang
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Shan An
- JD Health International Inc., Beijing, People's Republic of China
- Zaitian Wang
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Kaize Shi
- University of Technology Sydney, Sydney, Australia
- Tianhao Peng
- Beihang University, Beijing, People's Republic of China
- Yuqing Ma
- Beihang University, Beijing, People's Republic of China
- Xiaoyang Xie
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Jian He
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Kun Zheng
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
13. Jirakittayakorn N, Wongsawat Y, Mitrirattanakul S. ZleepAnlystNet: a novel deep learning model for automatic sleep stage scoring based on single-channel raw EEG data using separating training. Sci Rep 2024; 14:9859. PMID: 38684765; PMCID: PMC11058251; DOI: 10.1038/s41598-024-60796-y.
Abstract
Numerous models for sleep stage scoring utilizing single-channel raw EEG signal have typically employed CNN and BiLSTM architectures. While these models, incorporating temporal information for sequence classification, demonstrate superior overall performance, they often exhibit low per-class performance for N1-stage, necessitating an adjustment of loss function. However, the efficacy of such adjustment is constrained by the training process. In this study, a pioneering training approach called separating training is introduced, alongside a novel model, to enhance performance. The developed model comprises 15 CNN models with varying loss function weights for feature extraction and 1 BiLSTM for sequence classification. Due to its architecture, this model cannot be trained using an end-to-end approach, necessitating separate training for each component using the Sleep-EDF dataset. Achieving an overall accuracy of 87.02%, MF1 of 82.09%, Kappa of 0.8221, and per-class F1-scores (W 90.34%, N1 54.23%, N2 89.53%, N3 88.96%, and REM 87.40%), our model demonstrates promising performance. Comparison with sleep technicians reveals a Kappa of 0.7015, indicating alignment with reference sleep stages. Additionally, cross-dataset validation and adaptation through training with the SHHS dataset yield an overall accuracy of 84.40%, MF1 of 74.96% and Kappa of 0.7785 when tested with the Sleep-EDF-13 dataset. These findings underscore the generalization potential in model architecture design facilitated by our novel training approach.
Affiliation(s)
- Nantawachara Jirakittayakorn
- Institute for Innovative Learning, Mahidol University, Nakhon Pathom, Thailand
- Faculty of Dentistry, Mahidol University, Bangkok, Thailand
- Yodchanan Wongsawat
- Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand
- Somsak Mitrirattanakul
- Department of Masticatory Science, Faculty of Dentistry, Mahidol University, Bangkok, Thailand.
14. Qin Y, Zhang W, Tao X. TBEEG: A Two-Branch Manifold Domain Enhanced Transformer Algorithm for Learning EEG Decoding. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1466-1476. PMID: 38526885; DOI: 10.1109/tnsre.2024.3380595.
Abstract
The electroencephalogram-based (EEG) brain-computer interface (BCI) has garnered significant attention in recent research. However, the practicality of EEG remains constrained by the lack of efficient EEG decoding technology. The challenge lies in effectively translating intricate EEG into meaningful, generalizable information. EEG signal decoding primarily relies on either time domain or frequency domain information. There is currently no method capable of simultaneously and effectively extracting both time and frequency domain features while efficiently fusing them. Addressing these limitations, a two-branch Manifold Domain enhanced transformer algorithm is designed to holistically capture EEG's spatio-temporal information. Our method projects the time-domain information of EEG signals into Riemannian space to fully decode the time dependence of EEG signals. Using the wavelet transform, the time domain information is converted into frequency domain information, and the spatial information contained in the frequency domain representation of the EEG signal is mined through the spectrogram. The effectiveness of the proposed TBEEG algorithm is validated on the BCIC-IV-2a and MAMEM-SSVEP-II datasets.
15. Kontras K, Chatzichristos C, Phan H, Suykens J, De Vos M. CoRe-Sleep: A Multimodal Fusion Framework for Time Series Robust to Imperfect Modalities. IEEE Trans Neural Syst Rehabil Eng 2024; 32:840-849. PMID: 38224506; DOI: 10.1109/tnsre.2024.3354388.
Abstract
Sleep abnormalities can have severe health consequences. Automated sleep staging, i.e. labelling the sequence of sleep stages from the patient's physiological recordings, could simplify the diagnostic process. Previous work on automated sleep staging has achieved great results, mainly relying on the EEG signal. However, often multiple sources of information are available beyond EEG. This can be particularly beneficial when the EEG recordings are noisy or even missing completely. In this paper, we propose CoRe-Sleep, a Coordinated Representation multimodal fusion network that is particularly focused on improving the robustness of signal analysis on imperfect data. We demonstrate how appropriately handling multimodal information can be the key to achieving such robustness. CoRe-Sleep tolerates noisy or missing modality segments, allowing training on incomplete data. Additionally, it shows state-of-the-art performance when testing on both multimodal and unimodal data using a single model on SHHS-1, the largest publicly available study that includes sleep stage labels. The results indicate that training the model on multimodal data does positively influence performance when tested on unimodal data. This work aims at bridging the gap between automated analysis tools and their clinical utility.
16. Ji X, Li Y, Wen P, Barua P, Acharya UR. MixSleepNet: A Multi-Type Convolution Combined Sleep Stage Classification Model. Comput Methods Programs Biomed 2024; 244:107992. PMID: 38218118; DOI: 10.1016/j.cmpb.2023.107992.
Abstract
BACKGROUND AND OBJECTIVE: Sleep staging is an essential step for sleep disorder diagnosis, which is time-intensive and laborious for experts to perform manually. Automatic sleep stage classification methods not only relieve experts of these demanding tasks but also enhance the accuracy and efficiency of the classification process.
METHODS: A novel multi-channel biosignal-based model, constructed by combining a 3D convolutional operation and a graph convolutional operation, is proposed for automated sleep staging using various physiological signals. Both the 3D convolution and the graph convolution can aggregate information from neighboring brain areas, which helps to learn intrinsic connections from the biosignals. Electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG) and electrocardiogram (ECG) signals are employed to extract time domain and frequency domain features. Subsequently, these signals are input to the 3D convolutional and graph convolutional branches, respectively. The 3D convolution branch can explore the correlations between multi-channel signals and multi-band waves in each channel in the time series, while the graph convolution branch can explore the connections between each channel and each frequency band. In this work, we have developed the proposed multi-channel convolution combined sleep stage classification model (MixSleepNet) using the ISRUC datasets (Subgroup 3 and 50 random samples from Subgroup 1).
RESULTS: Based on the first expert's labels, MixSleepNet yielded an accuracy, F1-score and Cohen kappa of 0.830, 0.821 and 0.782, respectively, for ISRUC-S3. It obtained an accuracy, F1-score and Cohen kappa of 0.812, 0.786, and 0.756, respectively, for the ISRUC-S1 dataset. According to the evaluations by the second expert, the overall accuracies, F1-scores, and Cohen kappa coefficients for the ISRUC-S3 and ISRUC-S1 datasets are 0.837, 0.820, 0.789, and 0.829, 0.791, 0.775, respectively.
CONCLUSION: The performance metrics of the proposed method are much better than those of all the compared models. Additional experiments were carried out on the ISRUC-S3 sub-dataset to evaluate the contribution of each module to the classification performance.
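A much-simplified PyTorch sketch of the two-branch idea follows (assumptions throughout: layer sizes, pooling, and a toy electrode adjacency), combining a 3D convolution over (channel, band, time) features with a basic graph convolution H' = ReLU(Â H W):

```python
# Two parallel branches: 3D convolution over a feature cube and a simple graph
# convolution over electrode node features, concatenated for 5-class staging.
import torch
import torch.nn as nn

class TwoBranchSketch(nn.Module):
    def __init__(self, n_nodes=10, n_feat=16, n_classes=5):
        super().__init__()
        self.conv3d = nn.Sequential(nn.Conv3d(1, 4, kernel_size=3, padding=1),
                                    nn.ReLU(), nn.AdaptiveAvgPool3d(1))
        self.gcn_w = nn.Linear(n_feat, 8, bias=False)
        self.head = nn.Linear(4 + 8, n_classes)

    def forward(self, cube, node_feat, adj):
        # cube: (batch, 1, channels, bands, time); node_feat: (batch, nodes, feat)
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(1).pow(-0.5))
        a_hat = d_inv_sqrt @ a_hat @ d_inv_sqrt                      # normalized adjacency
        g = torch.relu(a_hat @ self.gcn_w(node_feat)).mean(dim=1)    # graph branch
        c = self.conv3d(cube).flatten(1)                             # 3D-conv branch
        return self.head(torch.cat([c, g], dim=-1))

adj = (torch.rand(10, 10) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                                  # symmetric toy adjacency
out = TwoBranchSketch()(torch.randn(2, 1, 10, 5, 30), torch.randn(2, 10, 16), adj)
```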
Affiliation(s)
- Xiaopeng Ji
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD 4350, Australia.
- Yan Li
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD 4350, Australia.
- Peng Wen
- School of Engineering, University of Southern Queensland, Toowoomba, QLD 4350, Australia.
- Prabal Barua
- Cogninet Brain Team, Sydney, NSW 2010, Australia.
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD 4350, Australia.
17. Chandrasekharan S, Jacob JE, Cherian A, Iype T. Exploring recurrence quantification analysis and fractal dimension algorithms for diagnosis of encephalopathy. Cogn Neurodyn 2024; 18:133-146. PMID: 38406203; PMCID: PMC10881913; DOI: 10.1007/s11571-023-09929-z.
Abstract
Electroencephalography (EEG) is a crucial non-invasive medical tool for diagnosing the neurological disorder encephalopathy. There is a requirement for powerful signal processing algorithms, as EEG patterns in encephalopathies are not specific to a particular etiology. As visual examination and linear methods of EEG analysis are not sufficient to capture the subtle information regarding various neuropathologies, non-linear analysis methods can be employed to explore the dynamic, complex and chaotic nature of EEG signals. This work aims to identify and differentiate the patterns specific to cerebral dysfunctions associated with encephalopathy using Recurrence Quantification Analysis (RQA) and fractal dimension algorithms. This study analysed six RQA features, namely recurrence rate, determinism, laminarity, diagonal length, diagonal entropy and trapping time, and compared them with two fractal dimensions, namely Higuchi's and Katz's fractal dimension. Fractal dimensions were found to be lower for encephalopathy cases, showing decreased complexity when compared with normal healthy subjects. On the other hand, RQA features were found to be higher for encephalopathy cases, indicating higher recurrence and more periodic patterns in EEGs of encephalopathy compared to those of normal healthy controls. Feature reduction was then performed using Principal Component Analysis, and the reduced features were fed to three promising classifiers: SVM, Random Forest and Multi-layer Perceptron. The resultant system provides a practically realizable pipeline for the diagnosis of encephalopathy.
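For reference, Katz's fractal dimension, one of the two fractal measures compared above, can be computed per EEG epoch with a few lines of numpy; the test signals here are synthetic.

```python
# Katz's fractal dimension: FD = log10(n) / (log10(n) + log10(d / L)),
# where n is the number of steps, L the summed absolute first differences,
# and d the maximum distance from the first sample.
import numpy as np

def katz_fd(x):
    x = np.asarray(x, dtype=float)
    n = len(x) - 1                                  # number of steps
    L = np.sum(np.abs(np.diff(x)))                  # total "length" of the waveform
    d = np.max(np.abs(x - x[0]))                    # max distance from the first sample
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

rng = np.random.default_rng(1)
smooth = np.sin(np.linspace(0, 8 * np.pi, 1000))    # low-complexity signal
noisy = smooth + 0.5 * rng.standard_normal(1000)    # higher-complexity signal
print(katz_fd(smooth), katz_fd(noisy))              # the noisier signal yields the larger FD
```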
Affiliation(s)
- Jisu Elsa Jacob
- Department of Electronics and Communication Engineering, Sree Chitra Thirunal College of Engineering, Thiruvananthapuram, 695018 Kerala India
- Ajith Cherian
- Department of Neurology, SCTIMST, Thiruvananthapuram, Kerala India
- Thomas Iype
- Department of Neurology, Government Medical College, Thiruvananthapuram, Kerala India
18. Wang H, Jiang J, Gan JQ, Wang H. Motor Imagery EEG Classification Based on a Weighted Multi-Branch Structure Suitable for Multisubject Data. IEEE Trans Biomed Eng 2023; 70:3040-3051. PMID: 37186527; DOI: 10.1109/tbme.2023.3274231.
Abstract
OBJECTIVE: Electroencephalogram (EEG) signal recognition based on deep learning technology requires the support of sufficient data. However, training data scarcity usually occurs in subject-specific motor imagery tasks unless multisubject data can be used to enlarge the training data. Unfortunately, because of the large discrepancies between data distributions from different subjects, model performance could only be improved marginally or even worsened by simply training on multisubject data.
METHOD: This article proposes a novel weighted multi-branch (WMB) structure for handling multisubject data to solve the problem, in which each branch is responsible for fitting a pair of source-target subject data and adaptive weights are used to integrate all branches or select the branches with the largest weights to make the final decision. The proposed WMB structure was applied to six well-known deep learning models (EEGNet, Shallow ConvNet, Deep ConvNet, ResNet, MSFBCNN, and EEG_TCNet), and comprehensive experiments were conducted on the EEG datasets BCICIV-2a, BCICIV-2b, the high gamma dataset (HGD) and two supplementary datasets.
RESULT: Superior results against the state-of-the-art models have demonstrated the efficacy of the proposed method in subject-specific motor imagery EEG classification. For example, the proposed WMB_EEGNet achieved classification accuracies of 84.14%, 90.23%, and 97.81% on BCICIV-2a, BCICIV-2b and HGD, respectively.
CONCLUSION: It is clear that the proposed WMB structure is capable of making good use of multisubject data with large distribution discrepancies for subject-specific EEG classification.
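A schematic PyTorch sketch of the weighted multi-branch idea follows (assumed structure, not the authors' released code): one branch per source-target subject pair, with learnable softmax weights integrating the branch outputs.

```python
# Weighted multi-branch integration: per-branch logits combined with adaptive softmax weights.
import torch
import torch.nn as nn

class WeightedMultiBranch(nn.Module):
    def __init__(self, backbone_fn, n_branches, n_classes=4):
        super().__init__()
        self.branches = nn.ModuleList([backbone_fn() for _ in range(n_branches)])
        self.branch_weights = nn.Parameter(torch.zeros(n_branches))  # adaptive weights

    def forward(self, x):
        logits = torch.stack([b(x) for b in self.branches], dim=0)   # (branches, batch, classes)
        w = torch.softmax(self.branch_weights, dim=0).view(-1, 1, 1)
        return (w * logits).sum(dim=0)                               # weighted integration

# toy usage with a tiny linear "EEGNet stand-in" over flattened 22x256 epochs
make_branch = lambda: nn.Sequential(nn.Flatten(), nn.Linear(22 * 256, 4))
model = WeightedMultiBranch(make_branch, n_branches=8)
out = model(torch.randn(16, 22, 256))    # (16, 4) class logits
```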
19. Hangaragi S, Nizampatnam N, Kaliyaperumal D, Özer T. An evolutionary model for sleep quality analytics using fuzzy system. Proc Inst Mech Eng H 2023; 237:1215-1227. PMID: 37667998; DOI: 10.1177/09544119231195177.
Abstract
Electroencephalography (EEG) is a neural signal reflecting brain activity. These signals provide information about brain activity, eye movements, and muscle tone, which can be used to determine the sleep stage. Categorizing sleep stages can be done manually by visual inspection. Alternatively, automated algorithms can be developed using machine learning techniques to classify sleep stages based on signal features and patterns. This paper aims to automatically classify sleep stages based on extracted patterns from EEG signals. A fuzzy min-max neural network is proposed and implemented for sleep stage classification and clustering. The paper concludes that the fuzzy min-max neural network outperforms other tested methods in sleep stage classification. The models implemented in the study include K-Nearest Neighbor (KNN), Random Forest, Decision Tree, XGBoost, AdaBoost, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Convolutional Neural Network (CNN), and the fuzzy min-max classifier. The results indicate that the fuzzy classifier achieves the highest accuracy of 86%, followed by the CNN model with 81%. Among the classical machine learning algorithms, Random Forest achieves the highest accuracy of 55.46%, followed by XGBoost with 53.18%, surpassing the other algorithms used in the experiment. AdaBoost and Gaussian Naive Bayes both achieve an accuracy of 45.10%. Decision Tree, KNN, LDA, and QDA yield accuracies of 37.66%, 16.46%, 28.57%, and 29.5%, respectively. These findings demonstrate the efficiency of the fuzzy min-max neural network and the superiority of the fuzzy classifier and CNN models in sleep stage classification, indicating their potential for accurate automated sleep stage analysis.
Affiliation(s)
- Shivalila Hangaragi
- Department of Electronics & Communication Engineering, Amrita School of Engineering, Bengaluru-Amrita Vishwa Vidyapeetham, Bengaluru, Karnataka, India
- Neelima Nizampatnam
- Department of Electronics & Communication Engineering, Amrita School of Engineering, Bengaluru-Amrita Vishwa Vidyapeetham, Bengaluru, Karnataka, India
- Deepa Kaliyaperumal
- Department of Electrical & Electronics Engineering, Amrita School of Engineering, Bengaluru-Amrita Vishwa Vidyapeetham, Bengaluru, Karnataka, India
- Tolga Özer
- Department of Electrical & Electronics Engineering, Afyon Kocatepe University, Afyonkarahisar, Turkey
20. Li T, Gong Y, Lv Y, Wang F, Hu M, Wen Y. GAC-SleepNet: A dual-structured sleep staging method based on graph structure and Euclidean structure. Comput Biol Med 2023; 165:107477. PMID: 37717528; DOI: 10.1016/j.compbiomed.2023.107477.
Abstract
Sleep staging is a precondition for the diagnosis and treatment of sleep disorders. However, how to fully exploit the relationship between spatial features of the brain and sleep stages remains an important task. Many current classical algorithms only extract the characteristic information of the brain in Euclidean space without considering other spatial structures. In this study, a sleep staging network named GAC-SleepNet is designed. GAC-SleepNet uses the characteristic information in the dual structure of the graph structure and the Euclidean structure for the classification of sleep stages. In the graph structure, this study uses a graph convolutional neural network to learn the deep features of each sleep stage and converts the features in the topological structure into feature vectors by a multilayer perceptron. In the Euclidean structure, this study uses convolutional neural networks to learn the temporal features of sleep information and combines an attention mechanism to portray the connection between different sleep periods and EEG signals, while enhancing the description of global features to avoid local optima. In this study, the performance of the proposed network is evaluated on two public datasets. The experimental results show that the dual spatial structure captures more adequate and comprehensive information about sleep features and shows improvements across different evaluation metrics.
Affiliation(s)
- Tianxing Li
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
- Yulin Gong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China.
- Yudan Lv
- The Department of Neurology, First Hospital of Jilin University, Changchun, 130000, China
- Fatong Wang
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
- Mingjia Hu
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
- Yinke Wen
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
21. Yao W, Yao W, Wang J. Threshold distribution of equal states for quantitative amplitude fluctuations. Physiol Meas 2023; 44:095004. PMID: 37666257; DOI: 10.1088/1361-6579/acf6a6.
Abstract
Objective: The distribution of equal states (DES) quantifies amplitude fluctuations in biomedical signals. However, under certain conditions, such as a high resolution of data collection or special signal processing techniques, equal states may be very rare, whereupon the DES fails to measure the amplitude fluctuations.
Approach: To address this problem, we develop a novel threshold DES (tDES) that measures the distribution of differential states within a threshold. To evaluate the proposed tDES, we first analyze five sets of synthetic signals generated in different frequency bands. We then analyze sleep electroencephalography (EEG) datasets taken from the public PhysioNet.
Main results: Synthetic signals and detrend-filtered sleep EEGs have no neighboring equal values; however, tDES can effectively measure the amplitude fluctuations within these data. The tDES of EEG data increases significantly as the sleep stage increases, even with datasets covering very short periods, indicating decreased amplitude fluctuations in sleep EEGs. Generally speaking, the presence of more low-frequency components in a physiological series reflects smaller amplitude fluctuations and larger DES.
Significance: The tDES provides a reliable computing method for quantifying amplitude fluctuations, exhibiting the characteristics of conceptual simplicity and computational robustness. Our findings broaden the application of quantitative amplitude fluctuations and contribute to the classification of sleep stages based on EEG data.
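A loose numpy interpretation of the tDES idea, based only on the abstract (the paper's exact definition may differ): the proportion of lagged sample pairs whose amplitude difference falls within a threshold, here a fraction of the signal's standard deviation.

```python
# Share of "near-equal" states: lagged differences falling within a small threshold.
import numpy as np

def tdes(x, lag=1, threshold_ratio=0.05):
    x = np.asarray(x, dtype=float)
    diffs = np.abs(x[lag:] - x[:-lag])
    threshold = threshold_ratio * np.std(x)
    return np.mean(diffs <= threshold)

rng = np.random.default_rng(2)
slow = np.cumsum(rng.standard_normal(3000)) * 0.05   # low-frequency-dominated series
fast = rng.standard_normal(3000)                     # broadband series
print(tdes(slow), tdes(fast))   # slower fluctuations give a larger value, as described above
```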
Affiliation(s)
- Wenpo Yao
- School of Geographic and Biologic Information, Smart Health Big Data Analysis and Location Services Engineering Lab of Jiangsu Province, Nanjing University of Posts and Telecommunications, Nanjing 210023, People's Republic of China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, People's Republic of China
- Wenli Yao
- State Key Laboratory of Hydroscience and Engineering, Tsinghua University, Beijing 100084, People's Republic of China
- Jun Wang
- School of Geographic and Biologic Information, Smart Health Big Data Analysis and Location Services Engineering Lab of Jiangsu Province, Nanjing University of Posts and Telecommunications, Nanjing 210023, People's Republic of China
22. Toban G, Poudel K, Hong D. REM Sleep Stage Identification with Raw Single-Channel EEG. Bioengineering (Basel) 2023; 10:1074. PMID: 37760176; PMCID: PMC10525287; DOI: 10.3390/bioengineering10091074.
Abstract
This paper focuses on creating an interpretable model for automatic rapid eye movement (REM) and non-REM sleep stage scoring from a single-channel electroencephalogram (EEG). Whereas many methods hand-craft meaningful features to feed to a learning algorithm, this method lets the model extract interpretable information itself by providing a small number of time-invariant signal filters for five frequency ranges, implemented with five CNNs. A bidirectional GRU is applied to the CNN outputs to incorporate stage-transition information over time. Training and testing were run on the well-known Sleep-EDF-expanded database. The best results reached 97% accuracy, 93% precision, and 89% recall.
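A hedged PyTorch sketch of the general architecture described (one small 1-D CNN filter bank per frequency band, followed by a bidirectional GRU over consecutive epochs); the layer sizes, filter counts, and the assumption that band-limited inputs are supplied per epoch are illustrative, not the published model:

```python
import torch
import torch.nn as nn

class BandCNN(nn.Module):
    """One small time-invariant filter bank for a single frequency band."""
    def __init__(self, n_filters=8, kernel=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, n_filters, kernel, stride=8, padding=kernel // 2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )

    def forward(self, x):                  # x: (batch, 1, samples)
        return self.net(x).flatten(1)      # (batch, n_filters * 16)

class BandCNNBiGRU(nn.Module):
    """Five band-specific CNN branches + bidirectional GRU over epochs."""
    def __init__(self, n_bands=5, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList(BandCNN() for _ in range(n_bands))
        self.gru = nn.GRU(n_bands * 8 * 16, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                  # x: (batch, seq, n_bands, samples)
        b, s, nb, n = x.shape
        feats = []
        for i, branch in enumerate(self.branches):
            xi = x[:, :, i, :].reshape(b * s, 1, n)
            feats.append(branch(xi))
        h = torch.cat(feats, dim=1).reshape(b, s, -1)
        out, _ = self.gru(h)
        return self.head(out)              # per-epoch REM / non-REM logits

model = BandCNNBiGRU()
dummy = torch.randn(2, 10, 5, 3000)        # 2 recordings, 10 epochs, 5 bands, 30 s @ 100 Hz
print(model(dummy).shape)                  # torch.Size([2, 10, 2])
```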
Collapse
Affiliation(s)
- Gabriel Toban
- Computational & Data Science Ph.D. Program, Middle Tennessee State University, Murfreesboro, TN 37132, USA; (K.P.); (D.H.)
| | - Khem Poudel
- Computational & Data Science Ph.D. Program, Middle Tennessee State University, Murfreesboro, TN 37132, USA; (K.P.); (D.H.)
- Department of Computer Science, Middle Tennessee State University, Murfreesboro, TN 37132, USA
| | - Don Hong
- Computational & Data Science Ph.D. Program, Middle Tennessee State University, Murfreesboro, TN 37132, USA; (K.P.); (D.H.)
- Department of Mathematical Sciences, Middle Tennessee State University, Murfreesboro, TN 37132, USA
| |
Collapse
|
23
|
Ji X, Li Y, Wen P. 3DSleepNet: A Multi-Channel Bio-Signal Based Sleep Stages Classification Method Using Deep Learning. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3513-3523. [PMID: 37639413 DOI: 10.1109/tnsre.2023.3309542] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/31/2023]
Abstract
A novel multi-channel-based 3D convolutional neural network (3D-CNN) is proposed in this paper to classify sleep stages. Time domain, frequency domain, and time-frequency domain features are extracted from electroencephalography (EEG), electromyogram (EMG), and electrooculogram (EOG) channels and fed into the 3D-CNN model to classify sleep stages. Intrinsic connections among different bio-signals and different frequency bands in the time and time-frequency domains are learned by 3D convolutional layers, while the frequency relations are learned by 2D convolutional layers. Partial dot-product attention layers help this model find the most important channels and frequency bands in different sleep stages. A long short-term memory unit is added to learn the transition rules among neighboring epochs. Classification experiments were conducted using both the ISRUC-S3 dataset and the ISRUC-S1 sleep-disorder dataset. The experimental results showed that the overall accuracy reached 0.832, and the F1-score and Cohen's kappa reached 0.814 and 0.783, respectively, on ISRUC-S3, which is competitive with state-of-the-art baselines. The overall accuracy, F1-score, and Cohen's kappa on ISRUC-S1 reached 0.820, 0.797, and 0.768, respectively, which also demonstrates its generality on subjects with sleep disorders. Further experiments were conducted on an ISRUC-S3 subset to evaluate training time. The training time on 10 subjects from ISRUC-S3 with 8549 epochs is 4493 s, indicating a faster calculation speed than the existing high-performance graph convolutional networks and [Formula: see text]Net architecture algorithms.
Collapse
|
24
|
Shi X, Li B, Wang W, Qin Y, Wang H, Wang X. Classification Algorithm for Electroencephalogram-based Motor Imagery Using Hybrid Neural Network with Spatio-temporal Convolution and Multi-head Attention Mechanism. Neuroscience 2023; 527:64-73. [PMID: 37517788 DOI: 10.1016/j.neuroscience.2023.07.020] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 07/11/2023] [Accepted: 07/16/2023] [Indexed: 08/01/2023]
Abstract
Motor imagery (MI) is a brain-computer interface (BCI) technique in which specific brain regions are activated when people imagine their limbs (or muscles) moving, even without actual movement. The technology converts electroencephalogram (EEG) signals generated by the brain into computer-readable commands by measuring neural activity. Classification of motor imagery is one of the tasks in BCI. Researchers have done a lot of work on motor imagery classification, and the existing literature has relatively mature decoding methods for two-class motor tasks. However, as the categories of EEG-based motor imagery tasks increase, further exploration is needed for decoding research on four-class motor imagery tasks. In this study, we designed a hybrid neural network that combines spatiotemporal convolution and attention mechanisms. Specifically, the data is first processed by spatiotemporal convolution to extract features and then processed by a Multi-branch Convolution block. Finally, the processed data is input into the encoder layer of the Transformer for a self-attention calculation to obtain the classification results. Our approach was tested on the well-known MI datasets BCI Competition IV 2a and 2b, and the results show that the 2a dataset has a global average classification accuracy of 83.3% and a kappa value of 0.78. Experimental results show that the proposed method outperforms most of the existing methods.
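A minimal PyTorch sketch of the kind of pipeline described (temporal and spatial convolutions followed by a Transformer encoder and a classification head); the channel count, kernel sizes, and token pooling are assumptions for illustration, and the paper's multi-branch convolution block is omitted:

```python
import torch
import torch.nn as nn

class ConvTransformerMI(nn.Module):
    """Spatio-temporal convolution front end + Transformer encoder (sizes illustrative)."""
    def __init__(self, n_channels=22, n_classes=4, d_model=64):
        super().__init__()
        self.temporal = nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12))
        self.spatial = nn.Conv2d(16, d_model, kernel_size=(n_channels, 1))
        self.pool = nn.AvgPool2d(kernel_size=(1, 20), stride=(1, 20))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                       dim_feedforward=128, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, channels, samples)
        x = x.unsqueeze(1)                       # (batch, 1, channels, samples)
        x = torch.relu(self.temporal(x))
        x = torch.relu(self.spatial(x))          # (batch, d_model, 1, samples)
        x = self.pool(x).squeeze(2)              # (batch, d_model, tokens)
        x = self.encoder(x.transpose(1, 2))      # (batch, tokens, d_model)
        return self.head(x.mean(dim=1))          # class logits

model = ConvTransformerMI()
print(model(torch.randn(8, 22, 1000)).shape)     # torch.Size([8, 4])
```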
Collapse
Affiliation(s)
- Xingbin Shi
- The School of Electrical Engineering, Shanghai DianJi University, Shanghai, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, China
| | - Baojiang Li
- The School of Electrical Engineering, Shanghai DianJi University, Shanghai, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, China.
| | - Wenlong Wang
- The School of Electrical Engineering, Shanghai DianJi University, Shanghai, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, China
| | - Yuxin Qin
- The School of Electrical Engineering, Shanghai DianJi University, Shanghai, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, China
| | - Haiyan Wang
- The School of Electrical Engineering, Shanghai DianJi University, Shanghai, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, China
| | - Xichao Wang
- The School of Electrical Engineering, Shanghai DianJi University, Shanghai, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai, China
| |
Collapse
|
25
|
Wu Z, Tang X, Wu J, Huang J, Shen J, Hong H. Portable deep-learning decoder for motor imaginary EEG signals based on a novel compact convolutional neural network incorporating spatial-attention mechanism. Med Biol Eng Comput 2023; 61:2391-2404. [PMID: 37095297 DOI: 10.1007/s11517-023-02840-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Accepted: 04/13/2023] [Indexed: 04/26/2023]
Abstract
Due to high computational requirements, deep-learning decoders for motor imaginary (MI) electroencephalography (EEG) signals are usually implemented on bulky and heavy computing devices that are inconvenient for physical actions. To date, the application of deep-learning techniques in independent portable brain-computer-interface (BCI) devices has not been extensively explored. In this study, we proposed a high-accuracy MI EEG decoder by incorporating spatial-attention mechanism into convolution neural network (CNN), and deployed it on fully integrated single-chip microcontroller unit (MCU). After the CNN model was trained on workstation computer using GigaDB MI datasets (52 subjects), its parameters were then extracted and converted to build deep-learning architecture interpreter on MCU. For comparison, EEG-Inception model was also trained using the same dataset, and was deployed on MCU. The results indicate that our deep-learning model can independently decode imaginary left-/right-hand motions. The mean accuracy of the proposed compact CNN reaches 96.75 ± 2.41% (8 channels: Frontocentral3 (FC3), FC4, Central1 (C1), C2, Central-Parietal1 (CP1), CP2, C3, and C4), versus 76.96 ± 19.08% of EEG-Inception (6 channels: FC3, FC4, C1, C2, CP1, and CP2). To the best of our knowledge, this is the first portable deep-learning decoder for MI EEG signals. The findings demonstrate high-accuracy deep-learning decoding of MI EEG in a portable mode, which has great implications for hand-disabled patients. Our portable system can be used for developing artificial-intelligent wearable BCI devices, as it is less computationally expensive and convenient for real-life application.
Collapse
Affiliation(s)
- Zhanxiong Wu
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China.
| | - Xudong Tang
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Jinhui Wu
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Jiye Huang
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Jian Shen
- Neurosurgery Department, The First Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang University, Hangzhou, 310003, Zhejiang, China
| | - Hui Hong
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| |
Collapse
|
26
|
Lal U, Mathavu Vasanthsena S, Hoblidar A. Temporal Feature Extraction and Machine Learning for Classification of Sleep Stages Using Telemetry Polysomnography. Brain Sci 2023; 13:1201. [PMID: 37626557 PMCID: PMC10452545 DOI: 10.3390/brainsci13081201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 08/09/2023] [Accepted: 08/10/2023] [Indexed: 08/27/2023] Open
Abstract
Accurate sleep stage detection is crucial for diagnosing sleep disorders and tailoring treatment plans. Polysomnography (PSG) is considered the gold standard for sleep assessment since it captures a diverse set of physiological signals. While various studies have employed complex neural networks for sleep staging using PSG, our research emphasises the efficacy of a simpler and more efficient architecture. We aimed to integrate a diverse set of feature extraction measures with straightforward machine learning, potentially offering a more efficient avenue for sleep staging. We also aimed to conduct a comprehensive comparative analysis of feature extraction measures, including the power spectral density, Higuchi fractal dimension, singular value decomposition entropy, permutation entropy, and detrended fluctuation analysis, coupled with several machine-learning models, including XGBoost, Extra Trees, Random Forest, and LightGBM. Furthermore, data augmentation methods like the Synthetic Minority Oversampling Technique were also employed to rectify the inherent class imbalance in sleep data. The subsequent results highlighted that the XGBoost classifier, when used with a combination of all feature extraction measures as an ensemble, achieved the highest performance, with accuracies of 87%, 90%, 93%, 96%, and 97% and average F1-scores of 84.6%, 89%, 90.33%, 93.5%, and 93.5% for distinguishing between five-stage, four-stage, three-stage, and two distinct two-stage sleep configurations, respectively. This combined feature extraction technique represents a novel addition to the body of research since it achieves higher performance than many recently developed deep neural networks by utilising simpler machine-learning models.
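A hedged sketch of the overall recipe (hand-crafted features, SMOTE rebalancing, and a gradient-boosted classifier) on synthetic epochs; the feature set is reduced here to Welch band powers plus a simple permutation entropy, and the toy data and hyperparameters are assumptions rather than the study's configuration:

```python
import numpy as np
from math import factorial
from scipy.signal import welch
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

def band_powers(epoch, fs=100):
    """Relative power in the delta/theta/alpha/beta bands from Welch's PSD."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
    out = []
    for lo, hi in [(0.5, 4), (4, 8), (8, 13), (13, 30)]:
        mask = (freqs >= lo) & (freqs < hi)
        out.append(psd[mask].sum() / psd.sum())
    return out

def permutation_entropy(epoch, order=3):
    """Normalised permutation entropy of ordinal patterns of length `order`."""
    patterns = np.array([np.argsort(epoch[i:i + order])
                         for i in range(len(epoch) - order + 1)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(order)))

rng = np.random.default_rng(0)
fs, n_epochs = 100, 150
X_raw = rng.standard_normal((n_epochs, 30 * fs))                  # toy 30 s epochs
y = rng.choice(5, size=n_epochs, p=[0.1, 0.1, 0.15, 0.45, 0.2])   # imbalanced stages

X = np.array([band_powers(e, fs) + [permutation_entropy(e)] for e in X_raw])
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)           # rebalance the classes
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss")
print(cross_val_score(clf, X_bal, y_bal, cv=3).mean())
```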
Collapse
Affiliation(s)
- Utkarsh Lal
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India;
| | - Suhas Mathavu Vasanthsena
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India;
| | - Anitha Hoblidar
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India;
| |
Collapse
|
27
|
Zhang D, Li H, Xie J. MI-CAT: A transformer-based domain adaptation network for motor imagery classification. Neural Netw 2023; 165:451-462. [PMID: 37336030 DOI: 10.1016/j.neunet.2023.06.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 04/03/2023] [Accepted: 06/02/2023] [Indexed: 06/21/2023]
Abstract
Due to its convenience and safety, electroencephalography (EEG) data is one of the most widely used signals in motor imagery (MI) brain-computer interfaces (BCIs). In recent years, methods based on deep learning have been widely applied to the field of BCIs, and some studies have gradually tried to apply Transformer to EEG signal decoding due to its superior global information focusing ability. However, EEG signals vary from subject to subject. Based on Transformer, how to effectively use data from other subjects (source domain) to improve the classification performance of a single subject (target domain) remains a challenge. To fill this gap, we propose a novel architecture called MI-CAT. The architecture innovatively utilizes Transformer's self-attention and cross-attention mechanisms to interact features to resolve differential distribution between different domains. Specifically, we adopt a patch embedding layer for the extracted source and target features to divide the features into multiple patches. Then, we comprehensively focus on the intra-domain and inter-domain features by stacked multiple Cross-Transformer Blocks (CTBs), which can adaptively conduct bidirectional knowledge transfer and information exchange between domains. Furthermore, we also utilize two non-shared domain-based attention blocks to efficiently capture domain-dependent information, optimizing the features extracted from the source and target domains to assist in feature alignment. To evaluate our method, we conduct extensive experiments on two real public EEG datasets, Dataset IIb and Dataset IIa, achieving competitive performance with an average classification accuracy of 85.26% and 76.81%, respectively. Experimental results demonstrate that our method is a powerful model for decoding EEG signals and facilitates the development of the Transformer for brain-computer interfaces (BCIs).
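The bidirectional cross-attention idea can be sketched with PyTorch's nn.MultiheadAttention, letting source-domain and target-domain feature tokens attend to each other; this is a simplified stand-in for the paper's Cross-Transformer Block, with dimensions chosen arbitrarily:

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Bidirectional cross-attention between source- and target-domain tokens."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.s2t = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.t2s = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_s = nn.LayerNorm(d_model)
        self.norm_t = nn.LayerNorm(d_model)

    def forward(self, src_tokens, tgt_tokens):
        # Target queries attend to source keys/values, and vice versa
        tgt_upd, _ = self.t2s(tgt_tokens, src_tokens, src_tokens)
        src_upd, _ = self.s2t(src_tokens, tgt_tokens, tgt_tokens)
        return self.norm_s(src_tokens + src_upd), self.norm_t(tgt_tokens + tgt_upd)

block = CrossAttentionBlock()
src = torch.randn(8, 16, 64)   # (batch, patches, d_model) from source subjects
tgt = torch.randn(8, 16, 64)   # from the target subject
src_out, tgt_out = block(src, tgt)
print(src_out.shape, tgt_out.shape)
```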
Collapse
Affiliation(s)
- Dongxue Zhang
- Jilin University, College of Computer Science and Technology, Changchun, Jilin Province, China; Key Laboratory of Symbol Computation and Knowledge Engineering, Jilin University, Changchun 130012, China.
| | - Huiying Li
- Jilin University, College of Computer Science and Technology, Changchun, Jilin Province, China; Key Laboratory of Symbol Computation and Knowledge Engineering, Jilin University, Changchun 130012, China.
| | - Jingmeng Xie
- Xi'an Jiaotong University, College of Electronic information, Xi'an, Shanxi Province, China.
| |
Collapse
|
28
|
Wang DX, Ng N, Seger SE, Ekstrom AD, Kriegel JL, Lega BC. Machine learning classifiers for electrode selection in the design of closed-loop neuromodulation devices for episodic memory improvement. Cereb Cortex 2023; 33:8150-8163. [PMID: 36997155 PMCID: PMC10321120 DOI: 10.1093/cercor/bhad105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 03/04/2023] [Accepted: 03/05/2023] [Indexed: 04/01/2023] Open
Abstract
Successful neuromodulation approaches to alter episodic memory require closed-loop stimulation predicated on the effective classification of brain states. The practical implementation of such strategies requires prior decisions regarding electrode implantation locations. Using a data-driven approach, we employ support vector machine (SVM) classifiers to identify high-yield brain targets on a large data set of 75 human intracranial electroencephalogram subjects performing the free recall (FR) task. Further, we address whether the conserved brain regions provide effective classification in an alternate (associative) memory paradigm along with FR, as well as testing unsupervised classification methods that may be a useful adjunct to clinical device implementation. Finally, we use random forest models to classify functional brain states, differentiating encoding versus retrieval versus non-memory behavior such as rest and mathematical processing. We then test how regions that exhibit good classification for the likelihood of recall success in the SVM models overlap with regions that differentiate functional brain states in the random forest models. Finally, we lay out how these data may be used in the design of neuromodulation devices.
Collapse
Affiliation(s)
- David X Wang
- Department of Neurosurgery, The University of Texas – Southwestern Medical Center, Dallas, Texas 75390, United States
| | - Nicole Ng
- Department of Neurosurgery, The University of Texas – Southwestern Medical Center, Dallas, Texas 75390, United States
| | - Sarah E Seger
- Department of Neuroscience, University of Arizona, Tucson, Arizona 85721, United States
| | - Arne D Ekstrom
- Department of Neuroscience, University of Arizona, Tucson, Arizona 85721, United States
- Department of Psychology, University of Arizona, Tucson, Arizona 85721, United States
| | - Jennifer L Kriegel
- Department of Neurosurgery, The University of Texas – Southwestern Medical Center, Dallas, Texas 75390, United States
| | - Bradley C Lega
- Department of Neurosurgery, The University of Texas – Southwestern Medical Center, Dallas, Texas 75390, United States
| |
Collapse
|
29
|
Li X, Ono C, Warita N, Shoji T, Nakagawa T, Usukura H, Yu Z, Takahashi Y, Ichiji K, Sugita N, Kobayashi N, Kikuchi S, Kimura R, Hamaie Y, Hino M, Kunii Y, Murakami K, Ishikuro M, Obara T, Nakamura T, Nagami F, Takai T, Ogishima S, Sugawara J, Hoshiai T, Saito M, Tamiya G, Fuse N, Fujii S, Nakayama M, Kuriyama S, Yamamoto M, Yaegashi N, Homma N, Tomita H. Comprehensive evaluation of machine learning algorithms for predicting sleep-wake conditions and differentiating between the wake conditions before and after sleep during pregnancy based on heart rate variability. Front Psychiatry 2023; 14:1104222. [PMID: 37415686 PMCID: PMC10322181 DOI: 10.3389/fpsyt.2023.1104222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Accepted: 05/19/2023] [Indexed: 07/08/2023] Open
Abstract
Introduction: Perinatal women tend to have difficulties with sleep along with autonomic characteristics. This study aimed to identify a machine learning algorithm capable of achieving high accuracy in predicting sleep-wake conditions and differentiating between the wake conditions before and after sleep during pregnancy based on heart rate variability (HRV). Methods: Nine HRV indicators (features) and sleep-wake conditions of 154 pregnant women were measured for 1 week, from the 23rd to the 32nd weeks of pregnancy. Ten machine learning and three deep learning methods were applied to predict three types of sleep-wake conditions (wake, shallow sleep, and deep sleep). In addition, the prediction of four conditions, in which the wake conditions before and after sleep were differentiated (shallow sleep, deep sleep, and the two types of wake conditions), was also tested. Results and Discussion: In the test for predicting three types of sleep-wake conditions, most of the algorithms, except for Naïve Bayes, showed higher areas under the curve (AUCs; 0.82-0.88) and accuracy (0.78-0.81). The test using four types of sleep-wake conditions with differentiation between the wake conditions before and after sleep also resulted in successful prediction by the gated recurrent unit, with the highest AUC (0.86) and accuracy (0.79). Among the nine features, seven made major contributions to predicting sleep-wake conditions. Among these seven features, "the number of interval differences of successive RR intervals greater than 50 ms (NN50)" and "the proportion dividing NN50 by the total number of RR intervals (pNN50)" were useful for predicting sleep-wake conditions unique to pregnancy. These findings suggest alterations in the vagal tone system specific to pregnancy.
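A minimal sketch of the HRV feature side of such a pipeline, computing NN50 and pNN50 (among other time-domain indicators) from RR intervals and training a generic classifier on synthetic wake-like versus sleep-like windows; the features, toy data, and the gradient-boosting stand-in are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def hrv_features(rr_ms):
    """A few time-domain HRV features from an array of RR intervals (ms)."""
    diffs = np.diff(rr_ms)
    nn50 = np.sum(np.abs(diffs) > 50)
    return [
        np.mean(rr_ms),                      # mean RR
        np.std(rr_ms),                       # SDNN
        np.sqrt(np.mean(diffs ** 2)),        # RMSSD
        nn50,                                # NN50
        nn50 / len(rr_ms),                   # pNN50
    ]

rng = np.random.default_rng(1)
# Toy 5-minute windows: class 0 = wake-like (more variable RR), class 1 = sleep-like
X, y = [], []
for label, jitter in [(0, 80.0), (1, 25.0)]:
    for _ in range(100):
        rr = 900 + jitter * rng.standard_normal(300)
        X.append(hrv_features(rr))
        y.append(label)

clf = GradientBoostingClassifier()
print(cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```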
Collapse
Affiliation(s)
- Xue Li
- Department of Psychiatry, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Chiaki Ono
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
| | - Noriko Warita
- Department of Preventive Medicine and Epidemiology, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Tomoka Shoji
- Department of Psychiatry, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Preventive Medicine and Epidemiology, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Takashi Nakagawa
- Department of Psychiatry, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
| | - Hitomi Usukura
- Department of Disaster Psychiatry, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| | - Zhiqian Yu
- Department of Disaster Psychiatry, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| | - Yuta Takahashi
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
| | - Kei Ichiji
- Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Norihiro Sugita
- Department of Management Science and Technology, Graduate School of Engineering, Tohoku University, Sendai, Japan
| | | | - Saya Kikuchi
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
| | - Ryoko Kimura
- Department of Psychiatry, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Yumiko Hamaie
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
- Department of Disaster Psychiatry, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| | - Mizuki Hino
- Department of Disaster Psychiatry, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| | - Yasuto Kunii
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
- Department of Disaster Psychiatry, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| | - Keiko Murakami
- Department of Preventive Medicine and Epidemiology, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Mami Ishikuro
- Department of Preventive Medicine and Epidemiology, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Taku Obara
- Department of Preventive Medicine and Epidemiology, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Tomohiro Nakamura
- Department of Health Record Informatics, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Fuji Nagami
- Department of Public Relations and Planning, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Takako Takai
- Department of Health Record Informatics, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Soichi Ogishima
- Department of Health Record Informatics, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Junichi Sugawara
- Department of Community Medical Supports, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Tetsuro Hoshiai
- Department of Obstetrics, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Masatoshi Saito
- Department of Obstetrics, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Gen Tamiya
- Department of Integrative Genomics, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Nobuo Fuse
- Department of Integrative Genomics, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Susumu Fujii
- Department of Disaster Medical Informatics, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| | - Masaharu Nakayama
- Department of Disaster Medical Informatics, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| | - Shinichi Kuriyama
- Department of Preventive Medicine and Epidemiology, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
- Department of Disaster Public Health, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| | - Masayuki Yamamoto
- Department of Management Science and Technology, Graduate School of Engineering, Tohoku University, Sendai, Japan
- Department of Integrative Genomics, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
| | - Nobuo Yaegashi
- Department of Public Relations and Planning, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
- Department of Obstetrics, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Noriyasu Homma
- Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Hiroaki Tomita
- Department of Psychiatry, Tohoku University Graduate School of Medicine, Sendai, Japan
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
- Department of Preventive Medicine and Epidemiology, Tohoku University Tohoku Medical Megabank Organization, Sendai, Japan
- Department of Disaster Psychiatry, International Research Institute of Disaster Sciences, Tohoku University, Sendai, Japan
| |
Collapse
|
30
|
Liu Z, Zhu B, Hu M, Deng Z, Zhang J. Revised Tunable Q-Factor Wavelet Transform for EEG-Based Epileptic Seizure Detection. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1707-1720. [PMID: 37028382 DOI: 10.1109/tnsre.2023.3257306] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/17/2023]
Abstract
Electroencephalogram (EEG) signals are an essential tool for the detection of epilepsy. Because of the complex time series and frequency features of EEG signals, traditional feature extraction methods have difficulty meeting the requirements of recognition performance. The tunable Q-factor wavelet transform (TQWT), which is a constant-Q transform that is easily invertible and modestly oversampled, has been successfully used for feature extraction of EEG signals. Because the constant-Q is set in advance and cannot be optimized, further applications of the TQWT are restricted. To solve this problem, the revised tunable Q-factor wavelet transform (RTQWT) is proposed in this paper. RTQWT is based on the weighted normalized entropy and overcomes the problems of a nontunable Q-factor and the lack of an optimized tunable criterion. In contrast to the continuous wavelet transform and the raw tunable Q-factor wavelet transform, the wavelet transform corresponding to the revised Q-factor, i.e., RTQWT, is sufficiently better adapted to the nonstationary nature of EEG signals. Therefore, the precise and specific characteristic subspaces obtained can improve the classification accuracy of EEG signals. The classification of the extracted features was performed using the decision tree, linear discriminant, naive Bayes, SVM and KNN classifiers. The performance of the new approach was tested by evaluating the accuracies of five time-frequency distributions: FT, EMD, DWT, CWT and TQWT. The experiments showed that the RTQWT proposed in this paper can be used to extract detailed features more effectively and improve the classification accuracy of EEG signals.
Collapse
|
31
|
Al-Salman W, Li Y, Oudah AY, Almaged S. Sleep stage classification in EEG signals using the clustering approach based probability distribution features coupled with classification algorithms. Neurosci Res 2023; 188:51-67. [PMID: 36152918 DOI: 10.1016/j.neures.2022.09.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 08/20/2022] [Accepted: 09/20/2022] [Indexed: 10/14/2022]
Abstract
Sleep scoring is one of the primary tasks for the classification of sleep stages in Electroencephalogram (EEG) signals. Manual visual scoring of sleep stages is time-consuming as well as being dependent on the experience of a highly qualified sleep expert. This paper aims to address these issues by developing a new method to automatically classify sleep stages in EEG signals. In this research, a robust method has been presented based on the clustering approach, coupled with probability distribution features, to identify six sleep stages with the use of EEG signals. Using this method, each 30-second EEG signal is firstly segmented into small epochs and then each epoch is divided into 60 sub-segments. Each sub-segment is decomposed into five levels by using a discrete wavelet transform (DWT) to obtain the approximation and detailed coefficient. The wavelet coefficient of each level is clustered using the k-means algorithm. Subsequently, features are extracted based on the probability distribution for each wavelet coefficient. The extracted features then are forwarded to the least squares support vector machine classifier (LS-SVM) to identify sleep stages. Comparisons with several existing methods are also made in this study. The proposed method for the classification of the sleep stages achieves an average accuracy rate of 97.4%. It can be an effective tool for sleep stages classification and can be useful for doctors and neurologists for diagnosing sleep disorders.
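A hedged sketch of the processing chain described (DWT decomposition, k-means clustering of each coefficient band, and probability-distribution features), with an ordinary RBF SVC standing in for the LS-SVM classifier; the segment length, cluster count, and toy two-class data are assumptions:

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_cluster_features(epoch, wavelet="db4", level=5, k=3):
    """Cluster each DWT coefficient band with k-means and use the cluster
    occupancy probabilities plus the band variance as features."""
    feats = []
    for coeffs in pywt.wavedec(epoch, wavelet, level=level):
        labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(
            coeffs.reshape(-1, 1))
        probs = np.bincount(labels, minlength=k) / len(labels)
        feats.extend(probs.tolist())
        feats.append(float(np.var(coeffs)))
    return feats

rng = np.random.default_rng(0)
epochs, labels = [], []
for _ in range(30):
    epochs.append(rng.standard_normal(3000))              # high-frequency-like epoch
    epochs.append(np.cumsum(rng.standard_normal(3000)))   # low-frequency-like epoch
    labels += [0, 1]

X = np.array([dwt_cluster_features(e) for e in epochs])
y = np.array(labels)
print(cross_val_score(SVC(kernel="rbf", C=10), X, y, cv=5).mean())
```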
Collapse
Affiliation(s)
- Wessam Al-Salman
- School of Mathematics, Physics and Computing, University of Southern Queensland, Australia; University of Thi-Qar, College of Education for Pure Science, Iraq.
| | - Yan Li
- School of Mathematics, Physics and Computing, University of Southern Queensland, Australia; School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan, China
| | - Atheer Y Oudah
- University of Thi-Qar, College of Education for Pure Science, Iraq; Information and Communication Technology Research Group, Scientific Research Centre, Al-Ayen University, Thi-Qar, Iraq
| | | |
Collapse
|
32
|
Haghayegh S, Hu K, Stone K, Redline S, Schernhammer E. Automated Sleep Stages Classification Using Convolutional Neural Network From Raw and Time-Frequency Electroencephalogram Signals: Systematic Evaluation Study. J Med Internet Res 2023; 25:e40211. [PMID: 36763454 PMCID: PMC9960035 DOI: 10.2196/40211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 09/09/2022] [Accepted: 01/09/2023] [Indexed: 02/11/2023] Open
Abstract
BACKGROUND Most existing automated sleep staging methods rely on multimodal data, and scoring a specific epoch requires not only the current epoch but also a sequence of consecutive epochs that precede and follow the epoch. OBJECTIVE We proposed and tested a convolutional neural network called SleepInceptionNet, which allows sleep classification of a single epoch using a single-channel electroencephalogram (EEG). METHODS SleepInceptionNet is based on our systematic evaluation of the effects of different EEG preprocessing methods, EEG channels, and convolutional neural networks on automatic sleep staging performance. The evaluation was performed using polysomnography data of 883 participants (937,975 thirty-second epochs). Raw data of individual EEG channels (ie, frontal, central, and occipital) and 3 specific transformations of the data, including power spectral density, continuous wavelet transform, and short-time Fourier transform, were used separately as the inputs of the convolutional neural network models. To classify sleep stages, 7 sequential deep neural networks were tested for the 1D data (ie, raw EEG and power spectral density), and 16 image classifier convolutional neural networks were tested for the 2D data (ie, continuous wavelet transform and short-time Fourier transform time-frequency images). RESULTS The best model, SleepInceptionNet, which uses time-frequency images developed by the continuous wavelet transform method from central single-channel EEG data as input to the InceptionV3 image classifier algorithm, achieved a Cohen κ agreement of 0.705 (SD 0.077) in reference to the gold standard polysomnography. CONCLUSIONS SleepInceptionNet may allow real-time automated sleep staging in free-living conditions using a single-channel EEG, which may be useful for on-demand intervention or treatment during specific sleep stages.
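A short sketch of the continuous-wavelet-transform preprocessing step using PyWavelets; the sampling rate, Morlet wavelet, and frequency grid are assumptions, and the resulting scalogram would then be resized and passed to an image classifier such as InceptionV3, as the abstract describes:

```python
import numpy as np
import pywt

fs = 100                                    # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                # one 30 s epoch
epoch = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.random.randn(t.size)

frequencies = np.linspace(0.5, 30, 64)      # target frequency grid (Hz)
fc = pywt.central_frequency("morl")         # Morlet centre frequency (cycles/sample)
scales = fc / (frequencies / fs)            # scales that map to ~0.5-30 Hz

coeffs, freqs = pywt.cwt(epoch, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                  # 64 x 3000 time-frequency image
print(scalogram.shape, round(float(freqs.min()), 2), round(float(freqs.max()), 2))
# `scalogram` (log-scaled and resized) is what an image CNN such as
# InceptionV3 would receive for per-epoch sleep stage classification.
```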
Collapse
Affiliation(s)
- Shahab Haghayegh
- Harvard Medical School, Boston, MA, United States
- Brigham and Women's Hospital, Boston, MA, United States
| | - Kun Hu
- Harvard Medical School, Boston, MA, United States
- Brigham and Women's Hospital, Boston, MA, United States
| | - Katie Stone
- California Pacific Medical Center Research Institute, San Francisco, CA, United States
| | - Susan Redline
- Harvard Medical School, Boston, MA, United States
- Brigham and Women's Hospital, Boston, MA, United States
| | - Eva Schernhammer
- Harvard Medical School, Boston, MA, United States
- Brigham and Women's Hospital, Boston, MA, United States
- Medical University of Vienna, Vienna, Austria
| |
Collapse
|
33
|
Nazih W, Shahin M, Eldesouki MI, Ahmed B. Influence of Channel Selection and Subject's Age on the Performance of the Single Channel EEG-Based Automatic Sleep Staging Algorithms. SENSORS (BASEL, SWITZERLAND) 2023; 23:899. [PMID: 36679711 PMCID: PMC9866121 DOI: 10.3390/s23020899] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 01/08/2023] [Accepted: 01/10/2023] [Indexed: 06/17/2023]
Abstract
The electroencephalogram (EEG) signal is a key parameter used to identify the different sleep stages present in an overnight sleep recording. Sleep staging is crucial in the diagnosis of several sleep disorders; however, the manual annotation of the EEG signal is a costly and time-consuming process. Automatic sleep staging algorithms offer a practical and cost-effective alternative to manual sleep staging. However, due to the limited availability of EEG sleep datasets, the reliability of existing sleep staging algorithms is questionable. Furthermore, most reported experimental results have been obtained using adult EEG signals; the effectiveness of these algorithms using pediatric EEGs is unknown. In this paper, we conduct an intensive study of two state-of-the-art single-channel EEG-based sleep staging algorithms, namely DeepSleepNet and AttnSleep, using a recently released large-scale sleep dataset collected from 3984 patients, most of whom are children. The paper studies how the performance of these sleep staging algorithms varies when applied on different EEG channels and across different age groups. Furthermore, all results were analyzed within individual sleep stages to understand how each stage is affected by the choice of EEG channel and the participants' age. The study concluded that the selection of the channel is crucial for the accuracy of the single-channel EEG-based automatic sleep staging methods. For instance, channels O1-M2 and O2-M1 performed consistently worse than other channels for both algorithms and through all age groups. The study also revealed the challenges in the automatic sleep staging of newborns and infants (1-52 weeks).
Collapse
Affiliation(s)
- Waleed Nazih
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
| | - Mostafa Shahin
- School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
| | - Mohamed I. Eldesouki
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
| | - Beena Ahmed
- School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
| |
Collapse
|
34
|
Kim H, Kim D, Oh J. Automation of classification of sleep stages and estimation of sleep efficiency using actigraphy. Front Public Health 2023; 10:1092222. [PMID: 36699913 PMCID: PMC9869419 DOI: 10.3389/fpubh.2022.1092222] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 12/12/2022] [Indexed: 01/11/2023] Open
Abstract
Introduction: Sleep is a fundamental and essential physiological process for recovering physiological function. Sleep disturbance or deprivation is known to be a causative factor of various physiological and psychological disorders. Therefore, sleep evaluation is vital for diagnosing or monitoring those disorders. Although polysomnography (PSG) has been the gold standard for assessing sleep quality and classifying sleep stages, PSG has various limitations for common use. As a substitute for PSG, there has been vigorous research using actigraphy. Methods: For classifying sleep stages automatically, we propose machine learning models with HRV (heart rate variability)-related features and acceleration features processed from the actigraphy (Maxim band) data. The classification results were then transformed into a binary classification for estimating sleep efficiency. With 30 subjects, we conducted PSG while they slept overnight wearing wrist-type actigraphy. We assessed the performance of four proposed machine learning models. Results: With HRV-related and raw actigraphy features, Cohen's kappa was 0.974 (p < 0.001) for classifying sleep into five stages: wake (W), rapid eye movement (REM, R), and non-REM stages N1 (S1), N2 (S2), and N3 (S3). In addition, our machine learning model for the estimation of sleep efficiency showed an accuracy of 0.86. Discussion: Our model demonstrated that automated sleep classification results could perfectly match the PSG results. Since models with acceleration features showed modest performance in differentiating some sleep stages, further research on acceleration features is needed. In addition, the sleep efficiency model demonstrated modest results, and an investigation into the effects of HRV-derived and acceleration features is required.
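Sleep efficiency itself follows directly from the per-epoch stage labels (time scored as non-wake divided by time in bed); a minimal sketch, assuming 30-second epochs and a "W" wake label:

```python
import numpy as np

def sleep_efficiency(stage_labels, epoch_seconds=30, wake_label="W"):
    """Sleep efficiency = time asleep / time in bed, from per-epoch labels."""
    stages = np.asarray(stage_labels)
    time_in_bed = stages.size * epoch_seconds
    time_asleep = np.sum(stages != wake_label) * epoch_seconds
    return time_asleep / time_in_bed

# e.g. a toy night scored as W/N1/N2/N3/R per 30 s epoch
night = ["W"] * 40 + ["N1"] * 30 + ["N2"] * 400 + ["N3"] * 200 + ["R"] * 150 + ["W"] * 20
print(round(sleep_efficiency(night), 3))   # 0.929
```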
Collapse
Affiliation(s)
- Hyejin Kim
- College of Pharmacy, Sookmyung Women's University, Seoul, Republic of Korea
| | | | - Junhyoung Oh
- Center for Information Security Technologies, International Center for Conversing Technology Building, Anam Campus (Science), Korea University, Seoul, Republic of Korea
| |
Collapse
|
35
|
Chen X, Gupta RS, Gupta L. Exploiting the Cone of Influence for Improving the Performance of Wavelet Transform-Based Models for ERP/EEG Classification. Brain Sci 2022; 13:21. [PMID: 36672003 PMCID: PMC9856575 DOI: 10.3390/brainsci13010021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 12/10/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
Features extracted from the wavelet transform coefficient matrix are widely used in the design of machine learning models to classify event-related potential (ERP) and electroencephalography (EEG) signals in a wide range of brain activity research and clinical studies. This novel study is aimed at dramatically improving the performance of such wavelet-based classifiers by exploiting information offered by the cone of influence (COI) of the continuous wavelet transform (CWT). The COI is a boundary that is superimposed on the wavelet scalogram to delineate the coefficients that are accurate from those that are inaccurate due to edge effects. The features derived from the inaccurate coefficients are, therefore, unreliable. In this study, it is hypothesized that the classifier performance would improve if unreliable features, which are outside the COI, are zeroed out, and the performance would improve even further if those features are cropped out completely. The entire, zeroed out, and cropped scalograms are referred to as the "same" (S)-scalogram, "zeroed out" (Z)-scalogram, and the "valid" (V)-scalogram, respectively. The strategy to validate the hypotheses is to formulate three classification approaches in which the feature vectors are extracted from the (a) S-scalogram in the standard manner, (b) Z-scalogram, and (c) V-scalogram. A subsampling strategy is developed to generate small-sample ERP ensembles to enable customized classifier design for single subjects, and a strategy is developed to select a subset of channels from multiple ERP channels. The three scalogram approaches are implemented using support vector machines, random forests, k-nearest neighbor, multilayer perceptron neural networks, and deep learning convolution neural networks. In order to validate the performance hypotheses, experiments are designed to classify the multi-channel ERPs of five subjects engaged in distinguishing between synonymous and non-synonymous word pairs. The results confirm that the classifiers using the Z-scalogram features outperform those using the S-scalogram features, and the classifiers using the V-scalogram features outperform those using the Z-scalogram features. Most importantly, the relative improvement of the V-scalogram classifiers over the standard S-scalogram classifiers is dramatic. Additionally, enabling the design of customized classifiers for individual subjects is an important contribution to ERP/EEG-based studies and diagnoses of patient-specific disorders.
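A hedged sketch of the S-, Z-, and V-scalogram idea using PyWavelets; the cone of influence is approximated here by a simple e-folding rule of roughly sqrt(2) x scale samples from each edge, which is an assumption (the exact COI depends on the wavelet and the CWT implementation):

```python
import numpy as np
import pywt

fs = 256
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

scales = np.arange(2, 64)
coeffs, _ = pywt.cwt(signal, scales, "morl")          # (scales, samples)
S = np.abs(coeffs)                                     # "S-scalogram": keep everything

# Assumed e-folding cone of influence: ~sqrt(2)*scale samples at each edge
edge = np.minimum((np.sqrt(2) * scales).astype(int), signal.size // 2)

Z = S.copy()                                           # "Z-scalogram": zero outside the COI
for i, e in enumerate(edge):
    Z[i, :e] = 0.0
    Z[i, S.shape[1] - e:] = 0.0

crop = int(edge.max())                                 # "V-scalogram": crop so all rows are valid
V = S[:, crop:S.shape[1] - crop]

print(S.shape, Z.shape, V.shape)
```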
Collapse
Affiliation(s)
- Xiaoqian Chen
- School of Electrical, Computer, and Biomedical Engineering, Southern Illinois University, Carbondale, IL 62901, USA
| | - Resh S. Gupta
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA 92161, USA
| | - Lalit Gupta
- School of Electrical, Computer, and Biomedical Engineering, Southern Illinois University, Carbondale, IL 62901, USA
| |
Collapse
|
36
|
Li X, Huang Y, Lhatoo SD, Tao S, Vilella Bertran L, Zhang GQ, Cui L. A hybrid unsupervised and supervised learning approach for postictal generalized EEG suppression detection. Front Neuroinform 2022; 16:1040084. [PMID: 36601382 PMCID: PMC9806125 DOI: 10.3389/fninf.2022.1040084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 11/07/2022] [Indexed: 12/23/2022] Open
Abstract
Sudden unexpected death of epilepsy (SUDEP) is a catastrophic and fatal complication of epilepsy and is the primary cause of mortality in those who have uncontrolled seizures. While several multifactorial processes have been implicated including cardiac, respiratory, autonomic dysfunction leading to arrhythmia, hypoxia, and cessation of cerebral and brainstem function, the mechanisms underlying SUDEP are not completely understood. Postictal generalized electroencephalogram (EEG) suppression (PGES) is a potential risk marker for SUDEP, as studies have shown that prolonged PGES was significantly associated with a higher risk of SUDEP. Automated PGES detection techniques have been developed to efficiently obtain PGES durations for SUDEP risk assessment. However, real-world data recorded in epilepsy monitoring units (EMUs) may contain high-amplitude signals due to physiological artifacts, such as breathing, muscle, and movement artifacts, making it difficult to determine the end of PGES. In this paper, we present a hybrid approach that combines the benefits of unsupervised and supervised learning for PGES detection using multi-channel EEG recordings. A K-means clustering model is leveraged to group EEG recordings with similar artifact features. We introduce a new learning strategy for training a set of random forest (RF) models based on clustering results to improve PGES detection performance. Our approach achieved a 5-second tolerance-based detection accuracy of 64.92%, a 10-second tolerance-based detection accuracy of 79.85%, and an average predicted time distance of 8.26 seconds with 286 EEG recordings using leave-one-out (LOO) cross-validation. The results demonstrated that our hybrid approach provided better performance compared to other existing approaches.
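A structural sketch of the cluster-then-specialize strategy (k-means grouping followed by one random-forest model per cluster, with new recordings routed to their cluster's model); the toy features and labels stand in for the paper's EEG artifact features and PGES annotations:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Toy stand-in for per-recording artifact features and labels
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Train one random forest per artifact cluster
models = {}
for c in range(3):
    idx = kmeans.labels_ == c
    models[c] = RandomForestClassifier(n_estimators=100, random_state=0).fit(
        X[idx], y[idx])

# At inference time, route each new recording to its cluster's model
x_new = X[:5]
clusters = kmeans.predict(x_new)
preds = [models[c].predict(x.reshape(1, -1))[0] for c, x in zip(clusters, x_new)]
print(clusters, preds)
```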
Collapse
Affiliation(s)
- Xiaojin Li
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX, United States; Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Yan Huang
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX, United States; Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Samden D. Lhatoo
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX, United States; Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Shiqiang Tao
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX, United States; Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Laura Vilella Bertran
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX, United States; Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Guo-Qiang Zhang
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX, United States; Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston, Houston, TX, United States; School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Licong Cui
- Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston, Houston, TX, United States; School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
| |
Collapse
|
37
|
ElMoaqet H, Eid M, Ryalat M, Penzel T. A Deep Transfer Learning Framework for Sleep Stage Classification with Single-Channel EEG Signals. SENSORS (BASEL, SWITZERLAND) 2022; 22:8826. [PMID: 36433422 PMCID: PMC9693852 DOI: 10.3390/s22228826] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Revised: 11/07/2022] [Accepted: 11/08/2022] [Indexed: 06/16/2023]
Abstract
The polysomnogram (PSG) is the gold standard for evaluating sleep quality and disorders. Attempts to automate this process have been hampered by the complexity of the PSG signals and heterogeneity among subjects and recording hardwares. Most of the existing methods for automatic sleep stage scoring rely on hand-engineered features that require prior knowledge of sleep analysis. This paper presents an end-to-end deep transfer learning framework for automatic feature extraction and sleep stage scoring based on a single-channel EEG. The proposed framework was evaluated over the three primary signals recommended by the American Academy of Sleep Medicine (C4-M1, F4-M1, O2-M1) from two data sets that have different properties and are recorded with different hardware. Different Time-Frequency (TF) imaging approaches were evaluated to generate TF representations for the 30 s EEG sleep epochs, eliminating the need for complex EEG signal pre-processing or manual feature extraction. Several training and detection scenarios were investigated using transfer learning of convolutional neural networks (CNN) and combined with recurrent neural networks. Generating TF images from continuous wavelet transform along with a deep transfer architecture composed of a pre-trained GoogLeNet CNN followed by a bidirectional long short-term memory (BiLSTM) network showed the best scoring performance among all tested scenarios. Using 20-fold cross-validation applied on the C4-M1 channel, the proposed framework achieved an average per-class accuracy of 91.2%, sensitivity of 77%, specificity of 94.1%, and precision of 75.9%. Our results demonstrate that without changing the model architecture and the training algorithm, our model could be applied to different single-channel EEGs from different data sets. Most importantly, the proposed system receives a single EEG epoch as an input at a time and produces a single corresponding output label, making it suitable for real time monitoring outside sleep labs as well as to help sleep lab specialists arrive at a more accurate diagnoses.
Collapse
Affiliation(s)
- Hisham ElMoaqet
- Department of Mechatronics Engineering, German Jordanian University, Amman 11180, Jordan
| | - Mohammad Eid
- Department of Biomedical Engineering, German Jordanian University, Amman 11180, Jordan
| | - Mutaz Ryalat
- Department of Mechatronics Engineering, German Jordanian University, Amman 11180, Jordan
| | - Thomas Penzel
- Interdisciplinary Center of Sleep Medicine, Charité-Universitätsmedizin Berlin, 10117 Berlin, Germany
| |
Collapse
|
38
|
Kim H, Lee SM, Choi S. Automatic sleep stages classification using multi-level fusion. Biomed Eng Lett 2022; 12:413-420. [PMID: 36238370 PMCID: PMC9550904 DOI: 10.1007/s13534-022-00244-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Revised: 07/12/2022] [Accepted: 07/25/2022] [Indexed: 10/15/2022] Open
Abstract
Sleep efficiency is a factor that can indicate whether a person is living a healthy life, and it can be calculated from the results of sleep stage classification. There have been many studies that classify sleep stages automatically using multiple signals to improve the accuracy of sleep stage classification. Fusion methods are used to process such multi-signal data and include data-level, feature-level, and decision-level fusion. We propose a multi-level fusion method to increase the accuracy of sleep stage classification when using multi-signal data consisting of electroencephalography and electromyography signals. First, we use feature-level fusion to fuse the features extracted by a convolutional neural network from the multi-signal data. Then, after obtaining each classified result, the sleep stage is derived using a decision-level fusion method that fuses the classified results. We used the public Sleep-EDF dataset to measure performance and confirmed that the proposed multi-level fusion method yielded a higher accuracy of 87.2% compared with single-level fusion and other existing methods. The proposed multi-level fusion method showed the most improved performance in classifying the N1 stage, where existing methods had the lowest performance.
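A minimal scikit-learn sketch contrasting feature-level fusion (concatenating the two views) with an added decision-level step (averaging the class probabilities of per-view and fused classifiers); the two toy "views", logistic-regression models, and equal weighting are assumptions rather than the paper's CNN-based design:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# One toy dataset split into two "signal views" (stand-ins for EEG and EMG features)
X, y = make_classification(n_samples=600, n_features=24, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_eeg, X_emg = X[:, :16], X[:, 16:]
idx_tr, idx_te = train_test_split(np.arange(len(y)), test_size=0.3, random_state=0)

# Feature-level fusion: concatenate the two views and train one classifier
fused = np.hstack([X_eeg, X_emg])
clf_fused = LogisticRegression(max_iter=2000).fit(fused[idx_tr], y[idx_tr])

# Decision-level fusion: per-view classifiers, then average all class probabilities
clf_eeg = LogisticRegression(max_iter=2000).fit(X_eeg[idx_tr], y[idx_tr])
clf_emg = LogisticRegression(max_iter=2000).fit(X_emg[idx_tr], y[idx_tr])
proba = (clf_fused.predict_proba(fused[idx_te])
         + clf_eeg.predict_proba(X_eeg[idx_te])
         + clf_emg.predict_proba(X_emg[idx_te])) / 3.0
y_multi = proba.argmax(axis=1)

print("feature-level only:", accuracy_score(y[idx_te], clf_fused.predict(fused[idx_te])))
print("multi-level fusion:", accuracy_score(y[idx_te], y_multi))
```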
Collapse
Affiliation(s)
- Hyungjik Kim
- Department of Secured Smart Electric Vehicle, Kookmin University, 02707 Seoul, Korea
| | - Seung Min Lee
- Department of Electrical Engineering, Kookmin University, 02707 Seoul, Korea
| | - Sunwoong Choi
- Department of Electrical Engineering, Kookmin University, 02707 Seoul, Korea
| |
Collapse
|
39
|
L-Tetrolet Pattern-Based Sleep Stage Classification Model Using Balanced EEG Datasets. Diagnostics (Basel) 2022; 12:diagnostics12102510. [PMID: 36292199 PMCID: PMC9600064 DOI: 10.3390/diagnostics12102510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Revised: 10/10/2022] [Accepted: 10/13/2022] [Indexed: 11/24/2022] Open
Abstract
Background: Sleep stage classification is a crucial process for the diagnosis of sleep or sleep-related diseases. Currently, this process is based on manual electroencephalogram (EEG) analysis, which is resource-intensive and error-prone. Various machine learning models have been recommended to standardize and automate the analysis process to address these problems. Materials and methods: The well-known cyclic alternating pattern (CAP) sleep dataset is used to train and test an L-tetrolet pattern-based sleep stage classification model in this research. By using this dataset, the following three cases are created, and they are: Insomnia, Normal, and Fused cases. For each of these cases, the machine learning model is tasked with identifying six sleep stages. The model is structured in terms of feature generation, feature selection, and classification. Feature generation is established with a new L-tetrolet (Tetris letter) function and multiple pooling decomposition for level creation. We fuse ReliefF and iterative neighborhood component analysis (INCA) feature selection using a threshold value. The hybrid and iterative feature selectors are named threshold selection-based ReliefF and INCA (TSRFINCA). The selected features are classified using a cubic support vector machine. Results: The presented L-tetrolet pattern and TSRFINCA-based sleep stage classification model yield 95.43%, 91.05%, and 92.31% accuracies for Insomnia, Normal dataset, and Fused cases, respectively. Conclusion: The recommended L-tetrolet pattern and TSRFINCA-based model push the envelope of current knowledge engineering by accurately classifying sleep stages even in the presence of sleep disorders.
Collapse
|
40
|
Wang G, Yin Z, Zhao M, Tian Y, Sun Z. Identification of human mental workload levels in a language comprehension task with imbalance neurophysiological data. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 224:107011. [PMID: 35863122 DOI: 10.1016/j.cmpb.2022.107011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 05/23/2022] [Accepted: 07/06/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE: An operator's capability to accurately comprehend verbal commands is critically important for maintaining the performance of human-machine interaction. It can be evaluated by human mental workload measured with electroencephalography (EEG). However, the time duration of different workload conditions within a task session is unequal due to varied psychophysiological processes across individuals, which leads to class imbalance in the EEG data used for training workload classifiers. METHODS: In this study, we propose an EEG feature oversampling technique, Gaussian-SMOTE based feature ensemble (GSMOTE-FE), for workload recognition with imbalanced classes. First, artificial EEG instances are drawn from a Gaussian distribution in the margin between the minority and majority workload classes. Tomek links are detected as clues to remove redundant feature vectors. Then, we embed a feature selection module based on the GINI importance, while an ensemble classifier committee with bootstrap aggregating is used to further enhance classification performance. RESULTS: We validate the GSMOTE-FE framework on an experiment that simulates operators understanding the correct meaning of instructions in the Chinese language. Participants' EEG signals and reaction time data were both recorded to validate the proposed workload classifier. Workload classification accuracy and macro-F1 values are 0.6553 and 0.5862, respectively. The corresponding G-mean and AUC reach 0.5757 and 0.5958, respectively. CONCLUSIONS: The performance of GSMOTE-FE is demonstrated to be comparable with advanced oversampling techniques. The workload classifier is capable of indicating low and high levels of task demand in the Chinese language understanding task.
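A hedged sketch of the overall pipeline on imbalanced toy data, where imbalanced-learn's SMOTETomek approximates the Gaussian-SMOTE plus Tomek-link step, random-forest GINI importances stand in for the feature-selection module, and a bagging ensemble plays the role of the classifier committee:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from imblearn.combine import SMOTETomek

# Imbalanced toy stand-in for per-window EEG workload features
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) Oversample the minority class and clean Tomek links
X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)

# 2) Feature selection by GINI importance from a random forest
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
keep = np.argsort(rf.feature_importances_)[::-1][:10]

# 3) Bootstrap-aggregated ensemble on the selected features
ens = BaggingClassifier(n_estimators=50, random_state=0)
ens.fit(X_bal[:, keep], y_bal)
print("macro-F1:", round(f1_score(y_te, ens.predict(X_te[:, keep]), average="macro"), 3))
```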
Affiliation(s)
- Guangying Wang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, PR China
- Zhong Yin
- Engineering Research Center of Optical Instrument and System, Ministry of Education, Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, Shanghai, 200093, PR China; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, PR China.
- Mengyuan Zhao
- College of Foreign Languages, University of Shanghai for Science and Technology, Shanghai, 200093, PR China
- Ying Tian
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, PR China
- Zhanquan Sun
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, PR China
41
Nondestructive classification of soft rot disease in napa cabbage using hyperspectral imaging analysis. Sci Rep 2022; 12:14707. [PMID: 36038711 PMCID: PMC9424267 DOI: 10.1038/s41598-022-19169-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Accepted: 08/25/2022] [Indexed: 11/18/2022] Open
Abstract
Identification of soft rot disease in napa cabbage, an essential ingredient of kimchi, is challenging at the industrial scale. Therefore, nondestructive imaging techniques are necessary. Here, we investigated the potential of hyperspectral imaging (HSI) processing in the near-infrared region (900–1700 nm) for classifying napa cabbage quality using nondestructive measurements. We determined the microbiological and physicochemical qualitative properties of napa cabbage for intercomparison with the HSI information, extracted HSI characteristics from hyperspectral images to predict and classify freshness, and established a novel approach for classifying healthy and rotten napa cabbage. The second-derivative Savitzky–Golay method was implemented for data preprocessing, followed by wavelength selection using variable importance in projection scores. For multivariate classification, partial least squares discriminant analysis (PLS-DA), support vector machine (SVM), and random forest models were used to predict cabbage condition. The SVM model accurately distinguished cabbage exhibiting soft rot disease symptoms from healthy cabbage. This study presents the potential of HSI systems for separating soft rot disease-infected napa cabbages from healthy napa cabbages using the SVM model, especially at the most effective wavelengths (970, 980, 1180, 1070, 1120, and 978 nm), prior to processing. These results are applicable to industrial multispectral images.
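A minimal sketch of the preprocessing and classification steps named above (second-derivative Savitzky–Golay filtering followed by an SVM), run on synthetic spectra; the window length, polynomial order, and kernel settings are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: second-derivative Savitzky-Golay preprocessing + SVM classification.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 256            # e.g., 256 NIR bands between 900 and 1700 nm
spectra = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)  # smooth-ish fake spectra
labels = rng.integers(0, 2, size=n_samples)  # 0 = healthy, 1 = soft rot (random here)

# Second-derivative Savitzky-Golay filtering along the wavelength axis.
spectra_d2 = savgol_filter(spectra, window_length=15, polyorder=3, deriv=2, axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# Labels are random in this toy example, so accuracy will sit near chance.
print("CV accuracy:", cross_val_score(clf, spectra_d2, labels, cv=5).mean())
```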
42
Kurban H, Kurban M, Dalkilic MM. Rapidly predicting Kohn-Sham total energy using data-centric AI. Sci Rep 2022; 12:14403. [PMID: 36002504 PMCID: PMC9402589 DOI: 10.1038/s41598-022-18366-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 08/10/2022] [Indexed: 11/28/2022] Open
Abstract
Predicting material properties by solving the Kohn-Sham (KS) equation, which is the basis of modern computational approaches to electronic structure, has provided significant improvements in materials science. Despite these contributions, both density functional theory (DFT) and density functional tight-binding (DFTB) calculations are limited by the number of electrons and atoms, which translates into increasingly long run-times. In this work we introduce a novel, data-centric machine learning framework that is used to rapidly and accurately predict the KS total energy of anatase TiO2 nanoparticles (NPs) at different temperatures using only a small amount of theoretical data. The proposed framework, which we call co-modeling, eliminates the need for experimental data and is general enough to be applied to other NPs to determine electronic structure and, consequently, to study physical and chemical properties more efficiently. We include a web service to demonstrate the effectiveness of our approach.
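Because the abstract describes a regression task (descriptors in, KS total energy out) rather than a specific algorithm, the following is a purely illustrative sketch of a data-centric regression baseline on synthetic, hypothetical descriptors; it is not the authors' co-modeling framework, and every feature name and value below is a placeholder.

```python
# Purely illustrative sketch: regress a "total energy" target on hypothetical
# nanoparticle descriptors (atom count, temperature, mean coordination number).
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
atoms = rng.integers(50, 500, size=n)           # hypothetical NP size
temperature = rng.uniform(0, 1000, size=n)      # K
coordination = rng.uniform(4.0, 6.0, size=n)    # hypothetical mean coordination
X = np.column_stack([atoms, temperature, coordination])
# Fake "total energy": roughly proportional to system size, plus noise.
y = -150.0 * atoms + 0.01 * temperature * atoms + rng.normal(scale=50.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = HistGradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE (synthetic units):", mean_absolute_error(y_te, model.predict(X_te)))
```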
Affiliation(s)
- Hasan Kurban
- Applied Data Science Department, San José State University, San Jose, CA, 95192, USA.
- Computer Science Department, Indiana University, Bloomington, IN, 47405, US.
- Mustafa Kurban
- Department of Electrical and Electronics Engineering, Kırşehir Ahi Evran University, 40100, Kırşehir, Turkey
- Mehmet M Dalkilic
- Computer Science Department, Indiana University, Bloomington, IN, 47405, US
43
Zhuang L, Dai M, Zhou Y, Sun L. Intelligent automatic sleep staging model based on CNN and LSTM. Front Public Health 2022; 10:946833. [PMID: 35968483 PMCID: PMC9364961 DOI: 10.3389/fpubh.2022.946833] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 06/22/2022] [Indexed: 01/10/2023] Open
Abstract
Since the electroencephalogram (EEG) is a significant basis for treating and diagnosing somnipathy, automatic sleep EEG staging methods play an important role in the treatment and diagnosis of sleep disorders. Because EEG signals are weak, accurate and efficient algorithms are needed to extract feature information before they can be applied to sleep staging. Conventional feature extraction methods have low efficiency and struggle to meet the time requirements of fast staging; they can also easily omit key features owing to insufficient a priori knowledge. Deep learning networks, such as convolutional neural networks (CNNs), have powerful processing capabilities in data analysis and data mining. In this study, a deep learning network is introduced for sleep staging: a feature fusion method is presented, and a long short-term memory (LSTM) network is selected as the classification network to improve the accuracy of sleep stage recognition. First, an automatic sleep staging method based on multi-channel EEG and a deep learning network is proposed. Second, a CNN-LSTM is used to process EEG and EOG samples recorded during sleep. In addition, without any signal preprocessing or feature extraction, data augmentation (DA) can be applied to unbalanced data, and special and non-general data can be deleted. Finally, the MIT-BIH dataset is used to train and evaluate the proposed model. The experimental results show that the EEG-based sleep staging method proposed in this paper is effective for the diagnosis and treatment of sleep disorders and therefore has practical application value.
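A minimal sketch of a generic CNN-LSTM sleep stager of the kind described above, written in PyTorch; the channel count, layer sizes, and five-class output are illustrative assumptions rather than the authors' architecture.

```python
# Hedged sketch: 1-D CNN front end for per-epoch features, LSTM for temporal context.
import torch
import torch.nn as nn

class CNNLSTMStager(nn.Module):
    def __init__(self, n_channels=3, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(64),          # -> (batch, 64, 64)
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                      # x: (batch, channels, samples)
        feats = self.cnn(x)                    # (batch, 64, 64)
        feats = feats.permute(0, 2, 1)         # (batch, time=64, features=64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1, :])        # logits for the sleep stages

# One 30-s epoch at 100 Hz for 3 channels (e.g., two EEG + one EOG), batch of 8.
x = torch.randn(8, 3, 3000)
print(CNNLSTMStager()(x).shape)                # torch.Size([8, 5])
```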
Affiliation(s)
- Lan Zhuang
- Staff Hospital, Central South University, Changsha, China
- Minhui Dai
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, China
- Department of Ophthalmology, Xiangya Hospital, Central South University, Changsha, China
- Yi Zhou
- Department of Ophthalmology, Xiangya Hospital, Central South University, Changsha, China
- Lingyu Sun
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, China
- Department of Ophthalmology, Xiangya Hospital, Central South University, Changsha, China
- *Correspondence: Lingyu Sun
44
Lee H, Li B, DeForte S, Splaingard ML, Huang Y, Chi Y, Linwood SL. A large collection of real-world pediatric sleep studies. Sci Data 2022; 9:421. [PMID: 35853958 PMCID: PMC9296671 DOI: 10.1038/s41597-022-01545-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 07/08/2022] [Indexed: 11/09/2022] Open
Abstract
Despite being crucial to health and quality of life, sleep, and especially pediatric sleep, is not yet well understood. This is exacerbated by a lack of access to sufficient pediatric sleep data with clinical annotation. In order to accelerate research on pediatric sleep and its connection to health, we create the Nationwide Children's Hospital (NCH) Sleep DataBank and publish it at PhysioNet and the National Sleep Research Resource (NSRR), a large sleep data commons with physiological data, clinical data, and tools for analyses. The NCH Sleep DataBank consists of 3,984 polysomnography studies and over 5.6 million clinical observations on 3,673 unique patients between 2017 and 2019 at NCH. The novelties of this dataset include: (1) a large-scale sleep dataset suitable for discovering new insights via data mining, (2) an explicit focus on pediatric patients, (3) collection in a real-world clinical setting, and (4) an accompanying rich set of clinical data. The NCH Sleep DataBank is a valuable resource for advancing automatic sleep scoring and real-time sleep disorder prediction, among many other potential scientific discoveries.
Affiliation(s)
- Harlin Lee
- Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, USA
- Boyue Li
- Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, USA
- Shelly DeForte
- Nationwide Children's Hospital, 700 Children's Drive, Columbus, OH, 43205, USA
- Mark L Splaingard
- Nationwide Children's Hospital, 700 Children's Drive, Columbus, OH, 43205, USA
- Yungui Huang
- Nationwide Children's Hospital, 700 Children's Drive, Columbus, OH, 43205, USA
- Yuejie Chi
- Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, USA.
- Simon L Linwood
- School of Medicine, University of California, Riverside, 92521 Botanic Gardens Drive, Riverside, CA, 92507, USA.
45
Fatimah B, Singhal A, Singh P. A multi-modal assessment of sleep stages using adaptive Fourier decomposition and machine learning. Comput Biol Med 2022; 148:105877. [PMID: 35853400 DOI: 10.1016/j.compbiomed.2022.105877] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 06/29/2022] [Accepted: 07/09/2022] [Indexed: 11/30/2022]
Abstract
Healthy sleep is essential for the rejuvenation of the body and helps maintain good health. Many people suffer from sleep disorders that are characterized by abnormal sleep patterns, and automated assessment of such disorders using biomedical signals has been an active subject of research. The electroencephalogram (EEG) is a popular diagnostic tool in this regard. We consider a widely used, publicly available database and process the signals using the Fourier decomposition method (FDM) to obtain narrowband signal components. Statistical features extracted from these components are passed to machine learning classifiers to identify the different stages of sleep. A novel feature measuring the non-stationarity of the signal is also used to capture salient information. It is shown that classification results can be improved by using multi-channel EEG instead of single-channel EEG data. Simultaneous utilization of multiple modalities, such as the electromyogram (EMG) and electrooculogram (EOG) along with EEG data, leads to further enhancement of the results. The proposed method can be efficiently implemented in real time using the fast Fourier transform (FFT), and it provides better classification results than other algorithms in the literature. It can assist in the development of low-cost sensor-based setups for continuous patient monitoring and feedback.
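A minimal sketch of FFT-based narrowband decomposition and simple statistical features in the spirit of the FDM pipeline described above; the band edges and the non-stationarity proxy below are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch: split one EEG epoch into narrowband components by FFT masking,
# then compute a few statistical features per band.
import numpy as np
from scipy.stats import kurtosis, skew

def fft_band_components(x, fs, bands):
    """Split a 1-D signal into narrowband components by masking its FFT."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    components = []
    for lo, hi in bands:
        masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0)
        components.append(np.fft.irfft(masked, n=len(x)))
    return components

def band_features(component):
    return [component.std(), skew(component), kurtosis(component),
            np.mean(np.abs(np.diff(component)))]   # crude non-stationarity proxy

fs = 100.0                                # Hz, one 30-s epoch (illustrative)
epoch = np.random.default_rng(0).standard_normal(int(30 * fs))
bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]      # delta, theta, alpha, beta
features = np.concatenate([band_features(c)
                           for c in fft_band_components(epoch, fs, bands)])
print(features.shape)                     # (16,) -> feed to any classifier
```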
Affiliation(s)
- Amit Singhal
- Netaji Subhas University of Technology, Delhi, India.
- Pushpendra Singh
- National Institute of Technology Hamirpur, Himachal Pradesh, India
46
Wei K, Nie H, Li Y, Wang X, Liu Y, Zhao Y, Shi H, Huang H, Liu Y, Kang Z. Carbon dots with different energy levels regulate the activity of metal-free catalyst for hydrogen peroxide photoproduction. J Colloid Interface Sci 2022; 616:769-780. [PMID: 35247814 DOI: 10.1016/j.jcis.2022.02.107] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 02/19/2022] [Accepted: 02/22/2022] [Indexed: 10/19/2022]
Abstract
Artificial photoproduction of hydrogen peroxide (H2O2) from H2O and O2 by metal-free catalysts (e.g., graphitic carbon nitride) is regarded as an ultra-clean approach, but metal-free catalysts are often hindered by unpropitious rapid charge recombination and unfavorable selectivity. Herein, three carbon-dot (CDs1 to CDs3)-decorated modified carbon nitrides (CDs1-NCN, CDs2-NCN, and CDs3-NCN) were designed and fabricated, and they show diverse H2O2 photoproduction activity. Among them, CDs1-NCN, as a two-channel photocatalyst, achieves H2O2 production with high efficiency (1938 μmol h-1 g-1) at normal pressure, without a sacrificial agent, under visible light (λ ≥ 420 nm); this rate is 27.5 times higher than that of pristine C3N4. The apparent quantum efficiency is calculated to be 7.03% (λ = 365 nm). In this system, CDs with different energy levels dominate the activity of the metal-free catalyst for hydrogen peroxide photoproduction. Combined with photoelectrochemical tests and transient photovoltage analysis, the active sites and the catalytic mechanism of these composite catalysts are also clarified. Our work provides clear insight into the regulation of interfacial electron transport in metal-free photocatalysts.
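For context, this is how an apparent quantum efficiency (AQE) of this kind is commonly computed, assuming the usual two-electron convention for H2O2 photoproduction; the definition is not quoted from the paper, and the symbols are generic.

```latex
% Assumed two-electron convention for H2O2 photoproduction (not quoted from the paper)
\mathrm{AQE}\,(\%) = \frac{2\,N_{\mathrm{H_2O_2}}}{N_{\mathrm{photons}}} \times 100,
\qquad
N_{\mathrm{photons}} = \frac{P\,A\,t\,\lambda}{h\,c}
```

Here N_H2O2 is the number of H2O2 molecules produced, P the incident light intensity, A the irradiated area, t the irradiation time, λ the wavelength (365 nm in the reported measurement), h Planck's constant, and c the speed of light.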
Affiliation(s)
- Kaiqiang Wei
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China
- Haodong Nie
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China
- Yi Li
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China
- Xiao Wang
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China
- Yan Liu
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China
- Yajie Zhao
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China
- Hong Shi
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China
- Hui Huang
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China.
- Yang Liu
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China.
- Zhenhui Kang
- Institute of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-based Functional Materials and Devices, Soochow University, Suzhou 215123, China; Macao Institute of Materials Science and Engineering, Macau University of Science and Technology, Taipa, Macau SAR 999078, China.
47
Li H, Wu L. EEG Classification of Normal and Alcoholic by Deep Learning. Brain Sci 2022; 12:778. [PMID: 35741663 PMCID: PMC9220822 DOI: 10.3390/brainsci12060778] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 06/06/2022] [Accepted: 06/11/2022] [Indexed: 12/21/2022] Open
Abstract
Alcohol dependence is a common mental disease worldwide. Excessive alcohol consumption may lead to alcoholism and many complications; in severe cases, it leads to inhibition and paralysis of the respiratory and circulatory centers and even death. In addition, there is a lack of effective standard test procedures to detect alcoholism. EEG signals, obtained by measuring electrical activity of the cerebral cortex, can be used for the diagnosis of alcoholism. Existing diagnostic methods mainly employ machine learning techniques, which rely on human intervention. In contrast, deep learning, as an end-to-end learning method, can automatically extract EEG signal features, which is more convenient. Nonetheless, there are few studies on the classification of alcoholic EEG signals using deep learning models. Therefore, in this paper, a new deep learning method is proposed to automatically extract and classify EEG features. The method first adopts a multilayer discrete wavelet transform to denoise the input data. Then, the denoised data are used as input, and a convolutional neural network and a bidirectional long short-term memory network are used for feature extraction. Finally, alcoholic EEG signal classification is performed. The experimental results show that the proposed method can be used to effectively diagnose patients with alcoholism, achieving a diagnostic accuracy of 99.32%, which is better than most current algorithms.
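A minimal sketch of the multilayer discrete-wavelet-transform denoising front end described above, using PyWavelets with soft thresholding; the wavelet family, decomposition level, and threshold rule are illustrative assumptions, and the CNN-BiLSTM classifier is omitted.

```python
# Hedged sketch: DWT denoising of a 1-D signal with a universal soft threshold.
import numpy as np
import pywt

def dwt_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

fs = 256
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)                    # toy 10 Hz rhythm
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(len(t))
# Residual noise shrinks after denoising.
print(np.std(noisy - clean), np.std(dwt_denoise(noisy) - clean))
```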
Affiliation(s)
- Houchi Li
- School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan 411100, China;
- Lei Wu
- Hunan Engineering Research Center for Intelligent Decision Making and Big Data on Industrial Development, Hunan University of Science and Technology, Xiangtan 411100, China
48
Multi-Classification of Motor Imagery EEG Signals Using Bayesian Optimization-Based Average Ensemble Approach. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12125807] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Motor Imagery (MI) classification using electroencephalography (EEG) has been extensively applied in healthcare scenarios for rehabilitation aims. EEG signal decoding is a difficult process due to the signal's complexity and poor signal-to-noise ratio. Convolutional neural networks (CNNs) have demonstrated their ability to extract time–space characteristics from EEG signals for better classification results. However, to discover dynamic correlations in these signals, CNN models must be improved. Hyperparameter choice strongly affects the robustness of CNNs, and tuning remains challenging because the manual tuning performed by domain experts rarely delivers the performance needed for real-life applications. To overcome these limitations, we present a fusion of three optimum CNN models using the Average Ensemble strategy, a method utilized for the first time for MI movement classification. Moreover, we adopt the Bayesian Optimization (BO) algorithm to reach the optimal hyperparameter values. The experimental results demonstrate that, without data augmentation, our approach reached 92% accuracy, whereas Linear Discriminant Analysis, Support Vector Machine, Random Forest, Multi-Layer Perceptron, and Gaussian Naive Bayes achieved 68%, 70%, 58%, 64%, and 40% accuracy, respectively. Further, we surpassed state-of-the-art strategies on the BCI Competition IV-2a multiclass MI database by a wide margin, proving the benefit of combining the output of CNN models with automated hyperparameter tuning.
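A minimal sketch of the average (soft-voting) ensemble idea on synthetic data, with three generic scikit-learn classifiers standing in for the paper's three CNNs; the Bayesian optimization of hyperparameters is omitted for brevity (it could be added with, e.g., scikit-optimize's gp_minimize).

```python
# Hedged sketch: average the per-class probabilities of three trained models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=800, n_features=40, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = [
    LogisticRegression(max_iter=2000).fit(X_tr, y_tr),
    SVC(probability=True, random_state=0).fit(X_tr, y_tr),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0).fit(X_tr, y_tr),
]

# Average ensemble: mean of the per-class probabilities, then argmax.
probs = np.mean([m.predict_proba(X_te) for m in models], axis=0)
pred = probs.argmax(axis=1)
print("ensemble accuracy:", (pred == y_te).mean())
```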
49
Liu L, Ren J, Li Z, Yang C. A review of MEG dynamic brain network research. Proc Inst Mech Eng H 2022; 236:763-774. [PMID: 35465768 DOI: 10.1177/09544119221092503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The dynamic description of neural networks has attracted researchers' attention because dynamic networks may carry more information than resting-state networks. As a non-invasive electrophysiological modality with high temporal and spatial resolution, magnetoencephalography (MEG) can provide rich information for the analysis of dynamic functional brain networks. In this review, the development of MEG brain network research is summarized. Several analysis methods used in MEG dynamic brain network studies, such as the sliding window, hidden Markov model, and time-frequency based methods, are discussed. Finally, current research on multi-modal brain network analysis and its applications with MEG neurophysiology, which is expected to be one of the research directions of the future, is summarized.
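A minimal sketch of the sliding-window approach mentioned in the review: overlapping windows of multi-channel data are correlated to produce a sequence of connectivity matrices; the window length and step are illustrative assumptions, not values recommended by the review.

```python
# Hedged sketch: sliding-window dynamic functional connectivity via Pearson correlation.
import numpy as np

def sliding_window_connectivity(data, win_len, step):
    """data: (n_channels, n_samples) -> (n_windows, n_channels, n_channels) matrices."""
    n_channels, n_samples = data.shape
    matrices = []
    for start in range(0, n_samples - win_len + 1, step):
        window = data[:, start:start + win_len]
        matrices.append(np.corrcoef(window))      # one connectivity snapshot per window
    return np.stack(matrices)

fs = 250                                          # Hz (illustrative sampling rate)
rng = np.random.default_rng(0)
meg = rng.standard_normal((30, 60 * fs))          # 30 channels, 60 s of fake data
dyn_fc = sliding_window_connectivity(meg, win_len=2 * fs, step=fs // 2)
print(dyn_fc.shape)                               # (117, 30, 30) dynamic network snapshots
```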
Affiliation(s)
- Lu Liu
- Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Jiechuan Ren
- Department of Internal Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Zhimei Li
- Department of Internal Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Chunlan Yang
- Faculty of Environment and Life, Beijing University of Technology, Beijing, China
50
Ji X, Li Y, Wen P. Jumping Knowledge Based Spatial-temporal Graph Convolutional Networks for Automatic Sleep Stage Classification. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1464-1472. [PMID: 35584068 DOI: 10.1109/tnsre.2022.3176004] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A novel jumping knowledge spatial-temporal graph convolutional network (JK-STGCN) is proposed in this paper to classify sleep stages. Based on this method, different types of multi-channel bio-signals, including electroencephalography (EEG), electromyogram (EMG), electrooculogram (EOG), and electrocardiogram (ECG), are utilized to classify sleep stages after features are extracted by a standard convolutional neural network (CNN) named FeatureNet. Intrinsic connections among different bio-signal channels from the same epoch and from neighboring epochs are obtained through two adaptive adjacency matrix learning methods. A jumping knowledge spatial-temporal graph convolution module helps the JK-STGCN model extract spatial features efficiently from the graph convolutions, while temporal features are extracted from its standard convolutions to learn the transition rules among sleep stages. Experimental results on the ISRUC-S3 dataset show that the overall accuracy reached 0.831, and the F1-score and Cohen kappa reached 0.814 and 0.782, respectively, which is competitive with state-of-the-art baselines. Further experiments on the ISRUC-S3 dataset were also conducted to evaluate the execution efficiency of the JK-STGCN model: the training time on 10 subjects is 2621 s and the testing time on 50 subjects is 6.8 s, indicating a higher computation speed than existing high-performance graph convolutional network and U-Net architecture algorithms. Experimental results on the ISRUC-S1 dataset also demonstrate its generality, with accuracy, F1-score, and Cohen kappa of 0.820, 0.798, and 0.767, respectively.
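A minimal sketch of a graph convolution layer with a learnable (adaptive) adjacency matrix, illustrating the general idea behind adaptive adjacency learning in spatial graph convolutions; it is not the authors' JK-STGCN module, and the node count and feature sizes are illustrative.

```python
# Hedged sketch: graph convolution over bio-signal channels with a learned adjacency.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphConv(nn.Module):
    def __init__(self, n_nodes, in_dim, out_dim):
        super().__init__()
        # Node embeddings whose inner product defines a learned adjacency matrix.
        self.emb = nn.Parameter(torch.randn(n_nodes, 16))
        self.weight = nn.Linear(in_dim, out_dim)

    def forward(self, x):                  # x: (batch, n_nodes, in_dim)
        adj = F.softmax(F.relu(self.emb @ self.emb.t()), dim=-1)  # row-normalized
        return F.relu(self.weight(adj @ x))                       # message passing

# 10 channel nodes (EEG/EOG/EMG/ECG), 64-dim per-channel features from a CNN.
layer = AdaptiveGraphConv(n_nodes=10, in_dim=64, out_dim=32)
x = torch.randn(4, 10, 64)
print(layer(x).shape)                      # torch.Size([4, 10, 32])
```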