1. Zhou W, Zhu H, Chen W, Chen C, Xu J. Outlier Handling Strategy of Ensembled-Based Sequential Convolutional Neural Networks for Sleep Stage Classification. Bioengineering (Basel) 2024; 11:1226. PMID: 39768044; PMCID: PMC11673830; DOI: 10.3390/bioengineering11121226.
Abstract
The pivotal role of sleep has led to extensive research endeavors aimed at automatic sleep stage classification. However, existing methods perform poorly when classifying small groups or individuals, and these results are often considered outliers in terms of overall performance. These outliers may introduce bias during model training, adversely affecting feature selection and diminishing model performance. To address the above issues, this paper proposes an ensemble-based sequential convolutional neural network (E-SCNN) that incorporates a clustering module and neural networks. E-SCNN effectively ensembles machine learning and deep learning techniques to minimize outliers, thereby enhancing model robustness at the individual level. Specifically, the clustering module categorizes individuals based on similarities in feature distribution and assigns personalized weights accordingly. Subsequently, by combining these tailored weights with the robust feature extraction capabilities of convolutional neural networks, the model generates more accurate sleep stage classifications. The proposed model was verified on two public datasets, and experimental results demonstrate that the proposed method obtains overall accuracies of 84.8% on the Sleep-EDF Expanded dataset and 85.5% on the MASS dataset. E-SCNN can alleviate the outlier problem, which is important for improving sleep quality monitoring for individuals.
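As a rough illustration of the idea summarized above (cluster subjects by feature-distribution similarity, derive personalized weights, and train a CNN with them), here is a minimal sketch. It is not the authors' released implementation; the data shapes, the KMeans-based weighting rule, and the `TinySCNN` network are illustrative assumptions.

```python
# Minimal sketch of the clustering-then-weighting idea described above.
# Hypothetical data shapes and weighting rule; not the paper's implementation.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_subjects, epochs_per_subject, fs, epoch_sec = 8, 20, 100, 30
X = rng.standard_normal((n_subjects, epochs_per_subject, fs * epoch_sec)).astype("float32")
y = rng.integers(0, 5, size=(n_subjects, epochs_per_subject))          # 5 sleep stages

# 1) Cluster subjects by the similarity of their feature distributions
#    (here: mean/std of each subject's epochs as a crude distribution summary).
subj_feats = np.stack([np.r_[X[s].mean(), X[s].std()] for s in range(n_subjects)])
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(subj_feats)

# 2) Personalized weights: subjects in small (outlier-like) clusters get larger weights.
counts = np.bincount(clusters, minlength=3)
subj_weight = 1.0 / counts[clusters]
subj_weight = subj_weight / subj_weight.mean()

# 3) A small sequential CNN trained with the per-subject weights.
class TinySCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 50, stride=6), nn.ReLU(),
            nn.Conv1d(16, 32, 8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, samples)
        return self.fc(self.conv(x).squeeze(-1))

model = TinySCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(reduction="none")

for s in range(n_subjects):                    # one toy pass over the data
    xb = torch.from_numpy(X[s]).unsqueeze(1)   # (epochs, 1, samples)
    yb = torch.from_numpy(y[s]).long()
    loss = (loss_fn(model(xb), yb) * float(subj_weight[s])).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```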
Affiliation(s)
- Wei Zhou: Jiangsu Key Laboratory of Intelligent Medical Image Computing, Nanjing 210044, China; School of Future Technology, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Hangyu Zhu: Center for Intelligent Medical Electronics (CIME), School of Information Science and Engineering, Fudan University, Shanghai 200433, China
- Wei Chen: School of Biomedical Engineering, The University of Sydney, Sydney, NSW 2006, Australia
- Chen Chen: Center for Medical Research and Innovation, Shanghai Pudong Hospital, Fudan University Pudong Medical Center, Shanghai 201203, China; Human Phenome Institute, Fudan University, Shanghai 200438, China
- Jun Xu: Jiangsu Key Laboratory of Intelligent Medical Image Computing, Nanjing 210044, China; School of Future Technology, Nanjing University of Information Science and Technology, Nanjing 210044, China
2. Stuart N, Manners J, Kemps E, Nguyen P, Lechat B, Catcheside P, Scott H. Tripolar concentric ring electrodes for capturing localised electroencephalography signals during sleep. J Sleep Res 2024; 33:e14203. PMID: 38544356; PMCID: PMC11597005; DOI: 10.1111/jsr.14203.
Abstract
By design, tripolar concentric ring electrodes (TCRE) provide more focal brain activity signals than conventional electroencephalography (EEG) electrodes placed further apart. This study compared spectral characteristics and rates of data loss due to noisy epochs with TCRE versus conventional EEG signals recorded during sleep. A total of 20 healthy sleepers (12 females; mean [standard deviation] age 27.8 [9.6] years) underwent a 9-h sleep study. Participants were set up for polysomnography recording with TCRE to assess brain activity from 18 sites and conventional electrodes for EEG, eye movements, and muscle activity. A fast Fourier transform using multitaper-based estimation was applied in 5-s epochs to scored sleep. Odds ratios with Bonferroni-adjusted 95% confidence intervals were calculated to determine the proportional differences in the number of noisy epochs between electrode types. Relative power was compared across frequency bands throughout sleep. Linear mixed models showed significant main effects of signal type (p < 0.001) and sleep stage (p < 0.001) on relative spectral power in each band, with lower relative spectral power across all stages in TCRE versus EEG for alpha, beta, sigma, and theta activity, and greater delta power in all stages. Scalp topography plots showed distinct beta activation in the right parietal lobe with TCRE versus EEG. EEG showed higher rates of noisy epochs compared with TCRE (1.3% versus 0.8%, p < 0.001). TCRE signals showed marked differences in brain activity compared with EEG, consistent with more focal measurements and region-specific differences during sleep. TCRE may be useful for evaluating regional differences in brain activity with reduced muscle artefact compared with conventional EEG.
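For readers unfamiliar with the spectral pipeline summarized above, the sketch below computes a multitaper relative-band-power estimate on a 5-s epoch and an odds ratio with a Bonferroni-adjusted confidence interval for noisy-epoch proportions. The sampling rate, DPSS taper settings, band edges, and epoch counts are illustrative assumptions, not the study's exact parameters.

```python
# Sketch of the spectral steps described above (illustrative parameters only).
import numpy as np
from scipy.fft import rfft, rfftfreq
from scipy.signal.windows import dpss
from scipy.stats import norm

fs, epoch_sec = 256, 5
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 16), "beta": (16, 30)}

def multitaper_relative_power(epoch, fs, nw=3.0, k=5):
    """Multitaper PSD via DPSS tapers, then power in each band / total power."""
    n = epoch.size
    tapers = dpss(n, nw, Kmax=k)                       # (k, n) orthogonal tapers
    psd = np.mean(np.abs(rfft(tapers * epoch, axis=1)) ** 2, axis=0)
    freqs = rfftfreq(n, 1 / fs)
    total = psd[(freqs >= 0.5) & (freqs <= 30)].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

epoch = np.random.randn(fs * epoch_sec)                # stand-in for one scored 5-s epoch
print(multitaper_relative_power(epoch, fs))

def odds_ratio_ci(noisy_a, clean_a, noisy_b, clean_b, n_comparisons=1):
    """Odds of being noisy (signal A vs B) with a Bonferroni-adjusted Wald CI."""
    or_ = (noisy_a * clean_b) / (noisy_b * clean_a)
    se = np.sqrt(1/noisy_a + 1/clean_a + 1/noisy_b + 1/clean_b)
    z = norm.ppf(1 - 0.05 / (2 * n_comparisons))       # two-sided, adjusted alpha
    return or_, (or_ * np.exp(-z * se), or_ * np.exp(z * se))

# e.g. 1.3% noisy EEG epochs vs 0.8% noisy TCRE epochs out of 100,000 each (made-up counts)
print(odds_ratio_ci(1300, 98700, 800, 99200, n_comparisons=5))
```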
Affiliation(s)
- Nicole Stuart: Flinders Health and Medical Research Institute: Sleep Health, Flinders University, Adelaide, South Australia, Australia; College of Education, Psychology and Social Work, Flinders University, Adelaide, South Australia, Australia
- Jack Manners: Flinders Health and Medical Research Institute: Sleep Health, Flinders University, Adelaide, South Australia, Australia; College of Education, Psychology and Social Work, Flinders University, Adelaide, South Australia, Australia
- Eva Kemps: College of Education, Psychology and Social Work, Flinders University, Adelaide, South Australia, Australia
- Phuc Nguyen: Flinders Health and Medical Research Institute: Sleep Health, Flinders University, Adelaide, South Australia, Australia
- Bastien Lechat: Flinders Health and Medical Research Institute: Sleep Health, Flinders University, Adelaide, South Australia, Australia
- Peter Catcheside: Flinders Health and Medical Research Institute: Sleep Health, Flinders University, Adelaide, South Australia, Australia
- Hannah Scott: Flinders Health and Medical Research Institute: Sleep Health, Flinders University, Adelaide, South Australia, Australia
3. Zhu H, Xu Y, Wu Y, Shen N, Wang L, Chen C, Chen W. A Sequential End-to-End Neonatal Sleep Staging Model with Squeeze and Excitation Blocks and Sequential Multi-Scale Convolution Neural Networks. Int J Neural Syst 2024; 34:2450013. PMID: 38369905; DOI: 10.1142/s0129065724500138.
Abstract
Automatic sleep staging offers a quick and objective assessment for quantitatively interpreting sleep stages in neonates. However, most existing studies either do not incorporate any temporal information or simply apply neural networks to exploit temporal information at the expense of high computational overhead and modeling ambiguity, which limits their application to multiple scenarios. In this paper, a sequential end-to-end sleep staging model, SeqEESleepNet, is proposed that can process sequential epochs in parallel and trains quickly enough to adapt to different scenarios. SeqEESleepNet consists of a sequence epoch generation (SEG) module, a sequential multi-scale convolution neural network (SMSCNN), and squeeze and excitation (SE) blocks. The SEG module expands independent epochs into sequential signals, enabling the model to learn the temporal information between sleep stages. SMSCNN is a multi-scale convolution neural network that extracts both multi-scale features and temporal information from the signal. The subsequent SE block then reweights the extracted features through mapping and pooling. Experimental results on a clinical dataset show that the proposed method outperforms state-of-the-art approaches, achieving an overall accuracy, F1-score, and Kappa coefficient of 71.8%, 71.8%, and 0.684 on a three-class classification task with a single-channel EEG signal. Based on these results, we believe the proposed method could pave the way for convenient multi-scenario neonatal sleep staging.
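To make the multi-scale-convolution-plus-SE idea concrete, below is a minimal PyTorch sketch of a multi-scale 1D convolution branch followed by a squeeze-and-excitation block that reweights channel features. The class names, kernel sizes, channel counts, and reduction ratio are illustrative assumptions, not the paper's SMSCNN configuration.

```python
# Sketch of a multi-scale conv branch with an SE block, as outlined above.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation: global pooling -> bottleneck MLP -> channel gates."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))             # squeeze over time, excite per channel
        return x * w.unsqueeze(-1)

class MultiScaleConv1d(nn.Module):
    """Parallel 1D convolutions with different kernel sizes, concatenated on channels."""
    def __init__(self, in_ch=1, out_ch_each=16, kernels=(25, 50, 100)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(in_ch, out_ch_each, k, stride=4, padding=k // 2),
                          nn.BatchNorm1d(out_ch_each), nn.ReLU())
            for k in kernels)
        self.se = SEBlock1d(out_ch_each * len(kernels))

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        t = min(f.shape[-1] for f in feats)     # align lengths before concatenation
        return self.se(torch.cat([f[..., :t] for f in feats], dim=1))

x = torch.randn(8, 1, 3000)                     # e.g. 8 EEG epochs of 30 s at 100 Hz
print(MultiScaleConv1d()(x).shape)              # -> torch.Size([8, 48, 750])
```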
Affiliation(s)
- Hangyu Zhu: Center for Intelligent Medical Electronics, School of Information Science and Technology, Fudan University, Shanghai 200433, P. R. China
- Yan Xu: Department of Neurology, Children's Hospital of Fudan University, National Children's Medical Center, Shanghai, P. R. China
- Yonglin Wu: Center for Intelligent Medical Electronics, School of Information Science and Technology, Fudan University, Shanghai 200433, P. R. China
- Ning Shen: Center for Intelligent Medical Electronics, School of Information Science and Technology, Fudan University, Shanghai 200433, P. R. China
- Laishuan Wang: Department of Neurology, Children's Hospital of Fudan University, National Children's Medical Center, Shanghai, P. R. China
- Chen Chen: Human Phenome Institute, Fudan University, 825 Zhangheng Road, Shanghai 201203, P. R. China
- Wei Chen: Center for Intelligent Medical Electronics, School of Information Science and Technology, Fudan University, Shanghai 200433, P. R. China
4. Hu Y, Shi W, Yeh CH. Spatiotemporal convolution sleep network based on graph attention mechanism with automatic feature extraction. Comput Methods Programs Biomed 2024; 244:107930. PMID: 38008039; DOI: 10.1016/j.cmpb.2023.107930.
Abstract
BACKGROUND AND OBJECTIVE: Graph neural networks (GNNs) are widely used for automatic sleep staging. However, to our knowledge, the majority of GNNs are based on spectral approaches, which depend heavily on the Laplacian eigenbasis determined by the graph structure and incur a large computational cost. METHODS: We introduce a non-spectral approach, graph attention networks v2 (GATv2), as the core of our network to extract spatial information (S-GATv2 in this work), which is more flexible and intuitive than routine spectral methods. Meanwhile, to address the weak generalization of traditional feature extraction, multiple convolutional layers are used to extract features automatically. The proposed spatiotemporal convolution sleep network (ST-GATv2) consists of multi-convolution layers and a GATv2 block. Notably, the graph attention mechanism is also applied in the time domain to construct a temporal GATv2 (T-GATv2), which is intended to capture the connection between two channels in adjacent sleep stages. In addition, a modified function is proposed to capture hidden trend information from the difference between the feature values of two adjacent stages. RESULTS: In our experiments, we used the SS3 dataset in MASS as the test dataset to compare with other advanced models. Our model achieves the highest accuracy of 89.0%. The proposed T-GATv2 block and modified function bring an improvement of approximately 0.5% in Kappa and F1-score. CONCLUSIONS: Our results support the potential of graph attention mechanisms and the proposed blocks (T-GATv2 and the modified function) in sleep classification. We suggest the proposed ST-GATv2 model as an effective tool for sleep staging in either healthy or diseased states.
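The non-spectral attention this abstract relies on can be illustrated with a small dense GATv2-style layer, where the learnable attention vector is applied after the LeakyReLU so the attention is not static. This is a generic single-head sketch over a toy EEG-channel graph, not the authors' ST-GATv2 code; the feature dimensions and the fully connected adjacency are assumptions.

```python
# Minimal dense GATv2-style attention layer (single head), per the mechanism above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGATv2Layer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_src = nn.Linear(in_dim, out_dim, bias=False)
        self.w_dst = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(out_dim, 1, bias=False)   # applied AFTER LeakyReLU (GATv2)

    def forward(self, h, adj):                 # h: (nodes, in_dim), adj: (nodes, nodes)
        hs, hd = self.w_src(h), self.w_dst(h)
        # e_ij = a^T LeakyReLU(W_s h_i + W_d h_j), computed for all node pairs
        e = self.attn(F.leaky_relu(hs.unsqueeze(1) + hd.unsqueeze(0), 0.2)).squeeze(-1)
        e = e.masked_fill(adj == 0, float("-inf"))      # keep only graph edges
        alpha = torch.softmax(e, dim=1)                 # attention over neighbours
        return alpha @ hd                               # aggregate neighbour messages

n_channels, feat_dim = 20, 64                  # e.g. 20 EEG channels, 64-d CNN features
h = torch.randn(n_channels, feat_dim)
adj = torch.ones(n_channels, n_channels)       # fully connected spatial graph (toy choice)
print(DenseGATv2Layer(feat_dim, 32)(h, adj).shape)      # -> torch.Size([20, 32])
```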
Affiliation(s)
- Yidong Hu: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China; School of Cyberspace Security, Beijing Institute of Technology, Beijing 100081, China
- Wenbin Shi: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Brain Health Intelligent Evaluation and Intervention, Ministry of Education (Beijing Institute of Technology), Beijing 100081, China
- Chien-Hung Yeh: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Brain Health Intelligent Evaluation and Intervention, Ministry of Education (Beijing Institute of Technology), Beijing 100081, China
5. Zhu H, Wu Y, Guo Y, Fu C, Shu F, Yu H, Chen W, Chen C. Towards Real-Time Sleep Stage Prediction and Online Calibration Based on Architecturally Switchable Deep Learning Models. IEEE J Biomed Health Inform 2024; 28:470-481. PMID: 37878423; DOI: 10.1109/jbhi.2023.3327470.
Abstract
Despite recent advances in automatic sleep staging, few studies have focused on real-time sleep staging to support sleep regulation or the intervention of sleep disorders. In this paper, a novel network named SwSleepNet is proposed that can handle both precise offline sleep staging and online sleep stage prediction with calibration. For offline analysis, the network coordinates a sequence broadening module (SBM), a sequential CNN (SCNN), a squeeze and excitation (SE) block, and a sequence consolidation module (SCM) to balance operational efficiency with comprehensive feature extraction. For online analysis, only the SCNN and SE block are involved in predicting the sleep stage within a short-time segment of the recordings. Once more than two successive segments have disparate predictions, the calibration mechanism is triggered and contextual information is involved. In addition, to determine an appropriate segment length for predicting a sleep stage, segments of five, three, and two seconds are analyzed. The performance of SwSleepNet is validated on two publicly available datasets, Sleep-EDF Expanded and the Montreal Archive of Sleep Studies (MASS), and one clinical dataset from Huashan Hospital Fudan University (HSFU), with offline accuracies of 84.5%, 86.7%, and 81.8%, respectively, outperforming state-of-the-art methods. Additionally, for online sleep staging, the dedicated calibration mechanism allows SwSleepNet to achieve accuracies above 80% on the three datasets with short-time segments, demonstrating its robustness and stability. This study presents a real-time sleep staging architecture, which is expected to pave the way for accurate sleep regulation and intervention.
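The online rule summarized above (predict on short segments and, once more than two successive segments disagree, fall back to contextual information) can be sketched as a small control loop. The predictor interfaces, context length, and the majority-vote fallback below are illustrative assumptions, not the paper's actual calibration mechanism.

```python
# Sketch of an online prediction loop with a disagreement-triggered calibration step.
from collections import Counter, deque
from typing import Callable, Iterable, List

def online_stage_stream(segments: Iterable,
                        predict_segment: Callable,        # fast model on one short segment
                        predict_with_context: Callable,   # slower model on a context window
                        context_len: int = 10,
                        disagreement_run: int = 3) -> List[int]:
    """Emit one stage per segment; recalibrate after `disagreement_run` disparate predictions."""
    history = deque(maxlen=context_len)   # recent raw segments (the contextual information)
    outputs: List[int] = []
    run = 1                               # length of the current run of differing labels
    for seg in segments:
        history.append(seg)
        label = predict_segment(seg)
        if outputs and label != outputs[-1]:
            run += 1
        else:
            run = 1
        if run >= disagreement_run:       # "more than two successive segments" disagree
            label = predict_with_context(list(history))
            run = 1
        outputs.append(label)
    return outputs

# Toy usage: a noisy per-segment predictor, and a context model that majority-votes.
import random
random.seed(0)
truth = [0] * 20 + [2] * 20
noisy = lambda seg: seg if random.random() > 0.2 else random.randint(0, 4)
context_vote = lambda ctx: Counter(noisy(s) for s in ctx).most_common(1)[0][0]
print(online_stage_stream(truth, noisy, context_vote))
```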
6. Einizade A, Nasiri S, Sardouie SH, Clifford GD. ProductGraphSleepNet: Sleep staging using product spatio-temporal graph learning with attentive temporal aggregation. Neural Netw 2023; 164:667-680. PMID: 37245479; DOI: 10.1016/j.neunet.2023.05.016.
Abstract
The classification of sleep stages plays a crucial role in understanding and diagnosing sleep pathophysiology. Sleep stage scoring relies heavily on visual inspection by an expert, which is a time-consuming and subjective procedure. Recently, deep learning approaches have been leveraged to develop generalized automated sleep staging and to account for shifts in distributions caused by inherent inter- and intra-subject variability, heterogeneity across datasets, and different recording environments. However, these networks mostly ignore the connections among brain regions and disregard modeling the connections between temporally adjacent sleep epochs. To address these issues, this work proposes an adaptive product graph learning-based graph convolutional network, named ProductGraphSleepNet, for learning joint spatio-temporal graphs, together with a bidirectional gated recurrent unit and a modified graph attention network to capture the attentive dynamics of sleep stage transitions. Evaluation on two public databases, the Montreal Archive of Sleep Studies (MASS) SS3 and SleepEDF, which contain full-night polysomnography recordings of 62 and 20 healthy subjects respectively, demonstrates performance comparable to the state of the art (accuracy: 0.867 and 0.838; F1-score: 0.818 and 0.774; Kappa: 0.802 and 0.775, respectively). More importantly, the proposed network makes it possible for clinicians to comprehend and interpret the learned spatial and temporal connectivity graphs for sleep stages.
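The "product spatio-temporal graph" in the title can be illustrated with the standard Cartesian graph product, which combines a spatial adjacency over EEG channels with a temporal adjacency over neighbouring epochs. The toy matrices below are an illustration under that assumption, not the learned graphs from the paper.

```python
# Toy Cartesian product of a spatial (channel) graph and a temporal (epoch) graph.
import numpy as np

def cartesian_product_graph(a_spatial: np.ndarray, a_temporal: np.ndarray) -> np.ndarray:
    """A_cart = A_s (x) I_T + I_N (x) A_t, giving one node per (channel, epoch) pair."""
    n, t = a_spatial.shape[0], a_temporal.shape[0]
    return np.kron(a_spatial, np.eye(t)) + np.kron(np.eye(n), a_temporal)

a_spatial = np.array([[0, 1, 1],              # 3 channels in a small toy montage
                      [1, 0, 1],
                      [1, 1, 0]], dtype=float)
a_temporal = np.array([[0, 1, 0, 0],          # 4 consecutive epochs in a path graph
                       [1, 0, 1, 0],
                       [0, 1, 0, 1],
                       [0, 0, 1, 0]], dtype=float)

a_st = cartesian_product_graph(a_spatial, a_temporal)
print(a_st.shape)            # (12, 12): 3 channels x 4 epochs
print(int(a_st.sum()) // 2)  # number of undirected spatio-temporal edges
```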
Affiliation(s)
- Aref Einizade: Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
- Samaneh Nasiri: Massachusetts General Hospital, Harvard Medical School, MA, USA
- Gari D Clifford: Georgia Institute of Technology, GA, USA; Emory School of Medicine, GA, USA
7. Zhu H, Fu C, Shu F, Yu H, Chen C, Chen W. The Effect of Coupled Electroencephalography Signals in Electrooculography Signals on Sleep Staging Based on Deep Learning Methods. Bioengineering (Basel) 2023; 10:573. PMID: 37237643; PMCID: PMC10215192; DOI: 10.3390/bioengineering10050573.
Abstract
The influence of the electroencephalography (EEG) signal coupled into electrooculography (EOG) on EOG-based automatic sleep staging has been ignored. Because EOG and prefrontal EEG are collected at close range, it is unclear whether EEG couples into the EOG, and whether the EOG signal can achieve good sleep staging results because of its intrinsic characteristics. In this paper, the effect of a coupled EEG signal in the EOG signal on automatic sleep staging is explored. A blind source separation algorithm was used to extract a clean prefrontal EEG signal. The raw EOG signal and the clean prefrontal EEG signal were then processed to obtain EOG signals coupled with different amounts of EEG content. Afterwards, the coupled EOG signals were fed into a hierarchical neural network, comprising a convolutional neural network and a recurrent neural network, for automatic sleep staging. Finally, an exploration was performed on two public datasets and one clinical dataset. The results showed that a coupled EOG signal achieved accuracies of 80.4%, 81.1%, and 78.9% on the three datasets, slightly better than sleep staging using the EOG signal without coupled EEG. Thus, an appropriate amount of coupled EEG content in the EOG signal improves sleep staging results. This paper provides an experimental basis for sleep staging with EOG signals.
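As a rough illustration of the decoupling-and-remixing procedure described above, the sketch below uses FastICA to separate a synthetic two-channel EOG/prefrontal-EEG mixture and then rebuilds EOG signals carrying different amounts of the EEG component. The synthetic signals, mixing matrix, and component-selection heuristic are assumptions; the paper's blind source separation setup may differ.

```python
# Sketch: separate a toy EOG/EEG mixture with ICA, then remix EOG with varying EEG content.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs, seconds = 100, 30
t = np.arange(fs * seconds) / fs

eog_true = np.sign(np.sin(2 * np.pi * 0.3 * t)) + 0.1 * rng.standard_normal(t.size)  # slow eye movements
eeg_true = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)            # alpha-like activity
sources = np.c_[eog_true, eeg_true]                     # (samples, 2)

mixing = np.array([[1.0, 0.4],                          # recorded EOG = EOG + coupled EEG
                   [0.2, 1.0]])                         # recorded prefrontal EEG
recorded = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
est_sources = ica.fit_transform(recorded)               # unmixed components (order/scale arbitrary)

# Identify the EEG-like component as the one with more 8-12 Hz power (a crude heuristic).
freqs = np.fft.rfftfreq(t.size, 1 / fs)
alpha_power = [np.abs(np.fft.rfft(est_sources[:, i]))[(freqs >= 8) & (freqs <= 12)].sum()
               for i in range(2)]
eeg_idx = int(np.argmax(alpha_power))
eog_idx = 1 - eeg_idx

# Rebuild EOG signals carrying different proportions of coupled EEG for downstream staging.
for ratio in (0.0, 0.25, 0.5):
    coupled_eog = est_sources[:, eog_idx] + ratio * est_sources[:, eeg_idx]
    print(f"EEG coupling ratio {ratio}: std = {coupled_eog.std():.3f}")
```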
Affiliation(s)
- Hangyu Zhu: School of Information Science and Technology, Fudan University, Shanghai 200433, China
- Cong Fu: Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Feng Shu: Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Huan Yu: Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Chen Chen: Human Phenome Institute, Fudan University, Shanghai 201203, China
- Wei Chen: School of Information Science and Technology, Fudan University, Shanghai 200433, China; Human Phenome Institute, Fudan University, Shanghai 201203, China