1. Yang J, Wang Q, Dong X, Shen T. Synergistic integration of brain networks and time-frequency multi-view feature for sleep stage classification. Health Inf Sci Syst 2025; 13:15. [PMID: 39802081] [PMCID: PMC11723870] [DOI: 10.1007/s13755-024-00328-0]
Abstract
For diagnosing mental health conditions and assessing sleep quality, the classification of sleep stages is essential. Although deep learning-based methods are effective in this field, they often fail to capture sufficient features or adequately synthesize information from various sources. For the purpose of improving the accuracy of sleep stage classification, our methodology includes extracting a diverse array of features from polysomnography signals, along with their transformed graph and time-frequency representations. We have developed specific feature extraction modules tailored for each distinct view. To efficiently integrate and categorize the features derived from these different perspectives, we propose a cross-attention fusion mechanism. This mechanism is designed to adaptively merge complex sleep features, facilitating a more robust classification process. More specifically, our strategy includes the development of an efficient fusion network with multi-view features for classifying sleep stages that incorporates brain connectivity and combines both temporal and spectral elements for sleep stage analysis. This network employs a systematic approach to extract spatio-temporal-frequency features and uses cross-attention to merge features from different views effectively. In the experiments we conducted on the ISRUC public datasets, we found that our approach outperformed other proposed methods. In the ablation experiments, there was also a 2% improvement over the baseline model. Our research indicates that multi-view feature fusion methods with a cross-attention mechanism have strong potential in sleep stage classification.
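The cross-attention fusion described above can be illustrated with standard attention layers. The sketch below is only a rough, hypothetical illustration of the idea, not the authors' network: the class name, feature dimensions, pooling, and the two-view setup are assumptions.

```python
# Minimal sketch of cross-attention fusion between two feature views
# (e.g., a time-frequency view and a graph/brain-network view).
# Illustrative only; layer sizes and names are assumptions.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4, n_classes: int = 5):
        super().__init__()
        # Each view attends to the other, then the results are concatenated.
        self.attn_a_to_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b_to_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
        # view_a, view_b: (batch, tokens, dim) feature sequences from two views.
        a_enriched, _ = self.attn_a_to_b(query=view_a, key=view_b, value=view_b)
        b_enriched, _ = self.attn_b_to_a(query=view_b, key=view_a, value=view_a)
        fused = torch.cat([a_enriched.mean(dim=1), b_enriched.mean(dim=1)], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = CrossAttentionFusion()
    tf_feats = torch.randn(8, 30, 128)     # time-frequency view
    graph_feats = torch.randn(8, 10, 128)  # brain-network view
    print(model(tf_feats, graph_feats).shape)  # -> torch.Size([8, 5])
```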
Affiliation(s)
- Jun Yang: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, No.727 Jingming South Road, Kunming, 650504 Yunnan China
- Qichen Wang: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, No.727 Jingming South Road, Kunming, 650504 Yunnan China
- Xiaoxing Dong: First People’s Hospital of Yunnan Province, No.157 Jinbi Road, Kunming, 650032 Yunnan China
- Tao Shen: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, No.727 Jingming South Road, Kunming, 650504 Yunnan China
2. Jalali H, Pouladian M, Nasrabadi AM, Movahed A. Sleep stages classification based on feature extraction from music of brain. Heliyon 2025; 11:e41147. [PMID: 39807512] [PMCID: PMC11728888] [DOI: 10.1016/j.heliyon.2024.e41147]
Abstract
Sleep stage classification is one of the essential factors in sleep disorder diagnosis, and it can contribute to the treatment of many functional diseases or help prevent primary cognitive risks in daily activities. In this study, a novel method of mapping EEG signals to music is proposed to classify sleep stages. A total of 4,752 selected 1-min sleep records extracted from the CAP Sleep database are used as the statistical population for this assessment. In this process, the tempo and scale parameters are first extracted from the signal according to the rules of music; then, by applying them and changing the dominant frequency of the pre-processed single-channel EEG signal, a sequence of musical notes is produced. A total of 19 features are extracted from the sequence of notes and fed into feature reduction algorithms; the selected features are applied to a two-stage classification structure: 1) the classification of 5 classes (W, S2, S3, S4, and merged S1/REM) achieves an accuracy of 89.5% (CAP Sleep database), 85.9% (Sleep-EDF database), and 86.5% (Sleep-EDF expanded database), and 2) the classification of 2 classes (S1 vs. REM) achieves an accuracy of 90.1% (CAP Sleep database), 88.9% (Sleep-EDF database), and 90.1% (Sleep-EDF expanded database). The overall percentages of correct classification for the 6 sleep stages are 88.13%, 84.3%, and 86.1% for those databases, respectively. A further objective of this study is to present a new single-channel EEG sonification method. The classification accuracy obtained is higher than or comparable to that of contemporary methods, which shows the efficiency of our proposed method.
Affiliation(s)
- Hamidreza Jalali: Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Majid Pouladian: Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ali Motie Nasrabadi: Biomedical Engineering Department, Faculty of Engineering, Shahed University, Tehran, Iran
- Azin Movahed: School of Music, College of Fine Arts, University of Tehran, Tehran, Iran
3. Yazdi M, Samaee M, Massicotte D. A Review on Automated Sleep Study. Ann Biomed Eng 2024; 52:1463-1491. [PMID: 38493234] [DOI: 10.1007/s10439-024-03486-0]
Abstract
In recent years, research on automated sleep analysis has witnessed significant growth, reflecting advancements in understanding sleep patterns and their impact on overall health. This review synthesizes findings from an exhaustive analysis of 87 papers, systematically retrieved from prominent databases such as Google Scholar, PubMed, IEEE Xplore, and ScienceDirect. The selection criteria prioritized studies focusing on methods employed, signal modalities utilized, and machine learning algorithms applied in automated sleep analysis. The overarching goal was to critically evaluate the strengths and weaknesses of the proposed methods, shedding light on the current landscape and future directions in sleep research. An in-depth exploration of the reviewed literature revealed a diverse range of methodologies and machine learning approaches employed in automated sleep studies. Notably, K-Nearest Neighbors (KNN), Ensemble Learning Methods, and Support Vector Machine (SVM) emerged as versatile and potent classifiers, exhibiting high accuracies in various applications. However, challenges such as performance variability and computational demands were observed, necessitating judicious classifier selection based on dataset intricacies. In addition, the integration of traditional feature extraction methods with deep structures and the combination of different deep neural networks were identified as promising strategies to enhance diagnostic accuracy in sleep-related studies. The reviewed literature emphasized the need for adaptive classifiers, cross-modality integration, and collaborative efforts to drive the field toward more accurate, robust, and accessible sleep-related diagnostic solutions. This comprehensive review serves as a solid foundation for researchers and practitioners, providing an organized synthesis of the current state of knowledge in automated sleep analysis. By highlighting the strengths and challenges of various methodologies, this review aims to guide future research toward more effective and nuanced approaches to sleep diagnostics.
Affiliation(s)
- Mehran Yazdi: Laboratory of Signal and System Integration, Department of Electrical and Computer Engineering, Université du Québec à Trois-Rivières, Trois-Rivières, Canada; Signal and Image Processing Laboratory, School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
- Mahdi Samaee: Signal and Image Processing Laboratory, School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
- Daniel Massicotte: Laboratory of Signal and System Integration, Department of Electrical and Computer Engineering, Université du Québec à Trois-Rivières, Trois-Rivières, Canada
4. An P, Zhao J, Du B, Zhao W, Zhang T, Yuan Z. Amplitude-Time Dual-View Fused EEG Temporal Feature Learning for Automatic Sleep Staging. IEEE Trans Neural Netw Learn Syst 2024; 35:6492-6506. [PMID: 36215384] [DOI: 10.1109/tnnls.2022.3210384]
Abstract
Electroencephalogram (EEG) plays an important role in studying brain function and human cognitive performance, and the recognition of EEG signals is vital to develop an automatic sleep staging system. However, due to the complex nonstationary characteristics and the individual difference between subjects, how to obtain the effective signal features of the EEG for practical application is still a challenging task. In this article, we investigate the EEG feature learning problem and propose a novel temporal feature learning method based on amplitude-time dual-view fusion for automatic sleep staging. First, we explore the feature extraction ability of convolutional neural networks for the EEG signal from the perspective of interpretability and construct two new representation signals for the raw EEG from the views of amplitude and time. Then, we extract the amplitude-time signal features that reflect the transformation between different sleep stages from the obtained representation signals by using conventional 1-D CNNs. Furthermore, a hybrid dilation convolution module is used to learn the long-term temporal dependency features of EEG signals, which can overcome the shortcoming that the small-scale convolution kernel can only learn the local signal variation information. Finally, we conduct attention-based feature fusion for the learned dual-view signal features to further improve sleep staging performance. To evaluate the performance of the proposed method, we test 30-s-epoch EEG signal samples for healthy subjects and subjects with mild sleep disorders. The experimental results from the most commonly used datasets show that the proposed method has better sleep staging performance and has the potential for the development and application of an EEG-based automatic sleep staging system.
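The hybrid dilation convolution module mentioned above addresses the limited receptive field of small kernels. Below is a minimal, hypothetical sketch of such a dilated 1-D convolution stack; the kernel size, dilation rates and channel counts are assumptions, not the configuration used in the article.

```python
# Rough sketch of a dilated 1-D convolution block for capturing long-range
# temporal dependencies in an EEG feature sequence. Illustrative only.
import torch
import torch.nn as nn


class HybridDilatedConv(nn.Module):
    def __init__(self, channels: int = 64, dilations=(1, 2, 4, 8)):
        super().__init__()
        # Stacking convolutions with increasing dilation widens the receptive
        # field without enlarging the kernel, so distant time steps can interact.
        self.layers = nn.ModuleList(
            [nn.Conv1d(channels, channels, kernel_size=3, padding=d, dilation=d)
             for d in dilations]
        )
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); residual connections keep local detail.
        for conv in self.layers:
            x = x + self.act(conv(x))
        return x


if __name__ == "__main__":
    block = HybridDilatedConv()
    print(block(torch.randn(2, 64, 300)).shape)  # -> torch.Size([2, 64, 300])
```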
5. Jirakittayakorn N, Wongsawat Y, Mitrirattanakul S. ZleepAnlystNet: a novel deep learning model for automatic sleep stage scoring based on single-channel raw EEG data using separating training. Sci Rep 2024; 14:9859. [PMID: 38684765] [PMCID: PMC11058251] [DOI: 10.1038/s41598-024-60796-y]
Abstract
Numerous models for sleep stage scoring utilizing single-channel raw EEG signals have typically employed CNN and BiLSTM architectures. While these models, which incorporate temporal information for sequence classification, demonstrate superior overall performance, they often exhibit low per-class performance for the N1 stage, necessitating an adjustment of the loss function. However, the efficacy of such an adjustment is constrained by the training process. In this study, a pioneering training approach called separating training is introduced, alongside a novel model, to enhance performance. The developed model comprises 15 CNN models with varying loss function weights for feature extraction and 1 BiLSTM for sequence classification. Due to its architecture, this model cannot be trained using an end-to-end approach, necessitating separate training for each component using the Sleep-EDF dataset. Achieving an overall accuracy of 87.02%, MF1 of 82.09%, Kappa of 0.8221, and per-class F1-scores of 90.34% (W), 54.23% (N1), 89.53% (N2), 88.96% (N3), and 87.40% (REM), our model demonstrates promising performance. Comparison with sleep technicians reveals a Kappa of 0.7015, indicating alignment with the reference sleep stages. Additionally, cross-dataset validation and adaptation through training with the SHHS dataset yield an overall accuracy of 84.40%, MF1 of 74.96% and Kappa of 0.7785 when tested with the Sleep-EDF-13 dataset. These findings underscore the generalization potential in model architecture design facilitated by our novel training approach.
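The two-part design described above (CNN feature extractors trained separately from a BiLSTM sequence classifier) can be sketched as follows. This is not the ZleepAnlystNet code; it shows a single CNN encoder instead of fifteen, and all layer sizes, sampling rate and sequence length are illustrative assumptions.

```python
# Illustrative sketch: a 1-D CNN epoch encoder feeding a BiLSTM that
# classifies sequences of epochs into sleep stages.
import torch
import torch.nn as nn


class EpochCNN(nn.Module):
    """Encodes one 30-s raw EEG epoch into a feature vector."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(32, feat_dim, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):                 # x: (batch, 1, samples)
        return self.net(x).squeeze(-1)    # -> (batch, feat_dim)


class SequenceBiLSTM(nn.Module):
    """Classifies a sequence of epoch features into sleep stages."""
    def __init__(self, feat_dim: int = 64, n_classes: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, feats):             # feats: (batch, seq_len, feat_dim)
        out, _ = self.lstm(feats)
        return self.head(out)             # per-epoch logits


if __name__ == "__main__":
    cnn, rnn = EpochCNN(), SequenceBiLSTM()
    epochs = torch.randn(4 * 20, 1, 3000)   # 20-epoch sequences, assumed 100 Hz
    feats = cnn(epochs).view(4, 20, -1)     # regroup epoch features into sequences
    print(rnn(feats).shape)                 # -> torch.Size([4, 20, 5])
```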
Affiliation(s)
- Nantawachara Jirakittayakorn: Institute for Innovative Learning, Mahidol University, Nakhon Pathom, Thailand; Faculty of Dentistry, Mahidol University, Bangkok, Thailand
- Yodchanan Wongsawat: Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand
- Somsak Mitrirattanakul: Department of Masticatory Science, Faculty of Dentistry, Mahidol University, Bangkok, Thailand
6. Lu N, Zhao X, Yao L. 3D Visual Discomfort Assessment With a Weakly Supervised Graph Convolution Neural Network Based on Inaccurately Labeled EEG. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1164-1176. [PMID: 38421840] [DOI: 10.1109/tnsre.2024.3371704]
Abstract
Visual discomfort significantly limits the broader application of stereoscopic display technology. Hence, the accurate assessment of stereoscopic visual discomfort is a crucial topic in this field. Electroencephalography (EEG) data, which can reflect changes in brain activity, have received increasing attention in objective assessment research. However, inaccurately labeled data, resulting from the presence of individual differences, restrict the effectiveness of the widely used supervised learning methods in visual discomfort assessment tasks. Simultaneously, visual discomfort assessment methods should pay greater attention to the information provided by the visual cortical areas of the brain. To tackle these challenges, we need to consider two key aspects: maximizing the utilization of inaccurately labeled data for enhanced learning and integrating information from the brain's visual cortex for feature representation purposes. Therefore, we propose the weakly supervised graph convolution neural network for visual discomfort (WSGCN-VD). In the classification part, a center correction loss serves as a weakly supervised loss, employing a progressive selection strategy to identify accurately labeled data while constraining the involvement of inaccurately labeled data that are influenced by individual differences during the model learning process. In the feature extraction part, a feature graph module pays particular attention to the construction of spatial connections among the channels in the visual regions of the brain and combines them with high-dimensional temporal features to obtain visually dependent spatio-temporal representations. Through extensive experiments conducted in various scenarios, we demonstrate the effectiveness of our proposed model. Further analysis reveals that the proposed model mitigates the impact of inaccurately labeled data on the accuracy of assessment.
7. Ji X, Li Y, Wen P, Barua P, Acharya UR. MixSleepNet: A Multi-Type Convolution Combined Sleep Stage Classification Model. Comput Methods Programs Biomed 2024; 244:107992. [PMID: 38218118] [DOI: 10.1016/j.cmpb.2023.107992]
Abstract
BACKGROUND AND OBJECTIVE Sleep staging is an essential step in sleep disorder diagnosis, and it is time-intensive and laborious for experts to perform this work manually. Automatic sleep stage classification methods not only relieve experts of these demanding tasks but also enhance the accuracy and efficiency of the classification process. METHODS A novel multi-channel biosignal-based model, constructed by combining a 3D convolutional operation and a graph convolutional operation, is proposed for automated sleep staging using various physiological signals. Both the 3D convolution and the graph convolution can aggregate information from neighboring brain areas, which helps to learn intrinsic connections from the biosignals. Electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG) and electrocardiogram (ECG) signals are employed to extract time-domain and frequency-domain features. Subsequently, these signals are input to the 3D convolutional and graph convolutional branches, respectively. The 3D convolution branch can explore the correlations between multi-channel signals and between multi-band waves in each channel in the time series, while the graph convolution branch can explore the connections between each channel and each frequency band. In this work, we developed the proposed multi-channel convolution combined sleep stage classification model (MixSleepNet) using the ISRUC datasets (Subgroup 3 and 50 random samples from Subgroup 1). RESULTS Based on the first expert's labels, MixSleepNet yielded an accuracy, F1-score and Cohen kappa score of 0.830, 0.821 and 0.782, respectively, for ISRUC-S3, and an accuracy, F1-score and Cohen kappa score of 0.812, 0.786 and 0.756, respectively, for ISRUC-S1. According to the evaluations based on the second expert's labels, the overall accuracies, F1-scores and Cohen kappa coefficients are 0.837, 0.820 and 0.789 for ISRUC-S3, and 0.829, 0.791 and 0.775 for ISRUC-S1. CONCLUSION The performance metrics of the proposed method are much better than those of all the compared models. Additional experiments were carried out on the ISRUC-S3 sub-dataset to evaluate the contribution of each module to the classification performance.
Affiliation(s)
- Xiaopeng Ji: School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Yan Li: School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Peng Wen: School of Engineering, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Prabal Barua: Cogninet Brain Team, Sydney, NSW 2010, Australia
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD 4350, Australia
8. Grassi M, Daccò S, Caldirola D, Perna G, Schruers K, Defillo A. Enhanced sleep staging with artificial intelligence: a validation study of new software for sleep scoring. Front Artif Intell 2023; 6:1278593. [PMID: 38145233] [PMCID: PMC10739507] [DOI: 10.3389/frai.2023.1278593]
Abstract
Manual sleep staging (MSS) using polysomnography is a time-consuming task, requires significant training, and can lead to significant variability among scorers. STAGER is a software program based on machine learning algorithms that was developed by Medibio Limited (Savage, MN, USA) to perform automatic sleep staging using only the EEG signals from polysomnography. This study aimed to extensively investigate its agreement with MSS performed during clinical practice and by three additional expert sleep technicians. Forty consecutive polysomnographic recordings of patients referred to three US sleep clinics for sleep evaluation were retrospectively collected and analyzed. Three experienced technicians independently staged the recordings using the electroencephalography, electromyography, and electrooculography signals according to the American Academy of Sleep Medicine guidelines. The staging initially performed during clinical practice was also considered. Several agreement statistics between the automatic sleep staging (ASS) and MSS, among the different MSSs, and their differences were calculated. Bootstrap resampling was used to calculate 95% confidence intervals and the statistical significance of the differences. STAGER's ASS was comparable with, or statistically significantly better than, the MSS, except for a partial reduction in the positive percent agreement in the wake stage. These promising results indicate that STAGER software can accurately perform ASS of inpatient polysomnographic recordings in comparison with MSS.
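The type of agreement analysis described above (Cohen's kappa with bootstrap confidence intervals) can be reproduced in a few lines. The snippet below is a generic illustration on synthetic labels, not the STAGER validation code; the agreement level and number of epochs are arbitrary.

```python
# Cohen's kappa between automatic and manual sleep staging, with a
# nonparametric bootstrap 95% confidence interval over epochs.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
manual = rng.integers(0, 5, size=1000)                  # scorer's stages (0=W ... 4=REM)
automatic = np.where(rng.random(1000) < 0.85, manual,   # ~85% agreement, rest random
                     rng.integers(0, 5, size=1000))

kappa = cohen_kappa_score(manual, automatic)

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(manual), size=len(manual))   # resample epochs
    boot.append(cohen_kappa_score(manual[idx], automatic[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"kappa = {kappa:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```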
Affiliation(s)
- Massimiliano Grassi: Medibio Limited, Savage, MN, United States; Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Department of Clinical Neurosciences, Villa San Benedetto Menni Hospital, Hermanas Hospitalarias, Albese con Cassano, Italy
- Silvia Daccò: Medibio Limited, Savage, MN, United States; Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Department of Clinical Neurosciences, Villa San Benedetto Menni Hospital, Hermanas Hospitalarias, Albese con Cassano, Italy; Humanitas San Pio X, Personalized Medicine Center for Anxiety and Panic Disorders, Milan, Italy
- Daniela Caldirola: Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Department of Clinical Neurosciences, Villa San Benedetto Menni Hospital, Hermanas Hospitalarias, Albese con Cassano, Italy; Humanitas San Pio X, Personalized Medicine Center for Anxiety and Panic Disorders, Milan, Italy
- Giampaolo Perna: Medibio Limited, Savage, MN, United States; Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Department of Clinical Neurosciences, Villa San Benedetto Menni Hospital, Hermanas Hospitalarias, Albese con Cassano, Italy; Humanitas San Pio X, Personalized Medicine Center for Anxiety and Panic Disorders, Milan, Italy; Department of Psychiatry and Neuropsychology, Faculty of Health, Medicine, and Life Sciences, Research Institute of Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands
- Koen Schruers: Department of Psychiatry and Neuropsychology, Faculty of Health, Medicine, and Life Sciences, Research Institute of Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands
9. Li T, Gong Y, Lv Y, Wang F, Hu M, Wen Y. GAC-SleepNet: A dual-structured sleep staging method based on graph structure and Euclidean structure. Comput Biol Med 2023; 165:107477. [PMID: 37717528] [DOI: 10.1016/j.compbiomed.2023.107477]
Abstract
Sleep staging is a precondition for the diagnosis and treatment of sleep disorders, and fully exploiting the relationship between the spatial features of the brain and sleep stages is an important task. Many classical algorithms extract the brain's characteristic information only in Euclidean space without considering other spatial structures. In this study, a sleep staging network named GAC-SleepNet is designed. GAC-SleepNet uses the characteristic information in the dual structure of the graph structure and the Euclidean structure for the classification of sleep stages. In the graph structure, a graph convolutional neural network is used to learn the deep features of each sleep stage, and the features in the topological structure are converted into feature vectors by a multilayer perceptron. In the Euclidean structure, convolutional neural networks are used to learn the temporal features of sleep information, combined with an attention mechanism to capture the connection between different sleep periods and EEG signals, while enhancing the description of global features to avoid local optima. The performance of the proposed network is evaluated on two public datasets. The experimental results show that the dual spatial structure captures more adequate and comprehensive information about sleep features and shows improvements on different evaluation metrics.
Affiliation(s)
- Tianxing Li: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
- Yulin Gong: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
- Yudan Lv: Department of Neurology, First Hospital of Jilin University, Changchun, 130000, China
- Fatong Wang: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
- Mingjia Hu: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
- Yinke Wen: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, 130000, China
10. Sharma M, Verma S, Anand D, Gadre VM, Acharya UR. CAPSCNet: A novel scattering network for automated identification of phasic cyclic alternating patterns of human sleep using multivariate EEG signals. Comput Biol Med 2023; 164:107259. [PMID: 37544251] [DOI: 10.1016/j.compbiomed.2023.107259]
Abstract
The cyclic alternating pattern (CAP) can be considered a physiological marker of sleep instability, and it can be used to examine various sleep-related disorders. Certain short events (A and B phases) related to a specific physiological process or pathology manifest during non-rapid eye movement (NREM) sleep. These phases unexpectedly modify EEG oscillations; hence, manual detection is challenging. Therefore, it is highly desirable to have an automated system for detecting the A-phases (AP). Deep convolutional neural networks (CNN) have shown high performance in various healthcare applications. A variant of the deep neural network called the wavelet scattering network (WSN) has been used to overcome specific limitations of CNNs, such as the need for a large amount of data to train the model. The WSN is an optimized network that can learn features that help discriminate patterns hidden inside signals. WSNs are also invariant to local perturbations, making the network significantly more reliable and effective, and they can help improve performance on tasks where data are minimal. In this study, we proposed a novel WSN-based CAPSCNet to automatically detect AP using EEG signals. Seven dataset variants of the cyclic alternating pattern (CAP) sleep cohort are employed for this study. Two electroencephalogram (EEG) derivations, namely C4-A1 and F4-C4, are used to develop the CAPSCNet. The model is examined using healthy subjects and patients suffering from six different sleep disorders, namely sleep-disordered breathing (SDB), insomnia, nocturnal frontal lobe epilepsy (NFLE), narcolepsy, periodic leg movement disorder (PLM) and rapid eye movement behavior disorder (RBD). Several different machine-learning algorithms were used to classify the features obtained from the WSN. The proposed CAPSCNet achieved the highest average classification accuracy of 83.4% using a trilayered neural network classifier for the healthy data variant. The proposed CAPSCNet is efficient and computationally faster.
Affiliation(s)
- Manish Sharma: Department of Electrical and Computer Science Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad, India
- Sarv Verma: Department of Electrical and Computer Science Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad, India
- Divyansh Anand: Department of Electrical and Computer Science Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad, India
- Vikram M Gadre: Department of Electrical Engineering, Indian Institute of Technology, Bombay, Mumbai, India
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield 4300, Australia
11. Abbasi SF, Abbasi QH, Saeed F, Alghamdi NS. A convolutional neural network-based decision support system for neonatal quiet sleep detection. Math Biosci Eng 2023; 20:17018-17036. [PMID: 37920045] [DOI: 10.3934/mbe.2023759]
Abstract
Sleep plays an important role in neonatal brain and physical development, making its detection and characterization important for assessing early-stage development. In this study, we propose an automatic and computationally efficient algorithm to detect neonatal quiet sleep (QS) using a convolutional neural network (CNN). Our study used 38 hours of electroencephalography (EEG) recordings collected from 19 neonates at Fudan Children's Hospital in Shanghai, China (Approval No. (2020) 22). To train and test the CNN, we extracted 12 prominent time- and frequency-domain features from 9 bipolar EEG channels. The CNN architecture comprised two convolutional layers with pooling and rectified linear unit (ReLU) activation. Additionally, a smoothing filter was applied to hold the sleep stage for 3 minutes. Through performance testing, our proposed method achieved impressive results, with 94.07% accuracy, 89.70% sensitivity, 94.40% specificity, a 79.82% F1-score and a 0.74 kappa coefficient when compared with human expert annotations. A notable advantage of our approach is its computational efficiency, with the entire training and testing process requiring only 7.97 seconds. The proposed algorithm has been validated using leave-one-subject-out (LOSO) validation, which demonstrates its consistent performance across a diverse range of neonates. Our findings highlight the potential of our algorithm for real-time neonatal sleep stage classification, offering a fast and cost-effective solution. This research opens avenues for further investigations in early-stage development monitoring and the assessment of neonatal health.
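A hedged sketch of the kind of feature extraction described above (time- and frequency-domain features from multi-channel EEG epochs) is given below. The specific features, band edges and sampling rate are assumptions and do not correspond to the 12 features used in the cited study.

```python
# Extract simple time- and frequency-domain features from one multi-channel
# EEG epoch: mean, standard deviation, RMS, and band powers via Welch PSD.
import numpy as np
from scipy.signal import welch

FS = 250           # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}


def epoch_features(epoch: np.ndarray) -> np.ndarray:
    """epoch: (n_channels, n_samples) -> flat feature vector for the epoch."""
    feats = []
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2, axis=-1)
    df = freqs[1] - freqs[0]
    for ch in range(epoch.shape[0]):
        x = epoch[ch]
        feats.extend([x.mean(), x.std(), np.sqrt(np.mean(x ** 2))])  # time domain
        for lo, hi in BANDS.values():                                 # band powers
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(psd[ch, mask].sum() * df)
    return np.asarray(feats)


if __name__ == "__main__":
    demo = np.random.randn(9, 30 * FS)    # 9 bipolar channels, one 30-s epoch
    print(epoch_features(demo).shape)     # -> (9 channels x 7 features,) = (63,)
```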
Affiliation(s)
- Saadullah Farooq Abbasi: Department of Biomedical Engineering, Riphah International University, Islamabad 44000, Pakistan
- Qammer Hussain Abbasi: James Watt School of Engineering, University of Glasgow, Glasgow, G4 0PE, United Kingdom
- Faisal Saeed: DAAI Research Group, Department of Computing and Data Science, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK
- Norah Saleh Alghamdi: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
12. Bandyopadhyay A, Goldstein C. Clinical applications of artificial intelligence in sleep medicine: a sleep clinician's perspective. Sleep Breath 2023; 27:39-55. [PMID: 35262853] [PMCID: PMC8904207] [DOI: 10.1007/s11325-022-02592-4]
Abstract
BACKGROUND The past few years have seen a rapid emergence of artificial intelligence (AI)-enabled technology in the field of sleep medicine. AI refers to the capability of computer systems to perform tasks conventionally considered to require human intelligence, such as speech recognition, decision-making, and visual recognition of patterns and objects. Sleep tracking and the measurement of physiological signals during sleep are widely practiced; therefore, sleep monitoring in both laboratory and ambulatory environments results in the accrual of massive amounts of data, which uniquely positions the field of sleep medicine to gain from AI. METHOD The purpose of this article is to provide a concise overview of relevant terminology, definitions, and use cases of AI in sleep medicine, supplemented by a thorough review of the relevant published literature. RESULTS Artificial intelligence has several applications in sleep medicine, including sleep and respiratory event scoring in the sleep laboratory, diagnosing and managing sleep disorders, and population health. While still in its nascent stage, AI faces several challenges that preclude its generalizability and wide-reaching clinical application. Overcoming these challenges will help integrate AI seamlessly within sleep medicine and augment clinical practice. CONCLUSION Artificial intelligence is a powerful tool in healthcare that may improve patient care, enhance diagnostic abilities, and augment the management of sleep disorders. However, existing machine learning algorithms need to be regulated and standardized prior to their inclusion in the sleep clinic.
Affiliation(s)
- Anuja Bandyopadhyay: Department of Pediatrics, Indiana University School of Medicine, Indianapolis, IN, USA
- Cathy Goldstein: Department of Neurology, University of Michigan, Ann Arbor, MI, USA
13. Fu G, Zhou Y, Gong P, Wang P, Shao W, Zhang D. A Temporal-Spectral Fused and Attention-Based Deep Model for Automatic Sleep Staging. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1008-1018. [PMID: 37022069] [DOI: 10.1109/tnsre.2023.3238852]
Abstract
Sleep staging is a vital process for evaluating sleep quality and diagnosing sleep-related diseases. Most of the existing automatic sleep staging methods focus on time-domain information and often ignore the transformation relationship between sleep stages. To deal with the above problems, we propose a Temporal-Spectral fused and Attention-based deep neural Network model (TSA-Net) for automatic sleep staging, using a single-channel electroencephalogram (EEG) signal. The TSA-Net is composed of a two-stream feature extractor, feature context learning, and conditional random field (CRF). Specifically, the two-stream feature extractor module can automatically extract and fuse EEG features from time and frequency domains, considering that both temporal and spectral features can provide abundant distinguishing information for sleep staging. Subsequently, the feature context learning module learns the dependencies between features using the multi-head self-attention mechanism and outputs a preliminary sleep stage. Finally, the CRF module further applies transition rules to improve classification performance. We evaluate our model on two public datasets, Sleep-EDF-20 and Sleep-EDF-78. In terms of accuracy, the TSA-Net achieves 86.64% and 82.21% on the Fpz-Cz channel, respectively. The experimental results illustrate that our TSA-Net can optimize the performance of sleep staging and achieve better staging performance than state-of-the-art methods.
14. An application of deep dual convolutional neural network for enhanced medical image denoising. Med Biol Eng Comput 2023; 61:991-1004. [PMID: 36639550] [DOI: 10.1007/s11517-022-02731-9]
Abstract
This work investigates the medical image denoising (MID) application of the dual denoising network (DudeNet) model for chest X-ray (CXR). The DudeNet model comprises four components: a feature extraction block with a sparse mechanism, an enhancement block, a compression block, and a reconstruction block. The developed model uses residual learning to boost denoising performance and batch normalization to accelerate the training process. The name proposed for this model is dual convolutional medical image-enhanced denoising network (DCMIEDNet). The peak signal-to-noise ratio (PSNR) and structure similarity index measurement (SSIM) are used to assess the MID performance for five different additive white Gaussian noise (AWGN) levels of σ = 15, 25, 40, 50, and 60 in CXR images. Presented investigations revealed that the PSNR and SSIM offered by DCMIEDNet are better than several popular state-of-the-art models such as block matching and 3D filtering, denoising convolutional neural network, and feature-guided denoising convolutional neural network. In addition, it is also superior to the recently reported MID models like deep convolutional neural network with residual learning, real-valued medical image denoising network, and complex-valued medical image denoising network. Therefore, based on the presented experiments, it is concluded that applying the DudeNet methodology for DCMIEDNet promises to be quite helpful for physicians.
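The PSNR and SSIM metrics used above to quantify denoising quality can be computed with scikit-image. The snippet below is a generic metric demonstration on synthetic data, not the DCMIEDNet evaluation code; the image content and noise level are arbitrary.

```python
# Compute PSNR and SSIM between a reference image and its noisy version
# (additive white Gaussian noise with sigma = 25 on an 8-bit scale).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(42)
clean = rng.random((256, 256)).astype(np.float64)        # stand-in for a CXR image in [0, 1]
noisy = clean + rng.normal(0, 25 / 255.0, clean.shape)   # AWGN, sigma = 25 / 255

psnr = peak_signal_noise_ratio(clean, noisy, data_range=1.0)
ssim = structural_similarity(clean, noisy, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```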
15. Zhao Y, Lin X, Zhang Z, Wang X, He X, Yang L. STDP-based adaptive graph convolutional networks for automatic sleep staging. Front Neurosci 2023; 17:1158246. [PMID: 37152593] [PMCID: PMC10157055] [DOI: 10.3389/fnins.2023.1158246]
Abstract
Automatic sleep staging is important for improving diagnosis and treatment, and machine learning with neuroscience explainability of sleep staging is shown to be a suitable method to solve this problem. In this paper, an explainable model for automatic sleep staging is proposed. Inspired by the Spike-Timing-Dependent Plasticity (STDP), an adaptive Graph Convolutional Network (GCN) is established to extract features from the Polysomnography (PSG) signal, named STDP-GCN. In detail, the channel of the PSG signal can be regarded as a neuron, the synapse strength between neurons can be constructed by the STDP mechanism, and the connection between different channels of the PSG signal constitutes a graph structure. After utilizing GCN to extract spatial features, temporal convolution is used to extract transition rules between sleep stages, and a fully connected neural network is used for classification. To enhance the strength of the model and minimize the effect of individual physiological signal discrepancies on classification accuracy, STDP-GCN utilizes domain adversarial training. Experiments demonstrate that the performance of STDP-GCN is comparable to the current state-of-the-art models.
16. A comprehensive evaluation of contemporary methods used for automatic sleep staging. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103819]
17. Automated classification of cyclic alternating pattern sleep phases in healthy and sleep-disordered subjects using convolutional neural network. Comput Biol Med 2022; 146:105594. [DOI: 10.1016/j.compbiomed.2022.105594]
18. Zhu L, Wang C, He Z, Zhang Y. A lightweight automatic sleep staging method for children using single-channel EEG based on edge artificial intelligence. World Wide Web 2021; 25:1883-1903. [PMID: 35002476] [PMCID: PMC8717888] [DOI: 10.1007/s11280-021-00983-3]
Abstract
With the development of telemedicine and edge computing, edge artificial intelligence (AI) will become a new trend in smart medicine. At the same time, nearly one-third of children suffer from sleep disorders, yet all existing sleep staging methods are designed for adults. Therefore, we adapted edge AI to develop a lightweight automatic sleep staging method for children using single-channel EEG. The trained sleep staging model is deployed to edge smart devices so that sleep staging can be implemented on the edge, which greatly saves network resources and improves the performance and privacy of the sleep staging application. The results and hypnogram are then uploaded to a cloud server for further analysis by physicians to produce sleep disease diagnosis reports and treatment opinions. We utilized 1D convolutional neural networks (1D-CNN) and long short-term memory (LSTM) to build our sleep staging model, named CSleepNet. We tested the model on our children's sleep (CS) dataset and the Sleep-EDFX dataset. For the CS dataset, we experimented with F4-M1 channel EEG using four different loss functions, and logcosh performed best with an overall accuracy of 83.06% and an F1-score of 76.50%. We used Fpz-Cz and Pz-Oz channel EEG to train our model on the Sleep-EDFX dataset and achieved an accuracy of 86.41% without manual feature extraction. The experimental results show that our method has great potential: it not only plays an important role in sleep-related research, but can also be widely used in the classification of other time-series physiological signals.
Affiliation(s)
- Liqiang Zhu: College of Electronic and Information Engineering, Southwest University, Chongqing, 400715 China
- Changming Wang: Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, 100053 China; Brain-inspired Intelligence and Clinical Translational Research Center, Beijing, 100176 China
- Zhihui He: Department of Pediatric Respiration, Chongqing Ninth People’s Hospital, Chongqing, 400700 China
- Yuan Zhang: College of Electronic and Information Engineering, Southwest University, Chongqing, 400715 China
19. Di J, Demanuele C, Kettermann A, Karahanoglu FI, Cappelleri JC, Potter A, Bury D, Cedarbaum JM, Byrom B. Considerations to address missing data when deriving clinical trial endpoints from digital health technologies. Contemp Clin Trials 2021; 113:106661. [PMID: 34954098] [DOI: 10.1016/j.cct.2021.106661]
Abstract
Digital health technologies (DHTs) enable us to measure human physiology and behavior remotely, objectively and continuously. With the accelerated adoption of DHTs in clinical trials, there is an unmet need to identify statistical approaches to address missing data to ensure that the derived endpoints are valid, accurate, and reliable. It is not obvious how commonly used statistical methods to handle missing data in clinical trials can be directly applied to the complex data collected by DHTs. Meanwhile, current approaches used to address missing data from DHTs are of limited sophistication and focus on the exclusion of data where the quantity of missing data exceeds a given threshold. High-frequency time series data collected by DHTs are often summarized to derive epoch-level data, which are then processed to compute daily summary measures. In this article, we discuss characteristics of missing data collected by DHT, review emerging statistical approaches for addressing missingness in epoch-level data including within-patient imputations across common time periods, functional data analysis, and deep learning methods, as well as imputation approaches and robust modeling appropriate for handling missing data in daily summary measures. We discuss strategies for minimizing missing data by optimizing DHT deployment and by including the patients' perspective in the study design. We believe that these approaches provide more insight into preventing missing data when deriving digital endpoints. We hope this article can serve as a starting point for further discussion among clinical trial stakeholders.
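One of the strategies mentioned above, within-patient imputation across common time periods, can be sketched as follows. The column names, hourly epoch grid and imputation rule (filling a missing hour with the patient's mean for that hour of day on other days) are illustrative assumptions, not a prescription from the article.

```python
# Within-patient imputation of missing epoch-level values using the same
# patient's data from the same time of day across other days.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2021-06-01", periods=24 * 7, freq="h")        # one week of hourly epochs
df = pd.DataFrame({"patient": "P001",
                   "step_count": rng.poisson(300, len(idx)).astype(float)},
                  index=idx)
df.loc[rng.random(len(df)) < 0.1, "step_count"] = np.nan            # ~10% missing epochs

# Mean of the observed values for each hour of day, broadcast back to all rows.
hourly_mean = df.groupby(df.index.hour)["step_count"].transform("mean")
df["step_imputed"] = df["step_count"].fillna(hourly_mean)

print(df["step_count"].isna().sum(), "missing ->",
      df["step_imputed"].isna().sum(), "missing after imputation")
```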
Affiliation(s)
- Junrui Di: Pfizer Inc., United States of America
- Jesse M Cedarbaum: Yale University School of Medicine, United States of America; Coeruleus Clinical Sciences LLC, United States of America
- Bill Byrom: Signant Health, United States of America
20. John A, Nundy KK, Cardiff B, John D. Multimodal Multiresolution Data Fusion Using Convolutional Neural Networks for IoT Wearable Sensing. IEEE Trans Biomed Circuits Syst 2021; 15:1161-1173. [PMID: 34882563] [DOI: 10.1109/tbcas.2021.3134043]
Abstract
With advances in circuit design and sensing technology, the simultaneous acquisition of data from a large number of Internet of Things (IoT) sensors to enable more accurate inferences has become mainstream. In this work, we propose a novel convolutional neural network (CNN) model for the fusion of multimodal and multiresolution data obtained from several sensors. The proposed model enables the fusion of multiresolution sensor data without having to resort to padding/resampling to correct for frequency resolution differences, even when carrying out temporal inferences such as high-resolution event detection. The performance of the proposed model is evaluated for sleep apnea event detection by fusing three different sensor signals obtained from the UCD St. Vincent's University Hospital sleep apnea database. The proposed model is generalizable, as demonstrated by incremental performance improvements proportional to the number of sensors used for fusion. A selective dropout technique is used to prevent overfitting of the model to any specific high-resolution input and to increase the robustness of the fusion to signal corruption from any sensor source. A fusion model with electrocardiogram (ECG), peripheral oxygen saturation (SpO2), and abdominal movement signals achieved an accuracy of 99.72% and a sensitivity of 98.98%. The energy per classification of the proposed fusion model was estimated to be approximately 5.61 μJ for on-chip implementation. The feasibility of pruning to reduce the complexity of the fusion models was also studied.
21. Jia Z, Lin Y, Wang J, Ning X, He Y, Zhou R, Zhou Y, Lehman LWH. Multi-View Spatial-Temporal Graph Convolutional Networks With Domain Generalization for Sleep Stage Classification. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1977-1986. [PMID: 34487495] [PMCID: PMC8556658] [DOI: 10.1109/tnsre.2021.3110665]
Abstract
Sleep stage classification is essential for sleep assessment and disease diagnosis. Although previous attempts to classify sleep stages have achieved high classification performance, several challenges remain open: 1) How to effectively utilize time-varying spatial and temporal features from multi-channel brain signals remains challenging. Prior works have not been able to fully utilize the spatial topological information among brain regions. 2) Due to the many differences found in individual biological signals, how to overcome the differences of subjects and improve the generalization of deep neural networks is important. 3) Most deep learning methods ignore the interpretability of the model to the brain. To address the above challenges, we propose a multi-view spatial-temporal graph convolutional networks (MSTGCN) with domain generalization for sleep stage classification. Specifically, we construct two brain view graphs for MSTGCN based on the functional connectivity and physical distance proximity of the brain regions. The MSTGCN consists of graph convolutions for extracting spatial features and temporal convolutions for capturing the transition rules among sleep stages. In addition, attention mechanism is employed for capturing the most relevant spatial-temporal information for sleep stage classification. Finally, domain generalization and MSTGCN are integrated into a unified framework to extract subject-invariant sleep features. Experiments on two public datasets demonstrate that the proposed model outperforms the state-of-the-art baselines.
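The spatial part of this approach, a graph convolution over an adjacency matrix derived from functional connectivity, can be sketched as below. This is not the MSTGCN implementation; the correlation-based adjacency, the normalization, and all sizes are assumptions for illustration.

```python
# Build an adjacency matrix from channel-wise correlation (a simple proxy for
# functional connectivity) and apply one graph convolution to per-channel features.
import torch
import torch.nn as nn


def functional_adjacency(eeg: torch.Tensor) -> torch.Tensor:
    # eeg: (channels, samples) -> symmetric normalized adjacency;
    # the unit diagonal of the correlation matrix acts as self-loops.
    adj = torch.corrcoef(eeg).abs()
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]


class GraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (channels, in_dim); propagate features over the brain-region graph.
        return torch.relu(self.lin(adj @ x))


if __name__ == "__main__":
    eeg = torch.randn(10, 3000)          # 10 channels, one 30-s epoch at 100 Hz (assumed)
    node_feats = torch.randn(10, 32)     # per-channel features from a temporal encoder
    out = GraphConv(32, 16)(node_feats, functional_adjacency(eeg))
    print(out.shape)                      # -> torch.Size([10, 16])
```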
22. Alvarez-Estevez D, Rijsman RM. Inter-database validation of a deep learning approach for automatic sleep scoring. PLoS One 2021; 16:e0256111. [PMID: 34398931] [PMCID: PMC8366993] [DOI: 10.1371/journal.pone.0256111]
Abstract
STUDY OBJECTIVES Development of inter-database generalizable sleep staging algorithms represents a challenge due to increased data variability across different datasets. Sharing data between different centers is also a problem due to potential restrictions related to patient privacy protection. In this work, we describe a new deep learning approach for automatic sleep staging and address its generalization capabilities on a wide range of public sleep staging databases. We also examine the suitability of a novel approach that uses an ensemble of individual local models and evaluate its impact on the resulting inter-database generalization performance. METHODS A general deep learning network architecture for automatic sleep staging is presented. Different preprocessing and architectural variant options are tested. The resulting prediction capabilities are evaluated and compared on a heterogeneous collection of six public sleep staging datasets. Validation is carried out in the context of independent local and external dataset generalization scenarios. RESULTS Best results were achieved using the CNN_LSTM_5 neural network variant. Average prediction capabilities on independent local testing sets achieved a 0.80 kappa score. When individual local models predict data from external datasets, the average kappa score decreases to 0.54. Using the proposed ensemble-based approach, average kappa performance on the external dataset prediction scenario increases to 0.62. To our knowledge this is the largest study by number of datasets so far to validate the generalization capabilities of an automatic sleep staging algorithm using external databases. CONCLUSIONS Validation results show good general performance of our method, as compared with the expected levels of human agreement and with state-of-the-art automatic sleep staging methods. The proposed ensemble-based approach enables a flexible and scalable design, allowing dynamic integration of local models into the final ensemble, preserving data locality, and increasing the generalization capabilities of the resulting system at the same time.
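The ensemble idea evaluated above, combining individual local models when predicting an external database, can be illustrated as follows. The models here are untrained placeholders, and the probability-averaging rule is an assumption rather than necessarily the combination scheme used in the study.

```python
# Score epochs from an unseen external database by averaging the class
# probabilities of several models, each standing in for a model trained
# on one local database.
import torch
import torch.nn as nn

N_DATASETS, N_CLASSES = 6, 5


def make_local_model() -> nn.Module:
    # Stand-in for a CNN/LSTM-style sleep stager trained on one database.
    return nn.Sequential(nn.Flatten(), nn.Linear(3000, 64), nn.ReLU(),
                         nn.Linear(64, N_CLASSES))


local_models = [make_local_model() for _ in range(N_DATASETS)]
external_epochs = torch.randn(16, 1, 3000)      # epochs from an external database

with torch.no_grad():
    probs = torch.stack([torch.softmax(m(external_epochs), dim=-1)
                         for m in local_models])
    ensemble_probs = probs.mean(dim=0)           # average the local models' predictions
    predicted_stage = ensemble_probs.argmax(dim=-1)

print(predicted_stage.shape)                     # -> torch.Size([16])
```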
Affiliation(s)
- Diego Alvarez-Estevez: Sleep Center, Haaglanden Medisch Centrum, The Hague, South-Holland, The Netherlands; Center for Information and Communications Technology Research (CITIC), University of A Coruña, A Coruña, Spain
- Roselyne M. Rijsman: Sleep Center, Haaglanden Medisch Centrum, The Hague, South-Holland, The Netherlands
23. Sharma M, Patel V, Tiwari J, Acharya UR. Automated Characterization of Cyclic Alternating Pattern Using Wavelet-Based Features and Ensemble Learning Techniques with EEG Signals. Diagnostics (Basel) 2021; 11:1380. [PMID: 34441314] [PMCID: PMC8393617] [DOI: 10.3390/diagnostics11081380]
Abstract
Sleep is highly essential for maintaining the body's metabolism and the mental balance needed for increased productivity and concentration. Sleep is often analyzed using macrostructure sleep stages, which alone cannot provide information about the functional structure and stability of sleep. The cyclic alternating pattern (CAP) is a physiological recurring electroencephalogram (EEG) activity occurring in the brain during sleep; it captures the microstructure of sleep and can be used to identify sleep instability. The CAP can also be associated with various sleep-related pathologies and can be useful in identifying various sleep disorders. Conventionally, sleep is analyzed using polysomnography (PSG) in sleep laboratories by trained physicians and medical practitioners. However, PSG-based manual sleep analysis by trained medical practitioners is onerous, tedious and unfavourable for patients. Hence, a computerized, simple and patient-convenient system is highly desirable for the monitoring and analysis of sleep. In this study, we have proposed a system for the automated identification of CAP phase-A and phase-B. To accomplish the task, we utilized the openly accessible CAP Sleep database. The study is performed using two single-channel EEG modalities and their combination. The model is developed using EEG signals of healthy subjects as well as patients suffering from six different sleep disorders, namely nocturnal frontal lobe epilepsy (NFLE), sleep-disordered breathing (SDB), narcolepsy, periodic leg movement disorder (PLM), insomnia and rapid eye movement behavior disorder (RBD). An optimal orthogonal wavelet filter bank is used to perform the wavelet decomposition, and subsequently, entropy and Hjorth parameters are extracted from the decomposed coefficients. The extracted features were applied to different machine learning algorithms. The best performance was obtained using an ensemble of bagged trees (EBagT) classifier. The proposed method obtained average classification accuracies of 84%, 83%, 81%, 78%, 77%, 76% and 72% for NFLE, healthy, SDB, narcolepsy, PLM, insomnia and RBD subjects, respectively, in discriminating phases A and B using a balanced database. Our model yielded an average accuracy of 78% when all 77 subjects, including healthy and sleep-disordered patients, were considered. Our proposed system can assist sleep specialists in an automated and efficient analysis of sleep using the sleep microstructure.
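Hjorth parameters are among the features extracted above from the wavelet-decomposed coefficients. A compact, generic implementation of the standard definitions is shown below; it is applied to a random signal standing in for a sub-band coefficient sequence, not to the study's data.

```python
# Hjorth parameters of a 1-D signal:
#   activity   = var(x)
#   mobility   = sqrt(var(x') / var(x))
#   complexity = mobility(x') / mobility(x)
import numpy as np


def hjorth_parameters(x: np.ndarray):
    """Return (activity, mobility, complexity) of a 1-D signal."""
    dx = np.diff(x)                      # first derivative (finite difference)
    ddx = np.diff(dx)                    # second derivative
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return activity, mobility, complexity


if __name__ == "__main__":
    rng = np.random.default_rng(7)
    subband = rng.standard_normal(2000)  # stand-in for wavelet-decomposed EEG
    print(hjorth_parameters(subband))
```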
Affiliation(s)
- Manish Sharma: Department of Electrical and Computer Science Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad 380026, India
- Virendra Patel: Department of Electrical and Computer Science Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad 380026, India
- Jainendra Tiwari: Department of Electrical and Computer Science Engineering, Institute of Infrastructure, Technology, Research and Management (IITRAM), Ahmedabad 380026, India
- U. Rajendra Acharya: School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taichung 41354, Taiwan; School of Management and Enterprise, University of Southern Queensland, Springfield 4300, Australia
24
|
Zhao D, Jiang R, Feng M, Yang J, Wang Y, Hou X, Wang X. A deep learning algorithm based on 1D CNN-LSTM for automatic sleep staging. Technol Health Care 2021; 30:323-336. [PMID: 34180436 DOI: 10.3233/thc-212847] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Sleep staging is an important part of sleep research. Traditional automatic sleep staging based on machine learning requires extensive feature extraction and selection. OBJECTIVE This paper proposes a deep learning algorithm without manual feature extraction, based on a one-dimensional convolutional neural network and long short-term memory. METHODS Using electroencephalogram signals, the algorithm automatically divides sleep into five stages: wakefulness, non-rapid eye movement sleep (N1-N3) and rapid eye movement sleep. The raw signal is processed by the wavelet transform and then input directly into the deep learning model to obtain the staging result. RESULTS The staging accuracy is 93.47% using the Fpz-Cz electroencephalogram signal alone; when an additional channel is combined with the Fpz-Cz signal, the algorithm reaches its highest accuracy of 94.15%. CONCLUSION These results show that the algorithm is suitable for different physiological signals and can realize end-to-end automatic sleep staging without any manual feature extraction.
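As a rough illustration of this kind of architecture, the sketch below feeds a 1D convolutional front end into an LSTM and a five-class softmax head; the layer sizes, kernel widths and the 30 s epoch at 100 Hz are assumptions for the example, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class CNNLSTMStager(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(                       # 1D CNN feature extractor
            nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                # x: (batch, 1, 3000), one 30 s epoch at 100 Hz
        z = self.cnn(x)                  # (batch, 64, T')
        z = z.transpose(1, 2)            # (batch, T', 64) as an LSTM sequence
        _, (h, _) = self.lstm(z)
        return self.head(h[-1])          # (batch, n_classes) stage logits

logits = CNNLSTMStager()(torch.randn(4, 1, 3000))
print(logits.shape)                      # torch.Size([4, 5])
```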
Collapse
Affiliation(s)
- Dechun Zhao
- College of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Renpin Jiang
- School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Mingyang Feng
- College of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Jiaxin Yang
- College of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Yi Wang
- College of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Xiaorong Hou
- College of Medical Informatics, Chongqing Medical University, Chongqing, China
| | - Xing Wang
- College of Bioengineering, Chongqing University, Chongqing, China
| |
Collapse
|
25
|
Krauss P, Metzner C, Joshi N, Schulze H, Traxdorf M, Maier A, Schilling A. Analysis and visualization of sleep stages based on deep neural networks. Neurobiol Sleep Circadian Rhythms 2021; 10:100064. [PMID: 33763623 PMCID: PMC7973384 DOI: 10.1016/j.nbscr.2021.100064] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Revised: 02/27/2021] [Accepted: 03/01/2021] [Indexed: 11/28/2022] Open
Abstract
Automatic sleep stage scoring based on deep neural networks has come into focus of sleep researchers and physicians, as a reliable method able to objectively classify sleep stages would save human resources and simplify clinical routines. Due to novel open-source software libraries for machine learning, in combination with enormous recent progress in hardware development, a paradigm shift in the field of sleep research towards automatic diagnostics might be imminent. We argue that modern machine learning techniques are not just a tool to perform automatic sleep stage classification, but are also a creative approach to find hidden properties of sleep physiology. We have already developed and established algorithms to visualize and cluster EEG data, facilitating first assessments on sleep health in terms of sleep-apnea and consequently reduced daytime vigilance. In the following study, we further analyze cortical activity during sleep by determining the probabilities of momentary sleep stages, represented as hypnodensity graphs and then computing vectorial cross-correlations of different EEG channels. We can show that this measure serves to estimate the period length of sleep cycles and thus can help to find disturbances due to pathological conditions.
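The period-estimation step can be sketched as follows. This is only one reading of the vectorial cross-correlation described above, computed here on a synthetic stage-probability (hypnodensity) sequence; the number of pseudo-stages, the lag range and the 90 min cycle in the toy data are all illustrative.

```python
import numpy as np

def vectorial_xcorr(hyp_a, hyp_b, min_lag, max_lag):
    """Stage-wise dot product between two hypnodensity sequences, averaged per lag."""
    lags = np.arange(min_lag, max_lag + 1)
    corr = [np.mean(np.sum(hyp_a[:-lag] * hyp_b[lag:], axis=1)) for lag in lags]
    return lags, np.asarray(corr)

# synthetic hypnodensity: 960 epochs of 30 s with a 180-epoch (90 min) cycle
rng = np.random.default_rng(1)
n_epochs, period = 960, 180
phase = 2 * np.pi * np.arange(n_epochs) / period
probs = np.stack([1.5 + np.cos(phase), 1.5 + np.sin(phase), np.ones(n_epochs)], axis=1)
probs += 0.1 * rng.random(probs.shape)
probs /= probs.sum(axis=1, keepdims=True)              # rows are stage probabilities

lags, corr = vectorial_xcorr(probs, probs, min_lag=60, max_lag=300)
print("estimated cycle length in epochs:", lags[np.argmax(corr)])   # close to 180 here
```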
Collapse
Affiliation(s)
- Patrick Krauss
- Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Germany
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Cognitive Neuroscience Center, University of Groningen, the Netherlands
| | - Claus Metzner
- Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Germany
- Biophysics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
| | - Nidhi Joshi
- Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Germany
| | - Holger Schulze
- Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Germany
| | - Maximilian Traxdorf
- Department of Otolaryngology, Head and Neck Surgery, University Hospital Erlangen, Germany
| | - Andreas Maier
- Machine Intelligence, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
| | - Achim Schilling
- Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Germany
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Laboratory of Sensory and Cognitive Neuroscience, Aix-Marseille University, Marseille, France
| |
Collapse
|
26
|
Neng W, Lu J, Xu L. CCRRSleepNet: A Hybrid Relational Inductive Biases Network for Automatic Sleep Stage Classification on Raw Single-Channel EEG. Brain Sci 2021; 11:456. [PMID: 33918506 PMCID: PMC8065855 DOI: 10.3390/brainsci11040456] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 03/20/2021] [Accepted: 03/30/2021] [Indexed: 01/31/2023] Open
Abstract
In the inference process of existing deep learning models, it is usually necessary to process the input data level-wise, and impose a corresponding relational inductive bias on each level. This kind of relational inductive bias determines the theoretical performance upper limit of the deep learning method. In the field of sleep stage classification, only a single relational inductive bias is adopted at the same level in the mainstream methods based on deep learning. This will make the feature extraction method of deep learning incomplete and limit the performance of the method. In view of the above problems, a novel deep learning model based on hybrid relational inductive biases is proposed in this paper. It is called CCRRSleepNet. The model divides the single channel Electroencephalogram (EEG) data into three levels: frame, epoch, and sequence. It applies hybrid relational inductive biases from many aspects based on three levels. Meanwhile, multiscale atrous convolution block (MSACB) is adopted in CCRRSleepNet to learn the features of different attributes. However, in practice, the actual performance of the deep learning model depends on the nonrelational inductive biases, so a variety of matching nonrelational inductive biases are adopted in this paper to optimize CCRRSleepNet. The CCRRSleepNet is tested on the Fpz-Cz and Pz-Oz channel data of the Sleep-EDF dataset. The experimental results show that the method proposed in this paper is superior to many existing methods.
Collapse
Affiliation(s)
- Wenpeng Neng
- College of Computer Science and Technology, Heilongjiang University, Harbin 150080, China; (W.N.); (L.X.)
| | - Jun Lu
- College of Computer Science and Technology, Heilongjiang University, Harbin 150080, China; (W.N.); (L.X.)
- Key Laboratory of Database and Parallel Computing of Heilongjiang Province, Heilongjiang University, Harbin 150080, China
| | - Lei Xu
- College of Computer Science and Technology, Heilongjiang University, Harbin 150080, China; (W.N.); (L.X.)
| |
Collapse
|
27
|
Zhang J, Tang Z, Gao J, Lin L, Liu Z, Wu H, Liu F, Yao R. Automatic Detection of Obstructive Sleep Apnea Events Using a Deep CNN-LSTM Model. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:5594733. [PMID: 33859679 PMCID: PMC8009718 DOI: 10.1155/2021/5594733] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 03/05/2021] [Accepted: 03/13/2021] [Indexed: 01/16/2023]
Abstract
Obstructive sleep apnea (OSA) is a common sleep-related respiratory disorder, and more and more people around the world suffer from it. Because of the limitations of monitoring equipment, many people with OSA remain undiagnosed. Therefore, we propose a sleep-monitoring model based on the single-channel electrocardiogram (ECG) using a convolutional neural network (CNN), which can be used in portable OSA monitoring devices. To learn features at different scales, the first convolution layer comprises three types of filters. A long short-term memory (LSTM) network is used to learn long-term dependencies such as the OSA transition rules, and a softmax function connected to the final fully connected layer produces the decision. To detect a complete OSA event, the raw ECG signals are segmented by a 10 s overlapping sliding window. The proposed model is trained with the segmented raw signals and is subsequently tested to evaluate its event detection performance. In the experiments, the proposed model achieves a Cohen's kappa coefficient of 0.92, a sensitivity of 96.1%, a specificity of 96.2% and an accuracy of 96.1% on the Apnea-ECG dataset, significantly better than the baseline method. These results indicate that our approach could be a useful tool for detecting OSA on the basis of a single-lead ECG.
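Two ingredients of this description, a first convolution layer with three filter widths and a 10 s overlapping sliding window, are sketched below. The kernel widths, channel counts and the assumed 100 Hz sampling rate are placeholders, not the published configuration.

```python
import torch
import torch.nn as nn

class MultiScaleOSANet(nn.Module):
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList([                 # three receptive-field scales
            nn.Sequential(nn.Conv1d(1, 16, k, padding=k // 2), nn.ReLU(), nn.MaxPool1d(10))
            for k in (11, 25, 51)
        ])
        self.lstm = nn.LSTM(input_size=48, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 2)                    # apnea vs. normal

    def forward(self, x):                               # x: (batch, 1, 1000) = 10 s at 100 Hz
        maps = [b(x) for b in self.branches]
        t = min(m.shape[-1] for m in maps)
        z = torch.cat([m[..., :t] for m in maps], dim=1)    # (batch, 48, t)
        _, (h, _) = self.lstm(z.transpose(1, 2))
        return self.head(h[-1])

def sliding_windows(ecg, win=1000, step=500):
    """Cut a 1-D ECG record into 50%-overlapping windows."""
    return torch.stack([ecg[i:i + win] for i in range(0, len(ecg) - win + 1, step)])

windows = sliding_windows(torch.randn(60 * 100))        # one minute of toy ECG
print(MultiScaleOSANet()(windows.unsqueeze(1)).shape)   # torch.Size([11, 2])
```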
Collapse
Affiliation(s)
- Junming Zhang
- College of Information Engineering, Huanghuai University, Zhumadian, Henan 463000, China
- Henan Key Laboratory of Smart Lighting, Zhumadian, Henan 463000, China
- Henan Joint International Research Laboratory of Behavior Optimization Control for Smart Robots, Zhumadian, Henan 463000, China
- Zhumadian Artificial Intelligence & Medical Engineering Technical Research Centre, Zhumadian, Henan 463000, China
- Academy of Industry Innovation and Development, Huanghuai University, Zhumadian, Henan 463000, China
| | - Zhen Tang
- College of Information Engineering, Huanghuai University, Zhumadian, Henan 463000, China
| | - Jinfeng Gao
- College of Information Engineering, Huanghuai University, Zhumadian, Henan 463000, China
- Henan Key Laboratory of Smart Lighting, Zhumadian, Henan 463000, China
| | - Li Lin
- College of Information Engineering, Huanghuai University, Zhumadian, Henan 463000, China
| | - Zhiliang Liu
- College of Information Engineering, Huanghuai University, Zhumadian, Henan 463000, China
| | - Haitao Wu
- College of Information Engineering, Huanghuai University, Zhumadian, Henan 463000, China
- Henan Key Laboratory of Smart Lighting, Zhumadian, Henan 463000, China
| | - Fang Liu
- College of Information Engineering, Huanghuai University, Zhumadian, Henan 463000, China
- Henan Joint International Research Laboratory of Behavior Optimization Control for Smart Robots, Zhumadian, Henan 463000, China
| | - Ruxian Yao
- College of Information Engineering, Huanghuai University, Zhumadian, Henan 463000, China
- Henan Key Laboratory of Smart Lighting, Zhumadian, Henan 463000, China
| |
Collapse
|
28
|
Fu M, Wang Y, Chen Z, Li J, Xu F, Liu X, Hou F. Deep Learning in Automatic Sleep Staging With a Single Channel Electroencephalography. Front Physiol 2021; 12:628502. [PMID: 33746774 PMCID: PMC7965953 DOI: 10.3389/fphys.2021.628502] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Accepted: 02/01/2021] [Indexed: 11/13/2022] Open
Abstract
This study centers on automatic sleep staging with a single channel electroencephalography (EEG), with some significant findings for sleep staging. In this study, we proposed a deep learning-based network by integrating attention mechanism and bidirectional long short-term memory neural network (AT-BiLSTM) to classify wakefulness, rapid eye movement (REM) sleep and non-REM (NREM) sleep stages N1, N2 and N3. The AT-BiLSTM network outperformed five other networks and achieved an accuracy of 83.78%, a Cohen's kappa coefficient of 0.766 and a macro F1-score of 82.14% on the PhysioNet Sleep-EDF Expanded dataset, and an accuracy of 81.72%, a Cohen's kappa coefficient of 0.751 and a macro F1-score of 80.74% on the DREAMS Subjects dataset. The proposed AT-BiLSTM network even achieved a higher accuracy than the existing methods based on traditional feature extraction. Moreover, better performance was obtained by the AT-BiLSTM network with the frontal EEG derivations than with EEG channels located at the central, occipital or parietal lobe. As EEG signal can be easily acquired using dry electrodes on the forehead, our findings might provide a promising solution for automatic sleep scoring without feature extraction and may prove very useful for the screening of sleep disorders.
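The attention-over-BiLSTM idea can be sketched briefly. The feature dimension, the additive attention form and the layer sizes below are illustrative assumptions rather than the authors' exact AT-BiLSTM design.

```python
import torch
import torch.nn as nn

class AttentiveBiLSTM(nn.Module):
    def __init__(self, n_feat=64, hidden=64, n_classes=5):
        super().__init__()
        self.bilstm = nn.LSTM(n_feat, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)            # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                               # x: (batch, T, n_feat)
        h, _ = self.bilstm(x)                           # (batch, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)          # (batch, T, 1) attention weights
        context = (w * h).sum(dim=1)                    # attention-weighted summary
        return self.head(context)

print(AttentiveBiLSTM()(torch.randn(8, 30, 64)).shape)  # torch.Size([8, 5])
```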
Collapse
Affiliation(s)
- Mingyu Fu
- School of Science, China Pharmaceutical University, Nanjing, China
| | - Yitian Wang
- School of Science, China Pharmaceutical University, Nanjing, China
| | - Zixin Chen
- College of Engineering, University of California, Berkeley, Berkeley, CA, United States
| | - Jin Li
- College of Physics and Information Technology, Shaanxi Normal University, Xi’an, China
| | - Fengguo Xu
- Key Laboratory of Drug Quality Control and Pharmacovigilance, China Pharmaceutical University, Nanjing, China
| | - Xinyu Liu
- School of Science, China Pharmaceutical University, Nanjing, China
| | - Fengzhen Hou
- School of Science, China Pharmaceutical University, Nanjing, China
| |
Collapse
|
29
|
Zhang J, Wu Y. Competition convolutional neural network for sleep stage classification. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102318] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
30
|
Xiao J, Jia Y, Jiang X, Wang S. Circular Complex-Valued GMDH-Type Neural Network for Real-Valued Classification Problems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:5285-5299. [PMID: 32078563 DOI: 10.1109/tnnls.2020.2966031] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Recently, applications of complex-valued neural networks (CVNNs) to real-valued classification problems have attracted significant attention. However, most existing CVNNs are black-box models with poor explanation performance. This study extends the real-valued group method of data handling (RGMDH)-type neural network to the complex field and constructs a circular complex-valued group method of data handling (C-CGMDH)-type neural network, which is a white-box model. First, a complex least squares method is proposed for parameter estimation. Second, a new complex-valued symmetric regularity criterion is constructed with a logarithmic function to represent explicitly the magnitude and phase of the actual and predicted complex output to evaluate and select the middle candidate models. Furthermore, the property of this new complex-valued external criterion is proven to be similar to that of the real external criterion. Before training this model, a circular transformation is used to transform the real-valued input features to the complex field. Twenty-five real-valued classification data sets from the UCI Machine Learning Repository are used to conduct the experiments. The results show that both RGMDH and C-CGMDH models can select the most important features from the complete feature space through a self-organizing modeling process. Compared with RGMDH, the C-CGMDH model converges faster and selects fewer features. Furthermore, its classification performance is statistically significantly better than the benchmark complex-valued and real-valued models. Regarding time complexity, the C-CGMDH model is comparable with other models in dealing with the data sets that have few features. Finally, we demonstrate that the GMDH-type neural network can be interpretable.
Collapse
|
31
|
Tang T, Goh WL, Yao L, Gao Y. A TDM-Based 16-Channel AFE ASIC With Enhanced System-Level CMRR for Wearable EEG Recording With Dry Electrodes. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2020; 14:516-524. [PMID: 32167908 DOI: 10.1109/tbcas.2020.2979931] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
A multi-channel analog front-end (AFE) ASIC for wearable EEG recording application is presented in this article. Two techniques, namely chopping stabilization (CS) and time-division-multiplexing (TDM) are combined in a unified manner to improve the input-referred noise and the system level common-mode rejection ratio (CMRR) for multi-channel AFE. With the proposed TDM/CS structure, multiple channels can share single second-stage amplifier for significant reduction in chip size and power consumption. Dual feedback loops for input impedance boosting as well as electrode offset cancellation are incorporated in the system. Implemented in a 0.18-μm CMOS process, the AFE consumes 24 μW under 1 V supply. The input referred noise is 0.63 μVrms in 0.5 Hz-100 Hz and the input impedance is boosted to 560 MΩ at 50 Hz. The measured amplifier intrinsic CMRR and system-level AFE CMRR are 89 dB and 82 dB, respectively.
Collapse
|
32
|
Alvarez-Estevez D, Fernández-Varela I. Addressing database variability in learning from medical data: An ensemble-based approach using convolutional neural networks and a case of study applied to automatic sleep scoring. Comput Biol Med 2020; 119:103697. [PMID: 32339128 DOI: 10.1016/j.compbiomed.2020.103697] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 03/03/2020] [Accepted: 03/04/2020] [Indexed: 10/24/2022]
|
33
|
|
34
|
Xu Z, Yang X, Sun J, Liu P, Qin W. Sleep Stage Classification Using Time-Frequency Spectra From Consecutive Multi-Time Points. Front Neurosci 2020; 14:14. [PMID: 32047422 PMCID: PMC6997491 DOI: 10.3389/fnins.2020.00014] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 01/08/2020] [Indexed: 11/20/2022] Open
Abstract
Sleep stage classification is an open challenge in the field of sleep research. Considering the relatively small size of datasets used by previous studies, in this paper we used the Sleep Heart Health Study dataset from the National Sleep Research Resource database. A long short-term memory (LSTM) network using a time-frequency spectra of several consecutive 30 s time points as an input was used to perform the sleep stage classification. Four classical convolutional neural networks (CNNs) using a time-frequency spectra of a single 30 s time point as an input were used for comparison. Results showed that, when considering the temporal information within the time-frequency spectrum of a single 30 s time point, the LSTM network had a better classification performance than the CNNs. Moreover, when additional temporal information was taken into consideration, the classification performance of the LSTM network gradually increased. It reached its peak when temporal information from three consecutive 30 s time points was considered, with a classification accuracy of 87.4% and a Cohen’s Kappa coefficient of 0.8216. Compared with CNNs, our results indicate that for sleep stage classification, the temporal information within the data or the features extracted from the data should be considered. LSTM networks take this temporal information into account, and thus, may be more suitable for sleep stage classification.
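A minimal sketch of the input construction, concatenating the time-frequency spectra of several consecutive 30 s epochs into a single LSTM sequence, is shown below; the STFT settings, the 100 Hz sampling rate and the layer sizes are assumptions for illustration only.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def epoch_spectrum(eeg_epoch, fs=100):
    """Log time-frequency spectrum of one 30 s epoch, shaped (time_steps, freq_bins)."""
    _, _, s = spectrogram(eeg_epoch, fs=fs, nperseg=200, noverlap=100)
    return np.log(s + 1e-10).T

fs, n_context = 100, 3                                  # three consecutive epochs
epochs = [np.random.randn(30 * fs) for _ in range(n_context)]
seq = np.concatenate([epoch_spectrum(e, fs) for e in epochs])   # stacked along time

lstm = nn.LSTM(input_size=seq.shape[1], hidden_size=128, batch_first=True)
head = nn.Linear(128, 5)                                # five sleep stages
x = torch.tensor(seq, dtype=torch.float32).unsqueeze(0)        # (1, T, F)
_, (h, _) = lstm(x)
print(head(h[-1]).shape)                                # torch.Size([1, 5])
```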
Collapse
Affiliation(s)
- Ziliang Xu
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
| | - Xuejuan Yang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
| | - Jinbo Sun
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
| | - Peng Liu
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
| | - Wei Qin
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
| |
Collapse
|
35
|
Gopan K. G, Prabhu SS, Sinha N. Sleep EEG analysis utilizing inter-channel covariance matrices. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.01.013] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
36
|
Zhang J, Yao R, Ge W, Gao J. Orthogonal convolutional neural networks for automatic sleep stage classification based on single-channel EEG. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 183:105089. [PMID: 31586788 DOI: 10.1016/j.cmpb.2019.105089] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Revised: 09/19/2019] [Accepted: 09/22/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE In recent years, several automatic sleep stage classification methods based on convolutional neural networks (CNN) by learning hierarchical feature representation automatically from raw EEG data have been proposed. However, the state-of-the-art of such methods are quite complex. Using a simple CNN architecture to classify sleep stages is important for portable sleep devices. In addition, employing CNNs to learn rich and diverse representations remains a challenge. Therefore, we propose a novel CNN model for sleep stage classification. METHODS Generally, EEG signals are better described in the frequency domain; thus, we convert EEG data to a time-frequency representation via Hilbert-Huang transform. To learn rich and effective feature representations, we propose an orthogonal convolutional neural network (OCNN). First, we construct an orthogonal initialization of weights. Second, to avoid destroying the orthogonality of the weights in the training process, orthogonality regularizations are proposed to maintain the orthogonality of weights. Simultaneously, a squeeze-and-excitation (SE) block is employed to perform feature recalibration across different channels. RESULTS The proposed method achieved a total classification accuracy of 88.4% and 87.6% on two public datasets, respectively. The classification performances of different convolutional neural networks models were compared to that of the proposed method. The experiment results demonstrated that the proposed method is effective for sleep stage classification. CONCLUSIONS Experiment results indicate that the proposed OCNN can learn rich and diverse feature representations from time-frequency images of EEG data, which is important for deep learning. In addition, the proposed orthogonality regularization is simple and can be easily adapted to other architectures.
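The orthogonality idea can be illustrated with a generic soft orthogonality penalty on reshaped convolution weights; the paper's exact initialization, regularization terms and SE block are not reproduced here, and the penalty weight below is an assumption.

```python
import torch
import torch.nn as nn

def orthogonality_penalty(conv: nn.Conv2d) -> torch.Tensor:
    """Soft orthogonality: penalize ||W W^T - I||_F^2 over the reshaped kernel matrix."""
    w = conv.weight.reshape(conv.out_channels, -1)      # (out_channels, in*kh*kw)
    gram = w @ w.t()
    eye = torch.eye(conv.out_channels, device=w.device)
    return ((gram - eye) ** 2).sum()

conv = nn.Conv2d(1, 16, kernel_size=3)
with torch.no_grad():                                   # (semi-)orthogonal starting point
    w0 = torch.empty(16, 9)
    nn.init.orthogonal_(w0)
    conv.weight.copy_(w0.reshape_as(conv.weight))

task_loss = torch.tensor(0.0)                           # placeholder for the real task loss
loss = task_loss + 1e-3 * orthogonality_penalty(conv)   # regularizer added to the loss
print(float(loss))
```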
Collapse
Affiliation(s)
- Junming Zhang
- College of Information Engineering, Huanghuai University, Henan 463000, China; Henan Key Laboratory of Smart Lighting, Henan 463000, China; Henan Joint International Research Laboratory of Behavior Optimization Control for Smart Robots, Henan 463000, China; Academy of Industry Innovation and Development, Huanghuai University, Henan 463000, China
| | - Ruxian Yao
- College of Information Engineering, Huanghuai University, Henan 463000, China; Henan Key Laboratory of Smart Lighting, Henan 463000, China
| | - Wengeng Ge
- College of Information Engineering, Huanghuai University, Henan 463000, China; Henan Key Laboratory of Smart Lighting, Henan 463000, China
| | - Jinfeng Gao
- College of Information Engineering, Huanghuai University, Henan 463000, China; Henan Key Laboratory of Smart Lighting, Henan 463000, China.
| |
Collapse
|
37
|
Jiang D, Ma Y, Wang Y. Sleep stage classification using covariance features of multi-channel physiological signals on Riemannian manifolds. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 178:19-30. [PMID: 31416548 DOI: 10.1016/j.cmpb.2019.06.008] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/25/2019] [Revised: 05/31/2019] [Accepted: 06/09/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE The recognition of many sleep related pathologies highly relies on an accurate classification of sleep stages. Clinically, sleep stages are usually labelled by sleep experts through visually inspecting the whole-night polysomnography (PSG) recording of patients, wherein electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) play the dominant role. Developing an automatic sleep staging system based on multi-channel physiological signals could relieve the burden of manual labeling by experts, and obtain reliable and repeatable recognition results as well. METHODS In this work, we find the correlation between the spatial covariance matrices of multi-channel signals and their corresponding sleep stages. Based on that, we propose two novel sleep stage classification methods based on the features extracted from the covariance matrices of multi-channel signals. Sleep stages are classified using a minimum distance classifier according to their corresponding covariance matrices mapped on Riemannian manifolds. An alternative way to classify these covariance matrices is to represent the features of covariance matrices on the tangent space of Riemannian manifolds and classify them with an ensemble learning classifier. After any of these classification methods, a rule-free refinement process is utilized to further optimize the classification results. RESULTS On the MASS dataset that includes 61 whole-night PSG recordings, both two methods provide satisfactory classification results while the one based on tangent space projection has better performance. On average, an accuracy of 0.812 and a Cohen's Kappa coefficient of 0.722 are obtained under leave-one-subject-out cross validation, using EEG, EOG and EMG signals. Meanwhile, the most effective combinations of EEG channels for sleep staging have been found in this work. CONCLUSIONS The correlation between spatial covariance matrices of multi-channel signals and their corresponding sleep stages have been found. Features based on that are used for sleep stage classification, and experimental results show the superior performance of proposed methods compared to state-of-the-art works. Results of this work are expected to provide a new vision for dealing with multi-channel or multi-modal signal processing tasks in various applications.
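A simplified sketch of the tangent-space variant is given below: epoch-wise spatial covariance matrices are mapped through the matrix logarithm (a log-Euclidean shortcut rather than the full Riemannian projection used in the paper), vectorized, and passed to an ensemble classifier. The channel count, regularization constant and toy labels are placeholders.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.ensemble import RandomForestClassifier

def tangent_vector(epoch):
    """epoch: (n_channels, n_samples) multi-channel segment -> vectorized log-covariance."""
    cov = np.cov(epoch) + 1e-6 * np.eye(epoch.shape[0])     # regularized SPD matrix
    log_cov = logm(cov).real
    iu = np.triu_indices_from(log_cov)
    return log_cov[iu]                                      # upper-triangle feature vector

rng = np.random.default_rng(0)
X = np.array([tangent_vector(rng.standard_normal((8, 3000))) for _ in range(100)])
y = rng.integers(0, 5, size=100)                            # dummy sleep-stage labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.score(X, y))
```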
Collapse
Affiliation(s)
- Dihong Jiang
- Department of Electronic Engineering, Fudan University, Shanghai 200433, China.
| | - Yu Ma
- Department of Electronic Engineering, Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China.
| | - Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China.
| |
Collapse
|
38
|
Wei X, Zhou L, Zhang Z, Chen Z, Zhou Y. Early prediction of epileptic seizures using a long-term recurrent convolutional network. J Neurosci Methods 2019; 327:108395. [PMID: 31408651 DOI: 10.1016/j.jneumeth.2019.108395] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Revised: 08/05/2019] [Accepted: 08/06/2019] [Indexed: 01/24/2023]
Abstract
BACKGROUND A seizure prediction system can detect seizures prior to their occurrence and allow clinicians to provide timely treatment for patients with epilepsy. Research on seizure prediction has progressed from signal processing analyses to machine learning. However, most prediction methods are hand-engineered and have high computational complexity, increasing the difficulty of obtaining real-time predictions. Some forecasting and early warning methods have achieved good results in the short term but have low applicability in practical situations over the long term. NEW METHODS First, electroencephalogram (EEG) time series were converted into two-dimensional images for multichannel fusion. A feasible method, a long-term recurrent convolutional network (LRCN), was proposed to create a spatiotemporal deep learning model for predicting epileptic seizures. The convolutional network block was used to automatically extract deep features from the data, and the long short-term memory (LSTM) block was incorporated to learn the time sequence and identify preictal segments. New network settings and a postprocessing strategy were proposed in the seizure prediction model. RESULTS The deep seizure prediction model achieved an accuracy of 93.40%, prediction sensitivity of 91.88% and specificity of 86.13% in segment-based evaluations. For the event-based evaluations, 164 seizures were predicted. The proposed method provides high sensitivity and a low false prediction rate (FPR) of 0.04 FP/h. COMPARISON WITH EXISTING METHODS We employed different methods, including the LRCN, deep learning and traditional machine learning methods, and compared them using the same data in this paper. Overall, the LRCN offers approximately 5-9% increased sensitivity and specificity. CONCLUSION This study describes the LRCN network for analyzing EEG data to predict epileptic seizures, thereby enabling the implementation of an early warning system that detects epileptic seizures before they occur in clinical applications.
Collapse
Affiliation(s)
- Xiaoyan Wei
- Department of Biomedical Engineering, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, Guangdong Province, China.
| | - Lin Zhou
- Software Engineering, School of Computer and Data Science, Sun Yat-sen University, Guangzhou, 510006, Guangdong Province, China.
| | - Zhen Zhang
- Department of Biomedical Engineering, School of Biomedical Engineering, Shanghai Jiao Tong University, 200240, Shanghai, China.
| | - Ziyi Chen
- Department of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, Guangdong Province, China.
| | - Yi Zhou
- Department of Biomedical Engineering, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, Guangdong Province, China.
| |
Collapse
|
39
|
Tobore I, Li J, Yuhang L, Al-Handarish Y, Kandwal A, Nie Z, Wang L. Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations. JMIR Mhealth Uhealth 2019; 7:e11966. [PMID: 31376272 PMCID: PMC6696854 DOI: 10.2196/11966] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 04/14/2019] [Accepted: 06/12/2019] [Indexed: 01/10/2023] Open
Abstract
The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has recorded a number of achievements for unearthing meaningful features and accomplishing tasks that were hitherto difficult to solve by other methods and human experts. Currently, biological and medical devices, treatment, and applications are capable of generating large volumes of data in the form of images, sounds, text, graphs, and signals creating the concept of big data. The innovation of DL is a developing trend in the wake of big data for data representation and analysis. DL is a type of machine learning algorithm that has deeper (or more) hidden layers of similar function cascaded into the network and has the capability to make meaning from medical big data. Current transformation drivers to achieve personalized health care delivery will be possible with the use of mobile health (mHealth). DL can provide the analysis for the deluge of data generated from mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of the trends in DL by capturing literature from PubMed and the Institute of Electrical and Electronics Engineers database publications that implement different variants of DL. We highlight the implementation of DL in health care, which we categorize into biological system, electronic health record, medical image, and physiological signals. In addition, we discuss some inherent challenges of DL affecting biomedical and health domain, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.
Collapse
Affiliation(s)
- Igbe Tobore
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Graduate University, Chinese Academy of Sciences, Beijing, China
| | - Jingzhen Li
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Liu Yuhang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yousef Al-Handarish
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Abhishek Kandwal
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Zedong Nie
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Lei Wang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
40
|
Liang SF, Shih YH, Chen PY, Kuo CE. Development of a human-computer collaborative sleep scoring system for polysomnography recordings. PLoS One 2019; 14:e0218948. [PMID: 31291270 PMCID: PMC6619661 DOI: 10.1371/journal.pone.0218948] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2018] [Accepted: 06/12/2019] [Indexed: 11/19/2022] Open
Abstract
The overnight polysomnographic (PSG) recordings of patients were scored by an expert to diagnose sleep disorders. Visual sleep scoring is a time-consuming and subjective process. Automatic sleep staging methods can help; however, the mechanism and reliability of these methods are not fully understood. Therefore, experts often need to rescore the recordings to obtain reliable results. Here, we propose a human-computer collaborative sleep scoring system. It is a rule-based automatic sleep scoring method that follows the American Academy of Sleep Medicine (AASM) guidelines to perform an initial scoring. Then, the reliability level of each epoch is analyzed based on physiological patterns during sleep and the characteristics of various stage changes. Finally, experts would only need to rescore epochs with a low-reliability level. The experimental results show that the average agreement rate between our system and fully manual scorings can reach 90.42% with a kappa coefficient of 0.85. Over 50% of the manual scoring time can be reduced. Due to the demonstrated robustness and applicability, the proposed approach can be integrated with various PSG systems or automatic sleep scoring methods for sleep monitoring in clinical or homecare applications in the future.
Collapse
Affiliation(s)
- Sheng-Fu Liang
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- AI Biomedical Research Center at NCKU, Ministry of Science and Technology, Tainan, Taiwan
| | - Yu-Hsuan Shih
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
| | - Peng-Yu Chen
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
| | - Chih-En Kuo
- Department of Automatic Control Engineering, Feng Chia University, Taichung, Taiwan
| |
Collapse
|
41
|
Phan H, Andreotti F, Cooray N, Chén OY, De Vos M. Joint Classification and Prediction CNN Framework for Automatic Sleep Stage Classification. IEEE Trans Biomed Eng 2019; 66:1285-1296. [PMID: 30346277 PMCID: PMC6487915 DOI: 10.1109/tbme.2018.2872652] [Citation(s) in RCA: 146] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2018] [Accepted: 09/22/2018] [Indexed: 11/07/2022]
Abstract
Correctly identifying sleep stages is important in diagnosing and treating sleep disorders. This paper proposes a joint classification-and-prediction framework based on convolutional neural networks (CNNs) for automatic sleep staging, and, subsequently, introduces a simple yet efficient CNN architecture to power the framework. Given a single input epoch, the novel framework jointly determines its label (classification) and its neighboring epochs' labels (prediction) in the contextual output. While the proposed framework is orthogonal to the widely adopted classification schemes, which take one or multiple epochs as contextual inputs and produce a single classification decision on the target epoch, we demonstrate its advantages in several ways. First, it leverages the dependency among consecutive sleep epochs while surpassing the problems experienced with the common classification schemes. Second, even with a single model, the framework has the capacity to produce multiple decisions, which are essential in obtaining a good performance as in ensemble-of-models methods, with very little induced computational overhead. Probabilistic aggregation techniques are then proposed to leverage the availability of multiple decisions. To illustrate the efficacy of the proposed framework, we conducted experiments on two public datasets: Sleep-EDF Expanded (Sleep-EDF), which consists of 20 subjects, and Montreal Archive of Sleep Studies (MASS) dataset, which consists of 200 subjects. The proposed framework yields an overall classification accuracy of 82.3% and 83.6%, respectively. We also show that the proposed framework not only is superior to the baselines based on the common classification schemes but also outperforms existing deep-learning approaches. To our knowledge, this is the first work going beyond the standard single-output classification to consider multitask neural networks for automatic sleep staging. This framework provides avenues for further studies of different neural-network architectures for automatic sleep staging.
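The joint classification-and-prediction scheme can be sketched as a shared trunk with one softmax head per output epoch (the target and its neighbours). The trunk depth, input size and head count below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointStager(nn.Module):
    def __init__(self, n_classes=5, n_outputs=3):       # target epoch plus one neighbour each side
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.heads = nn.ModuleList([nn.Linear(32, n_classes) for _ in range(n_outputs)])

    def forward(self, x):                               # x: (batch, 1, F, T) time-frequency image
        z = self.trunk(x)
        return [head(z) for head in self.heads]         # one set of stage logits per output epoch

outputs = JointStager()(torch.randn(2, 1, 64, 48))
print([o.shape for o in outputs])                       # three tensors of shape (2, 5)
```

During training, one would typically sum a cross-entropy loss over the heads and, at test time, aggregate the overlapping decisions that different input epochs produce for the same target epoch.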
Collapse
Affiliation(s)
- Huy Phan
- Institute of Biomedical Engineering, University of Oxford, Oxford OX3 7DQ, U.K.
| | | | - Navin Cooray
- Institute of Biomedical Engineering, University of Oxford
| | | | | |
Collapse
|
42
|
|
43
|
Zhang P, Wang X, Zhang W, Chen J. Learning Spatial-Spectral-Temporal EEG Features With Recurrent 3D Convolutional Neural Networks for Cross-Task Mental Workload Assessment. IEEE Trans Neural Syst Rehabil Eng 2018; 27:31-42. [PMID: 30507536 DOI: 10.1109/tnsre.2018.2884641] [Citation(s) in RCA: 77] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Mental workload assessment is essential for maintaining human health and preventing accidents. Most research on this issue is limited to a single task. However, cross-task assessment is indispensable for extending a pre-trained model to new workload conditions. Because brain dynamics are complex across different tasks, it is difficult to propose efficient human-designed features based on prior knowledge. Therefore, this paper proposes a concatenated structure of deep recurrent and 3D convolutional neural networks (R3DCNNs) to learn EEG features across different tasks without prior knowledge. First, this paper adds frequency and time dimensions to EEG topographic maps based on a Morlet wavelet transformation. Then, R3DCNN is proposed to simultaneously learn EEG features from the spatial, spectral, and temporal dimensions. The proposed model is validated based on the EEG signals collected from 20 subjects. This paper employs a binary classification of low and high mental workload across spatial n-back and arithmetic tasks. The results show that the R3DCNN achieves an average accuracy of 88.9%, which is a significant increase compared with that of the state-of-the-art methods. In addition, the visualization of the convolutional layers demonstrates that the deep neural network can extract detailed features. These results indicate that R3DCNN is capable of identifying the mental workload levels for cross-task conditions.
Collapse
|
44
|
Ugon A, Kotti A, Séroussi B, Sedki K, Bouaud J, Ganascia JG, Garda P, Philippe C, Pinna A. Knowledge-based decision system for automatic sleep staging using symbolic fusion in a turing machine-like decision process formalizing the sleep medicine guidelines. EXPERT SYSTEMS WITH APPLICATIONS 2018; 114:414-427. [DOI: 10.1016/j.eswa.2018.07.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
|
45
|
Zhang Z, Wei S, Zhu G, Liu F, Li Y, Dong X, Liu C, Liu F. Efficient sleep classification based on entropy features and a support vector machine classifier. Physiol Meas 2018; 39:115005. [PMID: 30475743 DOI: 10.1088/1361-6579/aae943] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Sleep quality helps to reflect on the physical and mental condition, and efficient sleep stage scoring promises considerable advantages to health care. The aim of this study is to propose a simple and efficient sleep classification method based on entropy features and a support vector machine classifier, named SC-En&SVM. APPROACH Entropy features, including fuzzy measure entropy (FuzzMEn), fuzzy entropy, and sample entropy are applied for the analysis and classification of sleep stages. FuzzyMEn has been used for heart rate variability analysis since it was proposed, while this is the first time it has been used for sleep scoring. The three features are extracted from 6 376 730 s epochs from Fpz-Cz electroencephalogram (EEG), Pz-Oz EEG and horizontal electrooculogram (EOG) signals in the sleep-EDF database. The independent samples t-test shows that the entropy values have significant differences among six sleep stages. The multi-class support vector machine (SVM) with a one-against-all class approach is utilized in this specific application for the first time. We perform 10-fold cross-validation as well as leave-one-subject-out cross-validation for 61 subjects to test the effectiveness and reliability of SC-En&SVM. MAIN RESULTS The 10-fold cross-validation shows an effective performance with high stability of SC-En&SVM. The average accuracy and standard deviation for 2-6 states are 97.02 ± 0.58, 92.74 ± 1.32, 89.08 ± 0.90, 86.02 ± 1.06 and 83.94 ± 1.61, respectively. While for a more practical evaluation, the independent scheme is further performed, and the results show that our method achieved similar or slightly better average accuracies for 2-6 states of 94.15%, 85.06%, 80.96%, 78.68% and 75.98% compared with state-of-the-art methods. The corresponding kappa coefficients (0.81, 0.74, 0.72, 0.71, 0.67) guarantee substantial agreement of the classification. SIGNIFICANCE We propose a novel sleep stage scoring method, SC-En&SVM, with easily accessible features and a simple classification algorithm, without reducing the classification performance compared with other approaches.
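A hedged sketch of the feature-plus-classifier idea is shown below, pairing a basic sample-entropy estimate with a one-against-all SVM; the series length, entropy parameters (m, r) and toy labels are assumptions, and the fuzzy entropy variants would replace the hard similarity threshold with a membership function.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def sample_entropy(x, m=2, r_factor=0.2):
    """Basic sample entropy of a 1-D series with tolerance r = r_factor * std."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)   # Chebyshev distances
        return np.sum(d <= r) - len(t)                              # drop self-matches

    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
signals = rng.standard_normal((60, 300))               # 60 short toy segments
X = np.array([[sample_entropy(s)] for s in signals])   # one entropy feature per segment
y = rng.integers(0, 5, size=60)                        # dummy sleep-stage labels
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y) # one-against-all SVM
print(clf.score(X, y))
```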
Collapse
Affiliation(s)
- Zhimin Zhang
- School of Control Science and Engineering, Shandong University, Jinan, People's Republic of China. School of Information Technology and Electrical Engineering, University of Queensland, Queensland, Australia
| | | | | | | | | | | | | | | |
Collapse
|
46
|
Eagleman SL, Drover DR. Calculations of consciousness: electroencephalography analyses to determine anesthetic depth. Curr Opin Anaesthesiol 2018; 31:431-438. [PMID: 29847364 DOI: 10.1097/aco.0000000000000618] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
PURPOSE OF REVIEW Electroencephalography (EEG) was introduced into anesthesia practice in the 1990s as a tool to titrate anesthetic depth. However, limitations in current analysis techniques have called into question whether these techniques improve standard of care, or instead call for improved, more ubiquitously applicable measures to assess anesthetic transitions and depth. This review highlights emerging analytical approaches and techniques from neuroscience research that have the potential to better capture anesthetic transitions to provide better measurements of anesthetic depth. RECENT FINDINGS Since the introduction of electroencephalography, neuroscientists, engineers, mathematicians, and clinicians have all been developing new ways of analyzing continuous electrical signals. Collaborations between these fields have proliferated several analytical techniques that demonstrate how anesthetics affect brain dynamics and conscious transitions. Here, we review techniques in the following categories: network science, integration and information, nonlinear dynamics, and artificial intelligence. SUMMARY Up-and-coming techniques have the potential to better clinically define and characterize altered consciousness time points. Such new techniques used alongside traditional measures have the potential to improve depth of anesthesia measurements and enhance an understanding of how the brain is affected by anesthetic agents. However, new measures will be needed to be tested for robustness in real-world environments and on diverse experimental protocols.
Collapse
Affiliation(s)
- Sarah L Eagleman
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, California, USA
| | | |
Collapse
|
47
|
Zhang J, Wu Y. Complex-valued unsupervised convolutional neural networks for sleep stage classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 164:181-191. [PMID: 30195426 DOI: 10.1016/j.cmpb.2018.07.015] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2018] [Revised: 07/05/2018] [Accepted: 07/25/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE Although numerous deep learning methods have been developed for automatic sleep stage classification, almost all of them need labeled data. Labeling is a subjective process, so labels can differ between experts, and it is also time-consuming: even an experienced expert requires hours to annotate sleep stage patterns. More importantly, with the spread of wearable sleep devices it is very difficult to obtain labeled sleep data at all. Unsupervised training algorithms are therefore important for sleep stage classification. Hence, a new sleep stage classification method named complex-valued unsupervised convolutional neural networks (CUCNN) is proposed in this study. METHODS The CUCNN operates with complex-valued inputs, outputs and weights, and its training strategy is greedy layer-wise training. It is composed of three phases: a phase encoder, unsupervised training and complex-valued classification. The phase encoder translates real-valued inputs into complex numbers. In the unsupervised training phase, complex-valued K-means is used to learn the filters used in the convolutions. RESULTS The classification performance of handcrafted features is compared with that of features learned by the CUCNN. The total accuracy (TAC) and kappa coefficient on the UCD dataset are 87% and 0.8, respectively. Moreover, comparison experiments indicate that the TACs of the CUCNN on the UCD and MIT-BIH datasets outperform those of unsupervised convolutional neural networks (UCNN) by 12.9% and 13%, respectively. Additionally, the CUCNN converges much faster than the UCNN in most cases. CONCLUSIONS The proposed method is fully automated and can learn features in an unsupervised fashion. The results show that unsupervised training and automatic feature extraction on sleep data are possible, which is important for home sleep monitoring.
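The phase-encoder step can be sketched in a few lines: real-valued samples are mapped onto the unit circle so that the phase carries the amplitude information. The exact normalization and phase range used in the paper are assumptions here.

```python
import numpy as np

def phase_encode(x):
    """Map real-valued samples to unit-magnitude complex numbers via their phase."""
    x = np.asarray(x, dtype=float)
    normalized = (x - x.min()) / (x.max() - x.min() + 1e-12)   # rescale to [0, 1]
    theta = np.pi * normalized                                  # phase in [0, pi]
    return np.exp(1j * theta)

eeg_epoch = np.random.randn(3000)
z = phase_encode(eeg_epoch)
print(z.dtype, np.allclose(np.abs(z), 1.0))                     # complex128 True
```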
Collapse
Affiliation(s)
- Junming Zhang
- College of Electronics & Information Engineering, Tongji University, Shanghai 201804, China
| | - Yan Wu
- College of Electronics & Information Engineering, Tongji University, Shanghai 201804, China.
| |
Collapse
|
48
|
Phan H, Andreotti F, Cooray N, Oliver Chen Y, De Vos M. DNN Filter Bank Improves 1-Max Pooling CNN for Single-Channel EEG Automatic Sleep Stage Classification. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:453-456. [PMID: 30440432 DOI: 10.1109/embc.2018.8512286] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
We present an efficient convolutional neural network (CNN) operating on time-frequency image features for automatic sleep stage classification. In contrast to the deep architectures that have been used for the task, the proposed CNN is much simpler. However, its convolutional layer supports kernels of different sizes and is therefore capable of learning features at multiple temporal resolutions. In addition, a 1-max pooling strategy is employed at the pooling layer to better capture the shift-invariance property of EEG signals. We further propose a method to discriminatively learn a frequency-domain filter bank with a deep neural network (DNN) to preprocess the time-frequency image features. Our experiments show that the proposed 1-max pooling CNN performs comparably with very deep CNNs from the literature on the Sleep-EDF dataset. Preprocessing the time-frequency image features with the learned filter bank before presenting them to the CNN leads to significant improvements in classification accuracy, setting state-of-the-art performance on the dataset.
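A small sketch of a 1-max pooling CNN with parallel kernel widths over a time-frequency image follows; the kernel sizes, channel counts and input dimensions are illustrative, and the learned DNN filter-bank preprocessing is not included.

```python
import torch
import torch.nn as nn

class OneMaxCNN(nn.Module):
    def __init__(self, n_freq=64, n_classes=5):
        super().__init__()
        self.branches = nn.ModuleList([                 # full-height kernels, varying temporal width
            nn.Conv2d(1, 32, kernel_size=(n_freq, w)) for w in (3, 5, 7)
        ])
        self.head = nn.Linear(3 * 32, n_classes)

    def forward(self, x):                               # x: (batch, 1, n_freq, T)
        feats = []
        for conv in self.branches:
            z = torch.relu(conv(x)).squeeze(2)          # (batch, 32, T - w + 1)
            feats.append(z.max(dim=2).values)           # 1-max pooling over time
        return self.head(torch.cat(feats, dim=1))

print(OneMaxCNN()(torch.randn(4, 1, 64, 29)).shape)     # torch.Size([4, 5])
```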
Collapse
|
49
|
Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats. Comput Biol Med 2018; 102:278-287. [PMID: 29903630 DOI: 10.1016/j.compbiomed.2018.06.002] [Citation(s) in RCA: 242] [Impact Index Per Article: 34.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2018] [Revised: 06/01/2018] [Accepted: 06/02/2018] [Indexed: 11/22/2022]
Abstract
Arrhythmia is a cardiac conduction disorder characterized by irregular heartbeats. Abnormalities in the conduction system can manifest in the electrocardiographic (ECG) signal. However, it can be challenging and time-consuming to visually assess ECG signals due to their very low amplitudes. Implementing an automated system in the clinical setting can potentially help expedite the diagnosis of arrhythmia and improve accuracy. In this paper, we propose an automated system using a combination of a convolutional neural network (CNN) and long short-term memory (LSTM) for the diagnosis of normal sinus rhythm, left bundle branch block (LBBB), right bundle branch block (RBBB), atrial premature beats (APB) and premature ventricular contraction (PVC) on ECG signals. The novelty of this work is that we used ECG segments of variable length from the MIT-BIH Arrhythmia database on PhysioBank. The proposed system demonstrated high classification performance in handling variable-length data, achieving an accuracy of 98.10%, a sensitivity of 97.50% and a specificity of 98.70% using a ten-fold cross-validation strategy. Our proposed model can aid clinicians in detecting common arrhythmias accurately on routine screening ECG.
Collapse
|
50
|
Use of features from RR-time series and EEG signals for automated classification of sleep stages in deep neural network framework. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.05.005] [Citation(s) in RCA: 88] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
|