1
Satapathy SK, Brahma B, Panda B, Barsocchi P, Bhoi AK. Machine learning-empowered sleep staging classification using multi-modality signals. BMC Med Inform Decis Mak 2024; 24:119. [PMID: 38711099] [DOI: 10.1186/s12911-024-02522-2]
Abstract
The goal of this work is to enhance an automated sleep staging system's performance by leveraging the diverse signals captured in multi-modal polysomnography (PSG) recordings. Three modalities of PSG signals, namely electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG), were considered to obtain optimal fusions of the PSG signals, and 63 features were extracted, including frequency-based, time-based, statistical, entropy-based, and non-linear features. We adopted the ReliefF (ReF) feature selection algorithm to identify the most informative features for each individual signal and for each fusion of PSG signals. The twelve top-ranked features, those most strongly correlated with the sleep stages, were selected from the extracted feature sets. The selected features were fed into an AdaBoost with Random Forest (ADB+RF) classifier to validate the chosen segments and classify the sleep stages. The experiments were evaluated under two testing schemes: epoch-wise testing and subject-wise testing. The research was conducted using four publicly available datasets: ISRUC-Sleep subgroup 1 (ISRUC-SG1), Sleep-EDF (S-EDF), the PhysioBank CAP sleep database (PB-CAPSDB), and S-EDF-78. This work demonstrates that the proposed fusion strategy outperforms the common practice of using individual PSG signals.
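The pipeline this abstract describes, ReliefF ranking followed by AdaBoost over a random-forest base learner, can be sketched roughly as below; the one-neighbor ReliefF, the synthetic data, and every hyperparameter are illustrative stand-ins, not the paper's actual settings.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

def relieff_scores(X, y):
    """Minimal one-neighbor ReliefF: a feature scores high when it agrees
    with each sample's nearest hit (same class) and differs from its
    nearest miss (other class)."""
    X = (X - X.min(0)) / (np.ptp(X, 0) + 1e-12)   # scale each feature to [0, 1]
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                           # never pick the sample itself
        hit = np.argmin(np.where(y == y[i], dist, np.inf))
        miss = np.argmin(np.where(y != y[i], dist, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n

# synthetic stand-in: feature 0 carries the class signal, the rest are noise
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 10))
X[:, 0] += 3 * y

scores = relieff_scores(X, y)
top = np.argsort(scores)[::-1][:3]                 # keep the top-ranked features

# AdaBoost over a random-forest base learner (the abstract's ADB+RF)
clf = AdaBoostClassifier(RandomForestClassifier(n_estimators=20, random_state=0),
                         n_estimators=5, random_state=0)
clf.fit(X[:, top], y)
print("top features:", top.tolist())
```

In a real replication the 63 PSG features and the twelve-feature cut-off would replace the synthetic matrix above.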
Affiliation(s)
- Santosh Kumar Satapathy
- Department of Information and Communication Technology, Pandit Deendayal Energy University, Gandhinagar, Gujarat, 382007, India.
- Biswajit Brahma
- McKesson Corporation, 1 Post St, San Francisco, CA, 94104, USA
- Baidyanath Panda
- LTIMindtree, 1 American Row, 3Rd Floor, Hartford, CT, 06103, USA
- Paolo Barsocchi
- Institute of Information Science and Technologies, National Research Council, 56124, Pisa, Italy.
- Akash Kumar Bhoi
- Directorate of Research, Sikkim Manipal University, Gangtok, 737102, Sikkim, India.
2
Yun R, Rembado I, Perlmutter SI, Rao RPN, Fetz EE. Local field potentials and single unit dynamics in motor cortex of unconstrained macaques during different behavioral states. Front Neurosci 2023; 17:1273627. [PMID: 38075283] [PMCID: PMC10702227] [DOI: 10.3389/fnins.2023.1273627]
Abstract
Different sleep stages have been shown to be vital for a variety of brain functions, including learning, memory, and skill consolidation. However, our understanding of neural dynamics during sleep and the role of prominent local field potential (LFP) frequency bands remain incomplete. To elucidate such dynamics and the differences between behavioral states, we collected multichannel LFP and spike data in the primary motor cortex of unconstrained macaques for up to 24 h using a head-fixed brain-computer interface (Neurochip3). Each 8-s bin of time was classified into awake-moving (Move), awake-resting (Rest), REM sleep (REM), or non-REM sleep (NREM) by applying dimensionality reduction and clustering to the average spectral density and the acceleration of the head. LFP power showed high delta during NREM, high theta during REM, and high beta when the animal was awake. Cross-frequency phase-amplitude coupling was typically higher during NREM between all pairs of frequency bands. Two notable exceptions were high delta-high gamma and theta-high gamma coupling during Move, and high theta-beta coupling during REM. Single units showed decreased firing rates during NREM, though with more short inter-spike intervals (ISIs) than in other states. Spike-LFP synchrony showed high delta synchrony during Move, and higher coupling with all other frequency bands during NREM. Altogether, these results reveal potential roles and functions of different LFP bands that have previously been unexplored.
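The phase-amplitude coupling analysis mentioned above is commonly computed as a mean-vector-length statistic on Hilbert phase and amplitude. This is a generic sketch of that measure on synthetic data, not the authors' exact procedure; the band edges and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band, amp_band):
    """Mean-vector-length phase-amplitude coupling: how strongly the
    amplitude envelope of one band locks to the phase of another."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
# gamma amplitude modulated by theta phase -> a coupled signal
coupled = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 60 * t)
uncoupled = theta + 0.3 * np.sin(2 * np.pi * 60 * t)
print(pac_mvl(coupled, fs, (4, 8), (50, 70)) >
      pac_mvl(uncoupled, fs, (4, 8), (50, 70)))
```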
Affiliation(s)
- Richy Yun
- Department of Bioengineering, University of Washington, Seattle, WA, United States
- Center for Neurotechnology, University of Washington, Seattle, WA, United States
- Washington National Primate Research Center, University of Washington, Seattle, WA, United States
- Irene Rembado
- Washington National Primate Research Center, University of Washington, Seattle, WA, United States
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, United States
- Steve I. Perlmutter
- Center for Neurotechnology, University of Washington, Seattle, WA, United States
- Washington National Primate Research Center, University of Washington, Seattle, WA, United States
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, United States
- Rajesh P. N. Rao
- Center for Neurotechnology, University of Washington, Seattle, WA, United States
- Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, United States
- Eberhard E. Fetz
- Department of Bioengineering, University of Washington, Seattle, WA, United States
- Center for Neurotechnology, University of Washington, Seattle, WA, United States
- Washington National Primate Research Center, University of Washington, Seattle, WA, United States
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, United States
3
Ellis CA, Sendi MSE, Zhang R, Carbajal DA, Wang MD, Miller RL, Calhoun VD. Novel methods for elucidating modality importance in multimodal electrophysiology classifiers. Front Neuroinform 2023; 17:1123376. [PMID: 37006636] [PMCID: PMC10050434] [DOI: 10.3389/fninf.2023.1123376]
Abstract
Introduction: Multimodal classification is increasingly common in electrophysiology studies. Many studies use deep learning classifiers with raw time-series data, which makes explainability difficult, and this has resulted in relatively few studies applying explainability methods. This is concerning because explainability is vital to the development and implementation of clinical classifiers. As such, new multimodal explainability methods are needed. Methods: In this study, we train a convolutional neural network for automated sleep stage classification with electroencephalogram (EEG), electrooculogram, and electromyogram data. We then present a global explainability approach that is uniquely adapted for electrophysiology analysis and compare it to an existing approach. We also present the first two local multimodal explainability approaches. We look for subject-level differences in the local explanations that are obscured by global methods and look for relationships between the explanations and clinical and demographic variables in a novel analysis. Results: We find a high level of agreement between methods. We find that EEG is globally the most important modality for most sleep stages and that subject-level differences in importance arise in local explanations that are not captured in global explanations. We further show that sex, followed by medication and age, had significant effects upon the patterns learned by the classifier. Discussion: Our novel methods enhance explainability for the growing field of multimodal electrophysiology classification, provide avenues for the advancement of personalized medicine, yield unique insights into the effects of demographic and clinical variables upon classifiers, and help pave the way for the implementation of multimodal electrophysiology clinical classifiers.
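One common way to estimate global modality importance, in the spirit of the analysis described here, is to permute one modality's feature block and measure the accuracy drop. This sketch uses a generic random forest and mock feature groups, not the paper's CNN or its data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
y = rng.integers(0, 3, n)                       # three mock sleep stages
# mock feature groups: EEG (informative), EOG (weak), EMG (pure noise)
eeg = rng.normal(size=(n, 4)) + y[:, None]
eog = rng.normal(size=(n, 2)) + 0.2 * y[:, None]
emg = rng.normal(size=(n, 2))
X = np.hstack([eeg, eog, emg])
groups = {"EEG": slice(0, 4), "EOG": slice(4, 6), "EMG": slice(6, 8)}

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
base = clf.score(Xte, yte)

# global modality importance: accuracy drop when one modality is permuted
imp = {}
for name, sl in groups.items():
    Xp = Xte.copy()
    Xp[:, sl] = Xp[rng.permutation(len(Xp)), sl]   # break feature-label link
    imp[name] = base - clf.score(Xp, yte)
print(max(imp, key=imp.get))   # EEG should dominate, mirroring the abstract
```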
Affiliation(s)
- Charles A. Ellis
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- Correspondence: Charles A. Ellis
- Mohammad S. E. Sendi
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- McLean Hospital and Harvard Medical School, Boston, MA, United States
- Rongen Zhang
- Hankamer School of Business, Baylor University, Waco, TX, United States
- Darwin A. Carbajal
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- May D. Wang
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- Robyn L. Miller
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- Department of Computer Science, Georgia State University, Atlanta, GA, United States
- Vince D. Calhoun
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- Department of Computer Science, Georgia State University, Atlanta, GA, United States
4
Murugan S, Sivakumar PK, Kavitha C, Harichandran A, Lai WC. An Electro-Oculogram (EOG) Sensor's Ability to Detect Driver Hypovigilance Using Machine Learning. Sensors (Basel) 2023; 23:2944. [PMID: 36991654] [PMCID: PMC10058593] [DOI: 10.3390/s23062944]
Abstract
Driving safely is crucial to avoid the deaths, injuries, and financial losses that accidents can cause. To prevent accidents, a driver's physiological state should therefore be monitored directly, rather than relying on vehicle-based or behavioral measurements, as it provides more reliable information. Electrocardiography (ECG), electroencephalography (EEG), electrooculography (EOG), and surface electromyography (sEMG) signals can be used to monitor a driver's physical state during a drive. The purpose of this study was to detect driver hypovigilance (drowsiness, fatigue, and visual and cognitive inattention) using signals collected from 10 drivers while they were driving. The drivers' EOG signals were preprocessed to remove noise, and 17 features were extracted. ANOVA (analysis of variance) was used to select statistically significant features, which were then passed to the machine learning stage. We reduced the features using principal component analysis (PCA) and trained three classifiers: support vector machine (SVM), k-nearest neighbor (KNN), and an ensemble. A maximum accuracy of 98.7% was obtained for two-class detection (normal vs. cognitive inattention). When hypovigilance was treated as a five-class problem, a maximum accuracy of 90.9% was achieved; as the number of detection classes increased, the accuracy of identifying the driver's state decreased. Across these conditions, the ensemble classifier achieved higher accuracy than the individual classifiers.
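The described pipeline (ANOVA feature selection, PCA reduction, then SVM/KNN/ensemble classifiers) maps naturally onto scikit-learn. The dataset, feature counts, and hyperparameters below are placeholders standing in for the study's 17 EOG features, not its actual configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in for 17 EOG features over two hypovigilance classes
X, y = make_classification(n_samples=400, n_features=17, n_informative=6,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

svm = SVC(probability=True, random_state=0)
knn = KNeighborsClassifier()
ens = VotingClassifier([("svm", svm), ("knn", knn)], voting="soft")

scores = {}
for name, est in [("SVM", svm), ("KNN", knn), ("ensemble", ens)]:
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=10),   # ANOVA F-test selection
                         PCA(n_components=5),            # PCA reduction
                         est)
    pipe.fit(Xtr, ytr)
    scores[name] = pipe.score(Xte, yte)
print(scores)
```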
Affiliation(s)
- Suganiya Murugan
- Department of Computing Technologies, SRM Institute of Science and Technology—KTR, Chennai 603203, India
- Pradeep Kumar Sivakumar
- Department of Electrical and Electronics Engineering, Vels Institute of Science Technology and Advanced Studies, Chennai 600117, India
- C. Kavitha
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai 600119, India
- Anandhi Harichandran
- Department of Biomedical Engineering, Agni College of Technology, Chennai 600130, India
- Wen-Cheng Lai
- Bachelor Program in Industrial Projects, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
- Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
5
Validation Study on Automated Sleep Stage Scoring Using a Deep Learning Algorithm. Medicina (Kaunas) 2022; 58:779. [PMID: 35744042] [PMCID: PMC9228793] [DOI: 10.3390/medicina58060779]
Abstract
Background and Objectives: Polysomnography is manually scored by sleep experts. However, manual scoring is a time-consuming and labor-intensive task. The goal of this study was to verify the accuracy of automated sleep-stage scoring based on a deep learning algorithm compared to manual sleep-stage scoring. Materials and Methods: A total of 602 polysomnography datasets from subjects (Male:Female = 397:205) aged 19 to 65 years (mean age 43.8, standard deviation 12.2) were included in the study. The performance of the proposed model was evaluated based on the kappa value and a bootstrapped point-estimate of median percent agreement with a 95% bootstrap confidence interval (R = 1000 resamples). The proposed model was trained using 482 datasets and validated using 48 datasets. For testing, 72 datasets were selected randomly. Results: The proposed model exhibited good concordance rates with manual scoring for stages W (94%), N1 (83.9%), N2 (89%), N3 (92%), and R (93%). The average kappa value was 0.84. For the bootstrap method, high overall agreement between the automated deep learning algorithm and manual scoring was observed in stages W (98%), N1 (94%), N2 (92%), N3 (99%), and R (98%), and in total (96%). Conclusions: Automated sleep-stage scoring using the proposed model may be a reliable method for sleep-stage classification.
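The evaluation metrics described here, Cohen's kappa plus a bootstrapped percent-agreement estimate with a 95% CI and R = 1000 resamples, can be sketched on mock scorer outputs as follows; the disagreement rate and stage distribution are invented for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
stages = np.array(["W", "N1", "N2", "N3", "R"])
manual = rng.choice(stages, 1000, p=[.15, .10, .45, .15, .15])
auto = manual.copy()
flip = rng.random(1000) < 0.1                 # mock ~10% scorer disagreement
auto[flip] = rng.choice(stages, flip.sum())

# chance-corrected agreement between the two scorers
kappa = cohen_kappa_score(manual, auto)

# bootstrapped median percent agreement with a 95% CI, R = 1000 resamples
R = 1000
agree = (manual == auto).astype(float)
boot = np.array([agree[rng.integers(0, len(agree), len(agree))].mean()
                 for _ in range(R)]) * 100
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(kappa, 2), round(float(np.median(boot)), 1),
      (round(float(lo), 1), round(float(hi), 1)))
```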
6
Duan L, Li M, Wang C, Qiao Y, Wang Z, Sha S, Li M. A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion. Front Hum Neurosci 2021; 15:727139. [PMID: 34690720] [PMCID: PMC8531206] [DOI: 10.3389/fnhum.2021.727139]
Abstract
Sleep staging is an important step in the diagnosis and treatment of sleep diseases. However, manual staging is laborious and time-consuming, so computer-assisted sleep staging is necessary. Most existing sleep staging research uses hand-engineered features that rely on prior knowledge of sleep analysis, and usually a single-channel electroencephalogram (EEG) is used for the staging task. Prior knowledge is not always available, however, and a single-channel EEG signal cannot fully represent the patient's physiological state during sleep. To tackle these two problems, we propose an automatic sleep staging network model based on data adaptation and multimodal feature fusion using EEG and electrooculogram (EOG) signals. A 3D-CNN is used to extract the time-frequency features of the EEG at different time scales, and an LSTM is used to learn the frequency evolution of the EOG. The nonlinear relationship between the high-level features of the EEG and EOG is fitted by a deep probabilistic network. Experiments on SLEEP-EDF and a private dataset show that the proposed model achieves state-of-the-art performance, and its predictions are consistent with expert diagnoses.
Affiliation(s)
- Lijuan Duan
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing Key Laboratory of Trusted Computing, Beijing, China
- National Engineering Laboratory for Critical Technologies of Information Security Classified Protection, Beijing, China
- Mengying Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing Key Laboratory of Trusted Computing, Beijing, China
- National Engineering Laboratory for Critical Technologies of Information Security Classified Protection, Beijing, China
- Changming Wang
- Brain-Inspired Intelligence and Clinical Translational Research Center, Beijing, China
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Yuanhua Qiao
- College of Applied Sciences, Beijing University of Technology, Beijing, China
- Zeyu Wang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing Key Laboratory of Trusted Computing, Beijing, China
- National Engineering Laboratory for Critical Technologies of Information Security Classified Protection, Beijing, China
- Sha Sha
- Beijing Anding Hospital, Capital Medical University, Beijing, China
- Mingai Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
7
Pathak S, Lu C, Nagaraj SB, van Putten M, Seifert C. STQS: Interpretable multi-modal Spatial-Temporal-seQuential model for automatic Sleep scoring. Artif Intell Med 2021; 114:102038. [PMID: 33875157] [DOI: 10.1016/j.artmed.2021.102038]
Abstract
Sleep scoring is an important step for the detection of sleep disorders and usually performed by visual analysis. Since manual sleep scoring is time consuming, machine-learning based approaches have been proposed. Though efficient, these algorithms are black-box in nature and difficult to interpret by clinicians. In this paper, we propose a deep learning architecture for multi-modal sleep scoring, investigate the model's decision making process, and compare the model's reasoning with the annotation guidelines in the AASM manual. Our architecture, called STQS, uses convolutional neural networks (CNN) to automatically extract spatio-temporal features from 3 modalities (EEG, EOG and EMG), a bidirectional long short-term memory (Bi-LSTM) to extract sequential information, and residual connections to combine spatio-temporal and sequential features. We evaluated our model on two large datasets, obtaining an accuracy of 85% and 77% and a macro F1 score of 79% and 73% on SHHS and an in-house dataset, respectively. We further quantify the contribution of various architectural components and conclude that adding LSTM layers improves performance over a spatio-temporal CNN, while adding residual connections does not. Our interpretability results show that the output of the model is well aligned with AASM guidelines, and therefore, the model's decisions correspond to domain knowledge. We also compare multi-modal models and single-channel models and suggest that future research should focus on improving multi-modal models.
Affiliation(s)
- Michel van Putten
- University of Twente, Netherlands; Medisch Spectrum Twente, Netherlands
- Christin Seifert
- University of Twente, Netherlands; University of Duisburg-Essen, Germany
8
Imtiaz SA. A Systematic Review of Sensing Technologies for Wearable Sleep Staging. Sensors (Basel) 2021; 21:1562. [PMID: 33668118] [PMCID: PMC7956647] [DOI: 10.3390/s21051562]
Abstract
Designing wearable systems for sleep detection and staging is extremely challenging due to the numerous constraints associated with sensing, usability, accuracy, and regulatory requirements. Several researchers have explored the use of signals from a subset of sensors that are used in polysomnography (PSG), whereas others have demonstrated the feasibility of using alternative sensing modalities. In this paper, a systematic review of the different sensing modalities that have been used for wearable sleep staging is presented. Based on a review of 90 papers, 13 different sensing modalities are identified. Each sensing modality is explored to identify signals that can be obtained from it, the sleep stages that can be reliably identified, the classification accuracy of systems and methods using the sensing modality, as well as the usability constraints of the sensor in a wearable system. It concludes that the two most common sensing modalities in use are those based on electroencephalography (EEG) and photoplethysmography (PPG). EEG-based systems are the most accurate, with EEG being the only sensing modality capable of identifying all the stages of sleep. PPG-based systems are much simpler to use and better suited for wearable monitoring but are unable to identify all the sleep stages.
Affiliation(s)
- Syed Anas Imtiaz
- Wearable Technologies Lab, Imperial College London, London SW7 2AZ, UK
9
Sleep staging from single-channel EEG with multi-scale feature and contextual information. Sleep Breath 2019; 23:1159-1167. [PMID: 30863994] [DOI: 10.1007/s11325-019-01789-4]
Abstract
PURPOSE: Portable sleep monitoring devices with fewer attached sensors and high-accuracy sleep staging methods can expedite sleep disorder diagnosis. The aim of this study was to propose a single-channel EEG sleep staging model, SleepStageNet, which extracts sleep EEG features with multi-scale convolutional neural networks (CNN) and then infers the sleep stage type by capturing the contextual information between adjacent epochs using recurrent neural networks (RNN) and a conditional random field (CRF). METHODS: To verify the feasibility of our model, two datasets were examined: one comprising two different single-channel EEGs (Fpz-Cz and Pz-Oz) from 20 healthy people, and one comprising a single-channel EEG (F4-M1) from 104 obstructive sleep apnea (OSA) patients with different severities. The corresponding sleep stages were scored as four states (wake, REM, light sleep, and deep sleep). Accuracy measures were obtained from epoch-by-epoch comparison between the model and the PSG scorer, and the agreement between them was quantified with Cohen's kappa (κ). RESULTS: Our model achieved superior performance, with average accuracy (Fpz-Cz, 0.88; Pz-Oz, 0.85) and κ (Fpz-Cz, 0.82; Pz-Oz, 0.77) on the healthy people. We further validated this model on the OSA patients, with average accuracy (F4-M1, 0.80) and κ (F4-M1, 0.67). Our model significantly improved accuracy and κ compared to previous methods. CONCLUSIONS: The proposed SleepStageNet has proved feasible for assessment of sleep architecture among OSA patients using single-channel EEG. We suggest that this technological advancement could augment the current use of home sleep apnea testing.
10
Yan R, Zhang C, Spruyt K, Wei L, Wang Z, Tian L, Li X, Ristaniemi T, Zhang J, Cong F. Multi-modality of polysomnography signals’ fusion for automatic sleep scoring. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.10.001]
11
Mohammadi SM, Enshaeifar S, Ghavami M, Sanei S. Classification of awake, REM, and NREM from EEG via singular spectrum analysis. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2015:4769-72. [PMID: 26737360] [DOI: 10.1109/embc.2015.7319460]
Abstract
In this study, a single-channel electroencephalography (EEG) analysis method is proposed for automated three-state sleep classification, discriminating Awake, NREM (non-rapid eye movement), and REM (rapid eye movement). For this purpose, singular spectrum analysis (SSA) is applied to automatically extract four brain rhythms: delta, theta, alpha, and beta. These subbands are then used to generate the appropriate features for sleep classification using a multi-class support vector machine (M-SVM). The proposed method achieved 0.79 agreement between the manual and automatic scores.
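A minimal singular spectrum analysis, the decomposition this abstract relies on for subband extraction, can be written in a few lines of NumPy. The window length and test signal are arbitrary choices; a real subband step would additionally group components into delta/theta/alpha/beta by their dominant frequencies.

```python
import numpy as np

def ssa(x, L):
    """Basic SSA: embed into an L x K trajectory matrix, take the SVD, and
    reconstruct one additive component per singular triple via diagonal
    averaging (Hankelization)."""
    N = len(x)
    K = N - L + 1
    traj = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # average each anti-diagonal (i + j = n) back to a length-N series
        comp = np.array([np.mean(Xk[::-1].diagonal(n - L + 1)) for n in range(N)])
        comps.append(comp)
    return np.array(comps)

fs = 100
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)  # delta + beta mix
comps = ssa(x, L=30)
print(np.allclose(comps.sum(axis=0), x))   # components add back to the signal
```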
12
Imtiaz SA, Rodriguez-Villegas E. A low computational cost algorithm for REM sleep detection using single channel EEG. Ann Biomed Eng 2014; 42:2344-59. [PMID: 25113231] [PMCID: PMC4204008] [DOI: 10.1007/s10439-014-1085-6]
Abstract
The push towards low-power, wearable sleep systems requires using a minimum number of recording channels to enhance battery life, keep the processing load small, and be more comfortable for the user. Since most sleep stages can be identified using EEG traces, enormous power savings could be achieved by using a single channel of EEG. However, detection of REM sleep from single-channel EEG is challenging due to its electroencephalographic similarities with the N1 and Wake stages. In this paper we investigate a novel feature in sleep EEG that demonstrates high discriminatory ability for detecting REM phases. We then use this feature, based on the spectral edge frequency (SEF) in the 8-16 Hz band, together with the absolute power and the relative power of the signal, to develop a simple REM detection algorithm. We evaluate the performance of the proposed algorithm on overnight single-channel EEG recordings of 5 training and 15 independent test subjects. Our algorithm achieved a sensitivity of 83%, a specificity of 89%, and a selectivity of 61% on a test database consisting of 2221 REM epochs. It also achieved a sensitivity of 81% and a selectivity of 75% on the PhysioNet Sleep-EDF database consisting of 8 subjects. These results demonstrate that SEF can be a useful feature for automatic detection of REM sleep from a single channel of EEG.
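The core feature of this paper, spectral edge frequency in the 8-16 Hz band, is straightforward to compute from a Welch periodogram. This sketch assumes a 95% edge and illustrative test signals rather than the authors' exact parameters.

```python
import numpy as np
from scipy.signal import welch

def sef(x, fs, band=(8, 16), edge=0.95):
    """Spectral edge frequency: the frequency below which a fraction
    `edge` of the band-limited power lies (SEF95 here)."""
    f, pxx = welch(x, fs, nperseg=fs * 2)
    m = (f >= band[0]) & (f <= band[1])
    cum = np.cumsum(pxx[m]) / pxx[m].sum()
    return f[m][np.searchsorted(cum, edge)]

fs = 100
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
alpha = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
spindle = np.sin(2 * np.pi * 14 * t) + 0.1 * rng.normal(size=t.size)
print(sef(alpha, fs), sef(spindle, fs))   # the edge tracks the dominant rhythm
```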
Affiliation(s)
- Syed Anas Imtiaz
- Department of Electrical and Electronic Engineering, Imperial College London, London, UK
13
Ebrahimi F, Mikaeili M, Estrada E, Nazeran H. Automatic sleep stage classification based on EEG signals by using neural networks and wavelet packet coefficients. Annu Int Conf IEEE Eng Med Biol Soc 2009; 2008:1151-4. [PMID: 19162868] [DOI: 10.1109/iembs.2008.4649365]
Abstract
Currently there is an alarming number of people in the world who suffer from sleep disorders. A number of biomedical signals, such as EEG, EMG, ECG, and EOG, are used in sleep labs, among other settings, for the diagnosis and treatment of sleep-related disorders. The usual method for sleep stage classification is visual inspection by a sleep specialist. This is a very time-consuming and laborious exercise. Automatic sleep stage classification can facilitate this process. The definition of sleep stages and the sleep literature show that EEG signals are similar in Stage 1 of non-rapid eye movement (NREM) sleep and rapid eye movement (REM) sleep. Therefore, in this work an attempt was made to classify four sleep stages, consisting of Awake, Stage 1 + REM, Stage 2, and Slow Wave Stage, based on the EEG signal alone. Wavelet packet coefficients and artificial neural networks were deployed for this purpose. Seven all-night recordings from the PhysioNet database were used in the study. The results demonstrated that these four sleep stages could be automatically discriminated from each other with a specificity of 94.4 +/- 4.5%, a sensitivity of 84.2 +/- 3.9%, and an accuracy of 93.0 +/- 4.0%.
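The wavelet packet features used here can be illustrated with a Haar wavelet packet transform, a simpler stand-in for whichever mother wavelet the authors actually used, with per-leaf energies serving as classifier inputs.

```python
import numpy as np

def haar_wp(x, level):
    """One-shot Haar wavelet packet transform: recursively split every
    node into (approximation, detail) with the orthonormal Haar pair."""
    nodes = [np.asarray(x, float)]
    for _ in range(level):
        nxt = []
        for s in nodes:
            a = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass / approximation
            d = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass / detail
            nxt += [a, d]
        nodes = nxt
    return nodes                                    # 2**level leaf arrays

rng = np.random.default_rng(0)
x = rng.normal(size=256)                            # stand-in for a 30-s EEG epoch
leaves = haar_wp(x, level=3)
energies = [float(np.sum(c ** 2)) for c in leaves]  # per-subband features
# the orthonormal transform preserves total energy (Parseval)
print(len(leaves), np.isclose(sum(energies), np.sum(x ** 2)))
```

The `energies` vector would then feed the neural network classifier in place of the raw coefficients.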
Affiliation(s)
- Farideh Ebrahimi
- Biomedical Engineering Department, Shahed University, Tehran, Iran