1
Kim JW, Kim S, Cho E, Kyung H, Park SK, Hong J, Lee D, Oh J, Yoon IY. Evaluation of sound-based sleep stage prediction in shared sleeping settings. Sleep Med 2025; 132:106533. [PMID: 40315671 DOI: 10.1016/j.sleep.2025.106533]
Abstract
BACKGROUND/OBJECTIVE Sound-based AI models for sleep staging face challenges in shared sleeping environments due to acoustic interference from bed partners. This study aimed to evaluate the performance of a sound-based model in two-person polysomnography (PSG) scenarios, with independently recorded sound data for each participant. METHODS Eighty-eight participants (37 males, 51 females) were recruited, including 74 from mixed-gender pairs and 14 from all-female pairs. Bed partners underwent simultaneous PSG in a shared room, with sound recorded separately for each participant using MEMS microphones placed 1.2 m from the bed, oriented toward the closest participant. Sleep staging was classified into 4-stage (wake, REM, light NREM, deep NREM), 3-stage (wake, REM, NREM), and 2-stage (wake, sleep) categories. Macro F1 scores were used to evaluate the model's performance. RESULTS The model achieved mean macro F1 scores of 0.590, 0.665, and 0.741 and Cohen's kappa of 0.470, 0.525, and 0.499 for 4-stage, 3-stage, and 2-stage classifications, respectively, across subjects. Performance for 4-stage classification varied by group composition, with macro F1 score and Cohen's kappa of 0.585 and 0.458 for mixed-gender pairs and 0.616 and 0.529 for all-female pairs. Subgroup analyses revealed higher macro F1 scores in males (0.683) compared to females (0.523) and in individuals with higher BMI (0.674), higher AHI (0.659), and higher sleep efficiency (0.641). CONCLUSION This study demonstrates the ability of a sound-based model to predict sleep stages effectively in shared sleeping environments, overcoming the interference challenges from bed partners. Future research will aim to refine the model for broader demographic applicability.
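The headline metrics here, macro F1 over the stage classes and Cohen's kappa, can be reproduced from epoch-level labels with standard tooling; the sketch below uses scikit-learn on hypothetical label arrays (not the study's data) and also shows how the 4-stage labels collapse into the 3- and 2-stage schemes.

```python
# Minimal sketch: macro F1 and Cohen's kappa for epoch-level sleep-stage
# predictions (hypothetical arrays; not the authors' code or data).
import numpy as np
from sklearn.metrics import f1_score, cohen_kappa_score

STAGES = ["wake", "REM", "light_NREM", "deep_NREM"]  # 4-stage scheme

# y_true / y_pred would hold one integer stage label per 30-s epoch.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=960)          # ~8 h of 30-s epochs
y_pred = np.where(rng.random(960) < 0.7, y_true, rng.integers(0, 4, size=960))

macro_f1 = f1_score(y_true, y_pred, average="macro")   # unweighted mean of per-class F1
kappa = cohen_kappa_score(y_true, y_pred)               # chance-corrected agreement

# Collapsing 4 stages to 3 (wake / REM / NREM) or 2 (wake / sleep) is a label remap:
to_3 = {0: 0, 1: 1, 2: 2, 3: 2}
to_2 = {0: 0, 1: 1, 2: 1, 3: 1}
f1_3 = f1_score([to_3[s] for s in y_true], [to_3[s] for s in y_pred], average="macro")
f1_2 = f1_score([to_2[s] for s in y_true], [to_2[s] for s in y_pred], average="macro")
print(macro_f1, kappa, f1_3, f1_2)
```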
Affiliation(s)
- Jeong-Whun Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
- Jayoung Oh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
- In-Young Yoon
- Department of Psychiatry, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea.
2
Szmola B, Hornig L, Wolf KI, Radeloff A, Witt K, Kollmeier B. Feasibility of Radar Vital Sign Monitoring Using Multiple Range Bin Selection. Sensors (Basel) 2025; 25:2596. [PMID: 40285284 PMCID: PMC12031119 DOI: 10.3390/s25082596]
Abstract
Radars are promising tools for contactless vital sign monitoring. As a screening device, radars could supplement polysomnography, the gold standard in sleep medicine. When the radar is placed lateral to the person, vital signs can be extracted simultaneously from multiple body parts. Here, we present a method to select every available breathing and heartbeat signal, instead of selecting only one optimal signal. Using multiple concurrent signals can enhance vital rate robustness and accuracy. We built an algorithm based on persistence diagrams, a modern tool for time series analysis from the field of topological data analysis. Multiple criteria were evaluated on the persistence diagrams to detect breathing and heartbeat signals. We tested the feasibility of the method on simultaneous overnight radar and polysomnography recordings from six healthy participants. Compared against single bin selection, multiple selection led to improved accuracy for both breathing (mean absolute error: 0.29 vs. 0.20 breaths per minute) and heart rate (mean absolute error: 1.97 vs. 0.66 beats per minute). Additionally, fewer artifactual segments were selected. Furthermore, the distribution of chosen vital signs along the body aligned with basic physiological assumptions. In conclusion, contactless vital sign monitoring could benefit from the improved accuracy achieved by multiple selection. The distribution of vital signs along the body could provide additional information for sleep monitoring.
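To illustrate why fusing several range bins can beat a single "best" bin, the sketch below selects bins by a simple spectral-prominence criterion and fuses their rate estimates with a median; this stands in for the persistence-diagram analysis used in the paper, and every signal here is simulated.

```python
# Illustrative sketch: fusing breathing-rate estimates from several radar
# range bins (median across bins) and comparing the error against the best
# single bin. The bin-selection criterion (spectral peak prominence) is a
# stand-in for the persistence-diagram analysis described in the paper.
import numpy as np

def dominant_rate_bpm(x, fs):
    """Return the dominant oscillation rate (per minute) of a 1-D signal."""
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs > 0.1) & (freqs < 0.7)          # plausible breathing band, 6-42 bpm
    return 60.0 * freqs[band][np.argmax(spec[band])]

fs = 20.0                                          # radar slow-time sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
true_bpm = 15.0
rng = np.random.default_rng(1)
# Simulated range bins: several carry the breathing motion, others mostly noise.
bins = [np.sin(2 * np.pi * true_bpm / 60 * t) + 0.3 * rng.standard_normal(t.size)
        for _ in range(4)] + [rng.standard_normal(t.size) for _ in range(4)]

rates = np.array([dominant_rate_bpm(b, fs) for b in bins])
# Keep bins whose spectral peak is strong relative to their median spectrum level.
strengths = [np.abs(np.fft.rfft(b - b.mean())).max() / np.median(np.abs(np.fft.rfft(b)))
             for b in bins]
selected = rates[np.array(strengths) > np.median(strengths)]

fused = np.median(selected)                        # multiple-bin estimate
single = rates[int(np.argmax(strengths))]          # best single bin
print("MAE multi:", abs(fused - true_bpm), "MAE single:", abs(single - true_bpm))
```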
Affiliation(s)
- Benedek Szmola
- Department of Neurology, School of Medicine and Health Science, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Fraunhofer Institute for Digital Media Technology IDMT, Oldenburg Branch for Hearing, Speech and Audio Technology HSA, Marie-Curie-Straße 2, 26129 Oldenburg, Germany
- Lars Hornig
- Fraunhofer Institute for Digital Media Technology IDMT, Oldenburg Branch for Hearing, Speech and Audio Technology HSA, Marie-Curie-Straße 2, 26129 Oldenburg, Germany
- Karen Insa Wolf
- Fraunhofer Institute for Digital Media Technology IDMT, Oldenburg Branch for Hearing, Speech and Audio Technology HSA, Marie-Curie-Straße 2, 26129 Oldenburg, Germany
- Andreas Radeloff
- Division of Otolaryngology, Head and Neck Surgery, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Karsten Witt
- Department of Neurology, School of Medicine and Health Science, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Birger Kollmeier
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
3
Baumert M, Phan H. A perspective on automated rapid eye movement sleep assessment. J Sleep Res 2025; 34:e14223. [PMID: 38650539 PMCID: PMC11911057 DOI: 10.1111/jsr.14223]
Abstract
Rapid eye movement sleep is associated with distinct changes in various biomedical signals that can be easily captured during sleep, lending themselves to automated sleep staging using machine learning systems. Here, we provide a perspective on the critical characteristics of biomedical signals associated with rapid eye movement sleep and how they can be exploited for automated sleep assessment. We summarise key historical developments in automated sleep staging systems, which have now achieved classification accuracy on par with human expert scorers, and their role in the clinical setting. We also discuss rapid eye movement sleep assessment with consumer sleep trackers and its potential for unprecedented sleep assessment on a global scale. We conclude by providing a future outlook of computerised rapid eye movement sleep assessment and the role AI systems may play.
Affiliation(s)
- Mathias Baumert
- Discipline of Biomedical Engineering, School of Electrical and Mechanical Engineering, The University of Adelaide, Adelaide, Australia
4
Lee J, Kim HC, Lee YJ, Lee S. Development of generalizable automatic sleep staging using heart rate and movement based on large databases. Biomed Eng Lett 2023; 13:649-658. [PMID: 37872992 PMCID: PMC10590335 DOI: 10.1007/s13534-023-00288-6]
Abstract
Purpose With the advancement of deep neural networks in biosignals processing, the performance of automatic sleep staging algorithms has improved significantly. However, sleep staging using only non-electroencephalogram features has not been as successful, especially following the current American Academy of Sleep Medicine (AASM) standards. This study presents a fine-tuning based approach to widely generalizable automatic sleep staging using heart rate and movement features trained and validated on large databases of polysomnography. Methods A deep neural network is used to predict sleep stages using heart rate and movement features. The model is optimized on a dataset of 8731 nights of polysomnography recordings labeled using the Rechtschaffen & Kales scoring system, and fine-tuned to a smaller dataset of 1641 AASM-labeled recordings. The model prior to and after fine-tuning is validated on two AASM-labeled external datasets totaling 1183 recordings. In order to measure the performance of the model, the output of the optimized model is compared to reference expert-labeled sleep stages using accuracy and Cohen's κ as key metrics. Results The fine-tuned model showed accuracy of 76.6% with Cohen's κ of 0.606 in one of the external validation datasets, outperforming a previously reported result, and showed accuracy of 81.0% with Cohen's κ of 0.673 in another external validation dataset. Conclusion These results indicate that the proposed model is generalizable and effective in predicting sleep stages using features which can be extracted from non-contact sleep monitors. This holds valuable implications for future development of home sleep evaluation systems.
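The pretrain-then-fine-tune recipe described here can be sketched generically in PyTorch: load a model trained on the large R&K-labeled corpus, freeze the feature extractor, and retrain the classification head on the smaller AASM-labeled set. The architecture, layer names, checkpoint path, and hyperparameters below are illustrative assumptions, not the authors' network.

```python
# Sketch of the fine-tuning idea: reuse a feature extractor trained on a large
# source dataset and retrain only the classifier head on a smaller target
# dataset. Architecture and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

class SleepStager(nn.Module):
    def __init__(self, n_features=8, n_stages=4):
        super().__init__()
        # Encoder over per-epoch heart-rate / movement features.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_stages)

    def forward(self, x):
        return self.head(self.encoder(x))

model = SleepStager()
# model.load_state_dict(torch.load("pretrained_rk.pt"))  # hypothetical checkpoint

# Fine-tuning: freeze the encoder, retrain the head on AASM-labeled epochs,
# using a small learning rate to avoid catastrophic forgetting.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 8)                 # batch of epoch-level feature vectors
y = torch.randint(0, 4, (32,))         # AASM stage labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```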
Affiliation(s)
- Hee Chan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080 South Korea
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, 08826 South Korea
- Yu Jin Lee
- Department of Neuropsychiatry, Seoul National University Hospital, Seoul, 03080 South Korea
- Center for Sleep and Chronobiology, Seoul National University Hospital, Seoul, 03080 South Korea
- Saram Lee
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, Seoul, 03080 South Korea
5
Khanna A, Jones G. Toward Personalized Medicine Approaches for Parkinson Disease Using Digital Technologies. JMIR Form Res 2023; 7:e47486. [PMID: 37756050 PMCID: PMC10568402 DOI: 10.2196/47486]
Abstract
Parkinson disease (PD) is a complex neurodegenerative disorder that afflicts over 10 million people worldwide, resulting in debilitating motor and cognitive impairment. In the United States alone (with approximately 1 million cases), the economic burden for treating and caring for persons with PD exceeds US $50 billion and myriad therapeutic approaches are under development, including both symptomatic- and disease-modifying agents. The challenges presented in addressing PD are compounded by observations that numerous, statistically distinct patient phenotypes present with a wide variety of motor and nonmotor symptomatic profiles, varying responses to current standard-of-care symptom-alleviating medications (L-DOPA and dopaminergic agonists), and different disease trajectories. The existence of these differing phenotypes highlights the opportunities in personalized approaches to symptom management and disease control. The prodromal period of PD can span across several decades, allowing the potential to leverage the unique array of composite symptoms presented to trigger early interventions. This may be especially beneficial as disease progression in PD (alongside Alzheimer disease and Huntington disease) may be influenced by biological processes such as oxidative stress, offering the potential for individual lifestyle factors to be tailored to delay disease onset. In this viewpoint, we offer potential scenarios where emerging diagnostic and monitoring strategies might be tailored to the individual patient under the tenets of P4 medicine (predict, prevent, personalize, and participate). These approaches may be especially relevant as the causative factors and biochemical pathways responsible for the observed neurodegeneration in patients with PD remain areas of fluid debate. The numerous observational patient cohorts established globally offer an excellent opportunity to test and refine approaches to detect, characterize, control, modify the course, and ultimately stop progression of this debilitating disease. Such approaches may also help development of parallel interventive strategies in other diseases such as Alzheimer disease and Huntington disease, which share common traits and etiologies with PD. In this overview, we highlight near-term opportunities to apply P4 medicine principles for patients with PD and introduce the concept of composite orthogonal patient monitoring.
Affiliation(s)
- Amit Khanna
- Neuroscience Global Drug Development, Novartis Pharma AG, Basel, Switzerland
- Graham Jones
- GDD Connected Health and Innovation Group, Novartis Pharmaceuticals, East Hanover, NJ, United States
- Clinical and Translational Science Institute, Tufts University Medical Center, Boston, MA, United States
6
Yoon H, Choi SH. Technologies for sleep monitoring at home: wearables and nearables. Biomed Eng Lett 2023; 13:313-327. [PMID: 37519880 PMCID: PMC10382403 DOI: 10.1007/s13534-023-00305-8]
Abstract
Sleep is an essential part of our lives and daily sleep monitoring is crucial for maintaining good health and well-being. Traditionally, the gold standard method for sleep monitoring is polysomnography using various sensors attached to the body; however, it is limited with regard to long-term sleep monitoring in a home environment. Recent advancements in wearable and nearable technology have made it possible to monitor sleep at home. In this review paper, the technologies currently available for monitoring sleep stages and sleep disorders at home with wearable and nearable devices are reviewed. Wearables are devices that are worn on the body, while nearables are placed near the body. These devices can accurately monitor sleep stages and sleep disorders in a home environment. The benefits and limitations of each technology are discussed, along with their potential to improve sleep quality.
Affiliation(s)
- Heenam Yoon
- Department of Human-Centered Artificial Intelligence, Sangmyung University, Seoul, 03016 Korea
- Sang Ho Choi
- School of Computer and Information Engineering, Kwangwoon University, Seoul, 01897 Korea
7
Lambert I, Peter-Derex L. Spotlight on Sleep Stage Classification Based on EEG. Nat Sci Sleep 2023; 15:479-490. [PMID: 37405208 PMCID: PMC10317531 DOI: 10.2147/nss.s401270]
Abstract
The recommendations for identifying sleep stages based on the interpretation of electrophysiological signals (electroencephalography [EEG], electro-oculography [EOG], and electromyography [EMG]), derived from the Rechtschaffen and Kales manual, were published in 2007 at the initiative of the American Academy of Sleep Medicine, and regularly updated over years. They offer an important tool to assess objective markers in different types of sleep/wake subjective complaints. With the aims and advantages of simplicity, reproducibility and standardization of practices in research and, most of all, in sleep medicine, they have overall changed little in the way they describe sleep. However, our knowledge on sleep/wake physiology and sleep disorders has evolved since then. High-density electroencephalography and intracranial electroencephalography studies have highlighted local regulation of sleep mechanisms, with spatio-temporal heterogeneity in vigilance states. Progress in the understanding of sleep disorders has allowed the identification of electrophysiological biomarkers better correlated with clinical symptoms and outcomes than standard sleep parameters. Finally, the huge development of sleep medicine, with a demand for explorations far exceeding the supply, has led to the development of alternative studies, which can be carried out at home, based on a smaller number of electrophysiological signals and on their automatic analysis. In this perspective article, we aim to examine how our description of sleep has been constructed, has evolved, and may still be reshaped in the light of advances in knowledge of sleep physiology and the development of technical recording and analysis tools. After presenting the strengths and limitations of the classification of sleep stages, we propose to challenge the "EEG-EOG-EMG" paradigm by discussing the physiological signals required for sleep stages identification, provide an overview of new tools and automatic analysis methods and propose avenues for the development of new approaches to describe and understand sleep/wake states.
Affiliation(s)
- Isabelle Lambert
- APHM, Timone Hospital, Sleep Unit, Epileptology and Cerebral Rhythmology, Marseille, France
- Aix Marseille University, INSERM, Institut de Neuroscience des Systemes, Marseille, France
- Laure Peter-Derex
- Center for Sleep Medicine and Respiratory Diseases, Croix-Rousse Hospital, Hospices Civils de Lyon, Lyon 1 University, Lyon, France
- Lyon Neuroscience Research Center, PAM Team, INSERM U1028, CNRS UMR 5292, Lyon, France
8
Teplitzky TB, Zauher AJ, Isaiah A. Alternatives to Polysomnography for the Diagnosis of Pediatric Obstructive Sleep Apnea. Diagnostics (Basel) 2023; 13:1956. [PMID: 37296808 DOI: 10.3390/diagnostics13111956]
Abstract
Diagnosis of obstructive sleep apnea (OSA) in children with sleep-disordered breathing (SDB) requires hospital-based, overnight level I polysomnography (PSG). Obtaining a level I PSG can be challenging for children and their caregivers due to the costs, barriers to access, and associated discomfort. Less burdensome methods that approximate pediatric PSG data are needed. The goal of this review is to evaluate and discuss alternatives for evaluating pediatric SDB. To date, wearable devices, single-channel recordings, and home-based PSG have not been validated as suitable replacements for PSG. However, they may play a role in risk stratification or as screening tools for pediatric OSA. Further studies are needed to determine if the combined use of these metrics could predict OSA.
Affiliation(s)
- Taylor B Teplitzky
- Department of Otorhinolaryngology-Head and Neck Surgery, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Audrey J Zauher
- Department of Otorhinolaryngology-Head and Neck Surgery, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Amal Isaiah
- Department of Otorhinolaryngology-Head and Neck Surgery, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Department of Pediatrics, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201, USA
9
Onal M, Onal O. In reference to Prediction of Oxygen Desaturation by Using Sound Data From a Noncontact Device: A Proof-of-Concept Study. Laryngoscope 2023; 133:E23. [PMID: 36317738 DOI: 10.1002/lary.30476]
Affiliation(s)
- Merih Onal
- Department of Otorhinolaryngology, Selcuk University Faculty of Medicine, Konya, Turkey
- Ozkan Onal
- Department of Outcomes Research, Anesthesiology Institute, Cleveland Clinic Main Hospital, Cleveland, Ohio, USA
- Department of Anesthesiology and Reanimation, Selcuk University Faculty of Medicine, Konya, Turkey
10
Akyol S, Yildirim M, Alatas B. Multi-feature fusion and improved BO and IGWO metaheuristics based models for automatically diagnosing the sleep disorders from sleep sounds. Comput Biol Med 2023; 157:106768. [PMID: 36907034 DOI: 10.1016/j.compbiomed.2023.106768]
Abstract
A night of regular, good-quality sleep is vital to human life. Sleep quality has a great impact on the daily lives of people and of those around them. Sounds such as snoring reduce not only the sleep quality of the person producing them but also that of their partner. Sleep disorders can be detected and addressed by examining the sounds that people make at night, but it is very difficult for experts to follow and treat this process manually. Therefore, this study aims to diagnose sleep disorders using computer-aided systems. The data set used in the study contains seven hundred sound recordings covering seven sound classes: cough, farting, laugh, scream, sneeze, sniffle, and snore. In the proposed model, the feature maps of the sound signals are first extracted using three different methods: MFCC, Mel-spectrogram, and Chroma. The features extracted by these three methods are combined, so that three complementary representations of the same sound signal are used, which increases the performance of the proposed model. The combined feature maps are then analyzed using the proposed New Improved Gray Wolf Optimization (NI-GWO), an improved version of the Improved Gray Wolf Optimization (I-GWO) algorithm, and the proposed Improved Bonobo Optimizer (IBO), an improved version of the Bonobo Optimizer (BO), with the aims of running the models faster, reducing the number of features, and obtaining the optimal result. Finally, Support Vector Machine (SVM) and k-nearest neighbors (KNN) supervised shallow machine learning methods were used to calculate the metaheuristic algorithms' fitness values. Metrics such as accuracy, sensitivity, and F1 were used for the performance comparison. Using the feature maps optimized by the proposed NI-GWO and IBO algorithms, the highest accuracy, 99.28%, was obtained from the SVM classifier for both metaheuristic algorithms.
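The feature-fusion step (MFCC + Mel-spectrogram + Chroma concatenated per recording) maps directly onto librosa calls; the sketch below summarizes each representation by its time average and hands the fused vector to an SVM, while the NI-GWO/IBO feature selection specific to the paper is left out, and the file paths are hypothetical.

```python
# Sketch of the multi-feature fusion step: MFCC + Mel-spectrogram + chroma
# summarized over time and concatenated into one vector per recording.
# Metaheuristic feature selection (NI-GWO/IBO) is omitted here.
import numpy as np
import librosa
from sklearn.svm import SVC

def fused_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    # Mean over time for each representation, then concatenate (20 + 64 + 12 dims).
    return np.concatenate([mfcc.mean(axis=1), mel.mean(axis=1), chroma.mean(axis=1)])

# Hypothetical usage with a list of labeled sound clips:
# X = np.stack([fused_features(p) for p in paths])
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```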
Affiliation(s)
- Sinem Akyol
- Department of Software Engineering, Firat University, 23100, Elazig, Turkey
- Muhammed Yildirim
- Department of Computer Engineering, Malatya Turgut Ozal University, 44200, Malatya, Turkey
- Bilal Alatas
- Department of Software Engineering, Firat University, 23100, Elazig, Turkey.
11
Zhu Q, Wada H, Onuki K, Kitazawa T, Furuya R, Miyakawa M, Sato S, Yonemoto N, Ueda Y, Nakano H, Gozal D, Tanigawa T. Validity and reliability of the Japanese version of the severity hierarchy score for pediatric obstructive sleep apnea screening. Sleep Med 2023; 101:357-364. [PMID: 36493656 DOI: 10.1016/j.sleep.2022.11.023]
Abstract
OBJECTIVE This study aimed to evaluate the validity and reliability of the Japanese version of the severity hierarchy score (J-SHS) in the screening of pediatric obstructive sleep apnea (OSA) among Japanese community children. METHODS A total of 922 children from elementary schools in Tokyo were recruited. Their parents completed the J-SHS questionnaire, and the children underwent an overnight Tracheal Sound (TS) recording. The reliability of the J-SHS was assessed by Cronbach's alpha coefficients and Spearman's correlation. Construct validity was determined by factor analysis. The discriminative ability to diagnose OSA was evaluated by constructing ROC curves. RESULTS Five hundred and seventeen children (51.8% male, mean age 7.1 ± 0.7 years) were included. Cronbach's alpha coefficient was 0.80. Factor analysis resulted in a two-factor structure, with factor loadings all above 0.4. A J-SHS score of >1.88 exhibited a 60% sensitivity, 93% specificity, and an area under the curve (AUC) of 0.78 for detecting an apnea-hypopnea index (AHI) of ≥5/h; a J-SHS score of >2.06 exhibited a 75% sensitivity, 84% specificity and AUC of 0.84 for detecting an AHI of ≥3/h among the children with a snoring frequency above two nights/wk. CONCLUSION The J-SHS exhibits good performance as a screening tool providing a quick and straightforward approach for identifying Japanese children at risk for OSA.
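The cutoff analysis reported above (sensitivity, specificity, and AUC at a chosen J-SHS score) is a standard ROC computation; the sketch below reproduces that workflow on simulated scores and AHI-based labels, not the study's data.

```python
# Sketch: AUC plus sensitivity/specificity at a chosen questionnaire cutoff.
# Scores and labels are simulated stand-ins for J-SHS scores and AHI >= 5/h.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
osa = rng.random(500) < 0.1                                   # ~10% prevalence of AHI >= 5/h
scores = rng.normal(loc=np.where(osa, 2.3, 1.4), scale=0.6)   # higher scores in children with OSA

auc = roc_auc_score(osa, scores)
fpr, tpr, thresholds = roc_curve(osa, scores)                 # full ROC curve if needed

cutoff = 1.88                                                 # example cutoff, as in the abstract
pred = scores > cutoff
sensitivity = (pred & osa).sum() / osa.sum()
specificity = (~pred & ~osa).sum() / (~osa).sum()
print(f"AUC={auc:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```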
Affiliation(s)
- Qinye Zhu
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Hiroo Wada
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Keisike Onuki
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Takayuki Kitazawa
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Ritsuko Furuya
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Mariko Miyakawa
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Setsuko Sato
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Naohiro Yonemoto
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Yuito Ueda
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan
- Hiroshi Nakano
- Sleep Disorders Centre, National Hospital Organization Fukuoka National Hospital, Yakatabaru, Minami-Ku, Fukuoka City, Japan
- David Gozal
- Department of Child Health, University of Missouri School of Medicine, Columbia, MO, USA
- Takeshi Tanigawa
- Department of Public Health, Juntendo University Graduate School of Medicine, Hongo, Bunkyo-Ku, Tokyo, Japan.
12
Sholeyan AE, Rahatabad FN, Setarehdan SK. Designing an Automatic Sleep Staging System Using Deep Convolutional Neural Network Fed by Nonlinear Dynamic Transformation. J Med Biol Eng 2022. [DOI: 10.1007/s40846-022-00771-y]
13
Cho SW, Jung SJ, Shin JH, Won TB, Rhee CS, Kim JW. Evaluating Prediction Models of Sleep Apnea From Smartphone-Recorded Sleep Breathing Sounds. JAMA Otolaryngol Head Neck Surg 2022; 148:515-521. [PMID: 35420648 PMCID: PMC9011176 DOI: 10.1001/jamaoto.2022.0244]
Abstract
Importance Breathing sounds during sleep are an important characteristic feature of obstructive sleep apnea (OSA) and have been regarded as a potential biomarker. Breathing sounds during sleep can be easily recorded using a microphone, which is found in most smartphone devices. Therefore, it may be easy to implement an evaluation tool for prescreening purposes. Objective To evaluate OSA prediction models using smartphone-recorded sounds and identify optimal settings with regard to noise processing and sound feature selection. Design, Setting, and Participants A cross-sectional study was performed among patients who visited the sleep center of Seoul National University Bundang Hospital for snoring or sleep apnea from August 2015 to August 2019. Audio recordings during sleep were performed using a smartphone during routine, full-night, in-laboratory polysomnography. Using a random forest algorithm, binary classifications were separately conducted for 3 different threshold criteria according to an apnea hypopnea index (AHI) threshold of 5, 15, or 30 events/h. Four regression models were created according to noise reduction and feature selection from the input sound to predict actual AHI: (1) noise reduction without feature selection, (2) noise reduction with feature selection, (3) neither noise reduction nor feature selection, and (4) feature selection without noise reduction. Clinical and polysomnographic parameters that may have been associated with errors were assessed. Data were analyzed from September 2019 to September 2020. Main Outcomes and Measures Accuracy of OSA prediction models. Results A total of 423 patients (mean [SD] age, 48.1 [12.8] years; 356 [84.1%] male) were analyzed. Data were split into training (n = 256 [60.5%]) and test data sets (n = 167 [39.5%]). Accuracies were 88.2%, 82.3%, and 81.7%, and the areas under curve were 0.90, 0.89, and 0.90 for an AHI threshold of 5, 15, and 30 events/h, respectively. In the regression analysis, using recorded sounds that had not been denoised and had only selected attributes resulted in the highest correlation coefficient (r = 0.78; 95% CI, 0.69-0.88). The AHI (β = 0.33; 95% CI, 0.24-0.42) and sleep efficiency (β = -0.20; 95% CI, -0.35 to -0.05) were found to be associated with estimation error. Conclusions and Relevance In this cross-sectional study, recorded sleep breathing sounds using a smartphone were used to create reasonably accurate OSA prediction models. Future research should focus on real-life recordings using various smartphone devices.
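The classification scheme, one binary model per AHI cutoff trained on acoustic features with a random forest, can be sketched as follows; the feature matrix and AHI values are simulated placeholders, since the paper's exact feature set and noise-reduction pipeline are not reproduced here.

```python
# Sketch: one random-forest binary classifier per AHI cutoff (5, 15, 30 events/h),
# evaluated with accuracy and AUC on a held-out split. Features stand in for the
# per-recording acoustic descriptors used in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(3)
X = rng.standard_normal((423, 40))                       # hypothetical sound features
ahi = np.clip(rng.gamma(shape=1.5, scale=15, size=423), 0, 120)

for cutoff in (5, 15, 30):
    y = (ahi >= cutoff).astype(int)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.4, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
    proba = clf.predict_proba(Xte)[:, 1]
    print(cutoff, accuracy_score(yte, clf.predict(Xte)), roc_auc_score(yte, proba))
```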
Affiliation(s)
- Sung-Woo Cho
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
- Sung Jae Jung
- Big Data Center, Seoul National University Bundang Hospital, Seongnam, Korea
- Jin Ho Shin
- Big Data Center, Seoul National University Bundang Hospital, Seongnam, Korea
- Tae-Bin Won
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Chae-Seo Rhee
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Jeong-Whun Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
- Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Korea
14
Phan H, Mikkelsen K. Automatic sleep staging of EEG signals: recent development, challenges, and future directions. Physiol Meas 2022; 43. [PMID: 35320788 DOI: 10.1088/1361-6579/ac6049]
Abstract
Modern deep learning holds a great potential to transform clinical practice on human sleep. Teaching a machine to carry out routine tasks would be a tremendous reduction in workload for clinicians. Sleep staging, a fundamental step in sleep practice, is a suitable task for this and will be the focus in this article. Recently, automatic sleep staging systems have been trained to mimic manual scoring, leading to similar performance to human sleep experts, at least on scoring of healthy subjects. Despite tremendous progress, we have not seen automatic sleep scoring adopted widely in clinical environments. This review aims to give a shared view of the authors on the most recent state-of-the-art development in automatic sleep staging, the challenges that still need to be addressed, and the future directions for automatic sleep scoring to achieve clinical value.
Affiliation(s)
- Huy Phan
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Rd, London, E1 4NS, United Kingdom
- Kaare Mikkelsen
- Department of Electrical and Computer Engineering, Aarhus Universitet, Finlandsgade 22, Aarhus, 8000, Denmark
15
Sunshine J. Smart Speakers: The Next Frontier in mHealth. JMIR Mhealth Uhealth 2022; 10:e28686. [PMID: 35188467 PMCID: PMC8902676 DOI: 10.2196/28686]
Abstract
The rapid dissemination and adoption of smart speakers has enabled substantial opportunities to improve human health. Just as the introduction of the mobile phone led to considerable health innovation, smart speaker computing systems carry several unique advantages that have the potential to catalyze new fields of health research, particularly in out-of-hospital environments. The recent rise and ubiquity of these smart computing systems holds significant potential for enhancing chronic disease management, enabling passive identification of unwitnessed medical emergencies, detecting subtle changes in human behavior and cognition, limiting isolation, and potentially allowing widespread, passive, remote monitoring of respiratory diseases that impact public health. There are 3 broad mechanisms for how a smart speaker can interact with a person to improve health. These include (1) as an intelligent conversational agent, (2) as a passive identifier of medically relevant diagnostic sounds, and (3) by active sensing using the device's internal hardware to measure physiologic parameters, such as with active sonar, radar, or computer vision. Each of these different modalities has specific clinical use cases, all of which need to be balanced against potential privacy concerns, equity concerns related to system access, and regulatory frameworks which have not yet been developed for this unique type of passive data collection.
Affiliation(s)
- Jacob Sunshine
- Department of Anesthesiology & Pain Medicine, University of Washington, Seattle, WA, United States
- Paul G Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, United States
16
Eni M, Mordoh V, Zigel Y. Cough detection using a non-contact microphone: A nocturnal cough study. PLoS One 2022; 17:e0262240. [PMID: 35045111 PMCID: PMC8769326 DOI: 10.1371/journal.pone.0262240]
Abstract
An automatic non-contact cough detector designed especially for night audio recordings that can distinguish coughs from snores and other sounds is presented. Two different classifiers were implemented and tested: a Gaussian Mixture Model (GMM) and a Deep Neural Network (DNN). The detected coughs were analyzed and compared in different sleep stages and in terms of severity of Obstructive Sleep Apnea (OSA), along with age, Body Mass Index (BMI), and gender. The database was composed of nocturnal audio signals from 89 subjects recorded during a polysomnography study. The DNN-based system outperformed the GMM-based system, at 99.8% accuracy, with a sensitivity and specificity of 86.1% and 99.9%, respectively (Positive Predictive Value (PPV) of 78.4%). Cough events were significantly more frequent during wakefulness than in the sleep stages (p < 0.0001) and were significantly less frequent during deep sleep than in other sleep stages (p < 0.0001). A positive correlation was found between BMI and the number of nocturnal coughs (R = 0.232, p < 0.05), and between the number of nocturnal coughs and OSA severity in men (R = 0.278, p < 0.05). This non-contact cough detection system may thus be implemented to track the progression of respiratory illnesses and test reactions to different medications even at night when a contact sensor is uncomfortable or infeasible.
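One of the two compared classifiers, the GMM, can be sketched as a pair of class-conditional mixtures scored by log-likelihood over MFCC frames; the mixture sizes and synthetic features below are assumptions rather than the paper's configuration.

```python
# Sketch of a GMM-based cough detector: fit one Gaussian mixture per class on
# frame-level MFCC features and classify a segment by total log-likelihood.
# Mixture sizes and the synthetic "MFCC" frames are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Placeholder MFCC frames (n_frames x 13) for each class from a training set.
cough_frames = rng.normal(1.0, 1.0, size=(2000, 13))
other_frames = rng.normal(-0.5, 1.2, size=(6000, 13))

gmm_cough = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(cough_frames)
gmm_other = GaussianMixture(n_components=16, covariance_type="diag", random_state=0).fit(other_frames)

def is_cough(segment_frames):
    """Classify a segment by comparing summed frame log-likelihoods."""
    return gmm_cough.score_samples(segment_frames).sum() > gmm_other.score_samples(segment_frames).sum()

test_segment = rng.normal(1.0, 1.0, size=(50, 13))    # frames from one audio event
print(is_cough(test_segment))
```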
Affiliation(s)
- Marina Eni
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Valeria Mordoh
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Yaniv Zigel
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
17
Hong J, Tran HH, Jung J, Jang H, Lee D, Yoon IY, Hong JK, Kim JW. End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices. Nat Sci Sleep 2022; 14:1187-1201. [PMID: 35783665 PMCID: PMC9241996 DOI: 10.2147/nss.s361270]
Abstract
PURPOSE Nocturnal sounds contain a wealth of information and are easily obtainable in a non-contact manner. Sleep staging using nocturnal sounds recorded from common mobile devices may allow daily at-home sleep tracking. The objective of this study is to introduce an end-to-end (sound-to-sleep stages) deep learning model for sound-based sleep staging designed to work with audio from microphone chips, which are essential in mobile devices such as modern smartphones. PATIENTS AND METHODS Two different audio datasets were used: audio data routinely recorded by a solitary microphone chip during polysomnography (PSG dataset, N=1154) and audio data recorded by a smartphone (smartphone dataset, N=327). The audio was converted into a Mel spectrogram to detect latent temporal frequency patterns of breathing and body movement from ambient noise. The proposed neural network model learns to first extract features from each 30-second epoch and then analyze inter-epoch relationships of the extracted features to finally classify the epochs into sleep stages. RESULTS Our model achieved 70% epoch-by-epoch agreement for 4-class (wake, light, deep, REM) sleep stage classification and robust performance across various signal-to-noise conditions. The model performance was not considerably affected by sleep apnea or periodic limb movement. External validation with the smartphone dataset also showed 68% epoch-by-epoch agreement. CONCLUSION The proposed end-to-end deep learning model shows the potential of low-quality sounds recorded from microphone chips to be utilized for sleep staging. Future studies using nocturnal sounds recorded from mobile devices in a home environment may further confirm the use of mobile device recordings as an at-home sleep tracker.
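The two-level design described here, a per-epoch encoder over Mel-spectrogram patches followed by an inter-epoch sequence model, can be sketched in PyTorch; layer sizes, the GRU choice, and the input dimensions below are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a two-stage sound-based sleep stager: a CNN encodes each 30-s
# Mel-spectrogram epoch, then a bidirectional GRU models inter-epoch context
# before a per-epoch stage classifier. All sizes are illustrative.
import torch
import torch.nn as nn

class EpochEncoder(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, d)

    def forward(self, x):                       # x: (batch*epochs, 1, mel, time)
        return self.proj(self.conv(x).flatten(1))

class SoundSleepStager(nn.Module):
    def __init__(self, d=64, n_stages=4):
        super().__init__()
        self.encoder = EpochEncoder(d)
        self.seq = nn.GRU(d, d, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * d, n_stages)

    def forward(self, x):                       # x: (batch, epochs, 1, mel, time)
        b, e = x.shape[:2]
        z = self.encoder(x.flatten(0, 1)).view(b, e, -1)   # per-epoch embeddings
        out, _ = self.seq(z)                               # inter-epoch context
        return self.head(out)                              # per-epoch stage logits

model = SoundSleepStager()
mels = torch.randn(2, 20, 1, 64, 120)           # 2 nights x 20 epochs of Mel patches
print(model(mels).shape)                        # torch.Size([2, 20, 4])
```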
Affiliation(s)
- Joonki Hong
- Asleep Inc., Seoul, Korea
- Korea Advanced Institute of Science and Technology, Daejeon, Korea
- In-Young Yoon
- Department of Psychiatry, Seoul National University Bundang Hospital, Seongnam, Korea
- Seoul National University College of Medicine, Seoul, Korea
- Jung Kyung Hong
- Department of Psychiatry, Seoul National University Bundang Hospital, Seongnam, Korea
- Seoul National University College of Medicine, Seoul, Korea
- Jeong-Whun Kim
- Seoul National University College of Medicine, Seoul, Korea
- Department of Otorhinolaryngology, Seoul National University Bundang Hospital, Seongnam, Korea
18
Kim JW, Shin J, Lee K, Won TB, Rhee CS, Cho SW. Prediction of Oxygen Desaturation by Using Sound Data From a Noncontact Device: A Proof-of-Concept Study. Laryngoscope 2021; 132:901-905. [PMID: 34873695 DOI: 10.1002/lary.29971]
Abstract
OBJECTIVES/HYPOTHESIS Prediction of the apnea-hypopnea index (AHI) from breathing sounds during sleep could be used to prescreen for obstructive sleep apnea (OSA). In addition, the oxygen desaturation index (ODI) is a known risk factor for developing cardiovascular disease in OSA patients. This study focused on estimating the ODI in a noncontact manner from sleep breathing sounds. STUDY DESIGN Retrospective study. METHODS Patients who visited the sleep center due to snoring or sleep apnea underwent overnight in-laboratory polysomnography. Sound recordings were made during polysomnography using a microphone. After noise reduction, the sound data were segmented into 5-second windows and features were extracted. Binary classification and regression analyses were performed to estimate the ODI during sleep (model 1). This was re-tested after inclusion of body mass index (BMI) and age as additional features (model 2: BMI only, model 3: BMI and age). RESULTS We included 116 patients. The mean age and AHI of all patients were 50.4 ± 16.7 years and 23.0 ± 24.0 events/hr. In binary classification, for ODI cutoff values of 5, 15, and 30 events/hr, the areas under the curve were 0.88, 0.93, and 0.91, respectively, and accuracies were 85.34, 86.21, and 87.07, respectively. In regression analysis, the correlation coefficient and mean absolute error were 0.80 and 9.60 events/hr, respectively. In models 2 and 3, the correlation coefficient and mean absolute error were 0.82, 9.44 events/hr and 0.81, 9.6 events/hr, respectively. CONCLUSION Prediction of the ODI from sleep sounds appears feasible. Additional clinical features such as BMI may increase overall predictability. LEVEL OF EVIDENCE IV Laryngoscope, 2021.
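The incremental value of adding BMI and age to the acoustic features can be illustrated with a simple cross-validated regression; the random-forest regressor and simulated features below are assumptions, since the abstract does not name the estimator used.

```python
# Sketch: estimating ODI from per-night acoustic features, with and without
# clinical covariates (BMI, age). Features and the regressor are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
n = 116
sound_feats = rng.standard_normal((n, 30))
bmi = rng.normal(25, 4, n)
age = rng.normal(50, 17, n)
odi = np.clip(5 * sound_feats[:, 0] + 0.8 * bmi - 10 + rng.normal(0, 8, n), 0, None)

def evaluate(X):
    pred = cross_val_predict(RandomForestRegressor(random_state=0), X, odi, cv=5)
    return mean_absolute_error(odi, pred), np.corrcoef(odi, pred)[0, 1]

print("sound only        :", evaluate(sound_feats))
print("sound + BMI       :", evaluate(np.column_stack([sound_feats, bmi])))
print("sound + BMI + age :", evaluate(np.column_stack([sound_feats, bmi, age])))
```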
Affiliation(s)
- Jeong-Whun Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, South Korea
- Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Jaeyoung Shin
- Music and Audio Research Group, Graduate School of Convergence Science and Technology, Seoul National University, Suwon, South Korea
- Kyogu Lee
- Music and Audio Research Group, Graduate School of Convergence Science and Technology, Seoul National University, Suwon, South Korea
- Tae-Bin Won
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, South Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, South Korea
- Chae-Seo Rhee
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, South Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, South Korea
- Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Sung-Woo Cho
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, South Korea
19
Vanbuis J, Feuilloy M, Baffet G, Meslier N, Gagnadoux F, Girault JM. A New Sleep Staging System for Type III Sleep Studies Equipped with a Tracheal Sound Sensor. IEEE Trans Biomed Eng 2021; 69:1225-1236. [PMID: 34665717 DOI: 10.1109/tbme.2021.3120927]
Abstract
Type III sleep studies record cardio-respiratory channels only. Compared with polysomnography, which also records electrophysiological channels, they present many advantages: they are less expensive, less time-consuming, and more likely to be performed at home. However, their accuracy is limited by missing sleep information. That is why many studies present specific cardio-respiratory parameters to assess the causal effects of sleep stages upon cardiac or respiratory activities. For this paper, we gathered many parameters proposed in the literature, leading to 1,111 features. The pulse oximeter, the PneaVoX sensor (recording tracheal sounds), respiratory inductance plethysmography belts, the nasal cannula and the actimeter provided the 112 most valuable ones for automatic sleep scoring. Then, a 3-step model was implemented: classification with a multi-layer perceptron, sleep transition rule corrections (from the AASM guidelines), and sequence corrections using a Viterbi hidden Markov model. The whole process was trained and tested using 300 and 100 independent recordings, respectively, obtained from patients suspected of having sleep breathing disorders. Results indicated that the system achieves substantial agreement with manual scoring for classifications into 2 stages (wake vs. sleep: mean Cohen's Kappa of 0.63 and accuracy rate Acc of 87.8%) and 3 stages (wake vs. R stage vs. NREM stage: mean Cohen's Kappa of 0.60 and Acc of 78.5%). This indicates that the method could provide information to help specialists when assessing sleep. The presented model had promising results and may enhance clinical diagnosis.
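The third stage of the pipeline, correcting the epoch-by-epoch classifier output with a hidden Markov sequence model, can be illustrated with a plain Viterbi decoder over the classifier's posteriors; the 3-stage transition matrix below is a made-up example, not the one used in the study.

```python
# Sketch of Viterbi smoothing: given per-epoch class posteriors from the MLP
# and a stage-transition matrix, decode the most likely stage sequence.
# The 3-stage transition probabilities here are illustrative only.
import numpy as np

def viterbi(log_emis, log_trans, log_init):
    """log_emis: (T, S) log posteriors; returns the most likely state path."""
    T, S = log_emis.shape
    delta = log_init + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans            # (S_prev, S_next)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emis[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Stages: 0 = wake, 1 = REM, 2 = NREM. Strong self-transitions discourage
# physiologically implausible single-epoch stage flips.
trans = np.array([[0.90, 0.02, 0.08],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])
init = np.array([0.8, 0.05, 0.15])

rng = np.random.default_rng(6)
posteriors = rng.dirichlet(np.ones(3), size=120)       # per-epoch classifier outputs (placeholder)
smoothed = viterbi(np.log(posteriors), np.log(trans), np.log(init))
print(smoothed[:20])
```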
20
Devos P, Bruyneel M. IoT snoring sound detector prototype as a model of future participatory healthcare. Technol Health Care 2021; 30:491-496. [PMID: 34657858 DOI: 10.3233/thc-213145]
Abstract
BACKGROUND Traditional healthcare is centred around providing in-hospital services using hospital-owned medical instruments. The COVID-19 pandemic has shown that this approach lacks the flexibility to ensure follow-up and treatment of common medical problems. In an alternative setting adapted to this problem, participatory healthcare can be considered, centred around data provided by patients who own and operate medical data collection equipment in their homes. OBJECTIVE In order to trigger such a shift, reliable and affordable devices need to become available. Snoring, as a sound produced by humans during sleep, can reflect sleeping behaviour and indicate sleep problems as an element of a person's overall health condition. METHODS The use of off-the-shelf hardware from Internet of Things platforms and standard audio components allows the development of such devices. A prototype of a snoring sound detector with this purpose is developed. RESULTS The device, controlled by the patient and equipped with specific snoring recording and analysing functions, is demonstrated as a model for future participatory healthcare. CONCLUSIONS The design of monitoring devices following this model could allow market introduction of new equipment for participatory healthcare, bringing care complementary to traditional healthcare within the reach of patients, and could result in benefits from enhanced patient participation.
Affiliation(s)
- Paul Devos
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- Marie Bruyneel
- Dept of Pneumology, CHU Saint Pierre, Université Libre de Bruxelles, Brussels, Belgium
21
Schutte-Rodin S, Deak M, Khosla S, Goldstein CA, Yurcheshen M, Chiang A, Gault D, Kern J, O'Hearn D, Ryals S, Verma N, Kirsch DB, Baron K, Holfinger S, Miller J, Patel R, Bhargava S, Ramar K. Evaluating consumer and clinical sleep technologies: an American Academy of Sleep Medicine update. J Clin Sleep Med 2021; 17:2275-2282. [PMID: 34314344 DOI: 10.5664/jcsm.9580]
Affiliation(s)
- Sharon Schutte-Rodin
- University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania
- Seema Khosla
- North Dakota Center for Sleep, Fargo, North Dakota
- Ambrose Chiang
- Louis Stokes Cleveland VA Medical Center, Case Western Reserve University, Cleveland, Ohio
- Dominic Gault
- Greenville Health System, University of South Carolina, Greenville, South Carolina
- Joseph Kern
- New Mexico VA Health Care System, Albuquerque, New Mexico
- Daniel O'Hearn
- Department of Medicine, University of Washington, Seattle, Washington
- Scott Ryals
- University of Florida Health Sleep Center, Gainesville, Florida
- Douglas B Kirsch
- Carolinas Healthcare Medical Group Sleep Services, Charlotte, North Carolina
- Kelly Baron
- University of Utah Sleep-Wake Center, Salt Lake City, Utah
- Ruchir Patel
- The Insomnia and Sleep Institute of Arizona, Scottsdale, Arizona
- Sumit Bhargava
- Lucille Packard Children's Hospital at Stanford, Palo Alto, California
22
Mordoh V, Zigel Y. Audio source separation to reduce sleeping partner sounds: a simulation study. Physiol Meas 2021; 42. [PMID: 34038872 DOI: 10.1088/1361-6579/ac0592]
Abstract
Objective. When recording a subject in an at-home environment for sleep evaluation or for other breathing disorder diagnoses using non-contact microphones, the breathing recordings (audio signals) can be distorted by sounds such as TV, outside noise, or air-conditioners. If two people are sleeping together, both may produce breathing/snoring sounds that need to be separated. In this study, we present signal processing and source separation algorithms for the enhancement of individual breathing/snoring audio signals in a simulated environment. Approach. We developed a computer simulation of mixed signals derived from genuine nocturnal recordings of 110 subjects. Two main source separation approaches were tested: (1) changing the basis vectors for the mixtures in the time domain (principal and independent component analysis, PCA/ICA) and (2) converting the mixtures to their time-frequency representations (degenerate un-mixing estimation technique, DUET). In addition to these source separation techniques, a beamforming approach was tested. Main results. The separation results with a reverberation time of 0.15 s and zero SNR between signals showed good performance (mean source to interference ratio (SIR): DUET = 12.831 dB, ICA = 3.388 dB, PCA = 4.452 dB), and for beamforming (SIR = -0.304 dB). To evaluate our source separation results, we propose two new measures: an evaluation measure based on a spectral similarity score (mel-SID) between the target source and its estimation (after separation) and a breathing energy ratio measure (BER). The results with the new proposed measures yielded comparable conclusions (mel-SID: DUET = 1.320, ICA = 2.732, PCA = 1.927, and beamforming = 2.590; BER: DUET = 10.241 dB, ICA = 0.270 dB, PCA = -2.847 dB, and beamforming = -1.151 dB), but better differentiated the performance of the algorithms. DUET was superior on all measures; its main advantage is that it only uses two microphones for separation. Significance. The separated audio signal can thus contribute to a more informed diagnosis of sleep-related and non-sleep-related diseases. The Institutional Review Committee of Soroka University Medical Center approved this study protocol (protocol number 10141) and all methods were performed in accordance with the relevant guidelines and regulations.
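One of the time-domain approaches compared above, ICA, can be sketched with scikit-learn's FastICA on a two-microphone, two-source simulation; DUET, beamforming, reverberation, and the paper's mel-SID/BER measures are not reproduced here, and all signals are synthetic.

```python
# Sketch: separating two simulated breathing-like sources from two microphone
# mixtures with FastICA (one of the time-domain approaches compared in the
# paper). Instantaneous (non-reverberant) mixing is assumed for simplicity.
import numpy as np
from sklearn.decomposition import FastICA

fs = 8000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(7)
# Two "breathing" sources with different rates plus a little amplitude noise.
s1 = np.sin(2 * np.pi * 0.25 * t) * (1 + 0.1 * rng.standard_normal(t.size))
s2 = np.sin(2 * np.pi * 0.20 * t + 1.0) * (1 + 0.1 * rng.standard_normal(t.size))
S = np.column_stack([s1, s2])

# Mix the sources into two microphones.
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])
X = S @ A.T

est = FastICA(n_components=2, random_state=0).fit_transform(X)

# Check recovery (up to permutation and scale) via correlation with the sources.
corr = np.corrcoef(np.column_stack([S, est]).T)[:2, 2:]
print(np.round(np.abs(corr), 2))
```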
Affiliation(s)
- Valeria Mordoh
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Yaniv Zigel
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
23
Dogan S, Akbal E, Tuncer T, Acharya UR. Application of substitution box of present cipher for automated detection of snoring sounds. Artif Intell Med 2021; 117:102085. [PMID: 34127246 DOI: 10.1016/j.artmed.2021.102085]
Abstract
BACKGROUND AND PURPOSE Snoring is one of the sleep disorders, and snoring sounds have been used to diagnose many sleep-related diseases. However, snoring sound classification is done manually, which is time-consuming and prone to human error. An automated snoring sound classification model is proposed to overcome these problems. MATERIAL AND METHOD This work proposes an automated snoring sound classification (SSC) method built on three new components: maximum absolute pooling (MAP), the nonlinear present pattern, and a two-layered feature selector combining neighborhood component analysis and iterative neighborhood component analysis (NCAINCA). The MAP decomposition model is applied to snoring sounds to extract both low- and high-level features. The presented model aims to attain high performance on the SSC problem. The developed present pattern (Present-Pat) uses a substitution box (SBox) and a statistical feature generator; by deploying these feature generators, both textural and statistical features are produced. NCAINCA chooses the most informative features, and these selected features are fed to a k-nearest neighbor (kNN) classifier with leave-one-out cross-validation (LOOCV). The Present-Pat based SSC system is developed using the Munich-Passau Snore Sound Corpus (MPSSC) dataset, which comprises four categories. RESULTS Our model reached an accuracy and unweighted average recall (UAR) of 97.10 % and 97.60 %, respectively, using LOOCV. Moreover, a nocturnal sound dataset is used to show the general applicability of the presented model; on this dataset, our model attained an accuracy of 98.14 %. CONCLUSIONS Our developed classification model is ready to be tested with more data and can be used by sleep specialists to diagnose sleep disorders based on snoring sounds.
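The selection-plus-classification back end, NCA-based feature weighting followed by a kNN classifier under leave-one-out cross-validation, can be sketched with scikit-learn; the feature matrix below is a placeholder and the iterative INCA loop is simplified to a single top-k choice.

```python
# Sketch of the back end: weight features with neighborhood component analysis,
# keep the top-ranked ones (a simplification of the iterative INCA loop), and
# evaluate a kNN classifier with leave-one-out cross-validation.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n, d = 200, 60
y = rng.integers(0, 4, size=n)                      # 4 snore-sound classes (placeholder)
X = rng.standard_normal((n, d))
X[:, :5] += y[:, None] * 1.5                        # make a few features informative

nca = NeighborhoodComponentsAnalysis(random_state=0).fit(StandardScaler().fit_transform(X), y)
weights = np.linalg.norm(nca.components_, axis=0)   # per-feature importance proxy
top_k = np.argsort(weights)[::-1][:10]              # keep the 10 highest-weighted features

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=1))
acc = cross_val_score(knn, X[:, top_k], y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.3f}")
```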
Collapse
Affiliation(s)
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey.
| | - Erhan Akbal
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
| | - Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
| | - U Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
| |
Collapse
|
24
|
Photoplethysmography in Normal and Pathological Sleep. SENSORS 2021; 21:s21092928. [PMID: 33922042 PMCID: PMC8122413 DOI: 10.3390/s21092928] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Revised: 04/19/2021] [Accepted: 04/20/2021] [Indexed: 01/20/2023]
Abstract
This article presents an overview of the advancements that have been made in the use of photoplethysmography (PPG) for unobtrusive sleep studies. PPG is included in the quickly evolving and very popular landscape of wearables but has specific interesting properties, particularly the ability to capture the modulation of the autonomic nervous system during sleep. Recent advances have been made in PPG signal acquisition and processing, including coupling it with accelerometry in order to construct hypnograms in normal and pathologic sleep and also to detect sleep-disordered breathing (SDB). The limitations of PPG (e.g., oximetry signal failure, motion artefacts, signal processing) are reviewed as well as technical solutions to overcome these issues. The potential medical applications of PPG are numerous, including home-based detection of SDB (for triage purposes), and long-term monitoring of insomnia, circadian rhythm sleep disorders (to assess treatment effects), and treated SDB (to ensure disease control). New contact sensor combinations to improve future wearables seem promising, particularly tools that allow for the assessment of brain activity. In this way, in-ear EEG combined with PPG and actigraphy could be an interesting focus for future research.
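As a hedged illustration of the kind of PPG processing this review surveys, the sketch below band-passes a PPG trace and extracts inter-beat intervals; the filter band, peak spacing, and prominence threshold are illustrative assumptions rather than recommendations from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks


def ppg_interbeat_intervals(ppg, fs):
    """Toy PPG processing: band-pass around typical pulse frequencies,
    detect systolic peaks, and return inter-beat intervals in seconds."""
    b, a = butter(3, [0.5, 8.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ppg)
    # A minimum distance of 0.4 s between peaks limits heart rate to <150 bpm.
    peaks, _ = find_peaks(filtered, distance=int(0.4 * fs),
                          prominence=0.5 * np.std(filtered))
    return np.diff(peaks) / fs


# Hypothetical usage: ibi = ppg_interbeat_intervals(ppg, fs=128);
# 60.0 / ibi gives an instantaneous heart-rate series whose variability
# feeds the autonomic features used for sleep staging.
```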
Collapse
|
25
|
Behar JA, Liu C, Zigel Y, Laguna P, Clifford GD. Editorial on Remote Health Monitoring: from chronic diseases to pandemics. Physiol Meas 2021; 41:100401. [PMID: 33393486 DOI: 10.1088/1361-6579/abbb6d] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
26
|
An Unsupervised Behavioral Modeling and Alerting System Based on Passive Sensing for Elderly Care. FUTURE INTERNET 2020. [DOI: 10.3390/fi13010006] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
Artificial Intelligence in combination with the Internet of Medical Things enables remote healthcare services through networks of environmental and/or personal sensors. We present a remote healthcare service system which collects real-life data through an environmental sensor package, including binary motion, contact, pressure, and proximity sensors, installed at households of elderly people. Its aim is to keep caregivers informed of subjects’ progressive health-status trajectory and alert them to health-related anomalies, enabling objective on-demand healthcare service delivery at scale. The system was deployed in 19 households inhabited by an elderly person with a post-stroke condition in the Emilia–Romagna region in Italy, with maximal and median observation durations of 98 and 55 weeks. Among these households, 17 were multi-occupancy residences, while the other 2 housed elderly patients living alone. Subjects’ daily behavioral diaries were extracted and registered from raw sensor signals, using rule-based data pre-processing and unsupervised algorithms. Personal behavioral habits were identified and compared to typical patterns reported in behavioral science, as a quality-of-life indicator. We consider the activity patterns extracted across all users as a dictionary, and represent each patient’s behavior as a ‘Bag of Words’, based on which patients can be categorized into sub-groups for precision cohort treatment. Longitudinal trends of the behavioral progressive trajectory and sudden abnormalities of a patient were detected and reported to care providers. Due to the sparse sensor setting and the multi-occupancy living condition, the sleep profile was used as the main indicator in our system. Experimental results demonstrate the ability to report on subjects’ daily activity pattern in terms of sleep, outing, visiting, and health-status trajectories, as well as to predict/detect 75% of hospitalization sessions up to 11 days in advance. 65% of the alerts were confirmed to be semantically meaningful by the users. Furthermore, reduced social interaction (outing and visiting) and lower sleep quality could be observed during the COVID-19 lockdown period across the cohort.
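A minimal sketch of the 'Bag of Words' behavioral representation described above, assuming an upstream step has already labelled each day with activity 'words'; the dictionary entries, normalisation, and KMeans sub-grouping are illustrative choices, not the deployed system.

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def bag_of_words(patient_days, dictionary):
    """Represent one patient as normalised counts over a shared dictionary of
    activity 'words' (e.g. 'night_outing', 'long_sleep'); the words are assumed
    to come from an upstream pattern-extraction step."""
    counts = Counter(word for day in patient_days for word in day)
    vec = np.array([counts[w] for w in dictionary], dtype=float)
    return vec / max(vec.sum(), 1.0)


def cluster_patients(patients, dictionary, n_groups=3):
    """patients: dict patient_id -> list of per-day word lists."""
    ids = list(patients)
    X = np.stack([bag_of_words(patients[p], dictionary) for p in ids])
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(X)
    return dict(zip(ids, labels))
```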
Collapse
|
27
|
Montazeri Ghahjaverestan N, Akbarian S, Hafezi M, Saha S, Zhu K, Gavrilovic B, Taati B, Yadollahi A. Sleep/Wakefulness Detection Using Tracheal Sounds and Movements. Nat Sci Sleep 2020; 12:1009-1021. [PMID: 33235534 PMCID: PMC7680175 DOI: 10.2147/nss.s276107] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/08/2020] [Accepted: 10/08/2020] [Indexed: 11/23/2022] Open
Abstract
PURPOSE The current gold standard to detect sleep/wakefulness is based on electroencephalography, which is inconvenient to include in portable sleep screening devices. Therefore, a challenge for portable devices is the estimation of sleep time. Without sleep time, sleep parameters such as the apnea/hypopnea index (AHI), an index for quantifying sleep apnea severity, can be underestimated. Recent studies have used tracheal sounds and movements for sleep screening and for calculating the AHI without considering sleep time. In this study, we investigated the detection of sleep/wakefulness states and estimation of sleep parameters using tracheal sounds and movements. MATERIALS AND METHODS Participants with suspected sleep apnea who were referred for sleep screening were included in this study. Simultaneously with polysomnography, tracheal sounds and movements were recorded with a small wearable device, called the Patch, attached over the trachea. Each 30-second epoch of tracheal data was scored as sleep or wakefulness using an automatic classification algorithm. The performance of the algorithm was compared to sleep/wakefulness scored blindly from the polysomnography. RESULTS Eighty-eight subjects were included in this study. The accuracy of sleep/wakefulness detection was 82.3±8.66% with a sensitivity of 87.8±10.8% (sleep), specificity of 71.4±18.5% (wake), F1 of 88.1±9.3%, and Cohen's kappa of 0.54. The correlations between the estimated and polysomnography-based measures for total sleep time and sleep efficiency were 0.78 (p<0.001) and 0.70 (p<0.001), respectively. CONCLUSION Sleep/wakefulness periods can be detected using tracheal sounds and movements. The results of this study, combined with our previous studies on screening sleep apnea with tracheal sounds, provide strong evidence that respiratory sound analysis can be used to develop robust, convenient, and cost-effective portable devices for sleep apnea monitoring.
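The epoch-wise formulation can be illustrated with the following hedged Python sketch; the two toy features (log sound energy and movement variance), the random-forest classifier, and the function names are assumptions and do not reproduce the Patch algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score


def epoch_features(audio, movement, fs_audio, fs_move, epoch_s=30):
    """Two toy features per 30-s epoch: log tracheal-sound energy and
    movement variance (the paper's actual feature set is richer)."""
    n_a, n_m = int(fs_audio * epoch_s), int(fs_move * epoch_s)
    n_epochs = min(len(audio) // n_a, len(movement) // n_m)
    feats = []
    for i in range(n_epochs):
        a = audio[i * n_a:(i + 1) * n_a]
        m = movement[i * n_m:(i + 1) * n_m]
        feats.append([np.log(np.mean(np.square(a)) + 1e-12), np.var(m)])
    return np.array(feats)


def evaluate_sleep_wake(X_train, y_train, X_test, y_test):
    """y: per-epoch labels, 0 = wake, 1 = sleep."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    y_hat = clf.fit(X_train, y_train).predict(X_test)
    return accuracy_score(y_test, y_hat), cohen_kappa_score(y_test, y_hat)
```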
Collapse
Affiliation(s)
- Nasim Montazeri Ghahjaverestan
- Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
| | - Sina Akbarian
- Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
| | - Maziar Hafezi
- Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
| | - Shumit Saha
- Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
| | - Kaiyin Zhu
- Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
| | - Bojan Gavrilovic
- Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
| | - Babak Taati
- Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Computer Science, University of Toronto, Toronto, ON, Canada
| | - Azadeh Yadollahi
- Kite - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
28
|
Radha M, Fonseca P, Moreau A, Ross M, Cerny A, Anderer P, Long X, Aarts RM. Sleep stage classification from heart-rate variability using long short-term memory neural networks. Sci Rep 2019; 9:14149. [PMID: 31578345 PMCID: PMC6775145 DOI: 10.1038/s41598-019-49703-y] [Citation(s) in RCA: 69] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2019] [Accepted: 07/10/2019] [Indexed: 01/29/2023] Open
Abstract
Automated sleep stage classification using heart rate variability (HRV) may provide an ergonomic and low-cost alternative to gold standard polysomnography, creating possibilities for unobtrusive home-based sleep monitoring. Current methods, however, are limited in their ability to take into account long-term sleep architectural patterns. A long short-term memory (LSTM) network is proposed as a solution to model long-term cardiac sleep architecture information and validated on a comprehensive data set (292 participants, 584 nights, 541,214 annotated 30-s sleep segments) comprising a wide range of ages and pathological profiles, annotated according to the Rechtschaffen and Kales (R&K) annotation standard. It is shown that the model outperforms state-of-the-art approaches, which were often limited to non-temporal or short-term recurrent classifiers. The model achieves a Cohen's kappa of 0.61 ± 0.15 and an accuracy of 77.00 ± 8.90% across the entire database. Further analysis revealed that the performance for individuals aged 50 years and older may decline. These results demonstrate the merit of deep temporal modelling using a diverse data set and advance the state of the art for HRV-based sleep stage classification. Further research is warranted into individuals over the age of 50, as performance tends to worsen in this sub-population.
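A minimal sequence-to-sequence LSTM of the kind described above can be sketched as follows; the layer sizes, bidirectionality, and feature dimensionality are assumptions, not the published architecture.

```python
import tensorflow as tf


def build_hrv_lstm(n_features=16, n_classes=4):
    """Sequence-to-sequence LSTM: one HRV feature vector per 30-s epoch in,
    one sleep-stage probability vector per epoch out (variable-length nights)."""
    inputs = tf.keras.Input(shape=(None, n_features))
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True))(inputs)
    x = tf.keras.layers.LSTM(64, return_sequences=True)(x)
    outputs = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(n_classes, activation="softmax"))(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Hypothetical usage: X has shape (nights, epochs, n_features) of HRV features,
# y has shape (nights, epochs) with integer stage labels.
# model = build_hrv_lstm(n_features=X.shape[-1])
# model.fit(X, y, epochs=50, validation_split=0.2)
```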
Collapse
Affiliation(s)
- Mustafa Radha
- Royal Philips, Research, High Tech Campus 34, 5656 AE, Eindhoven, The Netherlands.
- Eindhoven University of Technology, P.O. Box 513, 5600 MB, Eindhoven, The Netherlands.
| | - Pedro Fonseca
- Royal Philips, Research, High Tech Campus 34, 5656 AE, Eindhoven, The Netherlands
- Eindhoven University of Technology, P.O. Box 513, 5600 MB, Eindhoven, The Netherlands
| | - Arnaud Moreau
- Philips Austria GmbH, Kranichberggasse 4, 1120, Vienna, Austria
| | - Marco Ross
- Philips Austria GmbH, Kranichberggasse 4, 1120, Vienna, Austria
| | - Andreas Cerny
- Philips Austria GmbH, Kranichberggasse 4, 1120, Vienna, Austria
| | - Peter Anderer
- Philips Austria GmbH, Kranichberggasse 4, 1120, Vienna, Austria
| | - Xi Long
- Royal Philips, Research, High Tech Campus 34, 5656 AE, Eindhoven, The Netherlands
- Eindhoven University of Technology, P.O. Box 513, 5600 MB, Eindhoven, The Netherlands
| | - Ronald M Aarts
- Royal Philips, Research, High Tech Campus 34, 5656 AE, Eindhoven, The Netherlands
- Eindhoven University of Technology, P.O. Box 513, 5600 MB, Eindhoven, The Netherlands
| |
Collapse
|
29
|
Xue B, Deng B, Hong H, Wang Z, Zhu X, Feng DD. Non-Contact Sleep Stage Detection Using Canonical Correlation Analysis of Respiratory Sound. IEEE J Biomed Health Inform 2019; 24:614-625. [PMID: 30990201 DOI: 10.1109/jbhi.2019.2910566] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Respiratory sound can differentiate sleep stages and provides a non-contact, cost-effective solution for the diagnosis and treatment monitoring of sleep-related diseases. While most existing respiratory sound-based methods focus on a limited number of sleep stages, such as sleep/wake and wake/rapid eye movement (REM)/non-REM, it is essential to detect sleep stages at a finer level for sleep quality evaluation. In this paper, we study, for the first time, a sleep stage detection method that classifies sleep into four stages (wake, REM, light sleep, and deep sleep) from respiratory sound. In addition to time-domain and frequency-domain features of respiratory sound, non-linear features of snoring sound are devised to better characterize snoring-related components of respiratory sound. To effectively fuse the three sets of features, a novel feature fusion technique combining generalized canonical correlation analysis with the ReliefF algorithm is proposed for discriminative feature selection. Final stage detection is achieved with popular classifiers including decision trees, support vector machines, K-nearest neighbors, and an ensemble classifier. To evaluate our proposed method, we built an in-house dataset comprising 13 nights of sleep audio data from a sleep laboratory. Experimental results indicate that our proposed method outperforms existing related methods and is promising for large-scale non-contact sleep monitoring.
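The fusion-then-selection idea can be illustrated with the sketch below, which substitutes a two-set CCA for the paper's generalized CCA over three feature sets and a mutual-information selector for ReliefF; all names and parameters are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC


def cca_fuse(X_a, X_b, n_components=10):
    """Project two per-epoch feature sets onto maximally correlated canonical
    variates and concatenate them as the fused representation."""
    n_components = min(n_components, X_a.shape[1], X_b.shape[1])
    U, V = CCA(n_components=n_components).fit_transform(X_a, X_b)
    return np.hstack([U, V])


def train_stage_classifier(X_time, X_freq, y, k_best=20):
    """Fuse two feature sets, keep the k most informative fused features,
    and fit an SVM for sleep-stage classification."""
    fused = cca_fuse(X_time, X_freq)
    selector = SelectKBest(mutual_info_classif, k=min(k_best, fused.shape[1]))
    X_sel = selector.fit_transform(fused, y)
    clf = SVC(kernel="rbf").fit(X_sel, y)
    return clf, selector
```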
Collapse
|