1. Saha S, Ghahjaverestan NM, Yadollahi A. Separating obstructive and central respiratory events during sleep using breathing sounds: Utilizing transfer learning on deep convolutional networks. Sleep Med 2025;131:106485. PMID: 40188799. DOI: 10.1016/j.sleep.2025.106485.
Abstract
Sleep apnea diagnosis relies on polysomnography (PSG), which is resource-intensive and requires manual analysis to differentiate obstructive sleep apnea (OSA) from central sleep apnea (CSA). Existing portable devices, while valuable in detecting sleep apnea, often do not distinguish between the two types of apnea. Such differentiation is critical because OSA and CSA have distinct underlying causes and treatment approaches. This study addresses this gap by leveraging tracheal breathing sounds as a non-invasive and cost-effective method to classify central and obstructive events. We employed a transfer learning strategy on six pre-trained deep convolutional neural networks (CNNs): AlexNet, ResNet18, ResNet50, DenseNet161, VGG16, and VGG19. These networks were fine-tuned using spectrograms of tracheal sound signals recorded during PSG. The dataset, comprising 50 participants with a combination of central and obstructive events, was used to train and validate the model. Results showed high accuracy in differentiating central from obstructive respiratory events, with the combined CNN architecture achieving an overall accuracy of 83.66% and sensitivity and specificity above 83%. The findings suggest that tracheal breathing sounds can effectively distinguish between OSA and CSA, providing a less invasive and more accessible alternative to traditional PSG. This methodology could be implemented in portable devices to enhance the diagnosis of sleep apnea, enabling targeted treatment. By facilitating earlier and more accurate diagnoses, this method supports personalized treatment strategies, optimizing therapy selection (e.g., CPAP for OSA, ASV for CSA) and ultimately enhancing clinical outcomes.
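As a loose illustration of the transfer-learning recipe described in this abstract, the sketch below loads an ImageNet-pretrained ResNet18 from torchvision, swaps its final fully connected layer for a two-class head (obstructive vs. central), and runs one fine-tuning step on a placeholder batch. It is not the authors' pipeline; the spectrogram tensors are random stand-ins.

```python
# Hedged sketch: fine-tuning a pretrained CNN on spectrograms for a
# two-class (obstructive vs. central) problem. Not the authors' code;
# the input batch below is a random placeholder for real spectrograms.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-class head

# Optionally freeze the backbone and train only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# Placeholder batch: 8 "spectrograms" resized to 3x224x224.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss = {loss.item():.3f}")
```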
Affiliation(s)
- Shumit Saha
- Department of Biomedical Data Science, School of Applied Computational Sciences, Meharry Medical College, Nashville, TN, USA; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Nasim Montazeri Ghahjaverestan
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Department of Electrical and Computer Engineering, Queen's University, Kingston, ON, Canada
- Azadeh Yadollahi
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
2. Domingues DM, Rocha PR, Miachon ACMV, Giampá SQDC, Soares F, Genta PR, Lorenzi-Filho G. Sleep prediction using data from oximeter, accelerometer and snoring for portable monitor obstructive sleep apnea diagnosis. Sci Rep 2024;14:24562. PMID: 39427062. PMCID: PMC11490485. DOI: 10.1038/s41598-024-75935-8.
Abstract
The aim of this study was to build and validate an artificial neural network (ANN) algorithm to predict sleep using data from a portable monitor (Biologix system) consisting of a high-resolution oximeter with built-in accelerometer plus a smartphone application with snoring recording and cloud analysis. A total of 268 patients with suspected obstructive sleep apnea (OSA) were submitted to standard polysomnography (PSG) with simultaneous Biologix (age: 56 ± 11 years; body mass index: 30.9 ± 4.6 kg/m²; apnea-hypopnea index [AHI]: 35 ± 30 events/h). Biologix channels were used as input features to construct an ANN model to predict sleep. A k-fold cross-validation method (k=10) was applied, ensuring that all sleep studies (N=268; 246,265 epochs) were included in both training and testing across all iterations. The final ANN model, evaluated as the mean performance across all folds, achieved a sensitivity, specificity and accuracy of 91.5%, 71.0% and 86.1%, respectively, for detecting sleep. Compared with the oxygen desaturation index (ODI) from Biologix without sleep prediction, the bias (mean difference) between PSG-AHI and Biologix-ODI with sleep prediction (Biologix-Sleep-ODI) decreased significantly (3.40 vs. 1.02 events/h, p<0.001). We conclude that sleep prediction by an ANN model using data from the oximeter, accelerometer, and snoring is accurate and improves the diagnostic precision of the Biologix system for OSA.
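A minimal sketch of the cross-validated ANN workflow, assuming a small scikit-learn MLP and synthetic per-epoch features standing in for the oximeter, accelerometer, and snoring channels (this is not the Biologix model):

```python
# Hedged sketch: 10-fold cross-validated MLP for per-epoch sleep/wake
# prediction. Feature names and data are synthetic placeholders, not the
# Biologix channels or the authors' network.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_epochs = 5000
X = rng.normal(size=(n_epochs, 3))  # e.g. SpO2 level, movement, snoring intensity
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_epochs) > 0).astype(int)

sens, spec, acc = [], [], []
for train_idx, test_idx in StratifiedKFold(
    n_splits=10, shuffle=True, random_state=0
).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    tn, fp, fn, tp = confusion_matrix(y[test_idx], clf.predict(X[test_idx])).ravel()
    sens.append(tp / (tp + fn))
    spec.append(tn / (tn + fp))
    acc.append((tp + tn) / (tp + tn + fp + fn))

print(f"sensitivity {np.mean(sens):.3f}, specificity {np.mean(spec):.3f}, "
      f"accuracy {np.mean(acc):.3f}")
```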
Affiliation(s)
- Sara Quaglia de Campos Giampá
- Laboratório do Sono, LIM 63, Divisão de Pneumologia, Instituto do Coração, InCor, Hospital das Clínicas HCFMUSP, Universidade de São Paulo, Eneas de Carvalho Aguiar 44, 8º andar, São Paulo, SP, 05403-900, Brazil
- Pedro R Genta
- Laboratório do Sono, LIM 63, Divisão de Pneumologia, Instituto do Coração, InCor, Hospital das Clínicas HCFMUSP, Universidade de São Paulo, Eneas de Carvalho Aguiar 44, 8º andar, São Paulo, SP, 05403-900, Brazil
- Geraldo Lorenzi-Filho
- Laboratório do Sono, LIM 63, Divisão de Pneumologia, Instituto do Coração, InCor, Hospital das Clínicas HCFMUSP, Universidade de São Paulo, Eneas de Carvalho Aguiar 44, 8º andar, São Paulo, SP, 05403-900, Brazil
3. Kabir MM, Assadi A, Saha S, Gavrilovic B, Zhu K, Mak S, Yadollahi A. Unveiling the Impact of Respiratory Event-Related Hypoxia on Heart Sound Intensity During Sleep Using Novel Wearable Technology. Nat Sci Sleep 2024;16:1623-1636. PMID: 39430234. PMCID: PMC11491078. DOI: 10.2147/nss.s480687.
Abstract
Purpose: Cardiovascular disorders are the leading cause of mortality worldwide, with obstructive sleep apnea (OSA) as an independent risk factor. Heart sounds are a strong modality for obtaining clinically relevant information about the functioning of the heart valves and blood flow. The objective of this study was to use a small wearable device to record and investigate changes in heart sounds during respiratory events (reductions and cessations of breathing) and their association with oxyhemoglobin desaturation (hypoxemia). Patients and Methods: Sleep assessment and tracheal respiratory and heart sounds were recorded simultaneously from 58 individuals suspected of having OSA. Sleep assessment was performed using in-laboratory polysomnography. Tracheal respiratory and heart sounds were recorded over the suprasternal notch using a small device with an embedded microphone and accelerometer, called the Patch. Heart sounds were extracted from the bandpass-filtered tracheal sounds using a smoothed Hilbert envelope applied to the decomposed signal. For each individual, data from 20 obstructive events during stage 2 non-rapid eye movement sleep were randomly selected for analysis. Results: Heart sound intensities increased significantly from before to after the termination of respiratory events. There was also a significant positive correlation between the magnitude of hypoxemia and the increase in heart sound intensities (r>0.82, p<0.001). In addition, the changes in heart sounds were significantly correlated with heart rate and blood pressure. Conclusion: Our results indicate that heart sound analysis can be used as an alternative modality for assessing the cardiovascular burden of sleep apnea, which may indicate the risk of cardiovascular disorders.
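The heart-sound extraction step can be illustrated, under assumed parameters, with a generic band-pass filter followed by a smoothed Hilbert envelope; the band edges, smoothing window, and synthetic signal below are illustrative choices, not the study's actual processing chain.

```python
# Hedged sketch: band-pass filtering plus a smoothed Hilbert envelope,
# a generic version of the heart-sound extraction step described above.
# The synthetic signal, band edges (20-150 Hz) and smoothing window are
# illustrative assumptions, not the study's parameters.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic "tracheal" signal: periodic 50 Hz bursts plus broadband noise.
sig = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)
sig += 0.2 * np.random.default_rng(0).normal(size=t.size)

# Band-pass filter to the assumed heart-sound band.
b, a = butter(4, [20, 150], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, sig)

# Hilbert envelope, then smooth with a short moving average (50 ms).
envelope = np.abs(hilbert(filtered))
win = int(0.05 * fs)
smooth_env = np.convolve(envelope, np.ones(win) / win, mode="same")

print("peak envelope amplitude:", float(smooth_env.max()))
```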
Affiliation(s)
- Muammar M Kabir
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Institute for Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Atousa Assadi
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Institute for Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Temerty Center for AI Research and Education in Medicine, University of Toronto, Toronto, ON, Canada
- Shumit Saha
- Department of Biomedical Data Science, School of Applied Computational Sciences (SACS), Meharry Medical College, Nashville, TN, USA
- Bojan Gavrilovic
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Kaiyin Zhu
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Susanna Mak
- Department of Medicine, Division of Cardiology, University of Toronto, Toronto, ON, Canada
- Azadeh Yadollahi
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Institute for Biomedical Engineering, University of Toronto, Toronto, ON, Canada
4. Hong J, Tran HH, Jung J, Jang H, Lee D, Yoon IY, Hong JK, Kim JW. End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices. Nat Sci Sleep 2022;14:1187-1201. PMID: 35783665. PMCID: PMC9241996. DOI: 10.2147/nss.s361270.
Abstract
PURPOSE: Nocturnal sounds carry abundant information and are easily obtainable in a non-contact manner. Sleep staging using nocturnal sounds recorded from common mobile devices may allow daily at-home sleep tracking. The objective of this study was to introduce an end-to-end (sound-to-sleep-stages) deep learning model for sound-based sleep staging designed to work with audio from microphone chips, which are essential in mobile devices such as modern smartphones. PATIENTS AND METHODS: Two different audio datasets were used: audio data routinely recorded by a solitary microphone chip during polysomnography (PSG dataset, N=1154) and audio data recorded by a smartphone (smartphone dataset, N=327). The audio was converted into Mel spectrograms to detect latent temporal-frequency patterns of breathing and body movement within the ambient noise. The proposed neural network model learns to first extract features from each 30-second epoch and then analyze inter-epoch relationships among the extracted features to classify the epochs into sleep stages. RESULTS: Our model achieved 70% epoch-by-epoch agreement for 4-class (wake, light, deep, REM) sleep stage classification and robust performance across various signal-to-noise conditions. Model performance was not considerably affected by sleep apnea or periodic limb movement. External validation with the smartphone dataset also showed 68% epoch-by-epoch agreement. CONCLUSION: The proposed end-to-end deep learning model shows the potential of low-quality sounds recorded by microphone chips to be utilized for sleep staging. Future studies using nocturnal sounds recorded by mobile devices in the home environment may further confirm the use of mobile-device recordings as an at-home sleep tracker.
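A minimal sketch of the audio front end described here, assuming librosa and illustrative settings (sample rate, hop length, number of Mel bins): it slices audio into 30-second epochs and converts each to a log-Mel spectrogram, without reproducing the downstream network.

```python
# Hedged sketch: slicing audio into 30-second epochs and converting each
# to a log-Mel spectrogram, the front-end step described above. Sample
# rate, hop length and n_mels are illustrative choices, not the paper's.
import numpy as np
import librosa

sr = 16000
audio = np.random.default_rng(0).normal(size=sr * 90).astype(np.float32)  # 90 s of noise

epoch_len = 30 * sr
epochs = [audio[i:i + epoch_len] for i in range(0, len(audio), epoch_len)]

features = []
for epoch in epochs:
    mel = librosa.feature.melspectrogram(
        y=epoch, sr=sr, n_fft=1024, hop_length=512, n_mels=64
    )
    features.append(librosa.power_to_db(mel, ref=np.max))

print(len(features), "epochs,", features[0].shape, "Mel bins x frames each")
```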
Affiliation(s)
- Joonki Hong
- Asleep Inc., Seoul, Korea; Korea Advanced Institute of Science and Technology, Daejeon, Korea
- In-Young Yoon
- Department of Psychiatry, Seoul National University Bundang Hospital, Seongnam, Korea; Seoul National University College of Medicine, Seoul, Korea
- Jung Kyung Hong
- Department of Psychiatry, Seoul National University Bundang Hospital, Seongnam, Korea; Seoul National University College of Medicine, Seoul, Korea
- Jeong-Whun Kim
- Seoul National University College of Medicine, Seoul, Korea; Department of Otorhinolaryngology, Seoul National University Bundang Hospital, Seongnam, Korea
5. Skovgaard EL, Pedersen J, Møller NC, Grøntved A, Brønd JC. Manual Annotation of Time in Bed Using Free-Living Recordings of Accelerometry Data. Sensors 2021;21(24):8442. PMID: 34960533. PMCID: PMC8707394. DOI: 10.3390/s21248442.
Abstract
With the emergence of machine learning for the classification of sleep and other human behaviors from accelerometer data, the need for correctly annotated data is higher than ever. We present and evaluate a novel method for the manual annotation of in-bed periods in accelerometer data using the open-source software Audacity®, and we compare the method with the EEG-based sleep monitoring device Zmachine® Insight+ and self-reported sleep diaries. To evaluate the manual annotation method, we calculated inter- and intra-rater agreement and agreement with Zmachine and sleep diaries using intraclass correlation coefficients and Bland-Altman analysis. Our results showed excellent inter- and intra-rater agreement and excellent agreement with Zmachine and sleep diaries. The Bland-Altman limits of agreement were generally around ±30 min for the comparison between the manual annotation and the Zmachine timestamps for the in-bed period. Moreover, the mean bias was minuscule. We conclude that the manual annotation method presented here is a viable option for annotating in-bed periods in accelerometer data, which can help qualify datasets that lack labels or sleep records.
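A brief sketch of the Bland-Altman agreement computation referred to above (mean bias and ±1.96 SD limits of agreement), applied to synthetic in-bed durations rather than the study's data:

```python
# Hedged sketch: Bland-Altman bias and limits of agreement between two
# measurements of in-bed time (e.g. manual annotation vs. a reference).
# The minutes below are synthetic; this is not the study's data.
import numpy as np

rng = np.random.default_rng(0)
manual = rng.normal(480, 45, size=50)            # annotated in-bed minutes
reference = manual + rng.normal(2, 15, size=50)  # reference device minutes

diff = manual - reference
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)

print(f"bias = {bias:.1f} min, limits of agreement = "
      f"[{loa_low:.1f}, {loa_high:.1f}] min")
```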
6. Yue H, Lin Y, Wu Y, Wang Y, Li Y, Guo X, Huang Y, Wen W, Zhao G, Pang X, Lei W. Deep Learning for Diagnosis and Classification of Obstructive Sleep Apnea: A Nasal Airflow-Based Multi-Resolution Residual Network. Nat Sci Sleep 2021;13:361-373. PMID: 33737850. PMCID: PMC7966385. DOI: 10.2147/nss.s297856.
Abstract
PURPOSE: This study evaluated a novel approach for the diagnosis and classification of obstructive sleep apnea (OSA), called the Obstructive Sleep Apnea Smart System (OSASS), using residual networks and single-channel nasal pressure airflow signals. METHODS: Data were collected from the sleep center of the First Affiliated Hospital, Sun Yat-sen University, and the Integrative Department of Guangdong Province Traditional Chinese Medical Hospital. We developed a new model, the multi-resolution residual network (Mr-ResNet), based on a residual network to automatically analyze nasal pressure airflow signals recorded by polysomnography (PSG). The performance of the model was assessed by its sensitivity, specificity, accuracy, and F1-score. We built OSASS based on Mr-ResNet to estimate the apnea-hypopnea index (AHI) and to classify the severity of OSA, and we compared the agreement between the OSASS output and the registered polysomnographic technologist (RPSGT) scores assigned by two technologists. RESULTS: In the primary test set, the sensitivity, specificity, accuracy, and F1-score of Mr-ResNet were 90.8%, 90.5%, 91.2%, and 90.5%, respectively. In the independent test set, the Spearman correlations for AHI between OSASS and the RPSGT scores determined by the two technologists were 0.94 (p < 0.001) and 0.96 (p < 0.001), respectively. Cohen's kappa scores for classification between OSASS and the two technologists' scores were 0.81 and 0.84, respectively. CONCLUSION: Our results indicate that OSASS can automatically diagnose and classify OSA using single-channel nasal pressure airflow signals, consistent with polysomnographic technologists' findings. Thus, OSASS holds promise for clinical application.
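Loosely in the spirit of a multi-resolution residual network, the sketch below defines a generic 1-D residual block with parallel convolution branches at different kernel sizes applied to an airflow-like signal; it is an assumption-laden illustration, not the authors' Mr-ResNet architecture.

```python
# Hedged sketch: a generic 1-D residual block with parallel convolution
# branches at different kernel sizes ("multi-resolution"), applied to a
# single-channel airflow-like signal. Illustration only, not Mr-ResNet.
import torch
import torch.nn as nn

class MultiResResBlock1d(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(channels, channels, k, padding=k // 2),
                nn.BatchNorm1d(channels),
                nn.ReLU(),
            )
            for k in kernel_sizes
        ])
        self.merge = nn.Conv1d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x):
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.merge(out) + x)  # residual connection

# Toy input: batch of 4 single-channel airflow segments, 3000 samples each,
# first projected to 16 channels by a stem convolution.
stem = nn.Conv1d(1, 16, kernel_size=7, padding=3)
block = MultiResResBlock1d(16)
x = torch.randn(4, 1, 3000)
print(block(stem(x)).shape)  # torch.Size([4, 16, 3000])
```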
Affiliation(s)
- Huijun Yue
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, People's Republic of China
- Yu Lin
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, People's Republic of China
- Yitao Wu
- School of Computer Science, South China Normal University, Guangzhou, 510631, People's Republic of China
- Yongquan Wang
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, People's Republic of China
- Yun Li
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, People's Republic of China
- Xueqin Guo
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, People's Republic of China
- Ying Huang
- Guangdong Province Traditional Chinese Medical Hospital, Guangzhou, 510000, People's Republic of China
- Weiping Wen
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, People's Republic of China
- Gansen Zhao
- School of Computer Science, South China Normal University, Guangzhou, 510631, People's Republic of China
- Xiongwen Pang
- School of Computer Science, South China Normal University, Guangzhou, 510631, People's Republic of China
- Wenbin Lei
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, People's Republic of China