1. Qiu X, Wang C, Li B, Tong H, Tan X, Yang L, Tao J, Huang J. An audio-semantic multimodal model for automatic obstructive sleep Apnea-Hypopnea Syndrome classification via multi-feature analysis of snoring sounds. Front Neurosci 2024; 18:1336307. PMID: 38800571; PMCID: PMC11116639; DOI: 10.3389/fnins.2024.1336307.
Abstract
Introduction Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) is a common sleep-related breathing disorder that significantly impacts the daily lives of patients. Currently, the diagnosis of OSAHS relies on various physiological signal monitoring devices, requiring a comprehensive Polysomnography (PSG). However, this invasive diagnostic method faces challenges such as data fluctuation and high costs. To address these challenges, we propose a novel data-driven Audio-Semantic Multi-Modal model for OSAHS severity classification (i.e., ASMM-OSA) based on patient snoring sound characteristics. Methods In light of the correlation between the acoustic attributes of a patient's snoring patterns and their episodes of breathing disorders, we utilize the patient's sleep audio recordings as an initial screening modality. We analyze the audio features of snoring sounds during the night for subjects suspected of having OSAHS. Audio features were augmented via PubMedBERT to enrich their diversity and detail and subsequently classified for OSAHS severity using XGBoost based on the number of sleep apnea events. Results Experimental results using the OSAHS dataset from a collaborative university hospital demonstrate that our ASMM-OSA audio-semantic multimodal model achieves diagnostic-level performance in automatically identifying sleep apnea events and classifying the four-class severity (normal, mild, moderate, and severe) of OSAHS. Discussion Our proposed model offers new perspectives for non-invasive OSAHS diagnosis, potentially reducing costs and enhancing patient quality of life.
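The four-class severity grading described in this abstract is conventionally derived from the apnea-hypopnea index (AHI, events per hour of sleep). A minimal sketch of such a threshold rule, using the widely cited clinical cutoffs (AHI < 5, 5-15, 15-30, ≥ 30), which may differ from the paper's exact criteria:

```python
def osahs_severity(ahi: float) -> str:
    """Map an apnea-hypopnea index (events/h) to a four-class severity label.

    Thresholds follow the commonly used clinical convention; the cited paper's
    exact cutoffs are not given here, so treat these as illustrative.
    """
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"
```

Any classifier trained on "number of sleep apnea events", as in the abstract, ultimately reproduces a labeling of this shape.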
Affiliation(s)
- Xihe Qiu: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China
- Chenghao Wang: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China
- Bin Li: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China
- Huijie Tong: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China
- Xiaoyu Tan: INF Technology (Shanghai) Co., Ltd., Shanghai, China
- Long Yang: Department of Otolaryngology, Shenzhen Second People's Hospital, Shenzhen, China
- Jing Tao: Department of Otolaryngology, Shenzhen Second People's Hospital, Shenzhen, China
2. Li R, Li W, Yue K, Zhang R, Li Y. Automatic snoring detection using a hybrid 1D-2D convolutional neural network. Sci Rep 2023; 13:14009. PMID: 37640790; PMCID: PMC10462688; DOI: 10.1038/s41598-023-41170-w.
Abstract
Snoring, as a prevalent symptom, seriously interferes with the quality of life of simple snorers (patients with sleep-disordered breathing only), patients with obstructive sleep apnea (OSA), and their bed partners. Research has shown that snoring can be used for the screening and diagnosis of OSA, so accurate detection of snoring sounds in nocturnal sleep respiratory audio is an essential first step. Because snoring is often dangerously overlooked, an automatic, high-precision snoring detection algorithm is needed. In this work, we designed non-contact data acquisition equipment to record the nocturnal sleep respiratory audio of subjects in their private bedrooms, and proposed a hybrid convolutional neural network (CNN) model for automatic snore detection. The model consists of a one-dimensional (1D) CNN that processes the original signal and a two-dimensional (2D) CNN that processes images obtained by mapping the signal through the visibility graph method. In our experiments, the algorithm achieves an average classification accuracy of 89.3%, an average sensitivity of 89.7%, an average specificity of 88.5%, and an average AUC of 0.947, surpassing several state-of-the-art models trained on our data. In conclusion, our results indicate that the proposed method could be effective for large-scale screening of OSA patients in daily life, and our work provides an alternative framework for time series analysis.
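The visibility graph mapping used by the 2D branch turns a time series into a graph: two samples are connected when the straight line between them clears every sample in between, and the adjacency matrix can then be fed to an image model. A minimal sketch of the natural visibility graph (the standard construction; the paper's exact variant is not specified here):

```python
import numpy as np

def natural_visibility_adjacency(y):
    """Adjacency matrix of the natural visibility graph of a 1-D series.

    Samples i and j are connected if every intermediate sample k lies strictly
    below the straight line joining (i, y[i]) and (j, y[j]).
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    adj = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        for j in range(i + 1, n):
            # Visibility test: check every sample between i and j.
            visible = all(
                y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                adj[i, j] = adj[j, i] = 1
    return adj
```

The naive construction above is O(n³); faster divide-and-conquer variants exist, but this version is enough to see what the 2D CNN consumes.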
Affiliation(s)
- Ruixue Li: Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
- Wenjun Li: Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
- Keqiang Yue: Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
- Rulin Zhang: Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
- Yilin Li: Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
3. Liu Y, Zhang E, Jia X, Wu Y, Liu J, Brewer LM, Yu L. Tracheal sound-based apnea detection using hidden Markov model in sedated volunteers and post anesthesia care unit patients. J Clin Monit Comput 2023; 37:1061-1070. PMID: 37140851; DOI: 10.1007/s10877-023-01015-3.
Abstract
Current methods of apnea detection based on tracheal sounds are limited in certain situations. In this work, a segmentation-based Hidden Markov Model (HMM) algorithm is used to classify tracheal sounds into respiratory and non-respiratory states for the purpose of apnea detection. Three groups of tracheal sounds were used: two collected in the laboratory and one from patients in the post-anesthesia care unit (PACU). One group was used for model training, and the other two (a laboratory test group and a clinical test group) were used for testing and apnea detection. The trained HMMs were used to segment the tracheal sounds in the laboratory and clinical test data. Apnea was detected from the segmentation results together with the respiratory flow rate/pressure, which served as the reference signal in both test groups, and the sensitivity, specificity, and accuracy were calculated. For the laboratory test data, apnea detection sensitivity, specificity, and accuracy were 96.9%, 95.5%, and 95.7%, respectively; for the clinical test data, they were 83.1%, 99.0%, and 98.6%. Apnea detection based on tracheal sounds using an HMM is accurate and reliable for sedated volunteers and patients in the PACU.
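The reported figures are the standard confusion-matrix metrics; as a quick reference, they follow from the event counts as below (a generic sketch, not the authors' code):

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts,
    the three statistics reported for the apnea detector (e.g. 96.9% / 95.5% / 95.7%)."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```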
Affiliation(s)
- Yang Liu: Department of Stomatology, The Fourth Affiliated Hospital of China Medical University, Shenyang, Liaoning, People's Republic of China
- Erpeng Zhang: Department of Biomedical Engineering, School of Intelligent Medicine, China Medical University, No. 77, Puhe Road, Shenyang North New Area, Shenyang, 110122, Liaoning, People's Republic of China
- Xiuzhu Jia: Department of Biomedical Engineering, School of Intelligent Medicine, China Medical University, No. 77, Puhe Road, Shenyang North New Area, Shenyang, 110122, Liaoning, People's Republic of China
- Yanan Wu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, People's Republic of China
- Jing Liu: Department of Nuclear Medicine, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, People's Republic of China
- Lara M Brewer: Department of Anesthesiology, University of Utah, Salt Lake City, Utah, USA
- Lu Yu: Department of Biomedical Engineering, School of Intelligent Medicine, China Medical University, No. 77, Puhe Road, Shenyang North New Area, Shenyang, 110122, Liaoning, People's Republic of China
4. Bandyopadhyay A, Goldstein C. Clinical applications of artificial intelligence in sleep medicine: a sleep clinician's perspective. Sleep Breath 2023; 27:39-55. PMID: 35262853; PMCID: PMC8904207; DOI: 10.1007/s11325-022-02592-4.
Abstract
BACKGROUND The past few years have seen a rapid emergence of artificial intelligence (AI)-enabled technology in the field of sleep medicine. AI refers to the capability of computer systems to perform tasks conventionally considered to require human intelligence, such as speech recognition, decision-making, and visual recognition of patterns and objects. Sleep tracking and the measurement of physiological signals during sleep are widely practiced. Sleep monitoring in both laboratory and ambulatory environments therefore accrues massive amounts of data, which uniquely positions the field of sleep medicine to gain from AI. METHOD The purpose of this article is to provide a concise overview of relevant terminology, definitions, and use cases of AI in sleep medicine, supplemented by a thorough review of the relevant published literature. RESULTS Artificial intelligence has several applications in sleep medicine, including sleep and respiratory event scoring in the sleep laboratory, diagnosing and managing sleep disorders, and population health. While still in its nascent stage, several challenges preclude AI's generalizability and wide-reaching clinical applications. Overcoming these challenges will help integrate AI seamlessly within sleep medicine and augment clinical practice. CONCLUSION Artificial intelligence is a powerful tool in healthcare that may improve patient care, enhance diagnostic abilities, and augment the management of sleep disorders. However, there is a need to regulate and standardize existing machine learning algorithms prior to their inclusion in the sleep clinic.
Affiliation(s)
- Anuja Bandyopadhyay: Department of Pediatrics, Indiana University School of Medicine, Indianapolis, IN, USA
- Cathy Goldstein: Department of Neurology, University of Michigan, Ann Arbor, MI, USA
5. Kao HH, Lin YC, Chiang JK, Yu HC, Wang CL, Kao YH. Dependable algorithm for visualizing snoring duration through acoustic analysis: A pilot study. Medicine (Baltimore) 2022; 101:e32538. PMID: 36595844; PMCID: PMC9794359; DOI: 10.1097/md.0000000000032538.
Abstract
Snoring is a nuisance for the bed partners of people who snore and is also associated with chronic diseases. Estimating the snoring duration over a whole night of sleep is challenging. The authors present a dependable algorithm for visualizing snoring durations through acoustic analysis. Both instruments (a Sony digital recorder and a smartphone running the SnoreClock app) were placed within 30 cm of the examinee's head during the sleep period. Spectrograms were then plotted from the audio files recorded by the Sony recorder, and the authors developed an algorithm to validate snoring durations through visualization of typical snoring segments. In total, 37 snoring recordings obtained from 6 individuals were analyzed; the mean age of the participants was 44.6 ± 9.9 years. Every recorded file was divided into regular 600-second segments and plotted. Visualization revealed that the typical features of clustered snores in the amplitude domain were near-isometric spikes (most with an ascending-descending trend). The recorded snores exhibited one or more visibly fixed frequency bands. Intervals noted between the snoring clusters were incorporated into the whole-night snoring calculation. The correlation coefficients of snoring rates from the digitally recorded files between Examiners A and B were higher (0.865, P < .001) than those between the SnoreClock app and each examiner (0.757 and 0.787, both P < .001). A dependable algorithm with high reproducibility was developed for visualizing snoring durations.
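The inter-rater agreement of the snoring rates is reported as correlation coefficients; a plain Pearson correlation over two raters' per-recording snoring rates reproduces that kind of statistic (a generic sketch, not the study's analysis script):

```python
import numpy as np

def pearson_r(a, b) -> float:
    """Pearson correlation coefficient between two raters' snoring-rate series,
    the agreement statistic reported in the study (e.g. r = 0.865)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    # Normalized dot product of the centered series.
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```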
Affiliation(s)
- Hsueh-Hsin Kao: Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Laboratory Medicine, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Jui-Kun Chiang: Department of Family Medicine, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi, Taiwan
- Chun-Lung Wang: School of Medicine, Tzu Chi University, Hualien, Taiwan; Division of Pediatrics, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Dalin, Chiayi, Taiwan
- Yee-Hsin Kao: Department of Family Medicine, Tainan Municipal Hospital (Managed by Show Chwan Medical Care Corporation), Tainan, Taiwan
- Correspondence: Yee-Hsin Kao, 670 Chung Te Road, Tainan, 70173 Taiwan
6. Mouth Closing to Improve the Efficacy of Mandibular Advancement Devices in Sleep Apnea. Ann Am Thorac Soc 2022; 19:1185-1192. PMID: 35254967; DOI: 10.1513/annalsats.202109-1050oc.
Abstract
RATIONALE Mouth breathing increases upper airway collapsibility, decreasing the efficacy of obstructive sleep apnea (OSA) treatments. We hypothesized that mandibular advancement devices (MAD) increase mouth breathing and, thus, that using an adhesive mouthpiece (AMT) to prevent mouth breathing in combination with a MAD can improve treatment efficacy. OBJECTIVES To evaluate the efficacy of MAD + AMT in comparison with MAD alone. METHODS A prospective crossover pilot study was designed to test this hypothesis. Briefly, adult participants with an apnea-hypopnea index (AHI) between 10 and 50 events/h at the screening visit were randomized to no treatment (baseline), MAD treatment, AMT treatment, and MAD + AMT treatment. As the primary analysis, absolute AHI was compared between the MAD and MAD + AMT arms. Secondary analyses included quantifying the percent change in AHI, the percentages of complete (AHI < 5 events/h) and incomplete (AHI 5-10 events/h) responders, and the efficacy of AMT alone in comparison with the other treatment arms. RESULTS A total of 21 participants were included (baseline AHI = 24.3 ± 9.9 events/h). The median [interquartile range, IQR] AHI in the MAD and MAD + AMT arms was 10.5 [5.4-19.6] events/h and 5.6 [2.2-11.7] events/h, respectively (p = 0.02). A total of 76% of individuals achieved an AHI < 10 events/h in the MAD + AMT arm vs. 43% in the MAD arm (p < 0.01). Finally, the observed effect was similar in moderate to severe OSA (AHI ≥ 15 events/h) in terms of absolute reduction and treatment responders, and AMT alone did not significantly reduce the AHI compared with baseline. CONCLUSION The combination of an adhesive mouthpiece and a MAD is more effective than MAD alone. These findings may help improve clinical decision-making in sleep apnea.
7. Ijaz A, Nabeel M, Masood U, Mahmood T, Hashmi MS, Posokhova I, Rizwan A, Imran A. Towards using cough for respiratory disease diagnosis by leveraging Artificial Intelligence: A survey. Inform Med Unlocked 2022. DOI: 10.1016/j.imu.2021.100832.
8. Korkalainen H, Nikkonen S, Kainulainen S, Dwivedi AK, Myllymaa S, Leppänen T, Töyräs J. Self-Applied Home Sleep Recordings: The Future of Sleep Medicine. Sleep Med Clin 2021; 16:545-556. PMID: 34711380; DOI: 10.1016/j.jsmc.2021.07.003.
Abstract
Sleep disorders form a massive global health burden and there is an increasing need for simple and cost-efficient sleep recording devices. Recent machine learning-based approaches have already achieved scoring accuracy of sleep recordings on par with manual scoring, even with reduced recording montages. Simple and inexpensive monitoring over multiple consecutive nights with automatic analysis could be the answer to overcome the substantial economic burden caused by poor sleep and enable more efficient initial diagnosis, treatment planning, and follow-up monitoring for individuals suffering from sleep disorders.
Affiliation(s)
- Henri Korkalainen: Department of Applied Physics, University of Eastern Finland, PO Box 1627, Kuopio 70211, Finland; Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
- Sami Nikkonen: Department of Applied Physics, University of Eastern Finland, PO Box 1627, Kuopio 70211, Finland; Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
- Samu Kainulainen: Department of Applied Physics, University of Eastern Finland, PO Box 1627, Kuopio 70211, Finland; Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
- Amit Krishna Dwivedi: Department of Applied Physics, University of Eastern Finland, PO Box 1627, Kuopio 70211, Finland; Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
- Sami Myllymaa: Department of Applied Physics, University of Eastern Finland, PO Box 1627, Kuopio 70211, Finland; Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland
- Timo Leppänen: Department of Applied Physics, University of Eastern Finland, PO Box 1627, Kuopio 70211, Finland; Diagnostic Imaging Center, Kuopio University Hospital, Kuopio, Finland; School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Juha Töyräs: Department of Applied Physics, University of Eastern Finland, PO Box 1627, Kuopio 70211, Finland; School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia; Science Service Center, Kuopio University Hospital, Kuopio, Finland
9. Montazeri Ghahjaverestan N, Saha S, Kabir M, Gavrilovic B, Zhu K, Yadollahi A. Sleep apnea severity based on estimated tidal volume and snoring features from tracheal signals. J Sleep Res 2021; 31:e13490. PMID: 34553793; DOI: 10.1111/jsr.13490.
Abstract
Sleep apnea can be characterized by reductions in the respiratory tidal volume. Previous studies showed that the tidal volume can be estimated from tracheal sounds and movements, together called tracheal signals. Additionally, tracheal sounds include the sounds of snoring, a common symptom of obstructive sleep apnea. This study investigates the feasibility of estimating the severity of sleep apnea, as quantified by the apnea/hypopnea index (AHI), using the estimated tidal volume and snoring sounds extracted from tracheal signals. Tracheal signals were recorded simultaneously with polysomnography (PSG). The tidal volume was estimated from tracheal signals, and reductions in the tidal volume were detected as potential respiratory events. Additionally, features related to snoring sounds were extracted, quantifying the variability, temporal clusters, and dominant frequency of snores. A step-wise regression model and a greedy search algorithm were used sequentially to select the optimal set of features to estimate the AHI and to classify participants into healthy individuals and patients with sleep apnea. Sixty-one participants with suspected sleep apnea (age: 51 ± 16 years, body mass index: 29.5 ± 6.4 kg/m², AHI: 20.2 ± 21.2 events/h) who were referred for a sleep test were recruited. The estimated AHI was strongly correlated with the polysomnography-based AHI (R² = 0.76, p < 0.001). The accuracy of detecting sleep apnea for the AHI cutoff of 15 events/h was 78.69% and 83.61% with and without using snore-related features. These findings suggest that acoustic estimation of airflow and snore-related features can provide a convenient and reliable method for sleep apnea screening.
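The AHI used throughout this abstract is simply the event count normalized by sleep time, with a screening decision at a chosen cutoff. A minimal sketch (the 15 events/h cutoff matches the one evaluated in the study; the function names are illustrative):

```python
def apnea_hypopnea_index(n_events: int, total_sleep_hours: float) -> float:
    """AHI = number of apnea/hypopnea events per hour of sleep."""
    return n_events / total_sleep_hours

def screen_positive(ahi: float, cutoff: float = 15.0) -> bool:
    """Binary sleep apnea screening decision at the given AHI cutoff."""
    return ahi >= cutoff
```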
Affiliation(s)
- Nasim Montazeri Ghahjaverestan: KITE, Toronto Rehabilitation Institute-University Health Network, Toronto, ON, Canada; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Shumit Saha: KITE, Toronto Rehabilitation Institute-University Health Network, Toronto, ON, Canada; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Muammar Kabir: KITE, Toronto Rehabilitation Institute-University Health Network, Toronto, ON, Canada; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Bojan Gavrilovic: KITE, Toronto Rehabilitation Institute-University Health Network, Toronto, ON, Canada; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Kaiyin Zhu: KITE, Toronto Rehabilitation Institute-University Health Network, Toronto, ON, Canada
- Azadeh Yadollahi: KITE, Toronto Rehabilitation Institute-University Health Network, Toronto, ON, Canada; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
10. Respiration Monitoring via Forcecardiography Sensors. Sensors 2021; 21:3996. PMID: 34207899; PMCID: PMC8228286; DOI: 10.3390/s21123996.
Abstract
In the last few decades, a number of wearable systems for respiration monitoring that help to significantly reduce patients’ discomfort and improve the reliability of measurements have been presented. A recent research trend in biosignal acquisition is focusing on the development of monolithic sensors for monitoring multiple vital signs, which could improve the simultaneous recording of different physiological data. This study presents a performance analysis of respiration monitoring performed via forcecardiography (FCG) sensors, as compared to ECG-derived respiration (EDR) and electroresistive respiration band (ERB), which was assumed as the reference. FCG is a novel technique that records the cardiac-induced vibrations of the chest wall via specific force sensors, which provide seismocardiogram-like information, along with a novel component that seems to be related to the ventricular volume variations. Simultaneous acquisitions were obtained from seven healthy subjects at rest, during both quiet breathing and forced respiration at higher and lower rates. The raw FCG sensor signals featured a large, low-frequency, respiratory component (R-FCG), in addition to the common FCG signal. Statistical analyses of R-FCG, EDR and ERB signals showed that FCG sensors ensure a more sensitive and precise detection of respiratory acts than EDR (sensitivity: 100% vs. 95.8%, positive predictive value: 98.9% vs. 92.5%), as well as a superior accuracy and precision in interbreath interval measurement (linear regression slopes and intercepts: 0.99, 0.026 s (R2 = 0.98) vs. 0.98, 0.11 s (R2 = 0.88), Bland–Altman limits of agreement: ±0.61 s vs. ±1.5 s). This study represents a first proof of concept for the simultaneous recording of respiration signals and forcecardiograms with a single, local, small, unobtrusive, cheap sensor. 
This would extend the scope of FCG to monitoring multiple vital signs, as well as to the analysis of cardiorespiratory interactions, also paving the way for the continuous, long-term monitoring of patients with heart and pulmonary diseases.
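The Bland-Altman limits of agreement quoted for the interbreath intervals (±0.61 s vs. ±1.5 s) follow the standard bias ± 1.96 SD construction over the pairwise differences; a generic sketch, not the authors' code:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman bias and 95% limits of agreement between two paired
    measurement series (e.g. interbreath intervals from two sensors)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # sample SD of the differences
    return bias, bias - half_width, bias + half_width
```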
11. Xie J, Aubert X, Long X, van Dijk J, Arsenali B, Fonseca P, Overeem S. Audio-based snore detection using deep neural networks. Comput Methods Programs Biomed 2021; 200:105917. PMID: 33434817; DOI: 10.1016/j.cmpb.2020.105917.
Abstract
BACKGROUND AND OBJECTIVE Snoring is a prevalent phenomenon. It may be benign, but it can also be a symptom of obstructive sleep apnea (OSA), a prevalent sleep disorder. Accurate detection of snoring may help with screening and diagnosis of OSA. METHODS We introduce a snore detection algorithm based on the combination of a convolutional neural network (CNN) and a recurrent neural network (RNN). We obtained audio recordings of 38 subjects referred to a clinical center for a sleep study. All subjects were recorded by a total of 5 microphones placed at strategic positions around the bed. The CNN was used to extract features from the sound spectrogram, while the RNN was used to process the sequential CNN output and to classify the audio events into snore and non-snore events. We also addressed the impact of microphone placement on the performance of the algorithm. RESULTS The algorithm achieved an accuracy of 95.3 ± 0.5%, a sensitivity of 92.2 ± 0.9%, and a specificity of 97.7 ± 0.4% over all microphones in snore detection on our data set of 18,412 sound events. The best accuracy (95.9%) was observed from the microphone placed about 70 cm above the subject's head and the worst (94.4%) from the microphone placed about 130 cm above the subject's head. CONCLUSION Our results suggest that our method detects snore events from audio recordings with high accuracy and that microphone placement does not have a major impact on detection performance.
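The CNN front end consumes a spectrogram of the audio. A minimal sketch of a log-magnitude spectrogram via a Hann-windowed short-time Fourier transform (the frame and hop sizes are illustrative, not the paper's):

```python
import numpy as np

def log_spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude spectrogram of a mono signal: Hann-windowed frames,
    one FFT per frame -- the 2-D input a CNN front end typically consumes."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag).T  # shape: (n_fft // 2 + 1, n_frames)
```

For a 1 kHz tone sampled at 8 kHz, the energy concentrates in bin 1000 / 8000 × 256 = 32, which makes the function easy to sanity-check.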
Affiliation(s)
- Jiali Xie: Biomedical Diagnostics Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Xavier Aubert: Biomedical Diagnostics Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Xi Long: Biomedical Diagnostics Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands; Philips Research, High Tech Campus, 5656 AE Eindhoven, The Netherlands
- Johannes van Dijk: Biomedical Diagnostics Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands; Sleep Medicine Center Kempenhaeghe, 5590 AB Heeze, The Netherlands
- Bruno Arsenali: Biomedical Diagnostics Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Pedro Fonseca: Biomedical Diagnostics Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands; Philips Research, High Tech Campus, 5656 AE Eindhoven, The Netherlands
- Sebastiaan Overeem: Biomedical Diagnostics Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands; Sleep Medicine Center Kempenhaeghe, 5590 AB Heeze, The Netherlands
12. Tabatabaei SAH, Fischer P, Schneider H, Koehler U, Gross V, Sohrabi K. Methods for Adventitious Respiratory Sound Analyzing Applications Based on Smartphones: A Survey. IEEE Rev Biomed Eng 2021; 14:98-115. PMID: 32746364; DOI: 10.1109/rbme.2020.3002970.
Abstract
Detection and classification of adventitious acoustic lung sounds play an important role in diagnosing, monitoring, and caring for patients with lung diseases. Such systems can be delivered on different platforms, including medical devices, standalone software, and smartphone applications. The ubiquity of smartphones and the widespread use of their applications make them an attractive platform for hosting detection and classification systems for adventitious lung sounds. In this paper, smartphone-based systems for the automatic detection and classification of adventitious lung sounds are surveyed. Such adventitious sounds include cough, wheeze, crackle, and snore; relevant sounds related to abnormal respiratory activities are considered as well. The methods are briefly described and the analysis algorithms, covering detection and/or classification of sound events, are explained. A summary of the main surveyed methods, together with the classification parameters and features used, is given for comparison. Existing challenges, open issues, and future trends are discussed as well.
13. Joyashiki T, Wada C. Validation of a Body-Conducted Sound Sensor for Respiratory Sound Monitoring and a Comparison with Several Sensors. Sensors 2020; 20:942. PMID: 32050716; PMCID: PMC7038963; DOI: 10.3390/s20030942.
Abstract
The ideal respiratory sound sensor exhibits high sensitivity, wide-band frequency characteristics, and excellent anti-noise properties. We investigated a body-conducted sound sensor (BCS) and verified its usefulness for respiratory sound monitoring by comparison with an air-coupled microphone (ACM) and an acceleration sensor (B & K 8001). We conducted four comparison experiments: (1) estimation by an equivalent circuit model of the sensors and measurement with a sensitivity evaluation system; (2) measurement of the tissue-borne-sensitivity-to-air-noise-sensitivity ratio (SRTA); (3) respiratory sound measurement through a simulator; and (4) actual respiratory sound measurement with human subjects. In (1), the simulated and measured values of all the sensors showed good agreement; the BCS demonstrated sensitivity about 10 dB higher than the ACM and higher sensitivity in the high-frequency segments than the 8001. In (2), the BCS showed a high SRTA in the 600-1000 Hz and 1200-2000 Hz frequency segments. In (3), the BCS detected wheezes in the high-frequency segments of the respiratory sound. Finally, in (4), the sensors showed characteristics in the high-frequency segments similar to those observed with the simulator, and typical breathing sound detection was possible. The BCS displayed higher sensitivity and better anti-noise properties in the high-frequency segments than the other sensors and is a useful respiratory sound sensor.
Affiliation(s)
- Takeshi Joyashiki
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan
- Department of Clinical Engineering, Saiseikai Yahata General Hospital, Harunomachi 5-9-27, Yahatahigasi-ku, Kitakyushu 805-0050, Japan
- Correspondence: ; Tel.: +81-93-695-6058
- Chikamune Wada
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan
|
14
|
Dan Y, Song Y, Wang D, Zhang F, Liu W, Lu X. Research on Snoring Recognition Algorithms. JOURNAL OF ROBOTICS AND MECHATRONICS 2019. [DOI: 10.20965/jrm.2019.p0070] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
A snoring recognition algorithm based on machine learning is proposed to recognize snoring effectively and precisely. To obtain a dataset, a speech endpoint detection algorithm and a Mel-frequency cepstral coefficient (MFCC) feature extraction algorithm are applied to the speech signal samples. The dataset is classified into snoring and non-snoring data (other speech signals) using support vector machines. Experimental results show that the algorithm recognizes snoring signals with a high accuracy rate of 97%, benefiting subsequent research and related engineering applications.
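The pipeline described above (feature extraction followed by SVM classification of snoring vs. non-snoring) can be sketched in a few lines. The snippet below is a minimal, self-contained illustration on synthetic audio: two crude spectral features (centroid and roll-off) stand in for MFCCs, and all signal parameters are invented for the demo, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sr = 8000  # sample rate in Hz (illustrative)

def spectral_features(signal):
    """Crude stand-ins for MFCC-style features: spectral centroid and 85% roll-off."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    cumulative = np.cumsum(spectrum)
    rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]
    return [centroid, rolloff]

def synth(label):
    """Synthetic 1-second clip: low-frequency harmonic 'snore' vs. broadband 'non-snore'."""
    t = np.arange(sr) / sr
    if label == 1:  # snore-like: low-frequency periodic signal with a harmonic
        x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
    else:           # non-snore: broadband noise as a proxy for other sounds
        x = rng.standard_normal(sr)
    return x + 0.1 * rng.standard_normal(sr)  # add a small noise floor

labels = [1] * 40 + [0] * 40
X = np.array([spectral_features(synth(y)) for y in labels])
y = np.array(labels)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
accuracy = clf.score(Xte, yte)
```

On real recordings one would of course segment the night-long signal first (the paper's endpoint-detection step) and use full MFCC vectors rather than two scalar features.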
|
15
|
A Bag of Wavelet Features for Snore Sound Classification. Ann Biomed Eng 2019; 47:1000-1011. [DOI: 10.1007/s10439-019-02217-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Accepted: 01/21/2019] [Indexed: 10/27/2022]
|
16
|
Niu J, Cai M, Shi Y, Ren S, Xu W, Gao W, Luo Z, Reinhardt JM. A Novel Method for Automatic Identification of Breathing State. Sci Rep 2019; 9:103. [PMID: 30643176 PMCID: PMC6331627 DOI: 10.1038/s41598-018-36454-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Accepted: 11/20/2018] [Indexed: 11/20/2022] Open
Abstract
Sputum deposition blocks the airways of patients and leads to blood oxygen desaturation. Medical staff must periodically check the breathing state of intubated patients. This process increases staff workload. In this paper, we describe a system designed to acquire respiratory sounds from intubated subjects, extract the audio features, and classify these sounds to detect the presence of sputum. Our method uses 13 features extracted from the time-frequency spectrum of the respiratory sounds. To test our system, 220 respiratory sound samples were collected. Half of the samples were collected from patients with sputum present, and the remainder were collected from patients with no sputum present. Testing was performed based on ten-fold cross-validation. In the ten-fold cross-validation experiment, the logistic classifier identified breath sounds with sputum present with a sensitivity of 93.36% and a specificity of 93.36%. The feature extraction and classification methods are useful and reliable for sputum detection. This approach differs from waveform research and can provide a better visualization of sputum conditions. The proposed system can be used in the ICU to inform medical staff when sputum is present in a patient's trachea.
Affiliation(s)
- Jinglong Niu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA, 52246, United States
- Maolin Cai
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Yan Shi
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Shuai Ren
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Weiqing Xu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Wei Gao
- Department of Respiration, Beijing Anzhen Hospital, Capital Medical University, Beijing, 100029, China
- Zujin Luo
- Department of Respiratory and Critical Care Medicine, Beijing Engineering Research Center of Respiratory and Critical Care Medicine, Beijing Institute of Respiratory Medicine, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, 100043, China
- Joseph M Reinhardt
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA, 52246, United States
|
17
|
Arsenali B, van Dijk J, Ouweltjes O, den Brinker B, Pevernagie D, Krijn R, van Gilst M, Overeem S. Recurrent Neural Network for Classification of Snoring and Non-Snoring Sound Events. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:328-331. [PMID: 30440404 DOI: 10.1109/embc.2018.8512251] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Obstructive sleep apnea (OSA) is a disorder that affects up to 38% of the western population. It is characterized by repetitive episodes of partial or complete collapse of the upper airway during sleep. These episodes are almost always accompanied by loud snoring. Questionnaires such as STOP-BANG exploit snoring to screen for OSA. However, they are not quantitative and thus do not exploit its full potential. A method for automatic detection of snoring in whole-night recordings is required to enable its quantitative evaluation. In this study, we propose such a method. The centerpiece of the proposed method is a recurrent neural network for modeling of sequential data with variable length. Mel-frequency cepstral coefficients, which were extracted from snoring and non-snoring sound events, were used as inputs to the proposed network. A total of 20 subjects referred for clinical sleep recording were also recorded by a microphone that was placed 70 cm from the top end of the bed. These recordings were used to assess the performance of the proposed method. For the detection of snoring events, our results show that the proposed method has an accuracy of 95%, sensitivity of 92%, and specificity of 98%. In conclusion, our results suggest that the proposed method may improve the process of snoring detection and, with that, the process of OSA screening. Follow-up clinical studies are required to confirm this potential.
|
18
|
Niu J, Shi Y, Cai M, Cao Z, Wang D, Zhang Z, Zhang XD. Detection of sputum by interpreting the time-frequency distribution of respiratory sound signal using image processing techniques. Bioinformatics 2018; 34:820-827. [PMID: 29040453 PMCID: PMC6192228 DOI: 10.1093/bioinformatics/btx652] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2017] [Revised: 07/25/2017] [Accepted: 10/12/2017] [Indexed: 11/14/2022] Open
Abstract
Motivation Sputum in the trachea is hard to expectorate and detect directly for patients who are unconscious, especially those in the Intensive Care Unit. Medical staff must regularly check the condition of sputum in the trachea. This is time-consuming and the necessary skills are difficult to acquire. Currently, there are few automatic approaches to serve as alternatives to this manual approach. Results We develop an automatic approach to diagnose the condition of the sputum. Our approach utilizes a system involving a medical device and quantitative analytic methods. In this approach, the time-frequency distribution of respiratory sound signals, determined from the spectrum, is treated as an image. Sputum detection is performed by interpreting the patterns in the image through preprocessing and feature extraction. In this study, 272 respiratory sound samples (145 sputum sound and 127 non-sputum sound samples) are collected from 12 patients. We apply leave-one-out cross-validation over the 12 patients to assess the performance of our approach. That is, out of the 12 patients, 11 are selected and their sound samples are used to predict the sound samples of the remaining patient. The results show that our automatic approach can classify the sputum condition at an accuracy rate of 83.5%. Availability and implementation The MATLAB codes and examples of datasets explored in this work are available at Bioinformatics online. Contact yesoyou@gmail.com or douglaszhang@umac.mo. Supplementary information Supplementary data are available at Bioinformatics online.
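The leave-one-patient-out evaluation described above is exactly what scikit-learn's `LeaveOneGroupOut` splitter implements: all samples from one patient form the test fold while the other patients' samples train the model, so no subject leaks between training and testing. Below is a minimal sketch; the 12-patient layout mirrors the study, but the feature values are synthetic and the classifier choice is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)

# Hypothetical dataset: 12 "patients", 20 sound samples each, 2 features per
# sample; class-1 samples are shifted so the two classes are separable.
n_patients, per_patient = 12, 20
groups = np.repeat(np.arange(n_patients), per_patient)  # patient ID per sample
y = rng.integers(0, 2, size=groups.size)
X = rng.standard_normal((groups.size, 2)) + 3.0 * y[:, None]

# One fold per patient: train on 11 patients, test on the held-out one.
logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
mean_acc = float(np.mean(scores))
```

Grouped splitting matters here: a plain shuffled k-fold would put samples from the same patient in both train and test folds and overstate the accuracy.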
Affiliation(s)
- Jinglong Niu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Beijing Engineering Research Center of Diagnosis and Treatment of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, Beijing, China
- Yan Shi
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Beijing Engineering Research Center of Diagnosis and Treatment of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, Beijing, China
- Faculty of Health Sciences, University of Macau, Taipa, Macau, China
- The State Key Laboratory of Fluid Power Transmission and Control, Zhejiang University, Hangzhou, China
- Maolin Cai
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Beijing Engineering Research Center of Diagnosis and Treatment of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, Beijing, China
- Zhixin Cao
- Beijing Engineering Research Center of Diagnosis and Treatment of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, Beijing, China
- Dandan Wang
- Faculty of Health Sciences, University of Macau, Taipa, Macau, China
- Zhaozhi Zhang
- Department of Statistical Science, Duke University, Durham, NC, USA
|
19
|
Detection of sleep breathing sound based on artificial neural network analysis. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2017.11.005] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
20
|
Kim T, Kim JW, Lee K. Detection of sleep disordered breathing severity using acoustic biomarker and machine learning techniques. Biomed Eng Online 2018; 17:16. [PMID: 29391025 PMCID: PMC5796501 DOI: 10.1186/s12938-018-0448-x] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Accepted: 01/17/2018] [Indexed: 11/18/2022] Open
Abstract
PURPOSE Breathing sounds during sleep are altered and characterized by various acoustic specificities in patients with sleep disordered breathing (SDB). This study aimed to identify acoustic biomarkers indicative of the severity of SDB by analyzing the breathing sounds collected from a large number of subjects during entire overnight sleep. METHODS The participants were patients who presented at a sleep center with snoring or cessation of breathing during sleep. They were subjected to full-night polysomnography (PSG) during which the breathing sound was recorded using a microphone. Then, audio features were extracted and a group of features differing significantly between different SDB severity groups was selected as a potential acoustic biomarker. To assess the validity of the acoustic biomarker, classification tasks were performed using several machine learning techniques. Based on the apnea-hypopnea index of the subjects, four-group classification and binary classification were performed. RESULTS Using tenfold cross validation, we achieved an accuracy of 88.3% in the four-group classification and an accuracy of 92.5% in the binary classification. Experimental evaluation demonstrated that the models trained on the proposed acoustic biomarkers can be used to estimate the severity of SDB. CONCLUSIONS Acoustic biomarkers may be useful to accurately predict the severity of SDB based on the patient's breathing sounds during sleep, without conducting attended full-night PSG. This study implies that any device with a microphone, such as a smartphone, could be potentially utilized outside specialized facilities as a screening tool for detecting SDB.
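Both the binary and the four-group classification tasks above are defined by apnea-hypopnea index (AHI) cutoffs. Assuming the standard clinical thresholds (AHI < 5 normal, 5–15 mild, 15–30 moderate, ≥ 30 severe, events per hour), the label mapping is just:

```python
def osa_severity(ahi):
    """Map an apnea-hypopnea index (events/hour) to the conventional
    four-class severity label using the standard clinical cutoffs."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

def osa_binary(ahi, cutoff=15):
    """Binary screening label; the cutoff is an assumption for illustration."""
    return "positive" if ahi >= cutoff else "negative"
```

For example, `osa_severity(22.4)` yields `"moderate"`. The acoustic-biomarker models in the study predict these labels directly from breathing sounds instead of from a PSG-derived AHI.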
Affiliation(s)
- Taehoon Kim
- Music and Audio Research Group, Graduate School of Convergence Science and Technology, Seoul National University, 1 Gwanak-ro, Seoul 08826, Republic of Korea
- Jeong-Whun Kim
- Department of Otorhinolaryngology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Gumi-ro, Seongnam 13620, Republic of Korea
- Kyogu Lee
- Music and Audio Research Group, Graduate School of Convergence Science and Technology, Seoul National University, 1 Gwanak-ro, Seoul 08826, Republic of Korea
|
21
|
Guo J, Qian K, Zhang G, Xu H, Schuller B. Accelerating Biomedical Signal Processing Using GPU: A Case Study of Snore Sound Feature Extraction. Interdiscip Sci 2017; 9:550-555. [PMID: 28948531 DOI: 10.1007/s12539-017-0232-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2016] [Revised: 04/05/2017] [Accepted: 04/17/2017] [Indexed: 11/25/2022]
Abstract
The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised healthcare. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With growing volumes of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. Those features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject, 17.20 GB in total). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times compared with the previous CPU system.
Affiliation(s)
- Jian Guo
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Kun Qian
- Department of Electrical and Computer Engineering, MISP Group, MMK, Technische Universität München, Munich, Germany
- Gongxuan Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Huijie Xu
- Department of Otolaryngology, Beijing Hospital, Beijing, China
- Björn Schuller
- Department of Computing, Machine Learning Group, Imperial College London, London, UK
|
22
|
Çavuşoğlu M, Poets CF, Urschitz MS. Acoustics of snoring and automatic snore sound detection in children. Physiol Meas 2017; 38:1919-1938. [PMID: 28871074 DOI: 10.1088/1361-6579/aa8a39] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Acoustic analyses of snoring sounds have been used to objectively assess snoring and applied to various clinical problems in adult patients. Such studies require highly automated tools to analyze the sound recordings of the whole night's sleep, in order to extract clinically relevant snore-related statistics. The existing techniques and software used for adults are not efficiently applicable to snoring sounds in children, mainly because of different acoustic signal properties. In this paper, we present a broad range of acoustic characteristics of snoring sounds in children (N = 38) in comparison with adult (N = 30) patients. APPROACH Acoustic characteristics of the signals were calculated, including frequency domain representations, spectrogram-based characteristics, spectral envelope analysis, formant structures, and loudness of the snoring sounds. MAIN RESULTS We observed significant differences in spectral features, formant structures, and loudness of the snoring signals of children compared with adults, which may arise from the diversity of the upper airway anatomy as the principal determinant of the snore sound generation mechanism. Furthermore, based on the specific audio features of snoring children, we propose a novel algorithm for the automatic detection of snoring sounds from ambient acoustic data specifically in a pediatric population. The respiratory sounds were recorded using a pair of microphones and a multi-channel data acquisition system simultaneously with full-night polysomnography during sleep. Brief sound chunks of 0.5 s were classified as either belonging to a snoring event or not with a multi-layer perceptron, which was trained in a supervised fashion using stochastic gradient descent on a large hand-labeled dataset using frequency domain features.
SIGNIFICANCE The method proposed here has been used to extract snore-related statistics that can be calculated from the detected snore episodes for the whole night's sleep, including number of snore episodes (total snoring time), ratio of snore to whole sleep time, variation of snoring rate, regularity of snoring episodes in time and amplitude and snore loudness. These statistics will ultimately serve as a clinical tool providing information for the objective evaluation of snoring for several clinical applications.
Affiliation(s)
- M Çavuşoğlu
- Institute for Biomedical Engineering, ETH Zurich, Gloriastr. 35, 8092 Zurich, Switzerland
|
23
|
Shabtai NR, Zigel Y. Spatial acoustic radiation of respiratory sounds for sleep evaluation. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 142:1291. [PMID: 28964100 DOI: 10.1121/1.4999319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Body posture affects sleep quality and breathing disorders, and recognizing it is therefore important for a complete sleep evaluation. Since humans have a directional acoustic radiation pattern, it is hypothesized that microphone arrays can be used to recognize different body postures, which is highly practical for sleep evaluation applications that already measure respiratory sounds using distant microphones. Furthermore, body posture may affect distant microphone measurements; hence, the measurement can be compensated if the body posture is correctly recognized. A spherical harmonics decomposition approach to the spatial acoustic radiation is presented, assuming an array of eight microphones in a medium-sized audiology booth. The spatial sampling and reconstruction of the radiation pattern are discussed, and a final setup for the microphone array is recommended. A case study is shown using recorded segments of snoring and breathing sounds of three human subjects in three body postures in a silent but not anechoic audiology booth.
Affiliation(s)
- Noam R Shabtai
- Department of Biomedical Engineering, Faculty of Engineering Sciences, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva 8410501, Israel
- Yaniv Zigel
- Department of Biomedical Engineering, Faculty of Engineering Sciences, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva 8410501, Israel
|
24
|
Swarnkar VR, Abeyratne UR, Sharan RV. Automatic picking of snore events from overnight breath sound recordings. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:2822-2825. [PMID: 29060485 DOI: 10.1109/embc.2017.8037444] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Snoring is one of the earliest symptoms of Obstructive Sleep Apnea (OSA). However, the lack of an objective snore definition is a major obstacle to developing automated snore analysis systems for OSA screening. The objective of this paper is to develop a method to identify and extract snore sounds from a continuous sound recording, following an objective definition of snore that is independent of snore loudness. Nocturnal sounds from 34 subjects were recorded using a non-contact microphone and a computerized data-acquisition system. Sound data were divided into non-overlapping training (n = 21) and testing (n = 13) datasets. Using the training dataset, an Artificial Neural Network (ANN) classifier was trained for snore and non-snore classification. Snore sounds were defined based on the key observation that sounds perceived as 'snores' by humans are characterized by repetitive packets of energy that are responsible for creating the vibratory sound peculiar to snorers. On the testing dataset, the accuracy of the ANN classifier ranged between 86% and 89%. Our results indicate that it is possible to define snoring using loudness-independent, objective criteria, and to develop automated snore identification and extraction algorithms.
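A loudness-independent criterion of the kind described, repetitive packets of energy, can be approximated by normalizing the waveform, computing a short-time energy envelope, and testing the envelope's autocorrelation for a strong periodic peak. The sketch below is an illustrative reconstruction under assumed parameters (frame size, 0.5 threshold), not the authors' algorithm.

```python
import numpy as np

def has_repetitive_packets(x, frame=256, threshold=0.5):
    """Loudness-independent test for repetitive energy packets: normalize,
    build a short-time energy envelope, and look for a strong periodic
    peak in the envelope's autocorrelation."""
    x = x / (np.max(np.abs(x)) + 1e-12)               # discard absolute loudness
    n = len(x) // frame
    env = np.square(x[: n * frame]).reshape(n, frame).sum(axis=1)
    env = env - env.mean()                            # zero-mean envelope
    ac = np.correlate(env, env, mode="full")[n - 1:]  # non-negative lags
    ac = ac / (ac[0] + 1e-12)                         # normalize by zero-lag energy
    # Ignore lag 0 and very long lags where the overlap is too short.
    return float(np.max(ac[1 : n // 2])) > threshold

# Synthetic demo: 150 Hz tone gated on/off at 4 Hz ("packets") vs. white noise.
sr = 8000
t = np.arange(2 * sr) / sr
snore_like = np.sin(2 * np.pi * 150 * t) * (np.sin(2 * np.pi * 4 * t) > 0)
noise = np.random.default_rng(2).standard_normal(2 * sr)
```

Because the waveform is normalized before the envelope is computed, scaling a recording up or down does not change the decision, which is the loudness-independence property the paper's definition aims for.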
|
25
|
Perez-Macias JM, Adavanne S, Viik J, Varri A, Himanen SL, Tenhunen M. Assessment of support vector machines and convolutional neural networks to detect snoring using Emfit mattress. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:2883-2886. [PMID: 29060500 DOI: 10.1109/embc.2017.8037459] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Snoring (SN) is an essential feature of sleep breathing disorders such as obstructive sleep apnea (OSA). In this study, we evaluate epoch-based snoring detection methods using an unobtrusive electromechanical film transducer (Emfit) mattress sensor, with polysomnography recordings as a reference. Two approaches were investigated: a support vector machine (SVM) classifier fed with a subset of spectral features, and a convolutional neural network (CNN) fed with spectrograms. Representative 10-min normal breathing (NB) and SN periods were selected for analysis in 30 subjects and divided into thirty-second epochs. In the evaluation, average results over 10-fold Monte Carlo cross-validation with an 80% training and 20% test split are reported. The highest performance was achieved using the CNN, with 92% sensitivity, 96% specificity, 94% accuracy, and 0.983 area under the receiver operating characteristic curve (AROC). Results showed a 6% average performance increase of the CNN over the SVM, greater robustness, and performance similar to that of ambient microphones.
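The epoch-based front end above, splitting 10-min periods into 30-second epochs and computing one spectrogram per epoch for the CNN, can be sketched with plain NumPy. The sampling rate, frame, and hop sizes below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def epochs_to_spectrograms(x, sr=100, epoch_s=30, frame=64, hop=32):
    """Split a 1-D signal into fixed-length epochs and compute a windowed
    magnitude spectrogram per epoch (frame/hop sizes are illustrative)."""
    samples = epoch_s * sr
    n_epochs = len(x) // samples
    window = np.hanning(frame)
    specs = []
    for e in range(n_epochs):
        seg = x[e * samples : (e + 1) * samples]
        # Overlapping frames, then an FFT along each frame.
        frames = np.lib.stride_tricks.sliding_window_view(seg, frame)[::hop]
        specs.append(np.abs(np.fft.rfft(frames * window, axis=1)))
    return np.stack(specs)  # shape: (epochs, time frames, frequency bins)

# 10 minutes of synthetic mattress-sensor signal at 100 Hz.
x = np.random.default_rng(3).standard_normal(10 * 60 * 100)
S = epochs_to_spectrograms(x)
```

Each `S[e]` is then a small image that can be fed to a CNN, while per-epoch spectral summary statistics of the same frames would feed the SVM baseline.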
|
26
|
Erdenebayar U, Park JU, Jeong P, Lee KJ. Obstructive Sleep Apnea Screening Using a Piezo-Electric Sensor. J Korean Med Sci 2017; 32:893-899. [PMID: 28480645 PMCID: PMC5426252 DOI: 10.3346/jkms.2017.32.6.893] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/01/2016] [Accepted: 03/04/2017] [Indexed: 11/20/2022] Open
Abstract
In this study, we propose a novel method for obstructive sleep apnea (OSA) detection using a piezo-electric sensor. OSA is a relatively common sleep disorder; however, more than 80% of OSA patients remain undiagnosed. We investigated the feasibility of OSA assessment using a single-channel physiological signal to simplify OSA screening. We detected both snoring and heartbeat information using a piezo-electric sensor, and a snoring index (SI) and features based on pulse rate variability (PRV) analysis were extracted from the filtered sensor signal. A support vector machine (SVM) was used as a classifier to detect OSA events. The performance of the proposed method was evaluated on 45 patients from mild, moderate, and severe OSA groups. The method achieved mean sensitivity, specificity, and accuracy of 72.5%, 74.2%, and 71.5%; 85.8%, 80.5%, and 80.0%; and 70.3%, 77.1%, and 71.9% for the mild, moderate, and severe groups, respectively. These results not only show the feasibility of OSA detection using a piezo-electric sensor, but also illustrate its usefulness for monitoring sleep and diagnosing OSA.
Affiliation(s)
- Urtnasan Erdenebayar
- Department of Biomedical Engineering, School of Health Science, Yonsei University, Wonju, Korea
- Jong Uk Park
- Department of Biomedical Engineering, School of Health Science, Yonsei University, Wonju, Korea
- Pilsoo Jeong
- Department of Biomedical Engineering, School of Health Science, Yonsei University, Wonju, Korea
- Kyoung Joung Lee
- Department of Biomedical Engineering, School of Health Science, Yonsei University, Wonju, Korea
|
27
|
Mlynczak M, Migacz E, Migacz M, Kukwa W. Detecting Breathing and Snoring Episodes Using a Wireless Tracheal Sensor-A Feasibility Study. IEEE J Biomed Health Inform 2016; 21:1504-1510. [PMID: 27913363 DOI: 10.1109/jbhi.2016.2632976] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Sleep-disordered breathing is both a clinical and a social problem. This implies the need for convenient solutions to simplify screening and diagnosis. The aim of the study was to investigate the sensitivity and specificity of a novel wireless system in detecting breathing and snoring episodes during sleep. METHODS A wireless acoustic sensor was developed and implemented. Segmentation (based on spectral thresholding and heuristics) and classification of all breathing episodes during recording were implemented in a mobile application. The system was evaluated on 1520 manually labeled episodes registered from 40 real-world, whole-night recordings of 16 generally healthy subjects. RESULTS The differentiation between normal breathing and snoring had 88.8% accuracy. As the system is intended for screening, a high specificity of 95% is reported. CONCLUSION The system is a compromise between nonmedical phone applications and medical sleep studies. The presented approach makes the study repeatable, personal, and inexpensive. It has additional value in the form of well-recorded data that are reliable and comparable. SIGNIFICANCE The system opens unexplored possibilities in sleep monitoring, enabling a multinight recording strategy involving the collection and analysis of abundant data from thousands of people.
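The segmentation step, thresholding short-time energy and merging consecutive active frames into episodes, might look like the sketch below. This uses a plain energy threshold as a simple stand-in for the paper's spectral thresholding and heuristics; the frame length and -30 dB threshold are assumptions.

```python
import numpy as np

def segment_episodes(x, frame=400, threshold_db=-30.0):
    """Mark frames whose short-time energy exceeds a threshold relative to
    the loudest frame, then merge runs of active frames into
    (start_sample, end_sample) episodes."""
    n = len(x) // frame
    energy = np.square(x[: n * frame]).reshape(n, frame).mean(axis=1)
    level_db = 10.0 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
    active = level_db > threshold_db
    episodes, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i                                  # episode begins
        elif not on and start is not None:
            episodes.append((start * frame, i * frame))  # episode ends
            start = None
    if start is not None:                              # episode runs to the end
        episodes.append((start * frame, n * frame))
    return episodes

# Synthetic demo: 1 s silence, 1 s of a 200 Hz breathing-like tone, 1 s silence.
sr = 8000
silence = np.zeros(sr)
burst = 0.5 * np.sin(2 * np.pi * 200 * np.arange(sr) / sr)
x = np.concatenate([silence, burst, silence])
episodes = segment_episodes(x)
```

Each detected episode would then be passed to the breathing-vs-snoring classifier described in the abstract.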
|
28
|
Qian K, Janott C, Pandit V, Zhang Z, Heiser C, Hohenhorst W, Herzog M, Hemmert W, Schuller B. Classification of the Excitation Location of Snore Sounds in the Upper Airway by Acoustic Multifeature Analysis. IEEE Trans Biomed Eng 2016; 64:1731-1741. [PMID: 28113249 DOI: 10.1109/tbme.2016.2619675] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Obstructive sleep apnea (OSA) is a serious chronic disease and a risk factor for cardiovascular diseases. Snoring is a typical symptom of OSA patients. Knowledge of the origin of obstruction and vibration within the upper airways is essential for a targeted surgical approach. The aim of this paper is to systematically compare different acoustic features and classifiers for their performance in classifying the excitation location of snore sounds. METHODS Snore sounds from 40 male patients were recorded during drug-induced sleep endoscopy and categorized by Ear, Nose & Throat (ENT) experts. Crest factor, fundamental frequency, spectral frequency features, subband energy ratio, mel-scale frequency cepstral coefficients, empirical mode decomposition-based features, and wavelet energy features were extracted and fed into several classifiers. Using the ReliefF algorithm, features were ranked and the selected feature subsets were tested with the same classifiers. RESULTS A fusion of all features after a ReliefF feature selection step, in combination with a random forests classifier, showed the best classification result of 78% unweighted average recall in subject-independent validation. CONCLUSION Multifeature analysis is a promising means to help identify the anatomical mechanisms of snore sound generation in individual subjects. SIGNIFICANCE This paper describes a novel approach for the machine-based multifeature classification of the excitation location of snore sounds in the upper airway.
Affiliation(s)
- Kun Qian
- Machine Intelligence and Signal Processing Group, MMK, Technische Universität München, Munich, Germany
- Vedhas Pandit
- Chair of Complex and Intelligent Systems, University of Passau
- Zixing Zhang
- Chair of Complex and Intelligent Systems, University of Passau
- Clemens Heiser
- Department of Otorhinolaryngology/Head and Neck Surgery, Technische Universität München
- Michael Herzog
- Clinic for ENT Medicine, Head and Neck Surgery, Carl-Thiem-Klinikum Cottbus
- Werner Hemmert
- Institute for Medical Engineering, Technische Universität München
|
29
|
Shokrollahi M, Saha S, Hadi P, Rudzicz F, Yadollahi A. Snoring sound classification from respiratory signal. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2016:3215-3218. [PMID: 28268992 DOI: 10.1109/embc.2016.7591413] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Snoring is common in the general population, and irregular snoring may indicate obstructive sleep apnea (OSA). Diagnosis of OSA could therefore be supported by snoring sound analysis. However, there is still a shortage of robust methods to automatically detect snoring sounds without the need to calibrate for every individual. In this paper, a novel method based on a neural network is proposed to classify breathing sound episodes into snoring and non-snoring sound segments. Our snore detection algorithm was applied to the tracheal sounds of nine individuals with different OSA severities. On the testing dataset, the classifier achieved a sensitivity of 95.9% and a specificity of 97.6%. Our results indicate that such a method could help detect snoring sounds with high accuracy, which would be useful in the diagnosis of sleep apnea.
30
Song C, Liu K, Zhang X, Chen L, Xian X. An Obstructive Sleep Apnea Detection Approach Using a Discriminative Hidden Markov Model From ECG Signals. IEEE Trans Biomed Eng 2016; 63:1532-42. [DOI: 10.1109/tbme.2015.2498199] [Citation(s) in RCA: 103] [Impact Index Per Article: 12.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
31
Rosenwein T, Dafna E, Tarasiuk A, Zigel Y. Detection of breathing sounds during sleep using non-contact audio recordings. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2014:1489-92. [PMID: 25570251 DOI: 10.1109/embc.2014.6943883] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Evaluation of respiratory activity during sleep is essential in order to reliably diagnose sleep-disordered breathing (SDB), a condition associated with serious cardiovascular morbidity and mortality. In the current study, we developed and validated a robust automatic breathing-sounds (i.e., inspiratory and expiratory sounds) detection system for audio signals acquired during sleep. A random forest classifier was trained and tested using inspiratory/expiratory/noise events (episodes) acquired from 84 subjects consecutively and prospectively referred for SDB diagnosis, in a sleep laboratory and in at-home environments. More than 560,000 events were analyzed, spanning a variety of recording devices and environments. The system's overall accuracy is 88.8%, with accuracy of 91.2% and 83.6% in the in-laboratory and at-home environments, respectively, when classifying between inspiratory, expiratory, and noise classes. Here, we provide evidence that breathing sounds can be reliably detected using non-contact audio technology in an at-home environment. The proposed approach may improve our understanding of respiratory activity during sleep. This, in turn, will improve early SDB diagnosis and treatment.
32
Nonaka R, Emoto T, Abeyratne UR, Jinnouchi O, Kawata I, Ohnishi H, Akutagawa M, Konaka S, Kinouchi Y. Automatic snore sound extraction from sleep sound recordings via auditory image modeling. Biomed Signal Process Control 2016. [DOI: 10.1016/j.bspc.2015.12.009] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
33
Lee HK, Kim H, Lee KJ. Nasal pressure recordings for automatic snoring detection. Med Biol Eng Comput 2015; 53:1103-11. [PMID: 26392181 DOI: 10.1007/s11517-015-1388-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2013] [Accepted: 09/07/2015] [Indexed: 11/26/2022]
Abstract
This study presents a rule-based method for automated, real-time snoring detection using nasal pressure recordings during overnight sleep. Although nasal pressure recordings provide information regarding nocturnal breathing abnormalities in a polysomnography (PSG) study or continuous positive airway pressure (CPAP) system, an objective assessment of snoring detection using these nasal pressure recordings has not yet been reported in the literature. Nasal pressure recordings were obtained from 55 patients with obstructive sleep apnea. The PSG data were also recorded simultaneously to evaluate the proposed method. This rule-based method for automatic, real-time snoring detection employed preprocessing, short-time energy and the central difference method. Using this methodology, a sensitivity of 85.4% and a positive predictive value of 92.0% were achieved in all patients. Therefore, we concluded that the proposed method is a simple, portable and cost-effective tool for real-time snoring detection in PSG and CPAP systems that does not require acoustic analysis using a microphone.
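As an illustration of the short-time energy rule described in the abstract above, here is a minimal Python sketch. It is not the authors' implementation: the frame length, the mean-energy threshold, and the synthetic pressure trace are all assumptions for demonstration.

```python
import math

def short_time_energy(signal, frame_len):
    """Energy of consecutive non-overlapping frames."""
    return [
        sum(s * s for s in signal[i:i + frame_len])
        for i in range(0, len(signal) - frame_len + 1, frame_len)
    ]

def detect_snore_frames(signal, frame_len=100, threshold=None):
    """Flag frames whose short-time energy exceeds a threshold.

    If no threshold is given, the mean frame energy is used as a
    stand-in for the paper's tuned decision rule.
    """
    energy = short_time_energy(signal, frame_len)
    if threshold is None:
        threshold = sum(energy) / len(energy)
    return [e > threshold for e in energy]

# Synthetic "nasal pressure" trace: quiet breathing, then a loud burst.
quiet = [0.1 * math.sin(2 * math.pi * 5 * t / 1000) for t in range(500)]
burst = [1.0 * math.sin(2 * math.pi * 40 * t / 1000) for t in range(500)]
flags = detect_snore_frames(quiet + burst, frame_len=100)
print(flags)  # quiet frames False, burst frames True
```

A real detector would add the preprocessing and central-difference steps the paper mentions; this sketch only shows the energy-thresholding core.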
Affiliation(s)
- Hyo-Ki Lee
- Interdisciplinary Consortium on Advanced Motion Performance (iCAMP), Department of Surgery, College of Medicine, The University of Arizona, Tucson, AZ, 85724, USA
- Hojoong Kim
- Division of Pulmonary and Critical Care Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Kyoung-Joung Lee
- Department of Biomedical Engineering, Yonsei University, 1 Yonseidae-gil, Wonju-si, Gangwon-do, 26493, Republic of Korea
34
Hwang SH, Han CM, Yoon HN, Jung DW, Lee YJ, Jeong DU, Park KS. Polyvinylidene fluoride sensor-based method for unconstrained snoring detection. Physiol Meas 2015; 36:1399-414. [PMID: 26012381 DOI: 10.1088/0967-3334/36/7/1399] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
We established and tested a snoring detection method using a polyvinylidene fluoride (PVDF) sensor for accurate, fast, and motion-artifact-robust monitoring of snoring events during sleep. Twenty patients with obstructive sleep apnea participated in this study. The PVDF sensor was placed between the mattress cover and the mattress, and the patients' snoring signals were measured with the sensor, without constraining the patients, during polysomnography. The power ratio and peak frequency from the short-time Fourier transform were used to extract spectral features from the PVDF data. A support vector machine was applied to the spectral features to classify the data into either the snore or non-snore class. The performance of the method was assessed using manual labelling by three human observers as a reference. For event-by-event snoring detection, PVDF data that contained 'snoring' (SN), 'snoring with movement' (SM), and 'normal breathing' epochs were selected for each subject. The overall sensitivity and positive predictive value were 94.6% and 97.5%, respectively, with no significant difference between the SN and SM results. The proposed method can be applied in both residential and ambulatory snoring monitoring systems.
Affiliation(s)
- Su Hwan Hwang
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, Korea
35
Soltanzadeh R, Moussavi Z. Sleep Stage Detection Using Tracheal Breathing Sounds: A Pilot Study. Ann Biomed Eng 2015; 43:2530-7. [PMID: 25739951 DOI: 10.1007/s10439-015-1290-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2014] [Accepted: 02/24/2015] [Indexed: 10/23/2022]
Abstract
Sleep stage detection is needed in many sleep studies and clinical assessments. Generally, sleep stages are identified using spectral analysis of electroencephalogram (EEG) and electrooculogram (EOG) signals. This study investigated, for the first time, the feasibility of detecting sleep stages using tracheal breathing sounds, and whether the stage-related changes in breathing sounds differ across different periods of sleep time; the motivation was to seek an alternative technique for sleep stage identification. The tracheal breathing sounds of 12 individuals, who were referred for full overnight polysomnography (PSG) assessment, were recorded using a microphone placed over the suprasternal notch and analyzed using higher-order statistical analysis. Five noise- and snore-free breathing cycles from wakefulness, REM, and Stage II sleep were selected from each subject for analysis. Data for REM and Stage II were selected from the beginning, the middle, and close to the end of sleep time. The Hurst exponent was calculated from the bispectra of the inspiratory sounds of each subject at each sleep stage in different periods of sleep time. The participants' sleep stages were determined by sleep lab technologists during the PSG study using EEG and EOG signals. The results show separate, non-overlapping clusters for wakefulness, REM, and Stage II for each subject. Thus, using a simple linear classifier, we were able to classify REM and Stage II for each subject with 100% accuracy. In addition, the same pattern persisted as long as the REM and Stage II segments were close to each other in time (less than 3 h apart).
Affiliation(s)
- Ramin Soltanzadeh
- Biomedical Engineering Program, University of Manitoba, 75 Chancellor Circle, Winnipeg, MB, R3T 5V6, Canada
- Zahra Moussavi
- Biomedical Engineering Program, University of Manitoba, 75 Chancellor Circle, Winnipeg, MB, R3T 5V6, Canada
36
Chen L, Zhang X, Wang H. An Obstructive Sleep Apnea Detection Approach Using Kernel Density Classification Based on Single-Lead Electrocardiogram. J Med Syst 2015; 39:47. [DOI: 10.1007/s10916-015-0222-6] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2015] [Accepted: 01/26/2015] [Indexed: 10/23/2022]
37
Saha S, Taheri M, Mossuavi Z, Yadollahi A. Effects of changing in the neck circumference during sleep on snoring sound characteristics. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2015:2235-2238. [PMID: 26736736 DOI: 10.1109/embc.2015.7318836] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Rostral fluid shift during sleep from the lower body into the neck can increase neck circumference (NC) and narrow the upper airway. Such narrowing may increase the turbulence of airflow passing through the upper airway and thus induce snoring. The objective of this study was to investigate the effects of changes in NC during sleep on snoring sound characteristics. Fifteen non-obese men slept supine, and their sleep was monitored by regular polysomnography. Snoring sounds were recorded with a microphone attached to the neck. NC was measured before and after sleep with a measuring tape. The average power of the snoring sounds was calculated in the frequency ranges of 100-4000 Hz, 100-150 Hz, 150-450 Hz, 450-600 Hz, 600-1200 Hz, 1200-1800 Hz, 1800-2500 Hz, and 2500-4000 Hz. Statistical analysis showed that increases in NC after sleep were strongly correlated with higher average power of the snoring sounds in the frequency ranges of 100-4000 Hz (r=0.74, P=0.004), 100-150 Hz (r=0.70, P=0.008), 150-450 Hz (r=0.73, P=0.005), and 450-600 Hz (r=0.65, P=0.025). These results encourage the use of snoring sound analysis for monitoring the effects of fluid accumulation in the neck in relation to sleep apnea.
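The band-power and correlation analysis described above can be sketched in plain Python. This is illustrative only: the per-subject numbers are hypothetical, and the naive DFT stands in for whatever spectral estimator the authors actually used.

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average power in the [f_lo, f_hi] Hz band via a naive DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(x) ** 2 / n ** 2
    return power

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical per-subject data: overnight change in neck circumference
# (cm) vs. snore power in the 100-150 Hz band (arbitrary units).
delta_nc = [0.2, 0.5, 0.8, 1.1, 1.4]
snore_pw = [1.0, 2.5, 2.0, 4.5, 5.0]
print(round(pearson_r(delta_nc, snore_pw), 2))  # → 0.93
```

In practice one would use a windowed PSD estimate (e.g., Welch's method) rather than a raw DFT, but the band-integration and correlation steps are the same.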
38
Manfredi C, Dejonckere PH. Voice dosimetry and monitoring, with emphasis on professional voice diseases: Critical review and framework for future research. LOGOP PHONIATR VOCO 2014; 41:49-65. [PMID: 25530457 DOI: 10.3109/14015439.2014.970228] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Professional voice has become an important issue in the field of occupational health. Similarly, occupation-related voice diseases are gaining interest in insurance medicine, particularly within the framework of specific insurance systems for occupational diseases. Technological developments have made possible dosimetry of voice loading in the workplace, as well as long-term monitoring of relevant voice parameters during professional activities. A critical review is given, focusing on the specificity of occupational voice use and on the perspective of insurance medicine. Open questions and suggestions for future research are presented.
Affiliation(s)
- Claudia Manfredi
- Department of Information Engineering, Università degli Studi di Firenze, Via S. Marta, Firenze, Italy
- Philippe H Dejonckere
- Catholic University of Leuven, Neurosciences, Exp. ORL, Belgium; Federal Institute of Occupational Diseases, Brussels, Belgium
39
40
Qian K, Guo J, Xu H, Zhu Z, Zhang G. Snore related signals processing in a private cloud computing system. Interdiscip Sci 2014; 6:216-21. [PMID: 25205499 DOI: 10.1007/s12539-013-0203-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2013] [Revised: 10/07/2013] [Accepted: 02/09/2014] [Indexed: 10/24/2022]
Abstract
Snore related signals (SRS) have been shown in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. Making this acoustic analysis method more accurate and robust requires processing large volumes of SRS data. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to develop applications in both academia and industry, and it holds considerable promise for biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then compared the processing of a 5-hour audio recording of an OSAHS patient on a personal computer, a server, and the private cloud computing system to demonstrate the efficiency of the proposed infrastructure.
Affiliation(s)
- Kun Qian
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
41
Acoustic Estimation of Neck Fluid Volume. Ann Biomed Eng 2014; 42:2132-42. [DOI: 10.1007/s10439-014-1083-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2014] [Accepted: 07/31/2014] [Indexed: 01/06/2023]
42
43
Dafna E, Tarasiuk A, Zigel Y. Automatic detection of whole night snoring events using non-contact microphone. PLoS One 2013; 8:e84139. [PMID: 24391903 PMCID: PMC3877189 DOI: 10.1371/journal.pone.0084139] [Citation(s) in RCA: 76] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2013] [Accepted: 11/12/2013] [Indexed: 11/21/2022] Open
Abstract
Objective Although awareness of sleep disorders is increasing, limited information is available on whole night detection of snoring. Our study aimed to develop and validate a robust, high performance, and sensitive whole-night snore detector based on non-contact technology. Design Sounds during polysomnography (PSG) were recorded using a directional condenser microphone placed 1 m above the bed. An AdaBoost classifier was trained and validated on manually labeled snoring and non-snoring acoustic events. Patients Sixty-seven subjects (age 52.5±13.5 years, BMI 30.8±4.7 kg/m2, m/f 40/27) referred for PSG for obstructive sleep apnea diagnoses were prospectively and consecutively recruited. Twenty-five subjects were used for the design study; the validation study was blindly performed on the remaining forty-two subjects. Measurements and Results To train the proposed sound detector, >76,600 acoustic episodes collected in the design study were manually classified by three scorers into snore and non-snore episodes (e.g., bedding noise, coughing, environmental). A feature selection process was applied to select the most discriminative features extracted from time and spectral domains. The average snore/non-snore detection rate (accuracy) for the design group was 98.4% based on a ten-fold cross-validation technique. When tested on the validation group, the average detection rate was 98.2% with sensitivity of 98.0% (snore as a snore) and specificity of 98.3% (noise as noise). Conclusions Audio-based features extracted from time and spectral domains can accurately discriminate between snore and non-snore acoustic events. This audio analysis approach enables detection and analysis of snoring sounds from a full night in order to produce quantified measures for objective follow-up of patients.
Affiliation(s)
- Eliran Dafna
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer–Sheva, Israel
- Ariel Tarasiuk
- Sleep-Wake Disorders Unit, Soroka University Medical Center, and Department of Physiology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Israel
- Yaniv Zigel
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer–Sheva, Israel
44
Orlandi S, Dejonckere P, Schoentgen J, Lebacq J, Rruqja N, Manfredi C. Effective pre-processing of long term noisy audio recordings: An aid to clinical monitoring. Biomed Signal Process Control 2013. [DOI: 10.1016/j.bspc.2013.07.009] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
45
Lee HK, Lee J, Kim H, Ha JY, Lee KJ. Snoring detection using a piezo snoring sensor based on hidden Markov models. Physiol Meas 2013; 34:N41-9. [PMID: 23587724 DOI: 10.1088/0967-3334/34/5/n41] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
46
Snoring sounds variability as a signature of obstructive sleep apnea. Med Eng Phys 2013; 35:479-85. [DOI: 10.1016/j.medengphy.2012.06.013] [Citation(s) in RCA: 63] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2012] [Revised: 05/16/2012] [Accepted: 06/15/2012] [Indexed: 11/23/2022]
47
Azarbarzin A, Moussavi Z. A comparison between recording sites of snoring sounds in relation to upper airway obstruction. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2013; 2012:4246-9. [PMID: 23366865 DOI: 10.1109/embc.2012.6346904] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper presents the results of our study investigating the acoustical properties of snoring sounds (SS) recorded by two microphones (one over the trachea and one suspended in the air 30-50 cm from the subject) in relation to sleep apnea. Several features were extracted from the SS segments of 50 snorers with different Apnea-Hypopnea Indices (AHI). We used an optimal subset of the sound features to cluster the SS segments into two clusters (A and B). The number of SS segments in cluster A was then calculated and normalized by the total number of SS segments for each subject, yielding a 50 × 1 vector R. A correlation analysis was run between AHI and R. The results show a difference in the acoustical properties of tracheal and ambient snoring sounds and in their ability to distinguish two types of snoring; ambient snoring sounds are not as characteristic as tracheal snoring sounds.
Affiliation(s)
- Ali Azarbarzin
- Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, MB, Canada
48
Intra-subject variability of snoring sounds in relation to body position, sleep stage, and blood oxygen level. Med Biol Eng Comput 2012; 51:429-39. [DOI: 10.1007/s11517-012-1011-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2012] [Accepted: 11/30/2012] [Indexed: 10/27/2022]
49
Respiratory flow-sound relationship during both wakefulness and sleep and its variation in relation to sleep apnea. Ann Biomed Eng 2012; 41:537-46. [PMID: 23149903 DOI: 10.1007/s10439-012-0692-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2012] [Accepted: 11/01/2012] [Indexed: 10/27/2022]
Abstract
Tracheal respiratory sound analysis is a simple and non-invasive way to study the pathophysiology of the upper airway and has recently been used for acoustic estimation of respiratory flow and sleep apnea diagnosis. However, none of the previous studies examined the respiratory flow-sound relationship in people with obstructive sleep apnea (OSA), nor during sleep. In this study, we recorded tracheal sound, respiratory flow, and head position from eight non-OSA and 10 OSA individuals during sleep and wakefulness. We compared the flow-sound relationship and the variations in model parameters from wakefulness to sleep within and between the two groups. The results show that during both wakefulness and sleep, the flow-sound relationship follows a power law, but with different parameters. Furthermore, the variations in model parameters may be representative of OSA pathology. The other objective of this study was to examine the accuracy of respiratory flow estimation algorithms during sleep: we investigated two approaches for calibrating the model parameters using known data recorded during either wakefulness or sleep. The results show that the acoustical respiratory flow estimation parameters change from wakefulness to sleep. Therefore, if the model is calibrated using wakefulness data, the estimated respiratory flow follows the relative variations of the real flow, but the quantitative flow estimation error is high during sleep. On the other hand, when the calibration parameters are extracted from tracheal sound and respiratory flow recordings during sleep, the respiratory flow estimation error is less than 10%.
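The power-law calibration idea described above can be sketched as follows, assuming a model of the form sound = k * flow**alpha fitted in log-log space. The synthetic data and parameter values are illustrative, not taken from the paper.

```python
import math

def fit_power_law(flow, sound):
    """Least-squares fit of sound = k * flow**alpha in log-log space."""
    lx = [math.log(f) for f in flow]
    ly = [math.log(s) for s in sound]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    alpha = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    k = math.exp(my - alpha * mx)
    return k, alpha

def estimate_flow(sound, k, alpha):
    """Invert the calibrated model to estimate flow from sound power."""
    return (sound / k) ** (1.0 / alpha)

# Synthetic calibration data generated from k=2, alpha=3.
flow = [0.2, 0.4, 0.6, 0.8, 1.0]
sound = [2.0 * f ** 3 for f in flow]
k, alpha = fit_power_law(flow, sound)
print(round(k, 3), round(alpha, 3))  # recovers 2.0 and 3.0
est = estimate_flow(2.0 * 0.5 ** 3, k, alpha)
print(round(est, 3))  # → 0.5
```

The paper's key finding maps directly onto this sketch: if (k, alpha) are fitted from wakefulness data but applied to sleep recordings, where the true parameters differ, the inverted flow estimate is biased.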
50
Jané R, Fiz JA, Solà-Soler J, Mesquita J, Morera J. Snoring analysis for the screening of Sleep Apnea Hypopnea Syndrome with a single-channel device developed using polysomnographic and snoring databases. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2012; 2011:8331-3. [PMID: 22256278 DOI: 10.1109/iembs.2011.6092054] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Several studies have shown differences in acoustic snoring characteristics between patients with Sleep Apnea-Hypopnea Syndrome (SAHS) and simple snorers. Usually only a few manually isolated snores are analyzed, with an emphasis on post-apneic snores in SAHS patients. Automatic analysis of snores can provide objective information over a longer period of sleep. Although some snore detection methods have recently been proposed, they have not yet been applied to full-night analysis devices for screening purposes. We used a new automatic snoring detection and analysis system to monitor snoring during full-night studies, to assess whether the acoustic characteristics of snores differ in relation to the Apnea-Hypopnea Index (AHI), and to classify snoring subjects according to their AHI. A complete procedure for device development was designed, using databases with polysomnography (PSG) and snoring signals. This included annotation of many types of episodes by an expert physician: snores, inspiratory and expiratory breath sounds, speech, and noise artifacts. The AHI of each subject was estimated with classical PSG analysis as a gold standard. The system correctly classified 77% of subjects into 4 severity levels, based on snoring analysis and sound-based apnea detection. The sensitivity and specificity of the system in distinguishing healthy subjects from pathologic patients (mild to severe SAHS) were 83% and 100%, respectively. In addition, the Apnea Index (AI) obtained with the system correlated with that obtained by PSG or Respiratory Polygraphy (RP) (r=0.87, p<0.05).
Affiliation(s)
- Raimon Jané
- Dept. ESAII, Universitat Politècnica de Catalunya, Institut de Bioenginyeria de Catalunya and CIBER de Bioingeniería, Biomateriales y Nanomedicina, Baldiri Reixac 4, Torre I, 9th floor, 08028 Barcelona, Spain