1
Lauwers E, Stas T, McLane I, Snoeckx A, Van Hoorenbeeck K, De Backer W, Ides K, Steckel J, Verhulst S. Exploring the link between a novel approach for computer aided lung sound analysis and imaging biomarkers: a cross-sectional study. Respir Res 2024; 25:177. PMID: 38658980; PMCID: PMC11044477; DOI: 10.1186/s12931-024-02810-5.
Abstract
BACKGROUND Computer Aided Lung Sound Analysis (CALSA) aims to overcome limitations associated with standard lung auscultation by removing the subjective component and allowing quantification of sound characteristics. In this proof-of-concept study, a novel automated approach was evaluated in real patient data by comparing lung sound characteristics to structural and functional imaging biomarkers. METHODS Patients with cystic fibrosis (CF) aged >5 years were recruited in a prospective cross-sectional study. CT scans were analyzed by the CF-CT scoring method and Functional Respiratory Imaging (FRI). A digital stethoscope was used to record lung sounds at six chest locations. The following sound characteristics were determined: expiration-to-inspiration (E/I) signal power ratios within different frequency ranges, the number of crackles per respiratory phase, and wheeze parameters. Linear mixed-effects models were computed to relate CALSA parameters to imaging biomarkers on a lobar level. RESULTS 222 recordings from 25 CF patients were included. Significant associations were found between E/I ratios and structural abnormalities, of which the ratio between 200 and 400 Hz appeared to be most clinically relevant due to its relation with bronchiectasis, mucus plugging, bronchial wall thickening and air trapping on CT. The number of crackles was also associated with multiple structural abnormalities as well as regional airway resistance determined by FRI. Wheeze parameters were not considered in the statistical analysis, since wheezing was detected in only one recording. CONCLUSIONS The present study is the first to investigate associations between auscultatory findings and imaging biomarkers, which are considered the gold standard to evaluate the respiratory system.
Despite the exploratory nature of this study, the results showed various meaningful associations that highlight the potential value of automated CALSA as a novel non-invasive outcome measure in future research and clinical practice.
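As a rough illustration of the E/I power-ratio parameter described in this abstract, the band power of an expiratory and an inspiratory segment can be compared within the 200-400 Hz range. This is a minimal numpy/scipy sketch on synthetic data; the study's actual signal processing chain is not reproduced here, and the filter design is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(x, fs, lo, hi, order=4):
    """Mean power of x within the [lo, hi] Hz band (Butterworth band-pass)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    y = filtfilt(b, a, x)
    return np.mean(y ** 2)

def e_i_ratio(expiration, inspiration, fs, lo=200.0, hi=400.0):
    """Expiration-to-inspiration signal power ratio in one frequency band."""
    return band_power(expiration, fs, lo, hi) / band_power(inspiration, fs, lo, hi)

# Toy demo: an expiratory segment with extra 300 Hz content yields a ratio > 1.
fs = 4000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
insp = 0.1 * rng.standard_normal(fs)
exp_ = insp + 0.5 * np.sin(2 * np.pi * 300 * t)
print(e_i_ratio(exp_, insp, fs))
```

In the study such ratios were computed per frequency band and related to lobar imaging biomarkers via mixed-effects models; this sketch only shows the band-power arithmetic.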
Affiliation(s)
- Eline Lauwers
- Laboratory of Experimental Medicine and Pediatrics and member of Infla-Med Research Consortium of Excellence, University of Antwerp, Wilrijk, Belgium
- Fluidda NV, Kontich, Belgium
- Toon Stas
- CoSys-Lab Research Group, University of Antwerp and Flanders Make Strategic Research Center, Wilrijk, Lommel, Belgium
- Ian McLane
- Sonavi Labs, Baltimore, MD, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Annemiek Snoeckx
- Department of Radiology, Antwerp University Hospital, Edegem, Belgium
- Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Kim Van Hoorenbeeck
- Laboratory of Experimental Medicine and Pediatrics and member of Infla-Med Research Consortium of Excellence, University of Antwerp, Wilrijk, Belgium
- Department of Pediatrics, Antwerp University Hospital, Edegem, Belgium
- Wilfried De Backer
- Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Fluidda NV, Kontich, Belgium
- MedImprove BV, Kontich, Belgium
- Kris Ides
- Laboratory of Experimental Medicine and Pediatrics and member of Infla-Med Research Consortium of Excellence, University of Antwerp, Wilrijk, Belgium
- CoSys-Lab Research Group, University of Antwerp and Flanders Make Strategic Research Center, Wilrijk, Lommel, Belgium
- Department of Pediatrics, Antwerp University Hospital, Edegem, Belgium
- MedImprove BV, Kontich, Belgium
- Jan Steckel
- CoSys-Lab Research Group, University of Antwerp and Flanders Make Strategic Research Center, Wilrijk, Lommel, Belgium
- Stijn Verhulst
- Laboratory of Experimental Medicine and Pediatrics and member of Infla-Med Research Consortium of Excellence, University of Antwerp, Wilrijk, Belgium
- Department of Pediatrics, Antwerp University Hospital, Edegem, Belgium
2
Diab MS, Rodriguez-Villegas E. Feature evaluation of accelerometry signals for cough detection. Front Digit Health 2024; 6:1368574. PMID: 38585283; PMCID: PMC10995234; DOI: 10.3389/fdgth.2024.1368574.
Abstract
Cough is a common symptom of multiple respiratory diseases, such as asthma and chronic obstructive pulmonary disease. Various research works have targeted cough detection as a means for continuous monitoring of these respiratory health conditions. This has been mainly achieved using sophisticated machine learning or deep learning algorithms fed with audio recordings. In this work, we explore the use of an alternative detection method, since audio can generate privacy and security concerns related to the use of always-on microphones. This study proposes the use of a non-contact tri-axial accelerometer for motion detection to differentiate between cough and non-cough events/movements. A total of 43 time-domain features were extracted from the acquired tri-axial accelerometry signals. These features were evaluated and ranked for their importance using six methods with adjustable conditions, resulting in a total of 11 feature rankings. The ranking methods included model-based feature importance algorithms, first principal component, leave-one-out, permutation, and recursive feature elimination (RFE). The ranking results were further used to select the top 10, 20, and 30 features for use in cough detection. A total of 68 classification models using a simple logistic regression classifier are reported, using two approaches for data splitting: subject-record-split and leave-one-subject-out (LOSO). The best-performing model out of the 34 using subject-record-split obtained an accuracy of 92.20%, sensitivity of 90.87%, specificity of 93.52%, and F1 score of 92.09% using only 20 features selected by the RFE method. The best-performing model out of the 34 using LOSO obtained an accuracy of 89.57%, sensitivity of 85.71%, specificity of 93.43%, and F1 score of 88.72% using only 10 features selected by the RFE method. These results demonstrate the feasibility of a future motion-based wearable cough detector.
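The RFE-plus-logistic-regression pipeline described above can be sketched with scikit-learn. The synthetic 43-feature dataset below stands in for the real accelerometry features, which are not available here; only the selection mechanics are illustrated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the 43 time-domain accelerometry features.
X, y = make_classification(n_samples=300, n_features=43, n_informative=8,
                           random_state=0)

# RFE repeatedly fits the classifier and drops the weakest features
# (smallest coefficients) until only the requested number remains.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
selector.fit(X, y)
selected = np.flatnonzero(selector.support_)
print(len(selected))  # 10 surviving feature indices
```

For the LOSO evaluation the study reports, `sklearn.model_selection.LeaveOneGroupOut` with subject identifiers as groups would give an equivalent split.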
Affiliation(s)
- Maha S. Diab
- Wearable Technologies Lab, Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
3
Mang LD, González Martínez FD, Martinez Muñoz D, García Galán S, Cortina R. Classification of Adventitious Sounds Combining Cochleogram and Vision Transformers. Sensors (Basel) 2024; 24:682. PMID: 38276373; PMCID: PMC10818433; DOI: 10.3390/s24020682.
Abstract
Early identification of respiratory irregularities is critical for improving lung health and reducing global mortality rates. The analysis of respiratory sounds plays a significant role in characterizing the respiratory system's condition and identifying abnormalities. The main contribution of this study is to investigate the performance of the Vision Transformer (ViT) architecture when fed with cochleogram input data; to our knowledge, this is the first time this input-classifier combination has been applied to adventitious sound classification. Although ViT has shown promising results in audio classification tasks by applying self-attention to spectrogram patches, we extend this approach by applying the cochleogram, which captures specific spectro-temporal features of adventitious sounds. The proposed methodology is evaluated on the ICBHI dataset. We compare the classification performance of ViT with other state-of-the-art CNN approaches using the spectrogram, Mel frequency cepstral coefficients, constant-Q transform, and cochleogram as input data. Our results confirm the superior classification performance of combining the cochleogram and ViT, highlighting the potential of ViT for reliable respiratory sound classification. This study contributes to the ongoing efforts to develop automatic intelligent techniques that significantly increase the speed and effectiveness of respiratory disease detection, thereby addressing a critical need in the medical field.
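The tokenization step at the input of a ViT, splitting a 2-D time-frequency image such as a cochleogram into flattened non-overlapping patches, can be sketched as follows. This is illustrative only: the patch size and image dimensions are assumptions, and the cochleogram computation itself (a gammatone filterbank) is not reproduced.

```python
import numpy as np

def to_patches(tf_image, patch=16):
    """Split a 2-D time-frequency image into flattened non-overlapping
    patches: the tokenization step at the input of a ViT."""
    H, W = tf_image.shape
    H, W = H - H % patch, W - W % patch          # crop to a multiple of patch
    x = tf_image[:H, :W].reshape(H // patch, patch, W // patch, patch)
    return x.transpose(0, 2, 1, 3).reshape(-1, patch * patch)

coch = np.random.default_rng(1).random((64, 128))  # stand-in cochleogram
tokens = to_patches(coch)
print(tokens.shape)  # (32, 256): 4 x 8 patches, each 16*16 values
```

In a full ViT, each flattened patch would then be linearly projected and combined with a positional embedding before the self-attention layers.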
Affiliation(s)
- Loredana Daria Mang
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain
- Damian Martinez Muñoz
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain
- Sebastián García Galán
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain
- Raquel Cortina
- Department of Computer Science, University of Oviedo, 33003 Oviedo, Spain
4
Im S, Kim T, Min C, Kang S, Roh Y, Kim C, Kim M, Kim SH, Shim K, Koh JS, Han S, Lee J, Kim D, Kang D, Seo S. Real-time counting of wheezing events from lung sounds using deep learning algorithms: Implications for disease prediction and early intervention. PLoS One 2023; 18:e0294447. PMID: 37983213; PMCID: PMC10659186; DOI: 10.1371/journal.pone.0294447.
Abstract
This pioneering study aims to revolutionize self-symptom management and telemedicine-based remote monitoring through the development of a real-time wheeze counting algorithm. Leveraging a novel approach that labels one breathing cycle in detail with three types (break, normal, and wheeze), this study not only identifies abnormal sounds within each breath but also captures comprehensive data on their location, duration, and relationships within entire respiratory cycles, including atypical patterns. This strategy is based on a combination of a one-dimensional convolutional neural network (1D-CNN) and a long short-term memory (LSTM) network model, enabling real-time analysis of respiratory sounds. Notably, it stands out for its capacity to handle continuous data, distinguishing it from conventional lung sound classification algorithms. The study utilizes a substantial dataset consisting of 535 respiration cycles from diverse sources, including the Child Sim Lung Sound Simulator, the EMTprep Open-Source Database, clinical patient records, and the ICBHI 2017 Challenge Database. The model achieves a classification accuracy of 90%, identifying each breath cycle and simultaneously detecting abnormal sounds, which enables real-time wheeze counting across all respirations. This wheeze counter holds the promise of advancing research on predicting lung diseases from long-term breathing patterns and offers applicability in clinical and non-clinical settings for on-the-go detection and remote intervention of exacerbated respiratory symptoms.
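Given a stream of per-frame labels of the three types the paper uses (break, normal, wheeze), counting wheeze events reduces to counting runs of consecutive wheeze frames. The paper's exact post-processing is not specified here, so this pure-Python run counter is an assumption about how the counting step could work:

```python
def count_wheezes(frame_labels):
    """Count wheeze events in a stream of per-frame labels.

    A run of consecutive 'wheeze' frames is counted as one event, so
    the counter operates on a continuous stream rather than on
    pre-segmented clips.
    """
    events, in_wheeze = 0, False
    for lab in frame_labels:
        if lab == "wheeze" and not in_wheeze:
            events += 1
        in_wheeze = lab == "wheeze"
    return events

stream = ["break", "normal", "wheeze", "wheeze", "normal",
          "break", "wheeze", "normal", "wheeze", "wheeze"]
print(count_wheezes(stream))  # 3
```

In the paper, the per-frame labels themselves would come from the 1D-CNN + LSTM model; here they are supplied by hand.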
Affiliation(s)
- Sunghoon Im
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Taewi Kim
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Sanghun Kang
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Yeonwook Roh
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Changhwan Kim
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Minho Kim
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Seung Hyun Kim
- Department of Medical Humanities, Korea University College of Medicine, Seoul, Republic of Korea
- KyungMin Shim
- Industry-University Cooperation Foundation, Seogyeong University, Seoul, Republic of Korea
- Je-sung Koh
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- Seungyong Han
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- JaeWang Lee
- Department of Biomedical Laboratory Science, College of Health Science, Eulji University, Seongnam-si, Gyeonggi-do, Republic of Korea
- Dohyeong Kim
- University of Texas at Dallas, Richardson, TX, United States of America
- Daeshik Kang
- Department of Mechanical Engineering, Ajou University, Suwon-si, Gyeonggi-do, Republic of Korea
- SungChul Seo
- Department of Nano-Chemical, Biological and Environmental Engineering, Seogyeong University, Seoul, Republic of Korea
5
Zhou W, Yu L, Zhang M, Xiao W. A low power respiratory sound diagnosis processing unit based on LSTM for wearable health monitoring. Biomed Eng Biomed Tech 2023; 68:469-480. PMID: 37080905; DOI: 10.1515/bmt-2022-0421.
Abstract
Early prevention and detection of respiratory disease have attracted extensive attention due to the significant increase in people with respiratory issues. Restraining the spread and relieving the symptoms of these diseases is essential. However, the traditional auscultation technique demands a high level of medical skill, and computational respiratory sound analysis approaches are limited to constrained locations. A wearable auscultation device is required to monitor respiratory system health in real time and provide convenience to consumers. In this work, we developed a Respiratory Sound Diagnosis Processing Unit (RSDPU) based on Long Short-Term Memory (LSTM). Experiments and analyses were conducted on feature extraction and the abnormality diagnosis algorithm for respiratory sounds, and Dynamic Normalization Mapping (DNM) was proposed to better utilize quantization bits and lessen overfitting. Furthermore, we developed the hardware implementation of the RSDPU, including a corrector to filter diagnosis noise, and present FPGA prototyping verification and the layout of the RSDPU for power and area evaluation. Experimental results demonstrated that the RSDPU achieved an abnormality diagnosis accuracy of 81.4%, an area of 1.57 × 1.76 mm under the SMIC 130 nm process, and power consumption of 381.8 μW, meeting the requirements of high accuracy, low power consumption, and small area.
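The idea behind remapping each feature frame to its own dynamic range before quantization, so that a limited bit width is fully used, can be sketched as below. This is a hedged illustration of the concept; the paper's precise DNM definition may differ.

```python
import numpy as np

def dynamic_normalize_quantize(frames, bits=8):
    """Per-frame min-max normalization followed by uniform quantization.

    Each row (one feature frame) is rescaled to its own dynamic range so
    that every quantization level can be used, then rounded to integers.
    """
    levels = 2 ** bits - 1
    lo = frames.min(axis=1, keepdims=True)
    span = frames.max(axis=1, keepdims=True) - lo + 1e-12  # avoid divide-by-zero
    return np.round((frames - lo) / span * levels).astype(np.uint8)

x = np.random.default_rng(2).standard_normal((4, 40))  # 4 feature frames
q = dynamic_normalize_quantize(x)
print(q.min(), q.max())  # each frame now spans the full 0..255 range
```

Compared with one static scale for the whole signal, a per-frame mapping like this wastes fewer quantization levels on frames with small amplitudes, which is the motivation the abstract gives for DNM.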
Affiliation(s)
- Weixin Zhou
- Chinese Academy of Sciences, Institute of Semiconductors, Beijing, China
- Lina Yu
- Chinese Academy of Sciences, Institute of Semiconductors, Beijing, China
- Ming Zhang
- Chinese Academy of Sciences, Institute of Semiconductors, Beijing, China
- Wan'ang Xiao
- Chinese Academy of Sciences, Institute of Semiconductors, Beijing, China
6
Garcia-Mendez JP, Lal A, Herasevich S, Tekin A, Pinevich Y, Lipatov K, Wang HY, Qamar S, Ayala IN, Khapov I, Gerberi DJ, Diedrich D, Pickering BW, Herasevich V. Machine Learning for Automated Classification of Abnormal Lung Sounds Obtained from Public Databases: A Systematic Review. Bioengineering (Basel) 2023; 10:1155. PMID: 37892885; PMCID: PMC10604310; DOI: 10.3390/bioengineering10101155.
Abstract
Pulmonary auscultation is essential for detecting abnormal lung sounds during physical assessments, but its reliability depends on the operator. Machine learning (ML) models offer an alternative by automatically classifying lung sounds. ML models require substantial data, and public databases aim to address this limitation. This systematic review compares the characteristics, diagnostic accuracy, concerns, and data sources of existing models in the literature. Papers published between 1990 and 2022, retrieved from five major databases, were assessed. Quality assessment was accomplished with a modified QUADAS-2 tool. The review encompassed 62 studies utilizing ML models and public-access databases for lung sound classification. Artificial neural networks (ANN) and support vector machines (SVM) were frequently employed as ML classifiers. Accuracy ranged from 49.43% to 100% for discriminating abnormal sound types and from 69.40% to 99.62% for disease classification. Seventeen public databases were identified, with the ICBHI 2017 database being the most used (66%). The majority of studies exhibited a high risk of bias and concerns related to patient selection and reference standards. In summary, ML models can effectively classify abnormal lung sounds using publicly available data sources. Nevertheless, inconsistent reporting and methodologies limit progress in the field; public databases should therefore adhere to standardized recording and labeling procedures.
Affiliation(s)
- Juan P. Garcia-Mendez
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Amos Lal
- Department of Medicine, Division of Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Svetlana Herasevich
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Aysun Tekin
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Yuliya Pinevich
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Department of Cardiac Anesthesiology and Intensive Care, Republican Clinical Medical Center, 223052 Minsk, Belarus
- Kirill Lipatov
- Division of Pulmonary Medicine, Mayo Clinic Health Systems, Essentia Health, Duluth, MN 55805, USA
- Hsin-Yi Wang
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Department of Anesthesiology, Taipei Veterans General Hospital, National Yang Ming Chiao Tung University, Taipei 11217, Taiwan
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan 320317, Taiwan
- Shahraz Qamar
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Ivan N. Ayala
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Ivan Khapov
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Daniel Diedrich
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Brian W. Pickering
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
7
Sakama T, Ichinose M, Obara T, Shibata M, Kagawa T, Takakura H, Hirai K, Furuya H, Kato M, Mochizuki H. Effect of wheeze and lung function on lung sound parameters in children with asthma. Allergol Int 2023; 72:545-550. PMID: 36935346; DOI: 10.1016/j.alit.2023.03.001.
Abstract
BACKGROUND In children with asthma, there are many cases in which wheeze is confirmed by auscultation despite a normal lung function, or in which the lung function is decreased without wheeze. Using an objective lung sound analysis, we examined the effect of wheeze and the lung function on lung sound parameters in children with asthma. METHODS A total of 114 children with asthma (80 males, 34 females; median age 10 years) were analyzed for their lung sound parameters using conventional methods, and wheeze and the lung function were assessed. The effects of wheeze and the lung function on lung sound parameters were examined. RESULTS The patients with wheeze or a decreased forced expiratory volume in 1 s (FEV1) (% pred) showed a significantly higher sound power of respiration and expiration-to-inspiration sound power ratio (E/I) than those without wheeze and with a normal FEV1 (% pred). There was no marked difference in the sound power of respiration or E/I between the patients without wheeze but with a decreased FEV1 (% pred) and the patients with wheeze and a normal FEV1 (% pred). CONCLUSIONS Our data suggest that the bronchial constriction seen in asthmatic children with wheeze is similarly present in asthmatic children with a decreased lung function. Lung sound analysis is likely to enable an accurate understanding of airway conditions.
Affiliation(s)
- Takashi Sakama
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Mami Ichinose
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Takeru Obara
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Mayuko Shibata
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Takanori Kagawa
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Hiromitsu Takakura
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Kota Hirai
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Hiroyuki Furuya
- Department of Basic Clinical Science and Public Health, Tokai University School of Medicine, Kanagawa, Japan
- Masahiko Kato
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
- Hiroyuki Mochizuki
- Department of Pediatrics, Tokai University Hachioji Hospital, Tokyo, Japan; Department of Pediatrics, Tokai University School of Medicine, Kanagawa, Japan
8
Pessoa D, Rocha BM, Strodthoff C, Gomes M, Rodrigues G, Petmezas G, Cheimariotis GA, Kilintzis V, Kaimakamis E, Maglaveras N, Marques A, Frerichs I, Carvalho PD, Paiva RP. BRACETS: Bimodal repository of auscultation coupled with electrical impedance thoracic signals. Comput Methods Programs Biomed 2023; 240:107720. PMID: 37544061; DOI: 10.1016/j.cmpb.2023.107720.
Abstract
BACKGROUND AND OBJECTIVE Respiratory diseases are among the most significant causes of morbidity and mortality worldwide, placing substantial strain on society and health systems. Over the last few decades, there has been increasing interest in the automatic analysis of respiratory sounds and electrical impedance tomography (EIT). Nevertheless, no public database with both respiratory sound and EIT data exists. METHODS In this work, we have assembled the first open-access bimodal database focusing on the differential diagnosis of respiratory diseases (BRACETS: Bimodal Repository of Auscultation Coupled with Electrical Impedance Thoracic Signals). It includes simultaneous recordings of single- and multi-channel respiratory sounds and EIT. Furthermore, we have proposed several machine learning-based baseline systems for automatically classifying respiratory diseases in six distinct evaluation tasks using respiratory sound and EIT (A1, A2, A3, B1, B2, B3). These tasks included classifying respiratory diseases at the sample and subject levels. The performance of the classification models was evaluated using a 5-fold cross-validation scheme (with subject isolation between folds). RESULTS The resulting database consists of 1097 respiratory sounds and 795 EIT recordings acquired from 78 adult subjects in two countries (Portugal and Greece). In the task of automatically classifying respiratory diseases, the baseline classification models achieved the following average balanced accuracy: Task A1 - 77.9±13.1%; Task A2 - 51.6±9.7%; Task A3 - 38.6±13.1%; Task B1 - 90.0±22.4%; Task B2 - 61.4±11.8%; Task B3 - 50.8±10.6%. CONCLUSION The creation of this database and its public release will aid the research community in developing automated methodologies to assess and monitor respiratory function, and it might serve as a benchmark in the field of digital medicine for managing respiratory diseases. Moreover, it could pave the way for robust multi-modal approaches for the same purpose.
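The subject-isolated cross-validation described above (no subject's recordings appear on both sides of a split) corresponds to grouped K-fold splitting. A sketch with scikit-learn on toy data, where the subject counts and feature dimensions are invented for illustration:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# 20 recordings from 8 subjects; every fold must keep all recordings of
# a given subject on the same side of the split, as in the paper's
# 5-fold scheme with subject isolation between folds.
subjects = np.arange(20) % 8            # subject id for each recording
X = np.random.default_rng(3).random((20, 5))

for train_idx, test_idx in GroupKFold(n_splits=5).split(X, groups=subjects):
    # Verify isolation: no subject id appears in both partitions.
    assert not set(subjects[train_idx]) & set(subjects[test_idx])
print("subject isolation holds in every fold")
```

Plain `KFold` would let recordings of the same subject leak across the split and inflate the reported balanced accuracy, which is why grouped splitting matters for per-subject biomedical data.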
Affiliation(s)
- Diogo Pessoa
- University of Coimbra Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Bruno Machado Rocha
- University of Coimbra Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Claas Strodthoff
- Department of Anesthesiology and Intensive Care Medicine, University Medical Center Schleswig-Holstein Campus Kiel, Kiel 24105, Schleswig-Holstein, Germany
- Maria Gomes
- Lab3R - Respiratory Research and Rehabilitation Laboratory, School of Health Sciences (ESSUA), University of Aveiro, 3810-193 Aveiro, Portugal
- Guilherme Rodrigues
- Lab3R - Respiratory Research and Rehabilitation Laboratory, School of Health Sciences (ESSUA), University of Aveiro, 3810-193 Aveiro, Portugal
- Georgios Petmezas
- 2nd Department of Obstetrics and Gynaecology, The Medical School, 54124 Thessaloniki, Greece
- Vassilis Kilintzis
- 2nd Department of Obstetrics and Gynaecology, The Medical School, 54124 Thessaloniki, Greece
- Evangelos Kaimakamis
- 1st Intensive Care Unit, "G. Papanikolaou" General Hospital of Thessaloniki, 57010 Pilea Hortiatis, Greece
- Nicos Maglaveras
- 2nd Department of Obstetrics and Gynaecology, The Medical School, 54124 Thessaloniki, Greece
- Alda Marques
- Lab3R - Respiratory Research and Rehabilitation Laboratory, School of Health Sciences (ESSUA), University of Aveiro, 3810-193 Aveiro, Portugal; Institute of Biomedicine (iBiMED), University of Aveiro, 3810-193 Aveiro, Portugal
- Inéz Frerichs
- Department of Anesthesiology and Intensive Care Medicine, University Medical Center Schleswig-Holstein Campus Kiel, Kiel 24105, Schleswig-Holstein, Germany
- Paulo de Carvalho
- University of Coimbra Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Rui Pedro Paiva
- University of Coimbra Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
9
Abstract
Auscultation is crucial for the diagnosis of respiratory system diseases. However, traditional stethoscopes have inherent limitations, such as inter-listener variability and subjectivity, and they cannot record respiratory sounds for offline/retrospective diagnosis or remote prescriptions in telemedicine. The emergence of digital stethoscopes has overcome these limitations by allowing physicians to store and share respiratory sounds for consultation and education. On this basis, machine learning, particularly deep learning, enables the fully automatic analysis of lung sounds that may pave the way for intelligent stethoscopes. This review thus aims to provide a comprehensive overview of deep learning algorithms used for lung sound analysis and to emphasize the significance of artificial intelligence (AI) in this field. We focus on each component of deep learning-based lung sound analysis systems, including the task categories, public datasets, denoising methods, and, most importantly, existing deep learning methods, i.e., the state-of-the-art approaches that convert lung sounds into two-dimensional (2D) spectrograms and use convolutional neural networks for the end-to-end recognition of respiratory diseases or abnormal lung sounds. Additionally, this review highlights current challenges in the field, including the variety of devices, noise sensitivity, and poor interpretability of deep models. To address the poor reproducibility and the variety of deep learning methods in this field, this review also provides a scalable and flexible open-source framework that aims to standardize the algorithmic workflow and provide a solid basis for replication and future extension: https://github.com/contactless-healthcare/Deep-Learning-for-Lung-Sound-Analysis.
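The conversion step this review centers on, turning a 1-D lung sound into a 2-D spectrogram image that a CNN can consume, can be sketched with scipy. The signal and STFT parameters below are illustrative and not taken from any specific paper:

```python
import numpy as np
from scipy.signal import spectrogram

# One second of a synthetic 'lung sound': noise plus a 150 Hz component.
fs = 4000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 150 * t) \
    + 0.3 * np.random.default_rng(4).standard_normal(fs)

# STFT magnitude on a dB scale: the image-like 2-D input that the
# reviewed CNN pipelines consume end to end.
f, tt, S = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
img = 10 * np.log10(S + 1e-10)
print(img.shape)  # (frequency bins, time frames)
```

In practice log-mel spectrograms are a common variant of this representation; the principle (frequency on one axis, time on the other, energy as pixel intensity) is the same.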
Affiliation(s)
- Dong-Min Huang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Jia Huang
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Kun Qiao
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Nan-Shan Zhong
- Guangzhou Institute of Respiratory Health, China State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Hong-Zhou Lu
- The Third People's Hospital of Shenzhen, Shenzhen, 518112, Guangdong, China
- Wen-Jin Wang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
| |
Collapse
|
10
|
Kala A, McCollum ED, Elhilali M. Reference free auscultation quality metric and its trends. Biomed Signal Process Control 2023; 85:104852. [PMID: 38274002] [PMCID: PMC10809975] [DOI: 10.1016/j.bspc.2023.104852]
Abstract
Stethoscopes are used ubiquitously in clinical settings to 'listen' to lung sounds. The use of these systems in a variety of healthcare environments (hospitals, urgent care rooms, private offices, community sites, mobile clinics, etc.) presents a range of challenges in terms of ambient noise and distortions that mask lung signals from being heard clearly or processed accurately using auscultation devices. With advances in technology, computerized techniques have been developed to automate analysis or access a digital rendering of lung sounds. However, most approaches are developed and tested in controlled environments and do not reflect real-world conditions where auscultation signals are typically acquired. Without a priori access to a recording of the ambient noise (for signal-to-noise estimation) or a reference signal that reflects the true undistorted lung sound, it is difficult to evaluate the quality of the lung signal and its potential clinical interpretability. The current study proposes an objective reference-free Auscultation Quality Metric (AQM) which incorporates low-level signal attributes with high-level representational embeddings mapped to a nonlinear quality space to provide an independent evaluation of the auscultation quality. This metric is carefully designed to solely judge the signal based on its integrity relative to external distortions and masking effects and not confuse an adventitious breathing pattern as low-quality auscultation. The current study explores the robustness of the proposed AQM method across multiple clinical categorizations and different distortion types. It also evaluates the temporal sensitivity of this approach and its translational impact for deployment in digital auscultation devices.
Affiliation(s)
- Annapurna Kala
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Eric D. McCollum
- Global Program of Pediatric Respiratory Sciences, Eudowood Division of Pediatric Respiratory Sciences, Department of Pediatrics, Johns Hopkins School of Medicine, Baltimore, USA
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
11
Siebert JN, Hartley MA, Courvoisier DS, Salamin M, Robotham L, Doenz J, Barazzone-Argiroffo C, Gervaix A, Bridevaux PO. Deep learning diagnostic and severity-stratification for interstitial lung diseases and chronic obstructive pulmonary disease in digital lung auscultations and ultrasonography: clinical protocol for an observational case-control study. BMC Pulm Med 2023; 23:191. [PMID: 37264374] [DOI: 10.1186/s12890-022-02255-w]
Abstract
BACKGROUND Interstitial lung diseases (ILD), such as idiopathic pulmonary fibrosis (IPF) and non-specific interstitial pneumonia (NSIP), and chronic obstructive pulmonary disease (COPD) are severe, progressive pulmonary disorders with a poor prognosis. Prompt and accurate diagnosis is important to enable patients to receive appropriate care at the earliest possible stage to delay disease progression and prolong survival. Artificial intelligence-assisted lung auscultation and ultrasound (LUS) could constitute an alternative to conventional, subjective, operator-related methods for the accurate and earlier diagnosis of these diseases. This protocol describes the standardised collection of digitally-acquired lung sounds and LUS images of adult outpatients with IPF, NSIP or COPD and a deep learning diagnostic and severity-stratification approach. METHODS A total of 120 consecutive patients (≥ 18 years) meeting international criteria for IPF, NSIP or COPD and 40 age-matched controls will be recruited in a Swiss pulmonology outpatient clinic, starting from August 2022. At inclusion, demographic and clinical data will be collected. Lung auscultation will be recorded with a digital stethoscope at 10 thoracic sites in each patient and LUS images using a standard point-of-care device will be acquired at the same sites. A deep learning algorithm (DeepBreath) using convolutional neural networks, long short-term memory models, and transformer architectures will be trained on these audio recordings and LUS images to derive an automated diagnostic tool. The primary outcome is the diagnosis of ILD versus control subjects or COPD. Secondary outcomes are the clinical, functional and radiological characteristics of IPF, NSIP and COPD diagnosis. Quality of life will be measured with dedicated questionnaires. 
Based on previous work distinguishing normal from pathological lung sounds, we estimate that convergence with an area under the receiver operating characteristic curve of > 80% can be achieved with 40 patients in each category, yielding a sample size of 80 ILD (40 IPF, 40 NSIP), 40 COPD, and 40 controls. DISCUSSION This approach has broad potential to better guide care management by exploring the synergistic value of several point-of-care tests for the automated detection and differential diagnosis of ILD and COPD and to estimate severity. Trial registration ClinicalTrials.gov Identifier: NCT05318599; registered August 8, 2022.
Affiliation(s)
- Johan N Siebert
- Division of Paediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals, 47 Avenue de la Roseraie, 1211, Geneva 14, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Mary-Anne Hartley
- Machine Learning and Optimization (MLO) Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Delphine S Courvoisier
- Quality of Care Unit, Geneva University Hospitals, Geneva, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Marlène Salamin
- Division of Pulmonology, Hospital of Valais, Sion, Switzerland
- Laura Robotham
- Division of Pulmonology, Hospital of Valais, Sion, Switzerland
- Jonathan Doenz
- Machine Learning and Optimization (MLO) Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Constance Barazzone-Argiroffo
- Division of Paediatric Pulmonology, Department of Women, Child and Adolescent, Geneva University Hospitals, Geneva, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Alain Gervaix
- Division of Paediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals, 47 Avenue de la Roseraie, 1211, Geneva 14, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
12
Heitmann J, Glangetas A, Doenz J, Dervaux J, Shama DM, Garcia DH, Benissa MR, Cantais A, Perez A, Müller D, Chavdarova T, Ruchonnet-Metrailler I, Siebert JN, Lacroix L, Jaggi M, Gervaix A, Hartley MA. DeepBreath-automated detection of respiratory pathology from lung auscultation in 572 pediatric outpatients across 5 countries. NPJ Digit Med 2023; 6:104. [PMID: 37268730] [DOI: 10.1038/s41746-023-00838-3]
Abstract
The interpretation of lung auscultation is highly subjective and relies on non-specific nomenclature. Computer-aided analysis has the potential to better standardize and automate evaluation. We used 35.9 hours of auscultation audio from 572 pediatric outpatients to develop DeepBreath : a deep learning model identifying the audible signatures of acute respiratory illness in children. It comprises a convolutional neural network followed by a logistic regression classifier, aggregating estimates on recordings from eight thoracic sites into a single prediction at the patient-level. Patients were either healthy controls (29%) or had one of three acute respiratory illnesses (71%) including pneumonia, wheezing disorders (bronchitis/asthma), and bronchiolitis). To ensure objective estimates on model generalisability, DeepBreath is trained on patients from two countries (Switzerland, Brazil), and results are reported on an internal 5-fold cross-validation as well as externally validated (extval) on three other countries (Senegal, Cameroon, Morocco). DeepBreath differentiated healthy and pathological breathing with an Area Under the Receiver-Operator Characteristic (AUROC) of 0.93 (standard deviation [SD] ± 0.01 on internal validation). Similarly promising results were obtained for pneumonia (AUROC 0.75 ± 0.10), wheezing disorders (AUROC 0.91 ± 0.03), and bronchiolitis (AUROC 0.94 ± 0.02). Extval AUROCs were 0.89, 0.74, 0.74 and 0.87 respectively. All either matched or were significant improvements on a clinical baseline model using age and respiratory rate. Temporal attention showed clear alignment between model prediction and independently annotated respiratory cycles, providing evidence that DeepBreath extracts physiologically meaningful representations. DeepBreath provides a framework for interpretable deep learning to identify the objective audio signatures of respiratory pathology.
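The AUROCs reported above need no ML library to compute: the AUROC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one (the Mann-Whitney U statistic). A small sketch, with illustrative helper names that are not from the DeepBreath codebase:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs in which the positive
    receives the higher score (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of positives from negatives gives 1.0;
# indistinguishable scores give 0.5 (chance level).
perfect = auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])  # -> 1.0
chance = auroc([1, 0], [0.4, 0.4])                    # -> 0.5
```

This pairwise formulation is exact but quadratic in the number of samples; production implementations use a sort-based O(n log n) equivalent.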
Affiliation(s)
- Julien Heitmann
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Alban Glangetas
- Division of Pediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals (HUG), University of Geneva, Geneva, Switzerland
- Jonathan Doenz
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Juliane Dervaux
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Deeksha M Shama
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Daniel Hinjos Garcia
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Mohamed Rida Benissa
- Division of Pediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals (HUG), University of Geneva, Geneva, Switzerland
- Aymeric Cantais
- Pediatric Emergency Department, Hospital University of Saint Etienne, Saint Etienne, France
- Alexandre Perez
- Division of Pediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals (HUG), University of Geneva, Geneva, Switzerland
- Daniel Müller
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Tatjana Chavdarova
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Isabelle Ruchonnet-Metrailler
- Division of Pediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals (HUG), University of Geneva, Geneva, Switzerland
- Johan N Siebert
- Division of Pediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals (HUG), University of Geneva, Geneva, Switzerland
- Laurence Lacroix
- Division of Pediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals (HUG), University of Geneva, Geneva, Switzerland
- Martin Jaggi
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Alain Gervaix
- Division of Pediatric Emergency Medicine, Department of Women, Child and Adolescent, Geneva University Hospitals (HUG), University of Geneva, Geneva, Switzerland
- Mary-Anne Hartley
- Intelligent Global Health Research Group, Machine Learning and Optimization Laboratory, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Center for Intelligent Systems (CIS), Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Division of Pediatric Emergency Medicine, Department of Pediatrics, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
13
Kraman SS, Pasterkamp H, Wodicka GR. Smart Devices Are Poised to Revolutionize the Usefulness of Respiratory Sounds. Chest 2023; 163:1519-1528. [PMID: 36706908] [PMCID: PMC10925548] [DOI: 10.1016/j.chest.2023.01.024]
Abstract
The association between breathing sounds and respiratory health or disease has been exceptionally useful in the practice of medicine since the advent of the stethoscope. Remote patient monitoring technology and artificial intelligence offer the potential to develop practical means of assessing respiratory function or dysfunction through continuous assessment of breathing sounds when patients are at home, at work, or even asleep. Automated reports such as cough counts or the percentage of the breathing cycles containing wheezes can be delivered to a practitioner via secure electronic means or returned to the clinical office at the first opportunity. This has not previously been possible. The four respiratory sounds that most lend themselves to this technology are wheezes, to detect breakthrough asthma at night and even occupational asthma when a patient is at work; snoring as an indicator of OSA or adequacy of CPAP settings; cough in which long-term recording can objectively assess treatment adequacy; and crackles, which, although subtle and often overlooked, can contain important clinical information when appearing in a home recording. In recent years, a flurry of publications in the engineering literature described construction, usage, and testing outcomes of such devices. Little of this has appeared in the medical literature. The potential value of this technology for pulmonary medicine is compelling. We expect that these tiny, smart devices soon will allow us to address clinical questions that occur away from the clinic.
Affiliation(s)
- Steve S Kraman
- Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, University of Kentucky, Lexington, KY
- Hans Pasterkamp
- Department of Pediatrics and Child Health, Max Rady College of Medicine, University of Manitoba, Winnipeg, MB, Canada
- George R Wodicka
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN
14
Mang L, Canadas-Quesada F, Carabias-Orti J, Combarro E, Ranilla J. Cochleogram-based adventitious sounds classification using convolutional neural networks. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104555]
15
Kikutani K, Ohshimo S, Sadamori T, Ohki S, Giga H, Ishii J, Miyoshi H, Ota K, Nishikimi M, Shime N. Quantification of respiratory sounds by a continuous monitoring system can be used to predict complications after extubation: a pilot study. J Clin Monit Comput 2023; 37:237-48. [PMID: 35731457] [DOI: 10.1007/s10877-022-00884-4]
Abstract
To show that quantification of abnormal respiratory sounds by our developed device is useful for predicting respiratory failure and airway problems after extubation, a respiratory sound monitoring system was used to collect respiratory sounds in patients undergoing extubation, and the recordings were subsequently analyzed. We defined the composite poor outcome as requiring any of the following medical interventions within 48 h: reintubation, surgical airway management, insertion of airway devices, unscheduled use of noninvasive ventilation or a high-flow nasal cannula, unscheduled use of inhaled medications, suctioning of sputum by bronchoscopy, or unscheduled imaging studies. The quantitative values (QV) for each abnormal respiratory sound and the inspiratory sound volume were compared between the composite-outcome and non-outcome groups. Fifty-seven patients were included in this study. The composite outcome occurred in 18 patients. For neck sounds, the QVs of stridor and rhonchi were significantly higher in the outcome group than in the non-outcome group. For anterior thoracic sounds, the QVs of wheezes, rhonchi, and coarse crackles were significantly higher in the outcome group. For bilateral lateral thoracic sounds, the QV of fine crackles was significantly higher in the outcome group. The cervical inspiratory sound volume (average of five breaths) immediately after extubation was significantly louder in the outcome group than in the non-outcome group (63.3 dB vs 54.3 dB, respectively; p < 0.001). Quantification of abnormal respiratory sounds and respiratory volume may predict respiratory failure and airway problems after extubation.
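Volume comparisons such as the 63.3 dB vs 54.3 dB above rest on a simple RMS-to-decibel conversion. A sketch, with an assumed reference amplitude of 1.0 (the study's actual calibration reference is not stated here):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one breath's samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_db(samples, ref=1.0):
    """Sound level in dB relative to an assumed reference amplitude.
    On this scale, a 9 dB difference (as between the two groups above)
    corresponds to roughly a 2.8x amplitude ratio: 10 ** (9 / 20)."""
    return 20 * math.log10(rms(samples) / ref)

# A 10x amplitude ratio is exactly 20 dB.
loud = [0.5, -0.5] * 100
quiet = [0.05, -0.05] * 100
diff = level_db(loud) - level_db(quiet)  # -> 20.0
```

Note that with a relative (uncalibrated) reference only *differences* in dB are meaningful, which is all the group comparison above requires.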
16
Cinyol F, Baysal U, Köksal D, Babaoğlu E, Ulaşlı SS. Incorporating support vector machine to the classification of respiratory sounds by Convolutional Neural Network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104093]
17
Kim BJ, Kim BS, Mun JH, Lim C, Kim K. An accurate deep learning model for wheezing in children using real world data. Sci Rep 2022; 12:22465. [PMID: 36577766] [PMCID: PMC9797543] [DOI: 10.1038/s41598-022-25953-1]
Abstract
Auscultation is an important diagnostic method for lung diseases. However, it is a subjective modality and requires a high degree of expertise. To overcome this constraint, artificial intelligence models are being developed; however, these models require performance improvements and do not reflect the actual clinical situation. We aimed to develop an improved deep-learning model to detect wheezing in children, based on data from real clinical practice. In this prospective study, pediatric pulmonologists recorded and verified respiratory sounds in 76 pediatric patients who visited a university hospital in South Korea. In addition, structured data, such as sex, age, and auscultation location, were collected. Using our dataset, we implemented an optimal model based on a convolutional neural network. Finally, we proposed a model using a 34-layer residual network with the convolutional block attention module for audio data and multilayer perceptron layers for tabular data. The proposed model had an accuracy of 91.2%, area under the curve of 89.1%, precision of 94.4%, recall of 81%, and F1-score of 87.2%. This high-accuracy model for detecting wheeze sounds will be helpful for the accurate diagnosis of respiratory diseases in actual clinical practice.
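The summary metrics above follow directly from the confusion counts of a binary wheeze/no-wheeze task; a minimal sketch (generic helper, not from the paper's code):

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = wheeze)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Consistency check on the reported numbers: F1 is the harmonic mean
# of precision and recall, so 94.4% precision and 81% recall give
# 2 * 0.944 * 0.81 / (0.944 + 0.81), which is about 0.872,
# matching the 87.2% F1-score quoted above.
p, r, f1 = classification_metrics([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

The gap between precision (94.4%) and recall (81%) indicates the model misses some wheezes rather than over-calling them, which is the trade-off F1 summarizes.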
Affiliation(s)
- Beom Joon Kim
- Department of Pediatrics, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Baek Seung Kim
- Department of Applied Statistics, Chung-Ang University, 84 Heukseok-Ro, Dongjak-Gu, Seoul, 06974, Republic of Korea
- Jeong Hyeon Mun
- Department of Applied Statistics, Chung-Ang University, 84 Heukseok-Ro, Dongjak-Gu, Seoul, 06974, Republic of Korea
- Changwon Lim
- Department of Applied Statistics, Chung-Ang University, 84 Heukseok-Ro, Dongjak-Gu, Seoul, 06974, Republic of Korea
- Kyunghoon Kim
- Department of Pediatrics, Seoul National University Bundang Hospital, Seongnam, 13620, Republic of Korea
- Department of Pediatrics, Seoul National University College of Medicine, Seoul, Republic of Korea
18
Kikutani K, Ohshimo S, Sadamori T, Ohki S, Giga H, Ishii J, Miyoshi H, Ota K, Shime N. Regional respiratory sound abnormalities in pneumothorax and pleural effusion detected via respiratory sound visualization and quantification: case report. J Clin Monit Comput 2022; 36:1761-6. [PMID: 35147849] [DOI: 10.1007/s10877-022-00824-2]
Abstract
Assessment of respiratory sounds by auscultation with a conventional stethoscope is subjective. We developed a continuous monitoring and visualization system that enables objective and quantitative visualization of respiratory sounds. We herein present two cases in which the system showed regional differences in respiratory sounds. We applied this system to evaluate respiratory abnormalities in patients with acute chest disorders: respiratory sounds were continuously recorded to assess regional changes in respiratory sound volumes. Because this was a pilot study, the results were not shown in real time but were analyzed retrospectively. Case 1: An 89-year-old woman was admitted to our hospital for sudden-onset respiratory distress and hypoxia. Chest X-rays revealed a left pneumothorax, which we drained. After confirming that the pneumothorax had improved, we attached the continuous monitoring and visualization system. Chest X-rays taken the next day showed exacerbation of the pneumothorax; the visual and quantitative findings showed a decreased respiratory volume in the left lung after 3 h. Case 2: A 94-year-old woman was admitted to our hospital for dyspnea. Chest X-rays showed a large right-sided pleural effusion. The continuous monitoring and visualization system visually and quantitatively revealed a decreased respiratory volume in the lower right lung field compared with the lower left lung field. Our newly developed continuous monitoring and visualization system enabled quantitative and visual detection of regional differences in respiratory sounds in patients with pneumothorax and pleural effusion.
19
Ubbink SWJ, van Dijk JMC, Hofman R, van Dijk P. Performance of an Automated Detection Algorithm to Assess Objective Pulsatile Tinnitus. Ear Hear 2022. [PMID: 36395514] [DOI: 10.1097/AUD.0000000000001301]
Abstract
OBJECTIVES In this paper, we describe an automated detection algorithm that objectively detects pulsatile tinnitus (PT) and investigate its performance. DESIGN Sound measurements were made with a sensitive microphone placed in the outer ear canal of 36 PT patients referred to our tertiary clinic, along with a registration of the heart rate. A novel algorithm expressed the coherence between the recorded sound and the heart rate as a pulsatility index, determined for 6 octave bands of the recorded sound. We assessed the performance of the detection algorithm by comparing it with the judgement of 3 blinded observers. RESULTS The algorithm showed good agreement with the majority judgement of the blinded observers (ROC AUC 0.83). Interobserver reliability for detecting PT in sound recordings by the three blinded observers was substantial (Fleiss' κ = 0.64). CONCLUSIONS The algorithm may be a reliable alternative to subjective assessment of in-canal sound measurements in PT patients, providing clinicians with an objective measure to differentiate between subjective and objective pulsatile tinnitus.
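The "substantial" agreement figure (Fleiss' κ = 0.64) follows the standard Fleiss formula: observed pairwise agreement corrected for the agreement expected by chance from the marginal category proportions. A stdlib sketch (the example counts below are illustrative, not the study's data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for N items, each rated by the same number of raters.

    `ratings` is a list of per-item category counts, e.g. [2, 1] means
    two raters judged 'PT present' and one 'PT absent' for that item.
    """
    n = sum(ratings[0])                      # raters per item
    N = len(ratings)                         # number of items
    k = len(ratings[0])                      # number of categories
    # Observed agreement: fraction of rater pairs that agree, per item.
    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in ratings) / N
    # Chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Unanimous raters on a balanced item set give kappa = 1.
unanimous = fleiss_kappa([[3, 0], [0, 3], [3, 0], [0, 3]])  # -> 1.0
```

On the conventional Landis-Koch scale, values between 0.61 and 0.80 are read as "substantial" agreement, which is how the κ = 0.64 above is characterized.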
20
Li H, Chen X, Qian X, Chen H, Li Z, Bhattacharjee S, Zhang H, Huang MC, Xu W. An explainable COVID-19 detection system based on human sounds. Smart Health (Amst) 2022; 26:100332. [PMID: 36275047] [PMCID: PMC9580234] [DOI: 10.1016/j.smhl.2022.100332]
Abstract
Acoustic signals generated by the human body have often been used as biomarkers to diagnose and monitor diseases. As the pathogenesis of COVID-19 indicates impairments in the respiratory system, digital acoustic biomarkers of COVID-19 are under investigation. In this paper, we explore an accurate and explainable COVID-19 diagnosis approach based on human speech, cough, and breath data using the power of machine learning. We first analyze our design space considerations from the data aspect and model aspect. Then, we perform data augmentation, Mel-spectrogram transformation, and develop a deep residual architecture-based model for prediction. Experimental results show that our system outperforms the baseline, with the ROC-AUC result increased by 5.47%. Finally, we perform an interpretation analysis based on the visualization of the activation map to further validate the model.
Affiliation(s)
- Huining Li
- Department of Computer Science and Engineering, University at Buffalo, United States (corresponding author)
- Xingyu Chen
- Department of Computer Science and Engineering, University of Colorado Denver, United States
- Xiaoye Qian
- Department of Electrical, Computer, and Systems Engineering, Case Western Reserve University, United States
- Huan Chen
- Department of Electrical, Computer, and Systems Engineering, Case Western Reserve University, United States
- Zhengxiong Li
- Department of Computer Science and Engineering, University of Colorado Denver, United States
- Hanbin Zhang
- Department of Computer Science and Engineering, University at Buffalo, United States
- Ming-Chun Huang
- Department of Data and Computational Science, Duke Kunshan University, China
- Suzhou Huanmu Intelligence Technology Co., Ltd., China
- Wenyao Xu
- Department of Computer Science and Engineering, University at Buffalo, United States
21
Zhang Q, Zhang J, Yuan J, Huang H, Zhang Y, Zhang B, Lv G, Lin S, Wang N, Liu X, Tang M, Wang Y, Ma H, Liu L, Yuan S, Zhou H, Zhao J, Li Y, Yin Y, Zhao L, Wang G, Lian Y. SPRSound: Open-Source SJTU Paediatric Respiratory Sound Database. IEEE Trans Biomed Circuits Syst 2022; 16:867-881. [PMID: 36070274] [DOI: 10.1109/tbcas.2022.3204910]
Abstract
Auscultation of respiratory sounds has proven advantageous for early respiratory diagnosis. Various methods have been proposed to perform automatic respiratory sound analysis to reduce subjective diagnosis and physicians' workload. However, these methods rely heavily on the quality of the respiratory sound database. In this work, we developed the first open-access paediatric respiratory sound database, SPRSound. The database consists of 2,683 records and 9,089 respiratory sound events from 292 participants. Accurate labels are important for good predictions on the adventitious respiratory sound classification problem. A custom-made sound label annotation software (SoundAnn) was developed to perform sound editing, sound annotation, and quality assurance evaluation. A team of 11 experienced paediatric physicians was involved in the entire process to establish a gold-standard reference for the dataset. To verify the robustness and accuracy of classification models, we investigated the effects of different feature extraction methods and machine learning classifiers on the classification performance of our dataset. As such, we achieved scores of 75.22%, 61.57%, 56.71%, and 37.84% for the four different classification challenges at the event level and record level.
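Scoring both at the event level and at the record level implies an aggregation step from per-event labels to a record label. A common rule, sketched below as an assumption rather than SPRSound's exact protocol, is that a record is normal only if every annotated event in it is normal:

```python
def record_label(event_labels):
    """Aggregate per-event annotations into a record-level label.

    Assumed rule: a record is 'Normal' only when every respiratory
    sound event in it is normal; otherwise it is labelled with the
    sorted set of adventitious event types heard in the recording.
    """
    adventitious = sorted({e for e in event_labels if e != "Normal"})
    return "Normal" if not adventitious else "+".join(adventitious)

# One recording, four annotated events:
label = record_label(["Normal", "Wheeze", "Normal", "Crackle"])  # -> "Crackle+Wheeze"
```

This asymmetry (a single adventitious event flips the whole record) is one reason record-level scores can be lower than event-level ones, as in the challenge results above.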
22
Au YK, Muqeem T, Fauveau VJ, Cardenas JA, Geris BS, Hassen GW, Glass M. Continuous Monitoring Versus Intermittent Auscultation of Wheezes in Patients Presenting With Acute Respiratory Distress. J Emerg Med 2022. [DOI: 10.1016/j.jemermed.2022.07.001]
23
Tran-Anh D, Vu NH, Nguyen-Trong K, Pham C. Multi-task learning neural networks for breath sound detection and classification in pervasive healthcare. Pervasive Mob Comput 2022; 86:101685. [PMID: 36061371] [PMCID: PMC9419997] [DOI: 10.1016/j.pmcj.2022.101685]
Abstract
With the emergence of many severe cases of chronic obstructive pulmonary disease (COPD) and the COVID-19 pandemic, there is a need for timely detection of abnormal respiratory sounds, such as deep and heavy breaths. Although numerous efficient pervasive healthcare systems have been proposed for tracking patients, few studies have focused on these breaths. This paper presents a method that supports physicians in monitoring the breath of in-hospital and at-home patients. The proposed method is based on three deep neural networks for audio analysis: RNNoise for noise suppression, and a SincNet convolutional neural network and a residual bidirectional long short-term memory network for breath sound analysis at edge devices and centralized servers, respectively. We also developed a pervasive system with two configurations: (i) an edge architecture for in-hospital patients; and (ii) a central architecture for at-home ones. Furthermore, a dataset, named BreathSet, was collected from 27 COPD patients being treated at three hospitals in Vietnam to verify our proposed method. The experimental results demonstrated that our system efficiently detected and classified breath sounds with F1-scores of 90% and 91% for the tiny model version on low-cost edge devices, and 90% and 95% for the full model version on central servers, respectively. The proposed system was successfully implemented at hospitals to help physicians monitor respiratory patients in real time.
Affiliation(s)
- Dat Tran-Anh
- Posts and Telecommunications Institute of Technology, Hanoi, Viet Nam
- Nam Hoai Vu
- Posts and Telecommunications Institute of Technology, Hanoi, Viet Nam
- Cuong Pham
- Posts and Telecommunications Institute of Technology, Hanoi, Viet Nam
24
Kasim N, Bachner-Hinenzon N, Brikman S, Cheshin O, Adler D, Dori G. A comparison of the power of breathing sounds signals acquired with a smart stethoscope from a cohort of COVID-19 patients at peak disease, and pre-discharge from the hospital. Biomed Signal Process Control 2022; 78:103920. [PMID: 35785024 PMCID: PMC9234039 DOI: 10.1016/j.bspc.2022.103920]
Abstract
Objectives To characterize the frequencies of breathing sound signals (BS) in COVID-19 patients at peak disease and pre-discharge from hospitalization using a smart stethoscope. Methods Prospective cohort study conducted during the first COVID-19 wave (April-August 2020) in Israel. COVID-19 patients (n = 19) were validated by SARS-CoV-2 PCR test. The healthy control group was composed of 153 volunteers who stated that they were healthy. Power of BS was calculated in the frequency ranges of 0-20, 20-200, and 200-2000 Hz. Results The power calculated over frequency ranges 0-20, 20-200, and 200-2000 Hz contributed approximately 45%, 45%, and 10% to the total power calculated over the range 0-2000 Hz, respectively. Total power calculated from the right side of the back showed an increase of 45-80% during peak disease compared with the healthy controls (p < 0.05). The power calculated over the back in the infrasound range, 0-20 Hz, and not in the 20-2000 Hz range, was greater for the healthy controls than for patients. Using all 3 ranges of frequencies for distinguishing peak disease from healthy controls resulted in sensitivity and specificity of 84% and 91%, respectively. Omitting the 0-20 Hz range resulted in sensitivity and specificity of 74% and 67%, respectively. Discussion The BS power acquired from COVID-19 patients at peak disease was significantly greater than that at pre-discharge from the hospital. The infrasound range had a significant contribution to the total power. Although the source of the infrasound is not presently clear, it may serve as an automated diagnostic tool when more clinical experience is gained with this method.
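A hedged sketch of how per-band power and its fractional contribution to the 0-2000 Hz total can be computed. It assumes SciPy's Welch estimator with illustrative window settings; the study's exact spectral estimator is not stated here.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi, nperseg=1024):
    """Approximate signal power in [f_lo, f_hi) Hz by summing the Welch
    power spectral density estimate over that band."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    mask = (f >= f_lo) & (f < f_hi)
    return pxx[mask].sum() * (f[1] - f[0])

def band_fractions(x, fs, bands=((0, 20), (20, 200), (200, 2000))):
    """Fraction of total 0-2000 Hz power contributed by each band,
    mirroring the study's three analysis ranges."""
    total = band_power(x, fs, 0, 2000)
    return [band_power(x, fs, lo, hi) / total for lo, hi in bands]
```

Capturing the 0-20 Hz infrasound band in practice also requires a sensor and recording chain with genuine response below 20 Hz, which most consumer microphones filter out.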
Affiliation(s)
- Nour Kasim
- Department of Internal Medicine E and Corona, HaEmek Medical Center, Afula, Israel
- Shay Brikman
- Department of Internal Medicine E and Corona, HaEmek Medical Center, Afula, Israel
- Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Ori Cheshin
- Department of Internal Medicine E and Corona, HaEmek Medical Center, Afula, Israel
- Guy Dori
- Department of Internal Medicine E and Corona, HaEmek Medical Center, Afula, Israel
- Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
25
Neili Z, Sundaraj K. A comparative study of the spectrogram, scalogram, melspectrogram and gammatonegram time-frequency representations for the classification of lung sounds using the ICBHI database based on CNNs. BIOMED ENG-BIOMED TE 2022; 67:367-390. [PMID: 35926850 DOI: 10.1515/bmt-2022-0180]
Abstract
In lung sound classification using deep learning, many studies have considered the use of short-time Fourier transform (STFT) as the most commonly used 2D representation of the input data. Consequently, STFT has been widely used as an analytical tool, but other versions of the representation have also been developed. This study aims to evaluate and compare the performance of the spectrogram, scalogram, melspectrogram and gammatonegram representations, and provide comparative information to users regarding the suitability of these time-frequency (TF) techniques in lung sound classification. Lung sound signals used in this study were obtained from the ICBHI 2017 respiratory sound database. These lung sound recordings were converted into images of spectrogram, scalogram, melspectrogram and gammatonegram TF representations respectively. The four types of images were fed separately into the VGG16, ResNet-50 and AlexNet deep-learning architectures. Network performances were analyzed and compared based on accuracy, precision, recall and F1-score. The results of the analysis on the performance of the four representations using these three commonly used CNN deep-learning networks indicate that the generated gammatonegram and scalogram TF images coupled with ResNet-50 achieved maximum classification accuracies.
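The difference between the plain spectrogram and the melspectrogram compared in this study can be sketched as follows: the linear-frequency STFT bins are warped through a bank of triangular filters spaced evenly on the mel scale. This is a SciPy-based sketch with illustrative parameters (`n_fft`, `n_mels`), not the paper's configuration; scalograms and gammatonegrams would substitute wavelet and gammatone front ends respectively.

```python
import numpy as np
from scipy.signal import spectrogram

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, fs):
    """Triangular filters spaced evenly on the mel scale."""
    hz_pts = mel_to_hz(np.linspace(0.0, hz_to_mel(fs / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, ctr, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, ctr):          # rising edge of triangle i
            fb[i - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):          # falling edge of triangle i
            fb[i - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fb

def mel_spectrogram(x, fs, n_fft=512, n_mels=40):
    """Linear-frequency spectrogram warped through the mel filterbank."""
    _, _, sxx = spectrogram(x, fs=fs, nperseg=n_fft)
    return mel_filterbank(n_mels, n_fft, fs) @ sxx
```

The resulting (n_mels x frames) array is what would be rendered to an image and fed to VGG16, ResNet-50 or AlexNet in the workflow described above.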
Affiliation(s)
- Zakaria Neili
- Electronics Department, University of Badji Mokhtar Annaba, Annaba, Algeria
- Kenneth Sundaraj
- Faculty of Electronics and Computer Engineering, Universiti Teknikal Malaysia Melaka, Melaka, Malaysia
26
Hsu F, Huang S, Huang C, Cheng Y, Chen C, Hsiao J, Chen C, Lai F. A Progressively Expanded Database for Automated Lung Sound Analysis: An Update. Applied Sciences 2022; 12:7623. [DOI: 10.3390/app12157623]
Abstract
We previously established an open-access lung sound database, HF_Lung_V1, and developed deep learning models for inhalation, exhalation, continuous adventitious sound (CAS), and discontinuous adventitious sound (DAS) detection. The amount of data used for training contributes to model accuracy. In this study, we collected larger quantities of data to further improve model performance and explored issues of noisy labels and overlapping sounds. HF_Lung_V1 was expanded to HF_Lung_V2 with a 1.43× increase in the number of audio files. Convolutional neural network–bidirectional gated recurrent unit network models were trained separately using the HF_Lung_V1 (V1_Train) and HF_Lung_V2 (V2_Train) training sets. These were tested using the HF_Lung_V1 (V1_Test) and HF_Lung_V2 (V2_Test) test sets, respectively. Segment and event detection performance was evaluated. Label quality was assessed. Overlap ratios were computed between inhalation, exhalation, CAS, and DAS labels. The model trained using V2_Train exhibited improved performance in inhalation, exhalation, CAS, and DAS detection on both V1_Test and V2_Test. Poor CAS detection was attributed to the quality of CAS labels. DAS detection was strongly influenced by the overlapping of DAS with inhalation and exhalation. In conclusion, collecting greater quantities of lung sound data is vital for developing more accurate lung sound analysis models.
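One simple way to compute an overlap ratio between two sets of labeled events, such as the DAS-versus-inhalation overlaps analyzed above, might look like the sketch below. This assumes events are (start, end) pairs in seconds and that intervals within each label track do not overlap each other; the paper's exact definition may differ.

```python
def overlap_ratio(events_a, events_b):
    """Fraction of the total duration of events_a that is covered by
    events_b; events are (start, end) pairs in seconds."""
    total = sum(end - start for start, end in events_a)
    if total == 0:
        return 0.0
    covered = 0.0
    for sa, ea in events_a:
        for sb, eb in events_b:
            # length of the intersection of [sa, ea) and [sb, eb)
            covered += max(0.0, min(ea, eb) - max(sa, sb))
    return covered / total
```

For example, a DAS label spanning 0-2 s overlapped by an inhalation label spanning 1-3 s yields a ratio of 0.5.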
27
Furman G, Furman E, Charushin A, Eirikh E, Malinin S, Sheludko V, Sokolovsky V, Shtivelman D. Remote Analysis of Respiratory Sounds in Patients With COVID-19: Development of Fast Fourier Transform-Based Computer-Assisted Diagnostic Methods. JMIR Form Res 2022; 6:e31200. [PMID: 35584091 PMCID: PMC9298483 DOI: 10.2196/31200]
Abstract
Background
Respiratory sounds have been recognized as a possible indicator of behavior and health. Computer analysis of these sounds can indicate characteristic sound changes caused by COVID-19 and can be used for diagnostics of this illness.
Objective
The aim of the study is to develop 2 fast, remote computer-assisted diagnostic methods for specific acoustic phenomena associated with COVID-19 based on analysis of respiratory sounds.
Methods
Fast Fourier transform (FFT) was applied for computer analysis of respiratory sound recordings produced by hospital doctors near the mouths of 14 patients with COVID-19 (aged 18-80 years) and 17 healthy volunteers (aged 5-48 years). Recordings for 30 patients and 26 healthy persons (aged 11-67 years; 34 [60%] women), who agreed to be tested at home, were made by the individuals themselves using a mobile telephone; the recordings were submitted for analysis via WhatsApp. For hospitalized patients, the illness was diagnosed using a set of medical methods; for outpatients, polymerase chain reaction (PCR) was used. The sampling rate of the recordings was from 44 to 96 kHz. Unlike usual computer-assisted diagnostic methods for illnesses based on respiratory sound analysis, we proposed to test the high-frequency part of the FFT spectrum (2000-6000 Hz).
Results
Comparing the FFT spectra of the respiratory sounds of patients and volunteers, we developed 2 computer-assisted methods of COVID-19 diagnostics and determined numerical healthy-ill criteria. These criteria were independent of gender and age of the tested person.
Conclusions
The 2 proposed computer-assisted diagnostic methods, based on the analysis of the respiratory sound FFT spectra of patients and volunteers, allow one to automatically diagnose specific acoustic phenomena associated with COVID-19 with sufficiently high diagnostic values. These methods can be applied to develop noninvasive screening self-testing kits for COVID-19.
Affiliation(s)
- Gregory Furman
- Physics Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Evgeny Furman
- Department of Pediatric, EA Vagner Perm State Medical University, Perm, Russian Federation
- Artem Charushin
- Department of Ear, Nose and Throat, EA Vagner Perm State Medical University, Perm, Russian Federation
- Ekaterina Eirikh
- Department of Ear, Nose and Throat, EA Vagner Perm State Medical University, Perm, Russian Federation
- Sergey Malinin
- Central Research Laboratory, EA Vagner Perm State Medical University, Perm, Russian Federation
- Valery Sheludko
- Perm Regional Clinical Infectious Diseases Hospital, Perm, Russian Federation
- David Shtivelman
- Physics Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel
28
Park DE, Watson NL, Focht C, Feikin D, Hammitt LL, Brooks WA, Howie SRC, Kotloff KL, Levine OS, Madhi SA, Murdoch DR, O'Brien KL, Scott JAG, Thea DM, Amorninthapichet T, Awori J, Bunthi C, Ebruke B, Elhilali M, Higdon M, Hossain L, Jahan Y, Moore DP, Mulindwa J, Mwananyanda L, Naorat S, Prosperi C, Thamthitiwat S, Verwey C, Jablonski KA, Power MC, Young HA, Deloria Knoll M, McCollum ED. Digitally recorded and remotely classified lung auscultation compared with conventional stethoscope classifications among children aged 1-59 months enrolled in the Pneumonia Etiology Research for Child Health (PERCH) case-control study. BMJ Open Respir Res 2022; 9:e001144. [PMID: 35577452 PMCID: PMC9115042 DOI: 10.1136/bmjresp-2021-001144]
Abstract
BACKGROUND Diagnosis of pneumonia remains challenging. Digitally recorded and remotely human-classified lung sounds may offer benefits beyond conventional auscultation, but it is unclear whether classifications differ between the two approaches. We evaluated concordance between digital and conventional auscultation. METHODS We collected digitally recorded lung sounds, conventional auscultation classifications and clinical measures and samples from children with pneumonia (cases) in low-income and middle-income countries. Physicians remotely classified recordings as crackles, wheeze or uninterpretable. Conventional and digital auscultation concordance was evaluated among 383 pneumonia cases with concurrently (within 2 hours) collected conventional and digital auscultation classifications using prevalence-adjusted bias-adjusted kappa (PABAK). Using an expanded set of 737 cases that also incorporated the non-concurrently collected assessments, we evaluated whether associations between auscultation classifications and clinical or aetiological findings differed between conventional and digital auscultation using χ2 tests and logistic regression adjusted for age, sex and site. RESULTS Conventional and digital auscultation concordance was moderate for classifying crackles and/or wheeze versus neither crackles nor wheeze (PABAK=0.50), and fair for crackles-only versus not crackles-only (PABAK=0.30) and any wheeze versus no wheeze (PABAK=0.27). Crackles were more common on conventional auscultation, whereas wheeze was more frequent on digital auscultation.
Compared with neither crackles nor wheeze, crackles-only on both conventional and digital auscultation was associated with abnormal chest radiographs (adjusted OR (aOR)=1.53, 95% CI 0.99 to 2.36; aOR=2.09, 95% CI 1.19 to 3.68, respectively); any wheeze was inversely associated with C-reactive protein >40 mg/L using conventional auscultation (aOR=0.50, 95% CI 0.27 to 0.92) and with very severe pneumonia using digital auscultation (aOR=0.67, 95% CI 0.46 to 0.97). Crackles-only on digital auscultation was associated with mortality compared with any wheeze (aOR=2.70, 95% CI 1.12 to 6.25). CONCLUSIONS Conventional auscultation and remotely-classified digital auscultation displayed moderate concordance for presence/absence of wheeze and crackles among cases. Conventional and digital auscultation may provide different classification patterns, but wheeze was associated with decreased clinical severity on both.
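PABAK, used above to quantify rater concordance, reduces for binary classifications to a simple function of observed agreement, which makes the reported values easy to interpret:

```python
def pabak(ratings_a, ratings_b):
    """Prevalence-adjusted bias-adjusted kappa for two binary raters:
    PABAK = 2 * observed_agreement - 1."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 2.0 * agree / len(ratings_a) - 1.0
```

For instance, 75% observed agreement corresponds to PABAK = 0.50, the "moderate" concordance reported for crackles and/or wheeze versus neither.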
Affiliation(s)
- Daniel E Park
- Department of Environmental and Occupational Health, The George Washington University, Washington, District of Columbia, USA
- Daniel Feikin
- Department of International Health, Johns Hopkins University International Vaccine Access Center, Baltimore, Maryland, USA
- Laura L Hammitt
- Department of International Health, Johns Hopkins University International Vaccine Access Center, Baltimore, Maryland, USA
- Kenya Medical Research Institute - Wellcome Trust Research Programme, Kilifi, Kenya
- W Abdullah Brooks
- International Centre for Diarrhoeal Disease Research Bangladesh, Dhaka and Matlab, Bangladesh
- Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, USA
- Stephen R C Howie
- Medical Research Council Unit, Basse, Gambia
- Department of Paediatrics, The University of Auckland, Auckland, New Zealand
- Karen L Kotloff
- Department of Pediatrics, University of Maryland Center for Vaccine Development, Baltimore, Maryland, USA
- Orin S Levine
- Department of International Health, Johns Hopkins University International Vaccine Access Center, Baltimore, Maryland, USA
- Bill & Melinda Gates Foundation, Seattle, Washington, USA
- Shabir A Madhi
- South African Medical Research Council Vaccines and Infectious Diseases Analytics Research Unit, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
- Department of Science and Innovation/National Research Foundation: Vaccine Preventable Diseases Unit, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
- David R Murdoch
- Department of Pathology and Biomedical Science, University of Otago, Christchurch, New Zealand
- Microbiology Unit, Canterbury Health Laboratories, Christchurch, New Zealand
- Katherine L O'Brien
- Department of International Health, Johns Hopkins University International Vaccine Access Center, Baltimore, Maryland, USA
- J Anthony G Scott
- Kenya Medical Research Institute - Wellcome Trust Research Programme, Kilifi, Kenya
- Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, UK
- Donald M Thea
- Department of Global Health, Boston University School of Public Health, Boston, Massachusetts, USA
- Juliet Awori
- Kenya Medical Research Institute - Wellcome Trust Research Programme, Kilifi, Kenya
- Charatdao Bunthi
- Division of Global Health Protection, Thailand Ministry of Public Health – US CDC Collaboration, Royal Thai Government Ministry of Public Health, Bangkok, Thailand
- Bernard Ebruke
- Medical Research Council Unit, Basse, Gambia
- International Foundation Against Infectious Disease in Nigeria, Abuja, Nigeria
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Melissa Higdon
- Department of International Health, Johns Hopkins University International Vaccine Access Center, Baltimore, Maryland, USA
- Lokman Hossain
- International Centre for Diarrhoeal Disease Research Bangladesh, Dhaka and Matlab, Bangladesh
- Yasmin Jahan
- International Centre for Diarrhoeal Disease Research Bangladesh, Dhaka and Matlab, Bangladesh
- David P Moore
- South African Medical Research Council Vaccines and Infectious Diseases Analytics Research Unit, University of the Witwatersrand, Johannesburg, South Africa
- Department of Paediatrics and Child Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Justin Mulindwa
- Department of Paediatrics and Child Health, University Teaching Hospital, Lusaka, Zambia
- Lawrence Mwananyanda
- Department of Global Health, Boston University School of Public Health, Boston, Massachusetts, USA
- Right to Care - Zambia, Lusaka, Zambia
- Christine Prosperi
- Department of International Health, Johns Hopkins University International Vaccine Access Center, Baltimore, Maryland, USA
- Somsak Thamthitiwat
- Division of Global Health Protection, Thailand Ministry of Public Health – US CDC Collaboration, Royal Thai Government Ministry of Public Health, Nonthaburi, Thailand
- Charl Verwey
- South African Medical Research Council Vaccines and Infectious Diseases Analytics Research Unit, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
- Department of Paediatrics and Child Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Melinda C Power
- Department of Epidemiology, The George Washington University, Washington, District of Columbia, USA
- Heather A Young
- Department of Epidemiology, The George Washington University, Washington, District of Columbia, USA
- Maria Deloria Knoll
- Department of International Health, Johns Hopkins University International Vaccine Access Center, Baltimore, Maryland, USA
- Eric D McCollum
- Global Program in Respiratory Sciences, Eudowood Division of Pediatric Respiratory Sciences, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Department of International Health, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, USA
29
Ahmed S, Sultana S, Khan AM, Islam MS, Habib GMM, McLane IM, McCollum ED, Baqui AH, Cunningham S, Nair H. Digital auscultation as a diagnostic aid to detect childhood pneumonia: A systematic review. J Glob Health 2022; 12:04033. [PMID: 35493777 PMCID: PMC9024283 DOI: 10.7189/jogh.12.04033]
Abstract
Background Frontline health care workers use World Health Organization Integrated Management of Childhood Illnesses (IMCI) guidelines for child pneumonia care in low-resource settings. The IMCI pneumonia diagnostic criterion performs with low specificity, resulting in antibiotic overtreatment. Digital auscultation with automated lung sound analysis may improve the diagnostic performance of IMCI pneumonia guidelines. This systematic review aims to summarize the evidence on detecting adventitious lung sounds by digital auscultation with automated analysis, compared to reference physician acoustic analysis, for child pneumonia diagnosis. Methods Articles were searched in MEDLINE, Embase, CINAHL Plus, Web of Science, Global Health, IEEE Xplore, Scopus, and ClinicalTrials.gov from the inception of each database to October 27, 2021, and reference lists of selected studies and relevant review articles were searched manually. Studies reporting the diagnostic performance of digital auscultation and/or computerized lung sound analysis compared against physicians' acoustic analysis for pneumonia diagnosis in children under the age of 5 were eligible for this systematic review. Retrieved citations were screened and eligible studies were included for extraction. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. All these steps were performed independently by two authors, and disagreements between the reviewers were resolved through discussion with an arbiter. Narrative data synthesis was performed. Results A total of 3801 citations were screened and 46 full-text articles were assessed. Ten studies met the inclusion criteria. Half of the studies used a publicly available respiratory sound database to evaluate their proposed work. Reported methodologies/approaches and performance metrics for classifying adventitious lung sounds varied widely across the included studies.
All included studies except one reported overall diagnostic performance of the digital auscultation/computerised sound analysis to distinguish adventitious lung sounds, irrespective of the disease condition or age of the participants. The reported accuracies for classifying adventitious lung sounds in the included studies varied from 66.3% to 100%. However, it remained unclear to what extent these results would be applicable for classifying adventitious lung sounds in children with pneumonia. Conclusions This systematic review found very limited evidence on the diagnostic performance of digital auscultation to diagnose pneumonia in children. Well-designed studies and robust reporting are required to evaluate the accuracy of digital auscultation in the paediatric population.
Affiliation(s)
- Salahuddin Ahmed
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Ahad M Khan
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Mohammad S Islam
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Child Health Research Foundation, Dhaka, Bangladesh
- GM Monsur Habib
- Usher Institute, University of Edinburgh, Edinburgh, UK
- Bangladesh Primary Care Respiratory Society, Khulna, Bangladesh
- Eric D McCollum
- Global Program for Pediatric Respiratory Sciences, Eudowood Division of Paediatric Respiratory Sciences, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Department of International Health, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Abdullah H Baqui
- Department of International Health, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Steven Cunningham
- Department of Child Life and Health, Centre for Inflammation Research, University of Edinburgh, Edinburgh, UK
- Harish Nair
- Usher Institute, University of Edinburgh, Edinburgh, UK
30
Serrurier A, Neuschaefer-Rube C, Röhrig R. Past and Trends in Cough Sound Acquisition, Automatic Detection and Automatic Classification: A Comparative Review. Sensors (Basel) 2022; 22:2896. [PMID: 35458885 PMCID: PMC9027375 DOI: 10.3390/s22082896]
Abstract
Cough is a very common symptom and the most frequent reason for seeking medical advice. Optimized care inevitably requires suitable recording of this symptom and automatic processing. This study provides an updated, exhaustive quantitative review of the field of cough sound acquisition, automatic detection of cough in longer audio sequences, and automatic classification of cough nature or underlying disease. Related studies were analyzed, and metrics were extracted and processed to create a quantitative characterization of the state of the art and its trends. A list of objective criteria was established to select a subset of the most complete detection studies with a view to deployment in clinical practice. One hundred and forty-four studies were short-listed, and a picture of the state-of-the-art technology is drawn. The trends show an increasing number of classification studies, an increase in dataset size, in part from crowdsourcing, a rapid increase in COVID-19 studies, the prevalence of smartphones and wearable sensors for acquisition, and a rapid expansion of deep learning. Finally, a subset of 12 detection studies is identified as the most complete. An unequaled quantitative overview is presented. The field shows remarkable momentum, boosted by research on COVID-19 diagnosis, and strong adaptation to mobile health.
Affiliation(s)
- Antoine Serrurier
- Institute of Medical Informatics, University Hospital of the RWTH Aachen, 52057 Aachen, Germany
- Clinic for Phoniatrics, Pedaudiology & Communication Disorders, University Hospital of the RWTH Aachen, 52057 Aachen, Germany
- Christiane Neuschaefer-Rube
- Clinic for Phoniatrics, Pedaudiology & Communication Disorders, University Hospital of the RWTH Aachen, 52057 Aachen, Germany
- Rainer Röhrig
- Institute of Medical Informatics, University Hospital of the RWTH Aachen, 52057 Aachen, Germany
31
Rahman T, Ibtehaz N, Khandakar A, Hossain MSA, Mekki YMS, Ezeddin M, Bhuiyan EH, Ayari MA, Tahir A, Qiblawey Y, Mahmud S, Zughaier SM, Abbas T, Al-maadeed S, Chowdhury MEH. QUCoughScope: An Intelligent Application to Detect COVID-19 Patients Using Cough and Breath Sounds. Diagnostics (Basel) 2022; 12:920. [PMID: 35453968 PMCID: PMC9028864 DOI: 10.3390/diagnostics12040920]
Abstract
Problem—Since the outbreak of the COVID-19 pandemic, mass testing has become essential to reduce the spread of the virus. Several recent studies suggest that a significant number of COVID-19 patients display no physical symptoms whatsoever. Therefore, it is unlikely that these patients will undergo COVID-19 testing, which increases their chances of unintentionally spreading the virus. Currently, the primary diagnostic tool to detect COVID-19 is a reverse-transcription polymerase chain reaction (RT-PCR) test on respiratory specimens from the suspected patient, which is an invasive and resource-dependent technique. Recent research shows that asymptomatic COVID-19 patients cough and breathe differently from healthy people. Aim—This paper aims to use a novel machine learning approach to detect COVID-19 (symptomatic and asymptomatic) patients from the convenience of their homes, so that they do not overburden the healthcare system and do not spread the virus unknowingly, by continuously monitoring themselves. Method—A Cambridge University research group shared such a dataset of cough and breath sound samples from 582 healthy and 141 COVID-19 patients. Among the COVID-19 patients, 87 were asymptomatic while 54 were symptomatic (had a dry or wet cough). In addition to the available dataset, the proposed work deployed a real-time deep learning-based backend server with a web application to crowdsource cough and breath datasets and also screen for COVID-19 infection from the comfort of the user's home. The collected dataset includes data from 245 healthy individuals and 78 asymptomatic and 18 symptomatic COVID-19 patients. Users can simply use the application from any web browser without installation and enter their symptoms, record audio clips of their cough and breath sounds, and upload the data anonymously. Two different pipelines for screening were developed based on the symptoms reported by the users: asymptomatic and symptomatic.
A stacking CNN model was developed using three base learners selected from eight state-of-the-art deep learning CNN algorithms. The stacking CNN model is based on a logistic regression classifier meta-learner that takes as input the spectrograms generated from the breath and cough sounds of symptomatic and asymptomatic patients, using the combined (Cambridge and collected) dataset. Results—The stacking model outperformed the other eight CNN networks, with the best classification performance for binary classification using cough sound spectrogram images. The accuracy, sensitivity, and specificity for symptomatic and asymptomatic patients were 96.5%, 96.42%, and 95.47% and 98.85%, 97.01%, and 99.6%, respectively. For breath sound spectrogram images, the metrics for binary classification of symptomatic and asymptomatic patients were 91.03%, 88.9%, and 91.5% and 80.01%, 72.04%, and 82.67%, respectively. Conclusion—The web application QUCoughScope records coughing and breathing sounds, converts them to a spectrogram, and applies the best-performing machine learning model to classify COVID-19 patients and healthy subjects. The result is then reported back to the test user in the application interface. Therefore, this novel system can be used by patients on their premises as a pre-screening method to aid COVID-19 diagnosis, by prioritizing patients for RT-PCR testing and thereby reducing the risk of spread of the disease.
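The stacking step described above (a logistic-regression meta-learner combining base-model outputs) can be sketched in isolation. This toy version trains on per-example base-model probabilities rather than on spectrogram CNNs, and all parameters are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_meta_learner(base_probs, y, lr=0.5, steps=2000):
    """Logistic-regression meta-learner over base-model probabilities
    (one column per base learner), trained by gradient descent."""
    X = np.hstack([base_probs, np.ones((len(y), 1))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def meta_predict(w, base_probs):
    X = np.hstack([base_probs, np.ones((base_probs.shape[0], 1))])
    return (sigmoid(X @ w) >= 0.5).astype(int)
```

The meta-learner's job is only to weight and threshold the base learners' opinions; all feature learning happens in the (here simulated) base CNNs.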
|
32
|
Abstract
Computational methods for lung sound analysis are beneficial for computer-aided diagnosis support, storage and monitoring in critical care. In this paper, we use pre-trained ResNet models as backbone architectures for classification of adventitious lung sounds and respiratory diseases. The learned representation of the pre-trained model is transferred by using vanilla fine-tuning, co-tuning, stochastic normalization and the combination of the co-tuning and stochastic normalization techniques. Furthermore, data augmentation in both time domain and time-frequency domain is used to account for the class imbalance of the ICBHI and our multi-channel lung sound dataset. Additionally, we introduce spectrum correction to account for the variations of the recording device properties on the ICBHI dataset. Empirically, our proposed systems mostly outperform all state-of-the-art lung sound classification systems for the adventitious lung sounds and respiratory diseases of both datasets.
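Time-domain augmentation for class imbalance, as used above, typically combines simple waveform transforms with minority-class oversampling. The sketch below is a minimal illustration; the specific transforms, SNR, and oversampling factor are assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

def time_shift(x, max_frac=0.2):
    """Circularly shift a waveform by up to max_frac of its length."""
    shift = rng.integers(-int(len(x) * max_frac), int(len(x) * max_frac) + 1)
    return np.roll(x, shift)

def add_noise(x, snr_db=20.0):
    """Mix in white noise at a chosen signal-to-noise ratio."""
    p_sig = np.mean(x ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    return x + rng.normal(0, np.sqrt(p_noise), len(x))

def oversample_minority(signals, labels, minority=1, factor=3):
    """Crude class balancing: append augmented copies of the minority class."""
    out_x, out_y = list(signals), list(labels)
    for x, y in zip(signals, labels):
        if y == minority:
            for _ in range(factor):
                out_x.append(add_noise(time_shift(x)))
                out_y.append(y)
    return out_x, out_y

signals = [np.sin(np.linspace(0, 10, 800)) for _ in range(6)]
labels = [0, 0, 0, 0, 1, 1]
aug_x, aug_y = oversample_minority(signals, labels)
print(len(aug_x), sum(aug_y))  # 6 + 2*3 = 12 samples, 2 + 6 = 8 positives
```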
|
33
|
Ahmed S, Mitra DK, Nair H, Cunningham S, Khan AM, Islam AA, McLane IM, Chowdhury NH, Begum N, Shahidullah M, Islam MS, Norrie J, Campbell H, Sheikh A, Baqui AH, McCollum ED. Digital auscultation as a novel childhood pneumonia diagnostic tool for community clinics in Sylhet, Bangladesh: protocol for a cross-sectional study. BMJ Open 2022; 12:e059630. [PMID: 35140164 PMCID: PMC8830242 DOI: 10.1136/bmjopen-2021-059630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
Abstract
INTRODUCTION The WHO's Integrated Management of Childhood Illnesses (IMCI) algorithm relies on counting the respiratory rate and observing respiratory distress to diagnose childhood pneumonia. The IMCI case definition for pneumonia performs with high sensitivity but low specificity, leading to overdiagnosis of childhood pneumonia and unnecessary antibiotic use. Including lung auscultation in IMCI could improve the specificity of pneumonia diagnosis. Our objectives are: (1) to assess the quality of lung sounds recorded by primary healthcare workers (HCWs) from under-5 children with the Feelix Smart Stethoscope and (2) to determine the reliability and performance of recorded lung sound interpretations by an automated algorithm compared with reference paediatrician interpretations. METHODS AND ANALYSIS In a cross-sectional design, community HCWs will record lung sounds of ~1000 under-5-year-old children with suspected pneumonia at first-level facilities in Zakiganj subdistrict, Sylhet, Bangladesh. Enrolled children will be evaluated for pneumonia, including oxygen saturation, and have their lung sounds recorded by the Feelix Smart stethoscope at four sequential chest locations: two back and two front positions. A novel sound-filtering algorithm will be applied to the recordings to address ambient noise and optimise recording quality. Recorded sounds will be assessed against a predefined quality threshold. A trained paediatric listening panel will classify recordings into one of the following categories: normal, crackles, wheeze, crackles and wheeze, or uninterpretable. All sound files will be classified into the same categories by the automated algorithm and compared with the panel classifications. Sensitivity, specificity and predictive values of the automated algorithm will be assessed, considering the panel's final interpretation as the gold standard.
ETHICS AND DISSEMINATION The study protocol was approved by the National Research Ethics Committee of Bangladesh Medical Research Council, Bangladesh (registration number: 09630012018) and Academic and Clinical Central Office for Research and Development Medical Research Ethics Committee, Edinburgh, UK (REC Reference: 18-HV-051). Dissemination will be through conference presentations, peer-reviewed journals and stakeholder engagement meetings in Bangladesh. TRIAL REGISTRATION NUMBER NCT03959956.
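Evaluating an automated classifier against a listening-panel gold standard, as proposed above, reduces to confusion-matrix arithmetic. A minimal sketch for the binary case (the toy labels are illustrative, not the study's data):

```python
import numpy as np

def diagnostic_metrics(algorithm, panel):
    """Sensitivity, specificity, PPV, and NPV of algorithm labels against a
    panel reference (1 = abnormal/positive, 0 = normal/negative)."""
    algorithm, panel = np.asarray(algorithm), np.asarray(panel)
    tp = np.sum((algorithm == 1) & (panel == 1))
    tn = np.sum((algorithm == 0) & (panel == 0))
    fp = np.sum((algorithm == 1) & (panel == 0))
    fn = np.sum((algorithm == 0) & (panel == 1))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

# Toy example: 1 = adventitious sound present, 0 = normal.
panel     = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
algorithm = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
m = diagnostic_metrics(algorithm, panel)
print({k: round(v, 2) for k, v in m.items()})
```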
Affiliation(s)
- Salahuddin Ahmed
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Usher Institute, The University of Edinburgh, Edinburgh, UK
| | - Dipak Kumar Mitra
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Public Health, North South University, Dhaka, Bangladesh
| | - Harish Nair
- Usher Institute, The University of Edinburgh, Edinburgh, UK
| | - Steven Cunningham
- Department of Child Life and Health, Royal Hospital for Sick Children, Edinburgh, UK
| | - Ahad Mahmud Khan
- Projahnmo Research Foundation, Dhaka, Bangladesh
- Usher Institute, The University of Edinburgh, Edinburgh, UK
| | | | | | | | - Nazma Begum
- Projahnmo Research Foundation, Dhaka, Bangladesh
| | - Mohammod Shahidullah
- Department of Neonatology, Bangabandhu Sheikh Mujib Medical University, Dhaka, Bangladesh
| | - Muhammad Shariful Islam
- Directorate General of Health Services, Ministry of Health and Family Welfare, Government of Bangladesh, Dhaka, Bangladesh
| | - John Norrie
- Usher Institute, Edinburgh Clinical Trials Unit, University of Edinburgh No. 9, Bioquarter, Edinburgh, UK
| | - Harry Campbell
- Usher Institute, The University of Edinburgh, Edinburgh, UK
| | - Aziz Sheikh
- Usher Institute, The University of Edinburgh, Edinburgh, UK
| | - Abdullah H Baqui
- Department of International Health, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, USA
| | - Eric D McCollum
- Department of International Health, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, USA
- Global Program in Pediatric Respiratory Sciences, Eudowood Division of Pediatric Respiratory Sciences, Department of Pediatrics, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
| |
|
34
|
|
35
|
Das N, Topalovic M, Janssens W. AIM in Respiratory Disorders. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
36
|
Morgan S. Respiratory assessment: undertaking a physical examination of the chest in adults. Nurs Stand 2021; 37:75-82. [PMID: 34931506 DOI: 10.7748/ns.2021.e11602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/18/2021] [Indexed: 11/09/2022]
Abstract
Nurses frequently encounter patients in respiratory distress or with respiratory complications, whether from acute disease or a long-term condition. A physical examination of the chest should be conducted as part of a comprehensive respiratory assessment of the patient, and should follow a systematic approach that includes inspection, palpation, percussion and auscultation. Nurses undertaking these hands-on components of respiratory assessments require adequate knowledge of the procedures involved, as well as practical skills that should be developed under supervision. This article outlines how to undertake a physical examination of the chest in adults.
Affiliation(s)
- Sara Morgan
- Faculty of Life Sciences and Education, University of South Wales, Pontypridd, Wales
| |
|
37
|
Gairola S, Tom F, Kwatra N, Jain M. RespireNet: A Deep Neural Network for Accurately Detecting Abnormal Lung Sounds in Limited Data Setting. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:527-530. [PMID: 34891348 DOI: 10.1109/embc46164.2021.9630091] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Auscultation of respiratory sounds is the primary tool for screening and diagnosing lung diseases. Automated analysis, coupled with digital stethoscopes, can play a crucial role in enabling tele-screening of fatal lung diseases. Deep neural networks (DNNs) have shown potential to solve such problems, and are an obvious choice. However, DNNs are data hungry, and the largest respiratory dataset, ICBHI, has only 6898 breathing cycles, which is quite small for training a satisfactory DNN model. In this work we propose RespireNet, a simple CNN-based model, along with a suite of novel techniques (device-specific fine-tuning, concatenation-based augmentation, blank region clipping, and smart padding) enabling us to efficiently use the small-sized dataset. We perform extensive evaluation on the ICBHI dataset, and improve upon the state-of-the-art results for 4-class classification by 2.2%. Code: https://github.com/microsoft/RespireNet.
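Two of the techniques named above, concatenation-based augmentation and smart padding, lend themselves to short sketches. The versions below are simplified illustrations of the underlying ideas, not RespireNet's exact code:

```python
import numpy as np

rng = np.random.default_rng(2)

def concat_augment(cycles, label, k=2):
    """Concatenation-based augmentation: splice together breathing cycles of
    the same class to create a new training sample."""
    picks = rng.choice(len(cycles), size=k, replace=False)
    return np.concatenate([cycles[i] for i in picks]), label

def smart_pad(x, target_len):
    """Pad a short cycle to target_len by repeating the waveform rather than
    zero-filling, so the padded region still looks like breathing."""
    if len(x) >= target_len:
        return x[:target_len]
    reps = int(np.ceil(target_len / len(x)))
    return np.tile(x, reps)[:target_len]

# Three same-class cycles of different lengths (toy data).
cycles = [rng.normal(size=n) for n in (300, 450, 520)]
new_x, new_y = concat_augment(cycles, label=1)
padded = smart_pad(cycles[0], 1000)
print(len(new_x), len(padded))
```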
|
38
|
Abreu V, Oliveira A, Alberto Duarte J, Marques A. Computerized respiratory sounds in paediatrics: A systematic review. Respiratory Medicine: X 2021; 3:100027. [DOI: 10.1016/j.yrmex.2021.100027] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
|
39
|
Nguyen T, Pernkopf F. Crackle Detection In Lung Sounds Using Transfer Learning And Multi-Input Convolutional Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:80-83. [PMID: 34891244 DOI: 10.1109/embc46164.2021.9630577] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Large annotated lung sound databases are publicly available and can be used to train algorithms for diagnosis systems. However, it can be a challenge to develop a well-performing algorithm on small non-public data, which have only a few subjects and show differences in recording devices and setup. In this paper, we use transfer learning to tackle the mismatch of the recording setup. This allows us to transfer knowledge from one dataset to another for crackle detection in lung sounds. In particular, a single-input convolutional neural network (CNN) model is pre-trained on a source domain using ICBHI 2017, the largest publicly available database of lung sounds. We use log-mel spectrogram features of respiratory cycles of lung sounds. The pre-trained network is used to build a multi-input CNN model, which shares the same network architecture for respiratory cycles and their corresponding respiratory phases. The multi-input model is then fine-tuned on the target domain of our self-collected lung sound database for classifying crackles and normal lung sounds. Our experimental results show significant performance improvements of 9.84% (absolute) in F-score on the target domain using the multi-input CNN model and transfer learning for crackle detection. Clinical relevance: crackle detection in lung sounds, multi-input convolutional neural networks, transfer learning.
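The log-mel spectrogram features mentioned above are the STFT power spectrum mapped through a triangular mel filterbank and log-compressed. A compact numpy sketch (frame sizes, mel count, and the test tone are illustrative choices, not the paper's settings):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(x, sr, n_fft=512, hop=256, n_mels=32):
    """Log-mel spectrogram: windowed STFT power mapped through a triangular
    mel filterbank, then log-compressed."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2      # (T, n_fft//2 + 1)

    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[m - 1, k] = (right - k) / max(right - center, 1)

    return np.log(power @ fb.T + 1e-10)                   # (T, n_mels)

sr = 4000
t = np.arange(sr) / sr                    # 1 s of audio
x = np.sin(2 * np.pi * 440 * t)           # a 440 Hz test tone
S = log_mel_spectrogram(x, sr)
print(S.shape)                            # (frames, mel bands)
```

In practice a library routine (e.g. a mel-spectrogram helper) would replace this, but the sketch shows what the feature actually is.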
|
40
|
McLane I, Lauwers E, Stas T, Busch-Vishniac I, Ides K, Verhulst S, Steckel J. Comprehensive Analysis System for Automated Respiratory Cycle Segmentation and Crackle Peak Detection. IEEE J Biomed Health Inform 2021; 26:1847-1860. [PMID: 34705660 DOI: 10.1109/jbhi.2021.3123353] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Digital auscultation is a well-known method for assessing lung sounds, but in typical practice it remains a subjective process relying on human interpretation. Several methods have been presented for detecting or analyzing crackles, but they are limited in their real-world application because few have been integrated into comprehensive systems or validated on non-ideal data. This work details a complete signal analysis methodology for analyzing crackles in challenging recordings. The procedure comprises five sequential processing blocks: (1) motion artifact detection, (2) a deep learning denoising network, (3) respiratory cycle segmentation, (4) separation of discontinuous adventitious sounds from vesicular sounds, and (5) crackle peak detection. This system uses a collection of new methods and robustness-focused improvements on previous methods to analyze respiratory cycles and the crackles therein. To validate its accuracy, the system is tested on a database of 1000 simulated lung sounds with varying levels of motion artifacts, ambient noise, cycle lengths and crackle intensities, in which the ground truths are exactly known. The system performs with an average F-score of 91.07% for detecting motion artifacts and 94.43% for respiratory cycle extraction, and an overall F-score of 94.08% for detecting the locations of individual crackles. The process also successfully handles healthy recordings. Preliminary validation is also presented on a small set of 20 patient recordings, on which the system performs comparably. These methods provide quantifiable analysis of respiratory sounds to enable clinicians to distinguish between types of crackles, their timing within the respiratory cycle, and their level of occurrence. Crackles are among the most common abnormal lung sounds, presenting in multiple cardiorespiratory diseases. These features will contribute to a better understanding of disease severity and progression in an objective, simple and non-invasive way.
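The crackle-location F-scores reported above imply an event-matching step: a detected peak counts as a true positive only if it lies within some tolerance of an as-yet-unmatched reference peak. A minimal sketch of such scoring (the tolerance and one-to-one matching rule are assumptions, not the paper's exact protocol):

```python
def event_f_score(detected, reference, tol=0.02):
    """F-score for crackle peak detection: a detected peak matches an
    unmatched reference peak if their times differ by at most tol seconds."""
    reference = list(reference)
    tp = 0
    for d in detected:
        hits = [i for i, r in enumerate(reference) if abs(d - r) <= tol]
        if hits:
            tp += 1
            reference.pop(hits[0])   # each reference peak matches once
    fp = len(detected) - tp
    fn = len(reference)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Reference crackle times (s) vs. detector output: one miss, one false alarm.
reference = [0.10, 0.35, 0.62, 0.90]
detected  = [0.11, 0.36, 0.80, 0.91]
f = event_f_score(detected, reference)
print(round(f, 3))  # 3 matches, 1 FP, 1 FN -> F = 0.75
```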
|
41
|
Yu H, Zhao J, Liu D, Chen Z, Sun J, Zhao X. Multi-channel lung sounds intelligent diagnosis of chronic obstructive pulmonary disease. BMC Pulm Med 2021; 21:321. [PMID: 34654400 DOI: 10.1186/s12890-021-01682-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Accepted: 09/29/2021] [Indexed: 11/10/2022] Open
Abstract
Background Chronic obstructive pulmonary disease (COPD) is a chronic respiratory disease that seriously threatens people's health, with high morbidity and mortality worldwide. At present, the clinical diagnosis methods for COPD are time-consuming, invasive, and radioactive. Therefore, it is urgent to develop a non-invasive and rapid COPD severity diagnosis technique suitable for daily screening in clinical practice. Results This study established an effective model for the preliminary diagnosis of COPD severity using lung sounds from few channels. First, the time-frequency-energy features of 12-channel lung sounds were extracted by the Hilbert–Huang transform. Then, channels and features were screened by the reliefF algorithm. Finally, the feature sets were input into a support vector machine to diagnose COPD severity, and its performance was compared with Bayes, decision tree, and deep belief network classifiers. Experimental results show that high classification performance can be achieved by the proposed model using only the 4-channel lung sounds from the L1, L2, L3, and L4 channels. The accuracy, sensitivity, and specificity for mild COPD versus moderate + severe COPD were 89.13%, 87.72%, and 91.01%, respectively. The classification performance for moderate COPD versus severe COPD was 94.26%, 97.32%, and 89.93% for accuracy, sensitivity, and specificity, respectively. Conclusion This model provides a standardized evaluation with high classification performance, which can assist doctors in completing the preliminary diagnosis of COPD severity immediately, and has important clinical significance.
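The Hilbert transform underlying the Hilbert–Huang features above can be computed via the FFT. The sketch below extracts simple instantaneous-amplitude and instantaneous-frequency summaries from one channel; it illustrates only the transform step, not the paper's full HHT-plus-reliefF-plus-SVM pipeline, and the summary features are illustrative assumptions.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the Hilbert-transform step): zero out
    negative frequencies and double the positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope_energy_features(x, sr):
    """Instantaneous amplitude/frequency summaries plus mean power, as simple
    time-frequency-energy descriptors of one lung sound channel."""
    z = analytic_signal(x)
    amp = np.abs(z)                                   # instantaneous amplitude
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * sr / (2 * np.pi)     # instantaneous frequency
    return np.array([amp.mean(), amp.std(), inst_freq.mean(), np.mean(x ** 2)])

sr = 2000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100 * t)          # 100 Hz test tone
feats = envelope_energy_features(x, sr)
print(np.round(feats, 3))  # amplitude ~1, inst. freq ~100 Hz, power ~0.5
```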
|
42
|
Al-Dhlan KA. An adaptive speech signal processing for COVID-19 detection using deep learning approach. Int J Speech Technol 2021; 25:641-649. [PMID: 34456611 PMCID: PMC8380014 DOI: 10.1007/s10772-021-09878-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Accepted: 07/29/2021] [Indexed: 06/13/2023]
Abstract
Researchers and scientists have been conducting plenty of research on COVID-19 since its outbreak. Healthcare professionals, laboratory technicians, and front-line workers such as sanitary workers and data collectors are putting tremendous effort into containing the COVID-19 pandemic. Currently, the reverse transcription polymerase chain reaction (RT-PCR) testing strategy determines the presence of the COVID-19 virus. RT-PCR processing is expensive, time-consuming, and entails violations of social distancing rules. Therefore, this research work introduces generative adversarial network (GAN) deep learning to quickly detect COVID-19 from speech signals. The proposed system consists of two stages: pre-processing and classification. This work uses the least mean square (LMS) filter algorithm to remove noise and artifacts from the input speech signals. After noise removal, the proposed GAN classification method analyses mel-frequency cepstral coefficient (MFCC) features and classifies COVID-19 signals and non-COVID-19 signals. The results show a more prominent correlation of MFCCs with various COVID-19 cough and breathing sounds, while the sound features remain robust between the COVID-19 and non-COVID-19 models. Compared with existing artificial neural network, convolutional neural network, and recurrent neural network models, the proposed GAN method obtains the best results. The precision, recall, accuracy, and F-measure of the proposed GAN are 96.54%, 96.15%, 98.56%, and 0.96, respectively.
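The LMS pre-processing step described above adapts an FIR filter so that a noise reference cancels the noise embedded in the primary input, leaving the error signal as the cleaned estimate. A minimal numpy sketch of adaptive noise cancellation (the signal model, tap count, and step size are illustrative assumptions):

```python
import numpy as np

def lms_denoise(noisy, noise_ref, n_taps=16, mu=0.005):
    """LMS adaptive noise canceller: adapt FIR weights so the filtered noise
    reference tracks the noise in the primary input; the error signal is the
    cleaned-signal estimate."""
    w = np.zeros(n_taps)
    out = np.zeros(len(noisy))
    for n in range(n_taps, len(noisy)):
        x = noise_ref[n - n_taps + 1:n + 1][::-1]   # newest sample first
        y = w @ x                                    # noise estimate
        e = noisy[n] - y                             # error = signal estimate
        w += 2 * mu * e * x                          # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(3)
sr = 1000
t = np.arange(2 * sr) / sr
clean = np.sin(2 * np.pi * 5 * t)              # stand-in for a speech signal
noise = rng.normal(0, 1, len(t))
noisy = clean + 0.5 * noise                    # primary input
cleaned = lms_denoise(noisy, noise)

# Residual noise power before vs. after (skipping the adaptation ramp-up).
err_before = np.mean((noisy[500:] - clean[500:]) ** 2)
err_after = np.mean((cleaned[500:] - clean[500:]) ** 2)
print(round(err_before, 3), round(err_after, 3))
```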
Affiliation(s)
- Kawther A. Al-Dhlan
- Information and Computer Science Department, University of Ha’il, Hail, Kingdom of Saudi Arabia
| |
|
43
|
Ferreira-Cardoso H, Jácome C, Silva S, Amorim A, Redondo MT, Fontoura-Matias J, Vicente-Ferreira M, Vieira-Marques P, Valente J, Almeida R, Fonseca JA, Azevedo I. Lung Auscultation Using the Smartphone-Feasibility Study in Real-World Clinical Practice. Sensors (Basel) 2021; 21:4931. [PMID: 34300670 PMCID: PMC8309818 DOI: 10.3390/s21144931] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 07/03/2021] [Accepted: 07/16/2021] [Indexed: 11/17/2022]
Abstract
Conventional lung auscultation is essential in the management of respiratory diseases. However, detecting adventitious sounds outside medical facilities remains challenging. We assessed the feasibility of lung auscultation using the smartphone's built-in microphone in real-world clinical practice. We recruited 134 patients (median [interquartile range] age 16 [11-22.25] years; 54% male; 31% cystic fibrosis, 29% other respiratory diseases, 28% asthma; 12% no respiratory diseases) at the Pediatrics and Pulmonology departments of a tertiary hospital. First, clinicians performed conventional auscultation with analog stethoscopes at 4 locations (trachea, right anterior chest, right and left lung bases) and documented any adventitious sounds. Then, smartphone auscultation was recorded twice at the same four locations. The recordings (n = 1060) were classified by two annotators. Seventy-three percent of the recordings had sufficient quality (obtained in 92% of the participants), with the quality proportion being higher at the trachea (82%) and in the children's group (75%). Adventitious sounds were present in only 35% of the participants and 14% of the recordings, which may have contributed to the fair agreement between conventional and smartphone auscultation (85%; k = 0.35 (95% CI 0.26-0.44)). Our results show that smartphone auscultation was feasible, but further investigation is required to improve its agreement with conventional auscultation.
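The agreement statistic reported above (k = 0.35 alongside 85% raw agreement) is Cohen's kappa, which corrects percent agreement for the agreement expected by chance. A small sketch with toy ratings (not the study's data):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: inter-rater agreement corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    p_obs = np.mean(a == b)                      # observed agreement
    p_chance = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
    return (p_obs - p_chance) / (1 - p_chance)

# 0 = no adventitious sound heard, 1 = adventitious sound heard.
conventional = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
smartphone   = [1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
k = cohens_kappa(conventional, smartphone)
print(round(k, 2))  # 80% raw agreement -> kappa 0.52
```

Note how 80% raw agreement shrinks to kappa ~0.52 once chance agreement on the majority "normal" class is discounted, which is exactly why rare adventitious sounds depress kappa in the study above.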
Affiliation(s)
| | - Cristina Jácome
- MEDCIDS—Department of Community Medicine, Health Information and Decision, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal; (R.A.); (J.A.F.)
- CINTESIS—Center for Health Technology and Services Research, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal;
| | - Sónia Silva
- Department of Pediatrics, Centro Hospitalar Universitário de São João, 4200-319 Porto, Portugal; (S.S.); (J.F.-M.); (M.V.-F.); (I.A.)
| | - Adelina Amorim
- Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal; (H.F.-C.); (A.A.)
- Department of Pulmonology, Centro Hospitalar Universitário de São João, 4200-319 Porto, Portugal;
| | - Margarida T. Redondo
- Department of Pulmonology, Centro Hospitalar Universitário de São João, 4200-319 Porto, Portugal;
| | - José Fontoura-Matias
- Department of Pediatrics, Centro Hospitalar Universitário de São João, 4200-319 Porto, Portugal; (S.S.); (J.F.-M.); (M.V.-F.); (I.A.)
| | - Margarida Vicente-Ferreira
- Department of Pediatrics, Centro Hospitalar Universitário de São João, 4200-319 Porto, Portugal; (S.S.); (J.F.-M.); (M.V.-F.); (I.A.)
| | - Pedro Vieira-Marques
- CINTESIS—Center for Health Technology and Services Research, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal;
| | - José Valente
- MEDIDA—Serviços em Medicina, Educação, Investigação, Desenvolvimento e Avaliação, LDA, 4200-386 Porto, Portugal;
| | - Rute Almeida
- MEDCIDS—Department of Community Medicine, Health Information and Decision, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal; (R.A.); (J.A.F.)
- CINTESIS—Center for Health Technology and Services Research, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal;
| | - João Almeida Fonseca
- MEDCIDS—Department of Community Medicine, Health Information and Decision, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal; (R.A.); (J.A.F.)
- CINTESIS—Center for Health Technology and Services Research, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal;
- MEDIDA—Serviços em Medicina, Educação, Investigação, Desenvolvimento e Avaliação, LDA, 4200-386 Porto, Portugal;
| | - Inês Azevedo
- Department of Pediatrics, Centro Hospitalar Universitário de São João, 4200-319 Porto, Portugal; (S.S.); (J.F.-M.); (M.V.-F.); (I.A.)
- Department of Obstetrics, Gynecology and Pediatrics, Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- EpiUnit, Institute of Public Health, University of Porto, 4050-091 Porto, Portugal
| |
|
44
|
Alqudaihi KS, Aslam N, Khan IU, Almuhaideb AM, Alsunaidi SJ, Ibrahim NMAR, Alhaidari FA, Shaikh FS, Alsenbel YM, Alalharith DM, Alharthi HM, Alghamdi WM, Alshahrani MS. Cough Sound Detection and Diagnosis Using Artificial Intelligence Techniques: Challenges and Opportunities. IEEE Access 2021; 9:102327-102344. [PMID: 34786317 PMCID: PMC8545201 DOI: 10.1109/access.2021.3097559] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 07/09/2021] [Indexed: 06/02/2023]
Abstract
Coughing is a common symptom of several respiratory diseases. The sound and type of cough are useful features to consider when diagnosing a disease. Respiratory infections pose a significant risk to human lives worldwide, as well as a significant economic burden, particularly in countries with limited therapeutic resources. In this study, we reviewed the latest proposed technologies that were used to control the impact of respiratory diseases. Artificial Intelligence (AI) is a promising technology that aids in data analysis and prediction of results, thereby ensuring people's well-being. We found that the cough symptom can be reliably used by AI algorithms to detect and diagnose different types of known diseases, including pneumonia, pulmonary edema, asthma, tuberculosis (TB), COVID-19, pertussis, and other respiratory diseases. We also identified the techniques that produced the best results for diagnosing respiratory disease using cough samples. This study presents the most recent challenges, solutions, and opportunities in respiratory disease detection and diagnosis, allowing practitioners and researchers to develop better techniques.
Affiliation(s)
- Kawther S. Alqudaihi
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Nida Aslam
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Irfan Ullah Khan
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Abdullah M. Almuhaideb
- Department of Networks and Communications, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Shikah J. Alsunaidi
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Nehad M. Abdel Rahman Ibrahim
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Fahd A. Alhaidari
- Department of Networks and Communications, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Fatema S. Shaikh
- Department of Computer Information Systems, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Yasmine M. Alsenbel
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Dima M. Alalharith
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Hajar M. Alharthi
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Wejdan M. Alghamdi
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Mohammed S. Alshahrani
- Department of Emergency Medicine, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| |
|
45
|
Hsu FS, Huang SR, Huang CW, Huang CJ, Cheng YR, Chen CC, Hsiao J, Chen CW, Chen LC, Lai YC, Hsu BF, Lin NJ, Tsai WL, Wu YL, Tseng TL, Tseng CT, Chen YT, Lai F. Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database-HF_Lung_V1. PLoS One 2021; 16:e0254134. [PMID: 34197556 PMCID: PMC8248710 DOI: 10.1371/journal.pone.0254134] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 06/20/2021] [Indexed: 01/15/2023] Open
Abstract
A reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently required in many clinical scenarios, such as monitoring the disease progression of coronavirus disease 2019, to replace conventional auscultation with a handheld stethoscope. However, a robust computerized respiratory sound analysis algorithm for breath phase detection and adventitious sound detection at the recording level has not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchus labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests using long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also compared performance between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results revealed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed the LSTM-based models in most of the defined tasks in terms of F1 scores and areas under the receiver operating characteristic curves. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, the addition of a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
Affiliation(s)
- Fu-Shun Hsu
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
| | | | | | - Chao-Jung Huang
- Joint Research Center for Artificial Intelligence Technology and All Vista Healthcare, National Taiwan University, Taipei, Taiwan
| | - Yuan-Ren Cheng
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Department of Life Science, College of Life Science, National Taiwan University, Taipei, Taiwan
- Institute of Biomedical Sciences, Academia Sinica, Taipei, Taiwan
| | | | - Jack Hsiao
- HCC Healthcare Group, New Taipei, Taiwan
| | - Chung-Wei Chen
- Department of Critical Care Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
| | - Li-Chin Chen
- Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
| | - Yen-Chun Lai
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
| | - Bi-Fang Hsu
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
| | - Nian-Jhen Lin
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
- Division of Pulmonary Medicine, Far Eastern Memorial Hospital, New Taipei, Taiwan
| | - Wan-Ling Tsai
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
| | - Yi-Lin Wu
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
| | | | | | - Yi-Tsun Chen
- Heroic Faith Medical Science Co., Ltd., Taipei, Taiwan
| | - Feipei Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
| |
|
46
|
Gupta P, Wen H, Di Francesco L, Ayazi F. Detection of pathological mechano-acoustic signatures using precision accelerometer contact microphones in patients with pulmonary disorders. Sci Rep 2021; 11:13427. [PMID: 34183695 PMCID: PMC8238985 DOI: 10.1038/s41598-021-92666-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 06/11/2021] [Indexed: 11/09/2022] Open
Abstract
Monitoring pathological mechano-acoustic signals emanating from the lungs is critical for timely and cost-effective healthcare delivery. Adventitious lung sounds, including crackles, wheezes, rhonchi, bronchial breath sounds, stridor and pleural rub, together with abnormal breathing patterns, serve as essential clinical biomarkers for the early identification, accurate diagnosis and monitoring of pulmonary disorders. Here, we present a wearable sensor module comprising a hermetically encapsulated, high-precision accelerometer contact microphone (ACM) that enables both episodic and longitudinal assessment of lung sounds, breathing patterns and respiratory rates using a single integrated sensor. This enhanced ACM sensor leverages a nano-gap transduction mechanism to achieve high sensitivity to the weak high-frequency vibrations that occur on the surface of the skin due to underlying lung pathologies. The performance of the ACM sensor was compared to recordings from a state-of-the-art digital stethoscope, and the efficacy of the developed system is demonstrated in an exploratory research study recording pathological mechano-acoustic signals from hospitalized patients with a chronic obstructive pulmonary disease (COPD) exacerbation, pneumonia, or acute decompensated heart failure. This unobtrusive wearable system can enable both episodic and longitudinal evaluation of lung sounds for the early detection and/or ongoing monitoring of pulmonary disease.
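As an illustration of the longitudinal respiratory-rate monitoring such a sensor enables (a minimal sketch, not the authors' actual processing pipeline; the band limits and the synthetic signal are assumptions), the dominant frequency of a chest-surface signal within the plausible breathing band can be read off a Fourier spectrum:

```python
import numpy as np

def estimate_breathing_rate(signal: np.ndarray, fs: float) -> float:
    """Estimate breaths per minute as the dominant spectral peak
    in the 0.1-1.0 Hz band (6-60 breaths/min)."""
    signal = signal - np.mean(signal)           # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)      # plausible breathing band
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0                      # Hz -> breaths per minute

# Synthetic 60 s recording: 0.25 Hz breathing (15 breaths/min) plus noise
fs = 50.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.standard_normal(t.size)
print(estimate_breathing_rate(x, fs))           # close to 15 bpm
```

A real device would additionally need motion-artifact rejection and windowed (short-time) estimates rather than one spectrum over the whole recording.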
Affiliation(s)
- Pranav Gupta
- Georgia Institute of Technology, Atlanta, GA, 30308, USA
- Haoran Wen
- StethX Microsystems, Atlanta, GA, 30308, USA
- Lorenzo Di Francesco
- Department of Medicine, Division of General Internal Medicine, Emory University, Atlanta, GA, 30303, USA
- Farrokh Ayazi
- Ken Byers Professor in Microsystems, Georgia Institute of Technology, Atlanta, GA, 30308, USA
47
Jung SY, Liao CH, Wu YS, Yuan SM, Sun CT. Efficiently Classifying Lung Sounds through Depthwise Separable CNN Models with Fused STFT and MFCC Features. Diagnostics (Basel) 2021; 11:732. [PMID: 33924146] [PMCID: PMC8074359] [DOI: 10.3390/diagnostics11040732]
Abstract
Lung sounds remain vital in clinical diagnosis as they reveal associations with pulmonary pathologies. With COVID-19 spreading across the world, it has become more pressing for medical professionals to better leverage artificial intelligence for faster and more accurate lung auscultation. This research proposes a feature engineering process that extracts dedicated features for a depthwise separable convolutional neural network (DS-CNN) to classify lung sounds accurately and efficiently. We extracted a total of three features for the shrunk DS-CNN model: the short-time Fourier transform (STFT) feature, the Mel-frequency cepstral coefficient (MFCC) feature, and the fusion of these two. We observed that while DS-CNN models trained on either the STFT or the MFCC feature alone achieved accuracies of 82.27% and 73.02%, respectively, fusing both features led to a higher accuracy of 85.74%. In addition, our method achieved 16 times higher inference speed on an edge device with only 0.45% lower accuracy than RespireNet. These findings indicate that fusing STFT and MFCC features in a DS-CNN is a suitable model design for lightweight edge devices to achieve accurate AI-aided detection of lung diseases.
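The efficiency the shrunk DS-CNN exploits comes from factorizing each standard convolution into a per-channel (depthwise) k×k filter followed by a 1×1 (pointwise) channel-mixing step. A minimal parameter-count comparison makes the saving concrete (the layer sizes here are illustrative assumptions, not the paper's architecture):

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    # standard convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def ds_conv_params(k: int, c_in: int, c_out: int) -> int:
    # depthwise: one k x k filter per input channel
    # pointwise: a 1 x 1 convolution mixing c_in channels into c_out
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = conv_params(k, c_in, c_out)       # 73728 parameters
ds = ds_conv_params(k, c_in, c_out)     # 8768 parameters
print(std, ds, round(std / ds, 1))      # prints: 73728 8768 8.4
```

The roughly 8x reduction in parameters (and multiply-accumulates) at this layer size is what makes DS-CNNs attractive for edge inference.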
Affiliation(s)
- Shing-Yun Jung
- Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan
- Chia-Hung Liao
- Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan
- Yu-Sheng Wu
- Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan
- Shyan-Ming Yuan
- Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu 300, Taiwan
- Chuen-Tsai Sun
- Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu 300, Taiwan
48
Pal R, Barney A. Iterative envelope mean fractal dimension filter for the separation of crackles from normal breath sounds. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102454]
49
Ntalampiras S, Potamitis I. Automatic acoustic identification of respiratory diseases. Evolving Systems 2021; 12:69-77. [DOI: 10.1007/s12530-020-09339-0]
50
De La Torre Cruz J, Cañadas Quesada FJ, Ruiz Reyes N, García Galán S, Carabias Orti JJ, Peréz Chica G. Monophonic and Polyphonic Wheezing Classification Based on Constrained Low-Rank Non-Negative Matrix Factorization. Sensors (Basel) 2021; 21:1661. [PMID: 33670892] [PMCID: PMC7957792] [DOI: 10.3390/s21051661]
Abstract
The appearance of wheezing sounds is widely considered by physicians as a key indicator for detecting early pulmonary disorders, or the severity associated with respiratory diseases such as asthma and chronic obstructive pulmonary disease. From a signal-processing perspective, monophonic and polyphonic wheezing classification is still a challenging topic since both types of wheezes are sinusoidal in nature. Unlike most classification algorithms, in which interference caused by normal respiratory sounds is not addressed in depth, our first contribution proposes a novel Constrained Low-Rank Non-negative Matrix Factorization (CL-RNMF) approach which, to the best of the authors' knowledge, has never been applied to wheezing classification. It incorporates several constraints (sparseness and smoothness) and a low-rank configuration to extract the wheezing spectral content while minimizing the acoustic interference from normal respiratory sounds. The second contribution automatically analyzes the harmonic structure of the energy distribution associated with the estimated wheezing spectrogram to classify the type of wheezing. Experimental results show that: (i) the proposed method outperforms the most recent and relevant state-of-the-art wheezing classification method by approximately 8% in accuracy; (ii) unlike state-of-the-art methods based on classifiers, the proposed method uses an unsupervised approach that does not require any training.
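To make the low-rank factorization idea concrete, the following is a minimal, unconstrained NMF with multiplicative updates on a toy non-negative matrix standing in for a magnitude spectrogram. It is only a sketch of the underlying decomposition: the paper's CL-RNMF adds sparseness and smoothness constraints that are omitted here, and the data are synthetic.

```python
import numpy as np

def nmf(V: np.ndarray, rank: int, n_iter: int = 500, eps: float = 1e-9):
    """Plain NMF via multiplicative updates minimizing Frobenius error.

    Factorizes a non-negative matrix V (freq x time) into spectral
    bases W (freq x rank) and temporal activations H (rank x time).
    """
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)    # update bases
    return W, H

# Toy "spectrogram" that is exactly rank 2: two spectral patterns
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

In the wheezing setting, the low-rank/sparseness trade-off is what lets a few basis columns of W capture the near-sinusoidal wheeze content while broadband normal breath sounds are left in the residual.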
Affiliation(s)
- Juan De La Torre Cruz
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Francisco Jesús Cañadas Quesada
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Nicolás Ruiz Reyes
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Sebastián García Galán
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Julio José Carabias Orti
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain
- Gerardo Peréz Chica
- Pneumology Clinical Management Unit of the University Hospital of Jaen, Av. del Ejercito Espanol, 10, 23007 Jaen, Spain