1. Sethi AK, Muddaloor P, Anvekar P, Agarwal J, Mohan A, Singh M, Gopalakrishnan K, Yadav A, Adhikari A, Damani D, Kulkarni K, Aakre CA, Ryu AJ, Iyer VN, Arunachalam SP. Digital Pulmonology Practice with Phonopulmography Leveraging Artificial Intelligence: Future Perspectives Using Dual Microwave Acoustic Sensing and Imaging. Sensors (Basel) 2023; 23:5514. [PMID: 37420680 DOI: 10.3390/s23125514]
Abstract
Respiratory disorders, among the leading causes of disability worldwide, have driven constant evolution in management technologies, including the incorporation of artificial intelligence (AI) into the recording and analysis of lung sounds to aid diagnosis in clinical pulmonology practice. Although lung sound auscultation is common clinical practice, its diagnostic use is limited by high variability and subjectivity. We review the origin of lung sounds, the auscultation and processing methods developed over the years, and their clinical applications to assess the potential for a lung sound auscultation and analysis device. Respiratory sounds result from the intra-pulmonary collision of air molecules, leading to turbulent flow and subsequent sound production. These sounds have been recorded via electronic stethoscopes and analyzed using back-propagation neural networks, wavelet transform models, Gaussian mixture models and, more recently, machine learning and deep learning models, with possible applications in asthma, COVID-19, asbestosis and interstitial lung disease. The purpose of this review is to summarize lung sound physiology, recording technologies and AI-based diagnostic methods for digital pulmonology practice. Future research and development in recording and analyzing respiratory sounds in real time could revolutionize clinical practice for both patients and healthcare personnel.
Affiliation(s)
- Arshia K Sethi
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Pratyusha Muddaloor
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Joshika Agarwal
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Anmol Mohan
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Keerthy Gopalakrishnan
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Microwave Engineering and Imaging Laboratory (MEIL), Division of Gastroenterology & Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Ashima Yadav
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Aakriti Adhikari
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Devanshi Damani
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Department of Internal Medicine, Texas Tech University Health Science Center, El Paso, TX 79995, USA
- Kanchan Kulkarni
- INSERM, Centre de Recherche Cardio-Thoracique de Bordeaux, University of Bordeaux, U1045, F-33000 Bordeaux, France
- IHU Liryc, Heart Rhythm Disease Institute, Fondation Bordeaux Université, F-33600 Pessac, France
- Alexander J Ryu
- Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Vivek N Iyer
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Shivaram P Arunachalam
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Microwave Engineering and Imaging Laboratory (MEIL), Division of Gastroenterology & Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
2. Cinyol F, Baysal U, Köksal D, Babaoğlu E, Ulaşlı SS. Incorporating support vector machine to the classification of respiratory sounds by Convolutional Neural Network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104093]
3. Extraction of low-dimensional features for single-channel common lung sound classification. Med Biol Eng Comput 2022; 60:1555-1568. [DOI: 10.1007/s11517-022-02552-w]
4. Niu J, Cai M, Shi Y, Ren S, Xu W, Gao W, Luo Z, Reinhardt JM. A Novel Method for Automatic Identification of Breathing State. Sci Rep 2019; 9:103. [PMID: 30643176 PMCID: PMC6331627 DOI: 10.1038/s41598-018-36454-5]
Abstract
Sputum deposition blocks the airways of patients and leads to blood oxygen desaturation, so medical staff must periodically check the breathing state of intubated patients, which increases their workload. In this paper, we describe a system designed to acquire respiratory sounds from intubated subjects, extract audio features, and classify these sounds to detect the presence of sputum. Our method uses 13 features extracted from the time-frequency spectrum of the respiratory sounds. To test the system, 220 respiratory sound samples were collected: half from patients with sputum present and half from patients with no sputum present. In ten-fold cross-validation, the logistic classifier identified breath sounds with sputum present with a sensitivity of 93.36% and a specificity of 93.36%. The feature extraction and classification methods are useful and reliable for sputum detection. This approach differs from waveform research and can provide a better visualization of sputum conditions. The proposed system can be used in the ICU to inform medical staff when sputum is present in a patient's trachea.
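The evaluation protocol described above (ten-fold cross-validation with sensitivity and specificity computed over held-out folds) can be sketched as follows. Everything here is a toy stand-in: a single synthetic feature and a simple class-midpoint threshold replace the paper's 13 time-frequency features and logistic classifier.

```python
import random

random.seed(0)

# Toy stand-in for the paper's data: 220 samples, half labelled
# "sputum present" (1) and half "absent" (0), each reduced to one
# synthetic feature value (the real system used 13 features).
def make_sample(label):
    centre = 1.0 if label else -1.0
    return (random.gauss(centre, 0.8), label)

data = [make_sample(i % 2) for i in range(220)]
random.shuffle(data)

def fit_threshold(train):
    # Illustrative classifier: threshold at the midpoint of class means.
    pos = [x for x, y in train if y]
    neg = [x for x, y in train if not y]
    cut = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0
    return lambda x: 1 if x > cut else 0

# Ten-fold cross-validation: each fold is held out for testing once.
folds = [data[i::10] for i in range(10)]
tp = tn = fp = fn = 0
for k in range(10):
    train = [s for j, f in enumerate(folds) if j != k for s in f]
    clf = fit_threshold(train)
    for x, y in folds[k]:
        pred = clf(x)
        if y and pred:
            tp += 1
        elif y and not pred:
            fn += 1
        elif not y and pred:
            fp += 1
        else:
            tn += 1

sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

With the synthetic class separation chosen here the toy classifier lands well above chance, but the exact figures depend on the random seed; the 93.36%/93.36% reported in the paper refer to its real feature set and classifier.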
Affiliation(s)
- Jinglong Niu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA, 52246, United States
- Maolin Cai
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Yan Shi
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Shuai Ren
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Weiqing Xu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Wei Gao
- Department of Respiration, Beijing Anzhen Hospital, Capital Medical University, Beijing, 100029, China
- Zujin Luo
- Department of Respiratory and Critical Care Medicine, Beijing Engineering Research Center of Respiratory and Critical Care Medicine, Beijing Institute of Respiratory Medicine, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, 100043, China
- Joseph M Reinhardt
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA, 52246, United States
5.
Abstract
Recent developments in sensor technology and computational analysis methods enable new strategies to measure and interpret lung acoustic signals that originate internally, such as breathing or vocal sounds, or are externally introduced, as in chest percussion or airway insonification. A better understanding of these sounds has led to new instrumentation that offers highly accurate and portable measurement options in the hospital, in the clinic, and even at home. This review outlines the instrumentation for acoustic stimulation and measurement of the lungs. We first review the fundamentals of acoustic lung signals and the pathophysiology of the diseases these signals are used to detect. We then focus on the different methods of measuring and creating signals used in recent research for pulmonary disease diagnosis. These new methods, combined with signal processing and modeling techniques, reduce noise and allow improved feature extraction and signal classification. We conclude by presenting the results of human subject studies that take advantage of both the instrumentation and the signal processing tools to accurately diagnose common lung diseases. This paper emphasizes the active areas of research within modern lung acoustics and encourages the standardization of future work in this field.
6.

7. Rao A, Chu S, Batlivala N, Zetumer S, Roy S. Improved Detection of Lung Fluid With Standardized Acoustic Stimulation of the Chest. IEEE J Transl Eng Health Med 2018; 6:3200107. [PMID: 30310761 PMCID: PMC6168182 DOI: 10.1109/jtehm.2018.2863366]
Abstract
Accumulation of excess air and water in the lungs leads to breakdown of respiratory function and is a common cause of patient hospitalization. Compact and non-invasive methods to detect the changes in lung fluid accumulation can allow physicians to assess patients' respiratory conditions. In this paper, an acoustic transducer and a digital stethoscope system are proposed as a targeted solution for this clinical need. Alterations in the structure of the lungs lead to measurable changes which can be used to assess lung pathology. We standardize this procedure by sending a controlled signal through the lungs of six healthy subjects and six patients with lung disease. We extract mel-frequency cepstral coefficients and spectroid audio features, commonly used in classification for music retrieval, to characterize subjects as healthy or diseased. Using the K-nearest neighbors algorithm, we demonstrate 91.7% accuracy in distinguishing between healthy subjects and patients with lung pathology.
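The classification step can be illustrated with a hand-rolled K-nearest-neighbors vote; the two-dimensional points below are invented stand-ins for the paper's mel-frequency cepstral and spectroid features, not data from the study.

```python
import math
import random

random.seed(1)

# Synthetic two-class data: "healthy" clustered near (0, 0) and
# "diseased" near (2, 2). Real feature vectors would be MFCCs etc.
def sample(label):
    c = 2.0 if label else 0.0
    return ([random.gauss(c, 0.7), random.gauss(c, 0.7)], label)

train = [sample(i % 2) for i in range(60)]

def knn_predict(x, train, k=3):
    # Majority vote among the k nearest training points (Euclidean).
    nearest = sorted(train, key=lambda s: math.dist(x, s[0]))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if 2 * votes > k else 0

test = [sample(i % 2) for i in range(40)]
accuracy = sum(knn_predict(x, train) == y for x, y in test) / len(test)
print(f"accuracy={accuracy:.3f}")
```

On these well-separated toy clusters the vote is nearly always right; the 91.7% reported above refers to the study's real acoustic features and subjects.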
Affiliation(s)
- Adam Rao
- Department of Bioengineering and Therapeutic Sciences, University of California at San Francisco, San Francisco, CA 94158, USA
- Simon Chu
- School of Medicine, University of California at San Francisco, San Francisco, CA 94143, USA
- Samuel Zetumer
- School of Medicine, University of California at San Francisco, San Francisco, CA 94143, USA
- Shuvo Roy
- Department of Bioengineering and Therapeutic Sciences, University of California at San Francisco, San Francisco, CA 94158, USA
8. Niu J, Shi Y, Cai M, Cao Z, Wang D, Zhang Z, Zhang XD. Detection of sputum by interpreting the time-frequency distribution of respiratory sound signal using image processing techniques. Bioinformatics 2018; 34:820-827. [PMID: 29040453 PMCID: PMC6192228 DOI: 10.1093/bioinformatics/btx652]
Abstract
Motivation Sputum in the trachea is hard to expectorate and to detect directly in unconscious patients, especially those in the Intensive Care Unit, so medical staff must regularly check the condition of sputum in the trachea. This is time-consuming, and the necessary skills are difficult to acquire. Currently, there are few automatic approaches to serve as alternatives to this manual approach. Results We develop an automatic approach to diagnose the condition of the sputum. Our approach uses a system involving a medical device and quantitative analytic methods. In this approach, the time-frequency distribution of respiratory sound signals, determined from the spectrum, is treated as an image, and sputum detection is performed by interpreting the patterns in the image through preprocessing and feature extraction. In this study, 272 respiratory sound samples (145 sputum sound and 127 non-sputum sound samples) were collected from 12 patients. We apply leave-one-out cross-validation over the 12 patients to assess the performance of our approach: the sound samples of 11 patients are used to predict the sound samples of the remaining patient. The results show that our automatic approach can classify the sputum condition at an accuracy rate of 83.5%. Availability and implementation The MATLAB codes and examples of datasets explored in this work are available at Bioinformatics online. Contact yesoyou@gmail.com or douglaszhang@umac.mo. Supplementary information Supplementary data are available at Bioinformatics online.
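The core idea, treating the time-frequency distribution as an image and reading features off it, can be sketched as below. The naive short-time DFT, the pure-tone test signal, and the half-of-peak binarization threshold are illustrative assumptions, not the paper's actual pipeline.

```python
import math

SR = 800  # assumed sample rate (Hz) for the toy signal
sig = [math.sin(2 * math.pi * 100 * n / SR) for n in range(SR)]  # 100 Hz tone

def stft_magnitude(sig, frame=80, nbins=20):
    # Naive short-time DFT: rows are time frames, columns are frequency
    # bins of width SR/frame = 10 Hz. O(N*K), fine at this tiny scale.
    image = []
    for start in range(0, len(sig) - frame + 1, frame):
        win = sig[start:start + frame]
        row = []
        for k in range(nbins):
            re = sum(w * math.cos(2 * math.pi * k * n / frame) for n, w in enumerate(win))
            im = sum(w * math.sin(2 * math.pi * k * n / frame) for n, w in enumerate(win))
            row.append(math.hypot(re, im))
        image.append(row)
    return image

image = stft_magnitude(sig)

# Treat the spectrogram as an image: binarize at half the peak value and
# measure the fraction of "bright" cells, a crude image-domain feature.
peak = max(max(row) for row in image)
binary = [[1 if v > 0.5 * peak else 0 for v in row] for row in image]
active_fraction = sum(map(sum, binary)) / (len(binary) * len(binary[0]))
print(f"active_fraction={active_fraction:.3f}")
```

For the 100 Hz tone, all the energy concentrates in one frequency column of the image; real respiratory sounds would produce richer patterns, which is what the paper's image-processing features summarize.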
Affiliation(s)
- Jinglong Niu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Beijing Engineering Research Center of Diagnosis and Treatment of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, Beijing, China
- Yan Shi
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Beijing Engineering Research Center of Diagnosis and Treatment of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, Beijing, China
- Faculty of Health Sciences, University of Macau, Taipa, Macau, China
- The State Key Laboratory of Fluid Power Transmission and Control, Zhejiang University, Hangzhou, China
- Maolin Cai
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Beijing Engineering Research Center of Diagnosis and Treatment of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, Beijing, China
- Zhixin Cao
- Beijing Engineering Research Center of Diagnosis and Treatment of Respiratory and Critical Care Medicine, Beijing Chaoyang Hospital, Beijing, China
- Dandan Wang
- Faculty of Health Sciences, University of Macau, Taipa, Macau, China
- Zhaozhi Zhang
- Department of Statistical Science, Duke University, Durham, NC, USA
9. Pramono RXA, Bowyer S, Rodriguez-Villegas E. Automatic adventitious respiratory sound analysis: A systematic review. PLoS One 2017; 12:e0177926. [PMID: 28552969 PMCID: PMC5446130 DOI: 10.1371/journal.pone.0177926]
Abstract
Background Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison have not been well established. Objective To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. Data sources A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEE Xplore (1984-2016) databases. Additional articles were obtained from the references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Study selection Only articles that focused on adventitious sound detection or classification based on respiratory sounds, with performance reported and sufficient information provided for the work to be approximately repeated, were included. Data extraction Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. Data synthesis A total of 77 reports from the literature were included in this review. 55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub and squawk, as well as on pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Limitations Direct comparison of the performance of the surveyed works cannot be made because the input data used by each differed. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. Conclusion A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases.
Affiliation(s)
- Stuart Bowyer
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Esther Rodriguez-Villegas
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
10. Sengupta N, Sahidullah M, Saha G. Lung sound classification using cepstral-based statistical features. Comput Biol Med 2016; 75:118-29. [PMID: 27286184 DOI: 10.1016/j.compbiomed.2016.05.013]
Affiliation(s)
- Nandini Sengupta
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, Kharagpur 721302, India
- Md Sahidullah
- Speech and Image Processing Unit, School of Computing, University of Eastern Finland, Joensuu 80101, Finland
- Goutam Saha
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, Kharagpur 721302, India
11. Bokov P, Mahut B, Flaud P, Delclaux C. Wheezing recognition algorithm using recordings of respiratory sounds at the mouth in a pediatric population. Comput Biol Med 2016; 70:40-50. [PMID: 26802543 DOI: 10.1016/j.compbiomed.2016.01.002]
Abstract
BACKGROUND Respiratory diseases in children are a common reason for physician visits. A diagnostic difficulty arises when parents hear wheezing that is no longer present during the medical consultation. Thus, an outpatient objective tool for recognition of wheezing is of clinical value. METHOD We developed a wheezing recognition algorithm from respiratory sounds recorded with a smartphone placed near the mouth. A total of 186 recordings were obtained in a pediatric emergency department, mostly in toddlers (mean age 20 months). After exclusion of recordings with artefacts and those with a single clinical operator auscultation, 95 recordings with agreement of two operators on the auscultation diagnosis (27 with wheezing and 68 without) were subjected to a two-phase algorithm (signal analysis and a pattern classifier using machine learning algorithms) to classify the records. RESULTS The best performance (71.4% sensitivity and 88.9% specificity) was observed with a Support Vector Machine-based algorithm. We further tested the algorithm on a set of 39 recordings having a single operator and found fair agreement (kappa=0.28, 95% CI [0.12, 0.45]) between the algorithm and the operator. CONCLUSIONS The main advantage of such an algorithm is its use of contact-free sound recording, which is valuable in the pediatric population.
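As an illustration of the Support Vector Machine step, here is a minimal linear SVM trained with Pegasos-style stochastic sub-gradient descent on synthetic two-dimensional points. The data, hyperparameters, and the omission of a bias term (the toy classes are symmetric about the origin) are assumptions for the sketch, not details of the study.

```python
import random

random.seed(2)

# Synthetic "wheeze" (+1) vs "no wheeze" (-1) feature points, the two
# classes centred at (1.5, 1.5) and (-1.5, -1.5).
def make(label):
    c = 1.5 * label
    return ([random.gauss(c, 0.6), random.gauss(c, 0.6)], label)

data = [make(1 if i % 2 else -1) for i in range(100)]

# Pegasos-style training of a linear SVM: stochastic sub-gradient steps
# on the hinge loss with L2 regularisation (no bias term needed here).
w, lam = [0.0, 0.0], 0.01
for t in range(1, 2001):
    x, y = random.choice(data)
    eta = 1.0 / (lam * t)
    w = [wi * (1.0 - eta * lam) for wi in w]     # shrink (regulariser)
    if y * (w[0] * x[0] + w[1] * x[1]) < 1.0:    # margin violated
        w = [wi + eta * y * xi for wi, xi in zip(w, x)]

accuracy = sum(y * (w[0] * x[0] + w[1] * x[1]) > 0 for x, y in data) / len(data)
print(f"training accuracy={accuracy:.2f}")
```

A production system would use a tuned SVM library with a kernel and a proper train/test split; this sketch only shows the hinge-loss mechanics behind such a classifier.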
Affiliation(s)
- Plamen Bokov
- Assistance Publique-Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Service de Physiologie - Clinique de la Dyspnée, Paris, France; Université Paris Descartes, Paris Sorbonne Cité, Paris, France
- Bruno Mahut
- Assistance Publique-Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Service de Physiologie - Clinique de la Dyspnée, Paris, France
- Patrice Flaud
- Laboratoire Matière et Systèmes Complexes, UMR 7057, Université Paris Diderot, Paris, France
- Christophe Delclaux
- Assistance Publique-Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Service de Physiologie - Clinique de la Dyspnée, Paris, France; Université Paris Descartes, Paris Sorbonne Cité, Paris, France; CIC Plurithématique 9201, Hôpital Européen Georges Pompidou, Paris, France
12. Göğüş F, Karlık B, Harman G. Identification of Pulmonary Disorders by Using Different Spectral Analysis Methods. Int J Comput Int Sys 2016. [DOI: 10.1080/18756891.2016.1204110]
13. Mazić I, Bonković M, Džaja B. Two-level coarse-to-fine classification algorithm for asthma wheezing recognition in children's respiratory sounds. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2015.05.002]
14. Developing a reference of normal lung sounds in healthy Peruvian children. Lung 2014; 192:765-73. [PMID: 24943262 DOI: 10.1007/s00408-014-9608-3]
Abstract
PURPOSE Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. METHODS 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope; 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected, and features extracted from spectral and temporal signal representations contributed to the profiling of lung sounds. RESULTS Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively, and 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters, most of which demonstrated linear relationships with age, height, and weight, while no differences between genders were noted. Older children had a faster-decaying spectrum than younger ones. Features such as spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. CONCLUSIONS Extracted lung sound features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments.
15. Chen MY, Chou CH. Applying cybernetic technology to diagnose human pulmonary sounds. J Med Syst 2014; 38:58. [PMID: 24878780 DOI: 10.1007/s10916-014-0058-5]
Abstract
Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because much of the energy of the physiological signals composed of heart sounds and pulmonary sounds (PSs) lies below 120 Hz, where the human ear is not sensitive, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, with the PS signals decomposed into frequency subbands. Using a statistical method, we extracted 17 features that served as the input vectors of a neural network. We propose a two-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy compared with a haploid (single) neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
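The wavelet feature-extraction stage can be sketched with a hand-rolled Haar decomposition: split the signal into detail subbands over several levels and summarise each with simple statistics. The particular statistics below (subband energy and peak magnitude) are illustrative stand-ins; the paper's 17 features and its choice of wavelet are not reproduced here.

```python
import math

def haar_step(x):
    # One level of the orthonormal Haar transform: pairwise sums
    # (approximation) and differences (detail), each scaled by 1/sqrt(2).
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_features(sig, levels=3):
    # Decompose into `levels` detail subbands; per subband keep energy
    # and peak magnitude, plus the residual approximation energy.
    feats, cur = [], sig
    for _ in range(levels):
        cur, detail = haar_step(cur)
        feats.append(sum(d * d for d in detail))    # subband energy
        feats.append(max(abs(d) for d in detail))   # subband peak
    feats.append(sum(c * c for c in cur))           # approximation energy
    return feats

# Toy input: a 5-cycle sine over 256 samples standing in for a lung sound.
sig = [math.sin(2 * math.pi * 5 * n / 256) for n in range(256)]
feats = wavelet_features(sig)
print([round(f, 4) for f in feats])
```

Because the Haar transform is orthonormal, the subband energies plus the approximation energy sum to the signal energy, which gives a quick sanity check on the decomposition.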
Affiliation(s)
- Mei-Yung Chen
- National Taiwan Normal University, 162 Heping E. Road Sec. 1, Taipei, Taiwan

16. New approaches for spectro-temporal feature extraction with applications to respiratory sound classification. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2013.07.033]
17. Emmanouilidou D, Patil K, West J, Elhilali M. A multiresolution analysis for detection of abnormal lung sounds. Annu Int Conf IEEE Eng Med Biol Soc 2013; 2012:3139-42. [PMID: 23366591 DOI: 10.1109/embc.2012.6346630]
Abstract
Automated analysis and detection of abnormal lung sound patterns have great potential for improving access to standardized diagnosis of pulmonary diseases, especially in low-resource settings. In the current study, we develop signal processing tools for the analysis of paediatric auscultations recorded under non-ideal noisy conditions. The proposed model is based on a biomimetic multi-resolution analysis of the spectro-temporal modulation details in lung sounds. The methodology provides a detailed description of joint spectral and temporal variations in the signal and proves more robust than frequency-based techniques in distinguishing crackles and wheezes from normal breathing sounds.
Affiliation(s)
- Dimitra Emmanouilidou
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
18.

19. Dokur Z, Ölmez T. Classification of Respiratory Sounds by Using an Artificial Neural Network. Int J Pattern Recogn 2011. [DOI: 10.1142/s0218001403002526]
Abstract
In this paper, a classification method for respiratory sounds (RSs) in patients with asthma and in healthy subjects is presented. A wavelet transform is applied to a window containing 256 samples, and the elements of the feature vectors are obtained from the wavelet coefficients. The best feature elements are selected by using dynamic programming. A Grow and Learn (GAL) neural network, a Kohonen network and a multi-layer perceptron (MLP) are used for the classification. RSs of patients with asthma and of healthy subjects are successfully classified by the GAL network.
Affiliation(s)
- Zümray Dokur
- Department of Electronics and Communication Engineering, Istanbul Technical University, Istanbul, Turkey
- Tamer Ölmez
- Department of Electronics and Communication Engineering, Istanbul Technical University, Istanbul, Turkey
20. Charleston-Villalobos S, Martinez-Hernandez G, Gonzalez-Camarena R, Chi-Lem G, Carrillo J, Aljama-Corrales T. Assessment of multichannel lung sounds parameterization for two-class classification in interstitial lung disease patients. Comput Biol Med 2011; 41:473-82. [PMID: 21571265 DOI: 10.1016/j.compbiomed.2011.04.009]
21
Gurung A, Scrafford CG, Tielsch JM, Levine OS, Checkley W. Computerized lung sound analysis as diagnostic aid for the detection of abnormal lung sounds: a systematic review and meta-analysis. Respir Med 2011; 105:1396-403. [PMID: 21676606 DOI: 10.1016/j.rmed.2011.05.007] [Citation(s) in RCA: 93] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2011] [Revised: 05/09/2011] [Accepted: 05/11/2011] [Indexed: 10/18/2022]
Abstract
RATIONALE The standardized use of a stethoscope for chest auscultation in clinical research is limited by its inherent inter-listener variability. Electronic auscultation and automated classification of recorded lung sounds may help prevent some of these shortcomings. OBJECTIVE We sought to perform a systematic review and meta-analysis of studies implementing computerized lung sound analysis (CLSA) to aid in the detection of abnormal lung sounds for specific respiratory disorders. METHODS We searched for articles on CLSA in MEDLINE, EMBASE, Cochrane Library and ISI Web of Knowledge through July 31, 2010. Following qualitative review, we conducted a meta-analysis to estimate the sensitivity and specificity of CLSA for the detection of abnormal lung sounds. MEASUREMENTS AND MAIN RESULTS Of 208 articles identified, we selected eight studies for review. Most studies employed either electret microphones or piezoelectric sensors for auscultation, and Fourier Transform and Neural Network algorithms for analysis and automated classification of lung sounds. Overall sensitivity for the detection of wheezes or crackles using CLSA was 80% (95% CI 72-86%) and specificity was 85% (95% CI 78-91%). CONCLUSIONS While quality data on CLSA are relatively limited, analysis of existing information suggests that CLSA can provide a relatively high specificity for detecting abnormal lung sounds such as crackles and wheezes. Further research and product development could promote the value of CLSA in research studies or its diagnostic utility in clinical settings.
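Pooled estimates like the sensitivity and specificity above can be illustrated with simple fixed-effect pooling of per-study 2x2 counts (the counts below are invented, and published meta-analyses often use bivariate random-effects models instead):

```python
# Each study contributes (true_pos, false_neg, true_neg, false_pos)
# from its classifier-vs-reference comparison; values are illustrative.
studies = [
    (40, 10, 45, 8),
    (33, 7, 50, 9),
    (25, 6, 30, 5),
]
tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies)
fp = sum(s[3] for s in studies)
sensitivity = tp / (tp + fn)  # pooled: detected abnormal / all abnormal
specificity = tn / (tn + fp)  # pooled: correct normal / all normal
print(round(sensitivity, 2), round(specificity, 2))  # 0.81 0.85
```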
Affiliation(s)
- Arati Gurung
- Division of Pulmonary and Critical Care, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
22
Bahoura M. Pattern recognition methods applied to respiratory sounds classification into normal and wheeze classes. Comput Biol Med 2009; 39:824-43. [PMID: 19631934 DOI: 10.1016/j.compbiomed.2009.06.011] [Citation(s) in RCA: 86] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2007] [Revised: 06/10/2009] [Accepted: 06/26/2009] [Indexed: 11/19/2022]
Abstract
In this paper, we present pattern recognition methods proposed to classify respiratory sounds into normal and wheeze classes. We evaluate and compare feature extraction techniques based on the Fourier transform, linear predictive coding, the wavelet transform and Mel-frequency cepstral coefficients (MFCC), in combination with classification methods based on vector quantization, Gaussian mixture models (GMM) and artificial neural networks, using receiver operating characteristic curves. We propose the use of an optimized threshold to discriminate the wheeze class from the normal one. A post-processing filter is also employed to considerably improve the classification accuracy. Experimental results show that our approach, based on MFCC features combined with a GMM, is well suited to classifying respiratory sounds into normal and wheeze classes. McNemar's test demonstrated a significant difference between the results obtained by the presented classifiers (p<0.05).
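The GMM decision rule with an optimized threshold can be sketched as follows. The mixture parameters below are invented for illustration (in practice they are fit to MFCC features by expectation-maximization), and the threshold simply shifts the log-likelihood-ratio decision boundary.

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    # Total log-likelihood of 1-D samples under a Gaussian mixture.
    x = np.asarray(x, dtype=float)[:, None]
    comp = (np.log(weights)
            - 0.5 * np.log(2 * np.pi * variances)
            - 0.5 * (x - means) ** 2 / variances)
    return np.sum(np.logaddexp.reduce(comp, axis=1))

# Toy class models: "normal" centred near 0, "wheeze" with modes near 3.
normal_model = (np.array([0.7, 0.3]), np.array([0.0, 1.0]), np.array([1.0, 1.0]))
wheeze_model = (np.array([0.5, 0.5]), np.array([2.5, 3.5]), np.array([0.5, 0.5]))

def classify(feats, threshold=0.0):
    # The optimized threshold replaces the default zero-crossing rule.
    llr = gmm_loglik(feats, *wheeze_model) - gmm_loglik(feats, *normal_model)
    return "wheeze" if llr > threshold else "normal"

print(classify([2.8, 3.1, 2.6]))   # wheeze
print(classify([0.1, -0.3, 0.4]))  # normal
```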
Affiliation(s)
- Mohammed Bahoura
- Department of Engineering, University of Quebec at Rimouski, allée des Ursulines, Que., Canada.
23
Lev S, Glickman YA, Kagan I, Dahan D, Cohen J, Grinev M, Shapiro M, Singer P. Changes in regional distribution of lung sounds as a function of positive end-expiratory pressure. CRITICAL CARE : THE OFFICIAL JOURNAL OF THE CRITICAL CARE FORUM 2009; 13:R66. [PMID: 19426555 PMCID: PMC2717423 DOI: 10.1186/cc7871] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2008] [Revised: 04/27/2009] [Accepted: 05/10/2009] [Indexed: 11/17/2022]
Abstract
Introduction: Automated mapping of lung sound distribution is a novel area of interest currently investigated in mechanically ventilated, critically ill patients. The objective of the present study was to assess changes in thoracic sound distribution resulting from changes in positive end-expiratory pressure (PEEP). Repeatability of automated lung sound measurements was also evaluated. Methods: Regional lung sound distribution was assessed in 35 mechanically ventilated patients in the intensive care unit (ICU). A total of 201 vibration response imaging (VRI) measurements were collected at different levels of PEEP between 0 and 15 cmH2O. Findings were correlated with tidal volume, oxygen saturation, airway resistance, and dynamic compliance. Eighty-two duplicated readings were performed to evaluate the repeatability of the measurement. Results: A significant shift in sound distribution from the apical to the diaphragmatic lung areas was recorded when increasing PEEP (paired t-tests, P < 0.05). In patients with unilateral lung pathology, this shift was significant in the diseased lung, but not as pronounced in the other lung. No significant difference in lung sound distribution was encountered based on level of ventilator support needed. Decreased lung sound distribution in the base was correlated with lower dynamic compliance. No significant difference was encountered between repeated measurements. Conclusions: Lung sounds shift towards the diaphragmatic lung areas when PEEP increases. Lung sound measurements are highly repeatable in mechanically ventilated patients with various lung pathologies. Further studies are needed in order to fully appreciate the contribution of PEEP increase to diaphragmatic sound redistribution.
Affiliation(s)
- Shaul Lev
- Department of General Intensive Care, Rabin Medical Center, Beilinson Campus, Petach Tikva 49100, Israel.
24
Dokur Z. Respiratory sound classification by using an incremental supervised neural network. Pattern Anal Appl 2008. [DOI: 10.1007/s10044-008-0125-y] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
25
Güler I, Polat H, Ergün U. Combining neural network and genetic algorithm for prediction of lung sounds. J Med Syst 2005; 29:217-31. [PMID: 16050077 DOI: 10.1007/s10916-005-5182-9] [Citation(s) in RCA: 36] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Recognition of lung sounds is an important goal in pulmonary medicine. In this work, we present a combined neural network and genetic algorithm approach intended to aid lung sound classification. Lung sounds were captured from the chest wall of subjects with different pulmonary diseases and from healthy subjects. Sound intervals 15-20 s in duration were sampled from the subjects, and full breath cycles were selected from each interval. For each selected breath cycle, a 256-point Fourier power spectral density (PSD) was calculated. A total of 129 data values calculated by the spectral analysis were selected by the genetic algorithm and applied to the neural network. A multilayer perceptron (MLP) neural network employing the backpropagation training algorithm was used to predict the presence or absence of adventitious sounds (wheezes and crackles). We used genetic algorithms to search for the optimal structure and training parameters of the neural network for better prediction of lung sounds. This application resulted in the design of an optimal network structure, reducing the processing load and time.
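A toy version of genetic-algorithm feature selection in the spirit described above (selection and mutation only, no crossover; a nearest-mean classifier stands in for the MLP that the paper trains per candidate, and the "PSD" data are synthetic): a bit-mask chromosome selects spectral bins, and fitness is classification accuracy with the selected bins.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_data(n=60, n_bins=32):
    # Synthetic spectra: only the first 4 bins separate the classes.
    X = rng.standard_normal((n, n_bins))
    y = rng.integers(0, 2, n)
    X[y == 1, :4] += 2.0
    return X, y

def fitness(mask, X, y):
    # Accuracy of a nearest-class-mean classifier on the selected bins.
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - m1, axis=1) < np.linalg.norm(Xs - m0, axis=1)
    return (pred == y).mean()

X, y = make_data()
pop = rng.integers(0, 2, (20, X.shape[1]))      # random bit-mask population
for _ in range(30):                             # generations
    scores = np.array([fitness(m, X, y) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]     # keep the fittest half
    children = parents[rng.integers(0, 10, 10)].copy()
    flips = rng.random(children.shape) < 0.05   # mutation: flip ~5% of bits
    children[flips] ^= 1
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(m, X, y) for m in pop])]
print(round(fitness(best, X, y), 2))
```

The evolved mask concentrates on the informative bins, illustrating how the search reduces input dimensionality (and hence processing load) without hand-picking features.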
Affiliation(s)
- Inan Güler
- Department of Electronic and Computer Education, Faculty of Technical Education, Gazi University, 06500 Teknikokullar, Ankara, Turkey.
26
Oud M, Maarsingh EJW. Spirometry and forced oscillometry assisted optimal frequency band determination for the computerized analysis of tracheal lung sounds in asthma. Physiol Meas 2005; 25:595-606. [PMID: 15253112 DOI: 10.1088/0967-3334/25/3/001] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
We analysed respiration sounds of individual asthmatic patients as part of developing a method for computerized recognition of the degree of airway obstruction. Respiration sounds were recorded during laboratory sessions of histamine-provoked airway obstruction. We applied an interpolation technique using supervised artificial neural networks to investigate the optimal frequency band for studying tracheal asthmatic lung sounds; the optimal band was found to be 100-2300 Hz. The forced expiratory volume in 1 s (FEV1) and the respiratory resistance parameter Rrs(4) were used to describe the degree of airway obstruction associated with the lung sounds. By comparing the results obtained with the two parameters, we found that, for parametrization of the associated degree of airway obstruction, respiratory resistance measurements are preferable to forced expiratory volume measurements.
Affiliation(s)
- M Oud
- Biomedical Technology Department, Rijksuniversiteit Groningen, The Netherlands.
27
Oud M. Lung function interpolation by means of neural-network-supported analysis of respiration sounds. Med Eng Phys 2003; 25:309-16. [PMID: 12649015 DOI: 10.1016/s1350-4533(02)00198-4] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Respiration sounds of individual asthmatic patients were analysed as part of developing a method for computerised recognition of the degree of airways obstruction. Respiration sounds were recorded during laboratory sessions of allergen-provoked airways obstruction, at several stages of advancing obstruction. Artificial neural networks were applied to relate sound spectra to simultaneously measured lung function values (spirometry parameter FEV(1)). The ability of feedforward neural networks to interpolate obstruction levels for FEV(1) classes whose members were excluded from the training set was tested; in this way, a situation was simulated in which an existing network recognises a new asthmatic attack under the same physiological conditions. It proved possible to interpolate FEV(1) values, and it is concluded that a deterministic relationship exists between sound spectra and the lung function parameter FEV(1). Variance optimisation proved important in optimising the neural network configuration.
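The interpolation experiment can be mimicked on synthetic data, with least-squares regression standing in for the feedforward network: a model is fit on spectra from two outer FEV1 classes only, then asked to predict an intermediate obstruction level never seen during training. The spectrum-generating rule below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def spectrum_for(fev1, n_bins=16):
    # Hypothetical rule: lower FEV1 shifts relative energy across bins.
    base = np.exp(-np.arange(n_bins) * fev1 / 4.0)
    return base / base.sum() + 0.01 * rng.standard_normal(n_bins)

# Train only on the outer classes (FEV1 = 1.0 and 3.0 litres).
train_fev1 = np.repeat([1.0, 3.0], 20)
X_train = np.array([spectrum_for(v) for v in train_fev1])
A = np.hstack([X_train, np.ones((len(X_train), 1))])  # add bias column
w, *_ = np.linalg.lstsq(A, train_fev1, rcond=None)

# Held-out intermediate class (FEV1 = 2.0) absent from training.
mid = np.array([spectrum_for(2.0) for _ in range(20)])
pred = np.hstack([mid, np.ones((20, 1))]) @ w
print(round(float(pred.mean()), 2))
```

The predicted mean lands near the unseen intermediate value, which is the kind of evidence the paper uses to argue for a deterministic spectrum-to-FEV1 relationship.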
Affiliation(s)
- M Oud
- Biomedical Technology Department, Rijksuniversiteit, Groningen, The Netherlands.