1
Mang LD, González Martínez FD, Martinez Muñoz D, García Galán S, Cortina R. Classification of Adventitious Sounds Combining Cochleogram and Vision Transformers. SENSORS (BASEL, SWITZERLAND) 2024; 24:682. [PMID: 38276373] [PMCID: PMC10818433] [DOI: 10.3390/s24020682]
Abstract
Early identification of respiratory irregularities is critical for improving lung health and reducing global mortality rates. The analysis of respiratory sounds plays a significant role in characterizing the condition of the respiratory system and identifying abnormalities. The main contribution of this study is to investigate the performance obtained when the input data, represented by the cochleogram, is used to feed a Vision Transformer (ViT) architecture, since, to our knowledge, this input-classifier combination has not previously been applied to adventitious sound classification. Although ViT has shown promising results in audio classification tasks by applying self-attention to spectrogram patches, we extend this approach by applying the cochleogram, which captures specific spectro-temporal features of adventitious sounds. The proposed methodology is evaluated on the ICBHI dataset. We compare the classification performance of ViT with other state-of-the-art CNN approaches using the spectrogram, Mel frequency cepstral coefficients, constant-Q transform, and cochleogram as input data. Our results confirm the superior classification performance of the combination of cochleogram and ViT, highlighting the potential of ViT for reliable respiratory sound classification. This study contributes to the ongoing effort to develop automatic intelligent techniques that significantly increase the speed and effectiveness of respiratory disease detection, thereby addressing a critical need in the medical field.
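As a rough, hedged illustration of the cochleogram-plus-ViT pairing described in this abstract, the sketch below feeds a single-channel time-frequency image (here assumed to be a 64×64 cochleogram; the gammatone filterbank stage is omitted) into a minimal patch-based Transformer built from standard PyTorch modules. The patch size, embedding width, depth and four-class output are illustrative assumptions, not the configuration used in the cited paper.

```python
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Minimal patch-embedding Transformer for a 1-channel time-frequency image."""
    def __init__(self, img_size=64, patch=8, dim=192, depth=4, heads=3, classes=4):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)  # patchify
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))                  # [CLS] token
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))      # positions
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                       # x: (batch, 1, 64, 64) cochleogram
        z = self.embed(x).flatten(2).transpose(1, 2)                  # (batch, N, dim)
        z = torch.cat([self.cls.expand(len(x), -1, -1), z], dim=1) + self.pos
        return self.head(self.encoder(z)[:, 0])                       # classify [CLS]

# logits = MiniViT()(torch.randn(2, 1, 64, 64))   # -> shape (2, 4)
```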
Affiliation(s)
- Loredana Daria Mang
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain; (F.D.G.M.); (D.M.M.); (S.G.G.)
- Damian Martinez Muñoz
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain; (F.D.G.M.); (D.M.M.); (S.G.G.)
- Sebastián García Galán
- Department of Telecommunication Engineering, University of Jaen, 23700 Linares, Spain; (F.D.G.M.); (D.M.M.); (S.G.G.)
- Raquel Cortina
- Department of Computer Science, University of Oviedo, 33003 Oviedo, Spain;
2
Sethi AK, Muddaloor P, Anvekar P, Agarwal J, Mohan A, Singh M, Gopalakrishnan K, Yadav A, Adhikari A, Damani D, Kulkarni K, Aakre CA, Ryu AJ, Iyer VN, Arunachalam SP. Digital Pulmonology Practice with Phonopulmography Leveraging Artificial Intelligence: Future Perspectives Using Dual Microwave Acoustic Sensing and Imaging. SENSORS (BASEL, SWITZERLAND) 2023; 23:5514. [PMID: 37420680] [DOI: 10.3390/s23125514]
Abstract
Respiratory disorders, among the leading causes of disability worldwide, have driven constant evolution in management technologies, including the incorporation of artificial intelligence (AI) into the recording and analysis of lung sounds to aid diagnosis in clinical pulmonology practice. Although lung sound auscultation is a common clinical practice, its use in diagnosis is limited by its high variability and subjectivity. We review the origin of lung sounds, the various auscultation and processing methods developed over the years, and their clinical applications to understand the potential for a lung sound auscultation and analysis device. Respiratory sounds result from the intra-pulmonary collision of molecules contained in the air, leading to turbulent flow and subsequent sound production. These sounds have been recorded via electronic stethoscopes and analyzed using back-propagation neural networks, wavelet transform models, Gaussian mixture models and, more recently, machine learning and deep learning models, with possible applications in asthma, COVID-19, asbestosis and interstitial lung disease. The purpose of this review was to summarize lung sound physiology, recording technologies and AI-based diagnostic methods for digital pulmonology practice. Future research and development in recording and analyzing respiratory sounds in real time could revolutionize clinical practice for both patients and healthcare personnel.
Affiliation(s)
- Arshia K Sethi
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Pratyusha Muddaloor
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Joshika Agarwal
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Anmol Mohan
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Keerthy Gopalakrishnan
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Microwave Engineering and Imaging Laboratory (MEIL), Division of Gastroenterology & Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Ashima Yadav
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Aakriti Adhikari
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Devanshi Damani
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Department of Internal Medicine, Texas Tech University Health Science Center, El Paso, TX 79995, USA
- Kanchan Kulkarni
- INSERM, Centre de Recherche Cardio-Thoracique de Bordeaux, University of Bordeaux, U1045, F-33000 Bordeaux, France
- IHU Liryc, Heart Rhythm Disease Institute, Fondation Bordeaux Université, F-33600 Pessac, France
- Alexander J Ryu
- Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Vivek N Iyer
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Shivaram P Arunachalam
- GIH Artificial Intelligence Laboratory (GAIL), Division of Gastroenterology and Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Microwave Engineering and Imaging Laboratory (MEIL), Division of Gastroenterology & Hepatology, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
3
Mang L, Canadas-Quesada F, Carabias-Orti J, Combarro E, Ranilla J. Cochleogram-based adventitious sounds classification using convolutional neural networks. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104555]
4
Ghulam Nabi F, Sundaraj K, Shahid Iqbal M, Shafiq M, Palaniappan R. A telemedicine software application for asthma severity levels identification using wheeze sounds classification. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.11.001]
5
Zhang Q, Zhang J, Yuan J, Huang H, Zhang Y, Zhang B, Lv G, Lin S, Wang N, Liu X, Tang M, Wang Y, Ma H, Liu L, Yuan S, Zhou H, Zhao J, Li Y, Yin Y, Zhao L, Wang G, Lian Y. SPRSound: Open-Source SJTU Paediatric Respiratory Sound Database. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2022; 16:867-881. [PMID: 36070274] [DOI: 10.1109/tbcas.2022.3204910]
Abstract
It has been shown that the auscultation of respiratory sounds is advantageous for early respiratory diagnosis. Various methods have been proposed to perform automatic respiratory sound analysis to reduce subjective diagnosis and physicians' workload. However, these methods rely heavily on the quality of the respiratory sound database. In this work, we have developed the first open-access paediatric respiratory sound database, SPRSound. The database consists of 2,683 records and 9,089 respiratory sound events from 292 participants. Accurate labelling is important to achieve a good prediction for the adventitious respiratory sound classification problem. A custom-made sound label annotation software (SoundAnn) has been developed to perform sound editing, sound annotation, and quality assurance evaluation. A team of 11 experienced paediatric physicians was involved in the entire process to establish a gold standard reference for the dataset. To verify the robustness and accuracy of the classification model, we investigated the effects of different feature extraction methods and machine learning classifiers on the classification performance of our dataset. As such, we achieved scores of 75.22%, 61.57%, 56.71%, and 37.84% for the four different classification challenges at the event level and record level.
6
Borwankar S, Verma JP, Jain R, Nayyar A. Improvise approach for respiratory pathologies classification with multilayer convolutional neural networks. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:39185-39205. [PMID: 35505670] [PMCID: PMC9047583] [DOI: 10.1007/s11042-022-12958-1]
Abstract
Every respiratory-related checkup includes audio samples collected from the individual through different tools (sonograph, stethoscope). This audio is analyzed to identify pathology, which requires time and effort. The research work proposed in this paper aims to ease this task through deep learning, diagnosing lung-related pathologies with a Convolutional Neural Network (CNN) fed with transformed features from the audio samples. The International Conference on Biomedical and Health Informatics (ICBHI) corpus dataset was used for the lung sounds. A novel approach is proposed to pre-process the data and pass it through a newly proposed CNN architecture. The combination of the pre-processing steps MFCC, Mel spectrogram, and Chroma CENS with the CNN improves the performance of the proposed system, which helps to make an accurate diagnosis from lung sounds. A comparative analysis shows how the proposed approach performs better than previous state-of-the-art research approaches. It also shows that a wheeze or crackle does not need to be present in the lung sound to carry out the classification of respiratory pathologies.
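A minimal sketch of the kind of pre-processing named above (MFCC, Mel spectrogram and Chroma CENS, here simply time-averaged and concatenated into one vector), assuming librosa is available and the input is a mono audio file; the paper's exact pre-processing pipeline and CNN input layout are not reproduced, and the file name is hypothetical.

```python
import numpy as np
import librosa

def audio_feature_vector(path, sr=22050, n_mfcc=20):
    """Concatenate time-averaged MFCC, log-Mel spectrogram and Chroma CENS features
    so that clips of different lengths map to a single fixed-size vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    logmel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    cens = librosa.feature.chroma_cens(y=y, sr=sr)
    return np.concatenate([f.mean(axis=1) for f in (mfcc, logmel, cens)])

# vec = audio_feature_vector("icbhi_cycle.wav")   # hypothetical file; length 20 + 128 + 12
```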
Affiliation(s)
- Saumya Borwankar
- Institute of Technology, Nirma University, Ahmedabad, Gujarat India
- Rachna Jain
- IT department, Bhagwan Parshuram Institute of Technology, New Delhi, India
- Anand Nayyar
- Graduate School, Faculty of Information Technology, Duy Tan University, Da Nang, 550000 Vietnam
7
A Neural Network-Based Method for Respiratory Sound Analysis and Lung Disease Detection. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083877]
Abstract
Background: Respiratory sound analysis represents a research topic of growing interest in recent times. In fact, in this area there is the potential to automatically infer abnormalities in the preliminary stages of lung dysfunction. Methods: In this paper, we propose a method to analyse respiratory sounds in an automatic way. The aim is to show the effectiveness of machine learning techniques in respiratory sound analysis. A feature vector is gathered directly from breath audio and, by exploiting supervised machine learning techniques, we detect whether the feature vector is related to a patient affected by a lung disease. Moreover, the proposed method is able to characterise the lung disease as asthma, bronchiectasis, bronchiolitis, chronic obstructive pulmonary disease, pneumonia, or a lower or upper respiratory tract infection. Results: A retrospective experimental analysis on 126 patients with 920 recording sessions showed the effectiveness of the proposed method. Conclusion: The experimental analysis demonstrated that it is possible to detect lung disease by exploiting machine learning techniques. We considered several supervised machine learning algorithms, obtaining the most interesting performance with the neural network model, with an F-measure of 0.983 in lung disease detection and 0.923 in lung disease characterisation, improving on the state-of-the-art performance.
8
Classification of Pulmonary Crackle and Normal Lung Sound Using Spectrogram and Support Vector Machine. JOURNAL OF BIOMIMETICS BIOMATERIALS AND BIOMEDICAL ENGINEERING 2022. [DOI: 10.4028/p-tf63b7]
Abstract
Crackles are one of the types of adventitious lung sound heard in patients with interstitial pulmonary fibrosis or cystic fibrosis. Pulmonary crackles, discontinuous and of short duration, appear on inspiration, expiration, or both. To differentiate these pulmonary crackles, medical staff usually use a manual method called auscultation. Various methods have been developed to recognize pulmonary crackles and distinguish them from normal pulmonary sounds using digital signal processing technology. This paper demonstrates a feature extraction method to classify pulmonary crackle and normal lung sounds with a Support Vector Machine (SVM) using several kernels, computing spectrograms of the pulmonary sound to generate a frequency profile. Spectrograms with various resolutions and 3-fold cross-validation were used to divide the training data and the test data in the testing process. The resulting accuracy ranges from 81.4% to 100%. More accuracy values of 100% are produced by feature extraction with several SVM kernels using a 256-point FFT with three variations of windowing parameters than with a 512-point FFT, and the best accuracy of 100% was produced by the STFT-SVM method. This method has the potential to be used in the classification of other biomedical signals. Its advantages are that the number of features produced equals the N-point FFT used, for any signal length, and that the STFT parameters, such as the window type and window length, can be changed flexibly. In this study, only the Kaiser window was tested with specific parameters. Exploring different window types with various parameters would be worthwhile in further research.
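A compact sketch of the pipeline described above, assuming scipy and scikit-learn: a Kaiser-windowed 256-point spectrogram is averaged over time into one frequency profile per recording and classified with an SVM under 3-fold cross-validation. The Kaiser shape parameter, sampling rate and variable names are placeholders rather than the study's settings.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def frequency_profile(x, fs, nfft=256):
    """Kaiser-windowed STFT averaged over time: one (nfft // 2 + 1)-point
    feature vector per recording, regardless of the signal length."""
    f, t, S = spectrogram(x, fs=fs, window=("kaiser", 8.0), nperseg=nfft, nfft=nfft)
    return S.mean(axis=1)

# recordings: list of 1-D arrays; labels: 0 = normal, 1 = crackle (hypothetical data)
# X = np.vstack([frequency_profile(x, fs=8000) for x in recordings])
# print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=3))
```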
9
Habukawa C, Ohgami N, Arai T, Makata H, Tomikawa M, Fujino T, Manabe T, Ogihara Y, Ohtani K, Shirao K, Sugai K, Asai K, Sato T, Murakami K. Wheeze Recognition Algorithm for Remote Medical Care Device in Children: Validation Study. JMIR Pediatr Parent 2021; 4:e28865. [PMID: 33875413] [PMCID: PMC8277407] [DOI: 10.2196/28865]
Abstract
BACKGROUND Since 2020, people's lifestyles have changed substantially due to the worldwide COVID-19 pandemic. In the medical field, although many patients prefer remote medical care, this prevents the physician from examining the patient directly; thus, it is important for patients to accurately convey their condition to the physician. Accordingly, remote medical care should be implemented, and adaptable home medical devices are required. However, only a few highly accurate home medical devices are available for automatic wheeze detection as an exacerbation sign. OBJECTIVE We developed a new handy home medical device with an automatic wheeze recognition algorithm that can be used clinically in noisy environments such as a pediatric consultation room or at home. Moreover, the examination time is only 30 seconds, since young children cannot endure a long examination without crying or moving. The aim of this study was to validate the developed automatic wheeze recognition algorithm as a clinical medical device in children at different institutions. METHODS A total of 374 children aged 4-107 months in the pediatric consultation rooms of 10 institutions were enrolled in this study. All participants aged ≥6 years were diagnosed with bronchial asthma, and patients ≤5 years had reported at least three episodes of wheezes. Wheezes were detected by auscultation with a stethoscope and recorded for 30 seconds using the wheeze recognition algorithm device (HWZ-1000T), developed based on wheeze characteristics following the Computerized Respiratory Sound Analysis guideline, in which the dominant frequency and duration of a wheeze are >100 Hz and >100 ms, respectively. Files containing recorded lung sounds were assessed by each specialist physician and divided into two groups: 177 designated as "wheeze" files and 197 as "no-wheeze" files. Wheeze recognition was compared between the specialist physicians who recorded the lung sounds and the wheeze recognition algorithm. We calculated the sensitivity, specificity, positive predictive value, and negative predictive value for all recorded sound files, and evaluated the influence of age and sex on wheeze detection sensitivity. RESULTS Detection of wheezes was not influenced by age or sex. In all files, wheezes were differentiated from noise using the wheeze recognition algorithm. The sensitivity, specificity, positive predictive value, and negative predictive value of the wheeze recognition algorithm were 96.6%, 98.5%, 98.3%, and 97.0%, respectively. Wheezes were automatically detected, and heartbeat sounds, voices, and crying were automatically identified as no-wheeze sounds by the wheeze recognition algorithm. CONCLUSIONS The wheeze recognition algorithm was verified to identify wheezing with high accuracy; therefore, it might be useful for the practical implementation of asthma management at home and in remote medical care, where only a few home medical devices are currently available for automatic wheeze detection.
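The >100 Hz and >100 ms criteria quoted above suggest a simple frame-based reading, sketched below: flag a recording when the dominant spectrogram frequency stays above 100 Hz for at least 100 ms of consecutive frames. This is only a schematic interpretation of those two thresholds; the actual HWZ-1000T algorithm is not public and certainly includes additional tonality and noise-rejection logic.

```python
import numpy as np
from scipy.signal import spectrogram

def sustained_high_pitch(x, fs, f_min=100.0, dur_min=0.1):
    """Return True if the dominant frequency exceeds f_min for at least dur_min
    seconds of consecutive frames (a schematic CORSA-style wheeze criterion)."""
    f, t, S = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    dominant = f[np.argmax(S, axis=0)]                 # dominant frequency per frame
    hop = t[1] - t[0] if len(t) > 1 else len(x) / fs   # frame hop in seconds
    run = 0
    for active in dominant > f_min:
        run = run + 1 if active else 0
        if run * hop >= dur_min:                       # sustained high-pitched event
            return True
    return False
```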
Affiliation(s)
- Chizu Habukawa
- Department of Pediatrics, Minami Wakayama Medical Center, Tanabe, Japan
- Kenichiro Shirao
- Shirao Clinic of Pediatrics and Pediatric Allergy, Hiroshima, Japan
- Kazuko Sugai
- Sugai Children's Clinic Pediatrics/Allergy, Hiroshima, Japan
- Kei Asai
- Omron Healthcare Co, Ltd, Muko, Japan
- Katsumi Murakami
- Department of Psychosomatic Medicine, Sakai Sakibana Hospital, Sakai, Japan
10
De La Torre Cruz J, Cañadas Quesada FJ, Ruiz Reyes N, García Galán S, Carabias Orti JJ, Peréz Chica G. Monophonic and Polyphonic Wheezing Classification Based on Constrained Low-Rank Non-Negative Matrix Factorization. SENSORS (BASEL, SWITZERLAND) 2021; 21:1661. [PMID: 33670892] [PMCID: PMC7957792] [DOI: 10.3390/s21051661]
Abstract
The appearance of wheezing sounds is widely considered by physicians a key indicator for detecting early pulmonary disorders or even the severity associated with respiratory diseases, as occurs in asthma and chronic obstructive pulmonary disease. From a physician's point of view, monophonic and polyphonic wheezing classification is still a challenging topic in biomedical signal processing since both types of wheezes are sinusoidal in nature. Unlike most classification algorithms, in which the interference caused by normal respiratory sounds is not addressed in depth, our first contribution proposes a novel Constrained Low-Rank Non-negative Matrix Factorization (CL-RNMF) approach, never before applied to wheezing classification to the best of the authors' knowledge, which incorporates several constraints (sparseness and smoothness) and a low-rank configuration to extract the wheezing spectral content, minimizing the acoustic interference from normal respiratory sounds. The second contribution automatically analyzes the harmonic structure of the energy distribution associated with the estimated wheezing spectrogram to classify the type of wheezing. Experimental results report that: (i) the proposed method outperforms the most recent and relevant state-of-the-art wheezing classification method by approximately 8% in accuracy; (ii) unlike state-of-the-art methods based on classifiers, the proposed method uses an unsupervised approach that does not require any training.
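For orientation only, the snippet below runs a plain low-rank NMF on a magnitude spectrogram with scikit-learn; the intuition is that near-sinusoidal wheezes concentrate in a few narrow-band basis vectors. The sparseness and smoothness constraints and the specific low-rank configuration of CL-RNMF are not implemented here, and the rank and iteration count are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import NMF

def low_rank_reconstruction(S, rank=4, iters=500):
    """Factorize a non-negative magnitude spectrogram S (freq x time) as W @ H;
    components of W whose energy sits in narrow bands are candidate wheeze bases."""
    model = NMF(n_components=rank, init="nndsvd", max_iter=iters)
    W = model.fit_transform(S)          # spectral bases  (freq x rank)
    H = model.components_               # activations     (rank x time)
    return W @ H

# S_hat = low_rank_reconstruction(np.abs(stft_matrix))   # stft_matrix is hypothetical
```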
Affiliation(s)
- Juan De La Torre Cruz
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain; (F.J.C.Q.); (N.R.R.); (S.G.G.); (J.J.C.O.)
- Francisco Jesús Cañadas Quesada
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain; (F.J.C.Q.); (N.R.R.); (S.G.G.); (J.J.C.O.)
- Nicolás Ruiz Reyes
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain; (F.J.C.Q.); (N.R.R.); (S.G.G.); (J.J.C.O.)
- Sebastián García Galán
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain; (F.J.C.Q.); (N.R.R.); (S.G.G.); (J.J.C.O.)
- Julio José Carabias Orti
- Department of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, Linares, 23700 Jaen, Spain; (F.J.C.Q.); (N.R.R.); (S.G.G.); (J.J.C.O.)
- Gerardo Peréz Chica
- Pneumology Clinical Management Unit of the University Hospital of Jaen, Av. del Ejercito Espanol, 10, 23007 Jaen, Spain
11
Multi-Time-Scale Features for Accurate Respiratory Sound Classification. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10238606]
Abstract
The COVID-19 pandemic has amplified the urgency of developments in computer-assisted medicine and, in particular, the need for automated tools supporting the clinical diagnosis and assessment of respiratory symptoms. This need was already clear to the scientific community, which launched an international challenge in 2017 at the International Conference on Biomedical Health Informatics (ICBHI) for the implementation of accurate algorithms for the classification of respiratory sounds. In this work, we present a framework for respiratory sound classification based on two different kinds of features: (i) short-term features which summarize sound properties on a time scale of tenths of a second and (ii) long-term features which assess sound properties on a time scale of seconds. Using the publicly available dataset provided by ICBHI, we cross-validated the classification performance of a neural network model over 6895 respiratory cycles and 126 subjects. The proposed model reached an accuracy of 85% ± 3% and a precision of 80% ± 8%, which compare well with the body of literature. The robustness of the predictions was assessed by comparison with state-of-the-art machine learning tools, such as the support vector machine, Random Forest and deep neural networks. The model presented here is therefore suitable for large-scale applications and for adoption in clinical practice. Finally, an interesting observation is that both short-term and long-term features are necessary for accurate classification, which could be the subject of future studies related to their clinical interpretation.
12
Brunese L, Mercaldo F, Reginelli A, Santone A. Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 196:105608. [PMID: 32599338] [PMCID: PMC7831868] [DOI: 10.1016/j.cmpb.2020.105608]
Abstract
BACKGROUND AND OBJECTIVE Coronavirus disease (COVID-19) is an infectious disease caused by a new virus never identified before in humans. This virus causes respiratory disease (for instance, flu) with symptoms such as cough, fever and, in severe cases, pneumonia. The test to detect the presence of this virus in humans is performed on sputum or blood samples, and the outcome is generally available within a few hours or, at most, days. On biomedical imaging, the patient shows signs of pneumonia. In this paper, with the aim of providing a fully automatic and faster diagnosis, we propose the adoption of deep learning for COVID-19 detection from X-rays. METHOD In particular, we propose an approach composed of three phases: the first detects whether a chest X-ray shows the presence of pneumonia; the second discerns between COVID-19 and pneumonia; the last localises the areas in the X-ray that are symptomatic of the presence of COVID-19. RESULTS AND CONCLUSION Experimental analysis on 6,523 chest X-rays from different institutions demonstrated the effectiveness of the proposed approach, with an average time for COVID-19 detection of approximately 2.5 seconds and an average accuracy equal to 0.97.
Affiliation(s)
- Luca Brunese
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy; Institute for Informatics and Telematics, National Research Council of Italy (CNR), Pisa, Italy
- Alfonso Reginelli
- Department of Precision Medicine, University of Campania "Luigi Vanvitelli", Napoli, Italy
- Antonella Santone
- Department of Biosciences and Territory, University of Molise, Pesche (IS), Italy
13
Habukawa C, Ohgami N, Matsumoto N, Hashino K, Asai K, Sato T, Murakami K. A wheeze recognition algorithm for practical implementation in children. PLoS One 2020; 15:e0240048. [PMID: 33031408] [PMCID: PMC7544038] [DOI: 10.1371/journal.pone.0240048]
Abstract
BACKGROUND The detection of wheezes as an exacerbation sign is important in certain respiratory diseases. However, few highly accurate clinical methods are available for the automatic detection of wheezes in children. This study aimed to develop a wheeze detection algorithm for practical implementation in children. METHODS A wheeze recognition algorithm was developed based on wheeze features following the Computerized Respiratory Sound Analysis guidelines. Wheezes can be detected by auscultation with a stethoscope and by automatic computerized lung sound analysis. Lung sounds were recorded for 30 s in 214 children aged 2 months to 12 years and 11 months in a pediatric consultation room. Files containing recorded lung sounds were assessed by two specialist physicians and divided into two groups: 65 were designated as "wheeze" files, and 149 were designated as "no-wheeze" files. All lung sound judgments were agreed upon between the two specialist physicians. We compared wheeze recognition by the specialist physicians with that of the wheeze recognition algorithm and calculated the sensitivity, specificity, positive predictive value, and negative predictive value for all recorded sound files to evaluate the influence of age on wheeze detection sensitivity. RESULTS The detection of wheezes was not influenced by age. In all files, wheezes were differentiated from noise using the wheeze recognition algorithm. The sensitivity, specificity, positive predictive value, and negative predictive value of the wheeze recognition algorithm were 100%, 95.7%, 90.3%, and 100%, respectively. CONCLUSIONS The wheeze recognition algorithm could identify wheezes in sound files and therefore may be useful in the practical implementation of respiratory illness management at home using properly developed devices.
Affiliation(s)
- Chizu Habukawa
- Department of Paediatrics, Minami Wakayama Medical Center, Wakayama, Japan
- Naoto Ohgami
- Clinical Development Department, Technology Development HQ, Development Center, Omron Healthcare Co., Ltd, Kyoto, Japan
- Naoki Matsumoto
- Core Technology Department, Technology Development HQ, Development Center, Omron Healthcare Co., Ltd, Kyoto, Japan
- Kenji Hashino
- Core Technology Department, Technology Development HQ, Development Center, Omron Healthcare Co., Ltd, Kyoto, Japan
- Kei Asai
- Clinical Development Department, Technology Development HQ, Development Center, Omron Healthcare Co., Ltd, Kyoto, Japan
- Tetsuya Sato
- Clinical Development Department, Technology Development HQ, Development Center, Omron Healthcare Co., Ltd, Kyoto, Japan
- Katsumi Murakami
- Department of Psychosomatic Medicine, Sakai Sakibana Hospital, Osaka, Japan
14
De La Torre Cruz J, Cañadas Quesada FJ, Ruiz Reyes N, Vera Candeas P, Carabias Orti JJ. Wheezing Sound Separation Based on Informed Inter-Segment Non-Negative Matrix Partial Co-Factorization. SENSORS (BASEL, SWITZERLAND) 2020; 20:E2679. [PMID: 32397155] [PMCID: PMC7249056] [DOI: 10.3390/s20092679]
Abstract
Wheezing reveals important cues that can be useful in alerting about respiratory disorders, such as Chronic Obstructive Pulmonary Disease. Early detection of wheezing through auscultation allows the physician to become aware of a respiratory disorder in its early stage, thus minimizing the damage the disorder can cause to the subject, especially in low-income and middle-income countries. The proposed method presents an extended version of Non-negative Matrix Partial Co-Factorization (NMPCF) that eliminates most of the acoustic interference caused by normal respiratory sounds while preserving the wheezing content needed by the physician to make a reliable diagnosis of the subject's airway status. This extension, called Informed Inter-Segment NMPCF (IIS-NMPCF), attempts to overcome the drawback of conventional NMPCF, which treats all segments of the spectrogram equally, by giving greater importance in the signal reconstruction of repetitive sound events to those segments where wheezing sounds have not been detected. Specifically, IIS-NMPCF is based on a basis-sharing process in which inter-segment information, informed by a wheezing detection system, is incorporated into the factorization to reconstruct a more accurate model of normal respiratory sounds. Results demonstrate the significant improvement in wheezing sound quality obtained by IIS-NMPCF compared to conventional NMPCF for all the Signal-to-Noise Ratio (SNR) scenarios evaluated; specifically, SDR, SIR and SAR improvements of 5.8 dB, 4.9 dB and 7.5 dB, respectively, in a noisy scenario with SNR = -5 dB.
Affiliation(s)
- Juan De La Torre Cruz
- Departament of Telecommunication Engineering, University of Jaen, Campus Cientifico-Tecnologico de Linares, Avda. de la Universidad, s/n, 23700 Linares, Jaen, Spain; (F.J.C.Q.); (N.R.R.); (P.V.C.); (J.J.C.O.)
15
Nabi FG, Sundaraj K, Lam CK. Identification of asthma severity levels through wheeze sound characterization and classification using integrated power features. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.04.018]
16
Ghulam Nabi F, Sundaraj K, Chee Kiang L, Palaniappan R, Sundaraj S. Wheeze sound analysis using computer-based techniques: a systematic review. ACTA ACUST UNITED AC 2019; 64:1-28. [PMID: 29087951] [DOI: 10.1515/bmt-2016-0219]
Abstract
Wheezes are high-pitched, continuous respiratory acoustic sounds produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction and disease or pathology classification. While this area is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques in the SCOPUS, IEEE Xplore, ACM, PubMed, Springer and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that 1) computerized wheeze analysis can be used for the identification of disease severity level or pathology, 2) further research is required to achieve acceptable rates of identification of the degree of airway obstruction with normal breathing, and 3) analysis using combinations of features and subgroups of the respiratory cycle has provided a pathway to classify various diseases or pathologies that stem from airway obstruction.
Affiliation(s)
- Fizza Ghulam Nabi
- School of Mechatronic Engineering, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia, Phone: +601111519452
- Kenneth Sundaraj
- Faculty of Electronics and Computer Engineering, Universiti Teknikal Malaysia Melaka (UTeM), 76100 Durian Tunggal, Melaka, Malaysia
- Lam Chee Kiang
- School of Mechatronic Engineering, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
- Rajkumar Palaniappan
- School of Electronics Engineering, Vellore Institute of Technology (VIT), Tamil Nadu 632014, India
- Sebastian Sundaraj
- Department of Anesthesiology, Hospital Tengku Ampuan Rahimah (HTAR), 41200 Klang, Selangor, Malaysia
17
Nabi FG, Sundaraj K, Lam CK, Palaniappan R. Analysis of wheeze sounds during tidal breathing according to severity levels in asthma patients. J Asthma 2019; 57:353-365. [PMID: 30810448] [DOI: 10.1080/02770903.2019.1576193]
Abstract
Objective: This study aimed to statistically analyze the behavior of time-frequency features in digital recordings of wheeze sounds obtained from patients with various levels of asthma severity (mild, moderate, and severe), and this analysis was based on the auscultation location and/or breath phase. Method: Segmented and validated wheeze sounds were collected from the trachea and lower lung base (LLB) of 55 asthmatic patients during tidal breathing maneuvers and grouped into nine different datasets. The quartile frequencies F25, F50, F75, F90 and F99, mean frequency (MF) and average power (AP) were computed as features, and a univariate statistical analysis was then performed to analyze the behavior of the time-frequency features. Results: All features generally showed statistical significance in most of the datasets for all severity levels [χ2 = 6.021-71.65, p < 0.05, η2 = 0.01-0.52]. Of the seven investigated features, only AP showed statistical significance in all the datasets. F25, F75, F90 and F99 exhibited statistical significance in at least six datasets [χ2 = 4.852-65.63, p < 0.05, η2 = 0.01-0.52], and F25, F50 and MF showed statistical significance with a large η2 in all trachea-related datasets [χ2 = 13.54-55.32, p < 0.05, η2 = 0.13-0.33]. Conclusion: The results obtained for the time-frequency features revealed that (1) the asthma severity levels of patients can be identified through a set of selected features with tidal breathing, (2) tracheal wheeze sounds are more sensitive and specific predictors of severity levels and (3) inspiratory and expiratory wheeze sounds are almost equally informative.
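As a reference for the features listed in this abstract, here is a small sketch of how the percentile frequencies, mean frequency (MF) and average power (AP) can be computed from a Welch power spectrum with scipy. The percentile set matches the abstract, but the spectral-estimation parameters are arbitrary and not those of the study.

```python
import numpy as np
from scipy.signal import welch

def spectral_features(x, fs, percents=(25, 50, 75, 90, 99)):
    """Percentile frequencies (F25..F99), power-weighted mean frequency (MF)
    and average power (AP) of one wheeze segment."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    cdf = np.cumsum(pxx) / np.sum(pxx)                 # cumulative power distribution
    feats = {f"F{p}": float(f[np.searchsorted(cdf, p / 100)]) for p in percents}
    feats["MF"] = float(np.sum(f * pxx) / np.sum(pxx))
    feats["AP"] = float(np.mean(pxx))
    return feats
```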
Affiliation(s)
- Fizza Ghulam Nabi
- School of Mechatronic Engineering, Universiti Malaysia Perlis, Malaysia
- Kenneth Sundaraj
- Centre for Telecommunication Research & Innovation, Fakulti Kejuruteraan Elektronik & Kejuruteraan Komputer, Universiti Teknikal Malaysia Melaka, Malaysia
- Chee Kiang Lam
- School of Mechatronic Engineering, Universiti Malaysia Perlis, Malaysia
18
Characterization and classification of asthmatic wheeze sounds according to severity level using spectral integrated features. Comput Biol Med 2018; 104:52-61. [PMID: 30439599] [DOI: 10.1016/j.compbiomed.2018.10.035]
Abstract
OBJECTIVE This study aimed to investigate and classify wheeze sounds of asthmatic patients according to their severity level (mild, moderate and severe) using spectral integrated (SI) features. METHOD Segmented and validated wheeze sounds were obtained from auscultation recordings of the trachea and lower lung base of 55 asthmatic patients during tidal breathing manoeuvres. The segments were multi-labelled into 9 groups based on the auscultation location and/or breath phases. Bandwidths were selected based on the physiology, and a corresponding SI feature was computed for each segment. Univariate and multivariate statistical analyses were then performed to investigate the discriminatory behaviour of the features with respect to the severity levels in the various groups. The asthmatic severity levels in the groups were then classified using the ensemble (ENS), support vector machine (SVM) and k-nearest neighbour (KNN) methods. RESULTS AND CONCLUSION All statistical comparisons exhibited a significant difference (p < 0.05) among the severity levels, with few exceptions. In the classification experiments, the ensemble classifier exhibited better performance in terms of sensitivity, specificity and positive predictive value (PPV). The trachea inspiratory group showed the highest classification performance compared with all the other groups. Overall, the best PPVs for the mild, moderate and severe samples were 95% (ENS), 88% (ENS) and 90% (SVM), respectively. With respect to location, trachea-related wheeze sounds were the most sensitive and specific predictors of asthma severity levels. In addition, the classification performances of the inspiratory and expiratory related groups were comparable, suggesting that the samples from these phases are equally informative.
19
Rao A, Chu S, Batlivala N, Zetumer S, Roy S. Improved Detection of Lung Fluid With Standardized Acoustic Stimulation of the Chest. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE-JTEHM 2018; 6:3200107. [PMID: 30310761] [PMCID: PMC6168182] [DOI: 10.1109/jtehm.2018.2863366]
Abstract
Accumulation of excess air and water in the lungs leads to breakdown of respiratory function and is a common cause of patient hospitalization. Compact and non-invasive methods to detect the changes in lung fluid accumulation can allow physicians to assess patients’ respiratory conditions. In this paper, an acoustic transducer and a digital stethoscope system are proposed as a targeted solution for this clinical need. Alterations in the structure of the lungs lead to measurable changes which can be used to assess lung pathology. We standardize this procedure by sending a controlled signal through the lungs of six healthy subjects and six patients with lung disease. We extract mel-frequency cepstral coefficients and spectroid audio features, commonly used in classification for music retrieval, to characterize subjects as healthy or diseased. Using the K-nearest neighbors algorithm, we demonstrate 91.7% accuracy in distinguishing between healthy subjects and patients with lung pathology.
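A loose sketch of that feature-plus-KNN pipeline using librosa and scikit-learn. Here the "spectroid" feature is interpreted as the spectral centroid, which is an assumption on my part, and the recording list, labels and neighbour count are placeholders rather than the study's setup.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def mfcc_centroid_features(y, sr):
    """13 time-averaged MFCCs plus the mean spectral centroid of one recording."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    return np.append(mfcc, centroid)

# recordings: list of (signal, sample_rate) pairs; labels: 0 = healthy, 1 = diseased
# X = np.vstack([mfcc_centroid_features(y, sr) for y, sr in recordings])
# print(cross_val_score(KNeighborsClassifier(n_neighbors=3), X, labels, cv=3))
```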
Affiliation(s)
- Adam Rao
- Department of Bioengineering and Therapeutic Sciences, University of California at San Francisco, San Francisco, CA 94158, USA
- Simon Chu
- School of Medicine, University of California at San Francisco, San Francisco, CA 94143, USA
- Samuel Zetumer
- School of Medicine, University of California at San Francisco, San Francisco, CA 94143, USA
- Shuvo Roy
- Department of Bioengineering and Therapeutic Sciences, University of California at San Francisco, San Francisco, CA 94158, USA
20
Pramono RXA, Bowyer S, Rodriguez-Villegas E. Automatic adventitious respiratory sound analysis: A systematic review. PLoS One 2017; 12:e0177926. [PMID: 28552969] [PMCID: PMC5446130] [DOI: 10.1371/journal.pone.0177926]
Abstract
Background Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. Objective To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. Data sources A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained from references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Study selection Only articles that focused on adventitious sound detection or classification based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated, were included. Data extraction Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. Data synthesis A total of 77 reports from the literature were included in this review. 55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub, squawk, as well as the pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods used varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Limitations Direct comparison of the performance of surveyed works cannot be performed as the input data used by each were different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. Conclusion A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed a high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases.
Affiliation(s)
- Stuart Bowyer
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Esther Rodriguez-Villegas
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
21
The attractor recurrent neural network based on fuzzy functions: An effective model for the classification of lung abnormalities. Comput Biol Med 2017; 84:124-136. [PMID: 28363113] [DOI: 10.1016/j.compbiomed.2017.03.019]
Abstract
The dynamics of the respiratory system are of high significance when it comes to the detection of lung abnormalities, which highlights the importance of presenting a reliable model for them. In this paper, we introduce a novel dynamic modelling method for the characterization of lung sounds (LS), based on the attractor recurrent neural network (ARNN). The ARNN structure allows the development of an effective LS model. Additionally, it has the capability to reproduce the distinctive features of lung sounds using its formed attractors. Furthermore, a novel ARNN topology based on fuzzy functions (FFs-ARNN) is developed. Given the utility of recurrent quantification analysis (RQA) as a tool to assess the nature of complex systems, it was used to evaluate the performance of both the ARNN and the FFs-ARNN models. The experimental results demonstrate the effectiveness of the proposed approaches for multichannel LS analysis. In particular, a classification accuracy of 91% was achieved using FFs-ARNN with sequences of RQA features.
22
Lozano-García M, Fiz JA, Martínez-Rivera C, Torrents A, Ruiz-Manzano J, Jané R. Novel approach to continuous adventitious respiratory sound analysis for the assessment of bronchodilator response. PLoS One 2017; 12:e0171455. [PMID: 28178317] [PMCID: PMC5298277] [DOI: 10.1371/journal.pone.0171455]
Abstract
Background A thorough analysis of continuous adventitious sounds (CAS) can provide distinct and complementary information about bronchodilator response (BDR), beyond that provided by spirometry. Nevertheless, previous approaches to CAS analysis were limited by certain methodological issues. The aim of this study is to propose a new integrated approach to CAS analysis that contributes to improving the assessment of BDR in clinical practice for asthma patients. Methods Respiratory sounds and flow were recorded in 25 subjects, including 7 asthma patients with positive BDR (BDR+), assessed by spirometry, 13 asthma patients with negative BDR (BDR-), and 5 controls. A total of 5149 acoustic components were characterized using the Hilbert spectrum, and used to train and validate a support vector machine classifier, which distinguished acoustic components corresponding to CAS from those corresponding to other sounds. Once the method was validated, BDR was assessed in all participants by CAS analysis, and compared to BDR assessed by spirometry. Results BDR+ patients had a homogeneously high change in the number of CAS after bronchodilation, which agreed with the positive BDR by spirometry, indicating high reversibility of airway obstruction. Nevertheless, we also found an appreciable change in the number of CAS in many BDR- patients, revealing alterations in airway obstruction that were not detected by spirometry. We propose a categorization for the change in the number of CAS, which allowed us to stratify BDR- patients into three consistent groups. Of the 13 BDR- patients, 6 had a high response, similar to BDR+ patients, 4 had a noteworthy medium response, and 1 had a low response. Conclusions In this study, a new non-invasive and integrated approach to CAS analysis is proposed as a highly sensitive tool for assessing BDR in terms of acoustic parameters which, together with spirometry parameters, contribute to improving the stratification of BDR levels in patients with obstructive pulmonary diseases.
Affiliation(s)
- Manuel Lozano-García
- Biomedical Signal Processing and Interpretation Group, Institute for Bioengineering of Catalonia (IBEC), Barcelona, Spain; Biomedical Research Networking Centre in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain
- José Antonio Fiz
- Biomedical Signal Processing and Interpretation Group, Institute for Bioengineering of Catalonia (IBEC), Barcelona, Spain; Biomedical Research Networking Centre in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain; Pulmonology Service, Germans Trias i Pujol University Hospital, Badalona, Spain
- Aurora Torrents
- Pulmonology Service, Germans Trias i Pujol University Hospital, Badalona, Spain
- Juan Ruiz-Manzano
- Pulmonology Service, Germans Trias i Pujol University Hospital, Badalona, Spain
- Raimon Jané
- Biomedical Signal Processing and Interpretation Group, Institute for Bioengineering of Catalonia (IBEC), Barcelona, Spain; Biomedical Research Networking Centre in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain; Department of Automatic Control (ESAII), Universitat Politècnica de Catalunya (UPC)-Barcelona Tech, Barcelona, Spain
23
Li SH, Lin BS, Tsai CH, Yang CT, Lin BS. Design of Wearable Breathing Sound Monitoring System for Real-Time Wheeze Detection. SENSORS (BASEL, SWITZERLAND) 2017; 17:171. [PMID: 28106747] [PMCID: PMC5298744] [DOI: 10.3390/s17010171]
Abstract
In the clinic, the wheezing sound is usually considered an indicator symptom reflecting the degree of airway obstruction. Auscultation is the most common way to diagnose wheezing sounds, but it depends subjectively on the experience of the physician. Several previous studies have attempted to extract features of breathing sounds to detect wheezing sounds automatically. However, there is still a lack of suitable monitoring systems for real-time wheeze detection in daily life. In this study, a wearable and wireless breathing sound monitoring system for real-time wheeze detection is proposed. Moreover, a breathing sound analysis algorithm was designed to continuously extract and analyze the features of breathing sounds and provide objective, quantitative information about breathing sounds to professional physicians. Normalized spectral integration (NSI) was also designed and applied to wheeze detection. The proposed algorithm requires only short-term breathing sound data and low computational complexity to perform real-time wheeze detection, and is therefore suitable for implementation in a commercial portable device with relatively low computing power and memory. The experimental results show that the proposed system provides good wheeze detection performance and might be a useful assisting tool for the analysis of breathing sounds in clinical diagnosis.
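The abstract does not spell out how NSI is computed, so the snippet below only illustrates the general idea of a normalized spectral integration: power integrated inside a presumed wheeze band divided by the total power of a short frame, using scipy. The band limits and frame length are assumptions, not the parameters of the cited system.

```python
import numpy as np
from scipy.signal import welch

def band_power_ratio(frame, fs, band=(100.0, 1000.0)):
    """Power in `band` divided by total frame power; values near 1 indicate
    energy concentrated in the assumed wheeze band."""
    f, pxx = welch(frame, fs=fs, nperseg=min(len(frame), 512))
    in_band = (f >= band[0]) & (f <= band[1])
    return float(np.trapz(pxx[in_band], f[in_band]) / np.trapz(pxx, f))
```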
Affiliation(s)
- Shih-Hong Li
- Department of Thoracic Medicine, Chang Gung Memorial Hospital at Linkou, Taoyuan 33305, Taiwan.
- Department of Respiratory Therapy, College of Medicine, Chang Gung University, Taoyuan 33302, Taiwan.
- Bor-Shing Lin
- Department of Computer Science and Information Engineering, National Taipei University, New Taipei City 23741, Taiwan.
- Chen-Han Tsai
- Institute of Imaging and Biomedical Photonics, National Chiao Tung University, Tainan 71150, Taiwan.
- Cheng-Ta Yang
- Department of Thoracic Medicine, Chang Gung Memorial Hospital at Taoyuan, Taoyuan 33378, Taiwan.
- Department of Respiratory Therapy, College of Medicine, Chang Gung University, Taoyuan 33302, Taiwan.
- Bor-Shyh Lin
- Institute of Imaging and Biomedical Photonics, National Chiao Tung University, Tainan 71150, Taiwan.
24
Sengupta N, Sahidullah M, Saha G. Lung sound classification using cepstral-based statistical features. Comput Biol Med 2016; 75:118-29. [PMID: 27286184] [DOI: 10.1016/j.compbiomed.2016.05.013]
Affiliation(s)
- Nandini Sengupta
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, Kharagpur 721302, India.
- Md Sahidullah
- Speech and Image Processing Unit, School of Computing, University of Eastern Finland, Joensuu 80101, Finland.
- Goutam Saha
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, Kharagpur 721302, India.
25
Faust O, Yu W, Rajendra Acharya U. The role of real-time in biomedical science: A meta-analysis on computational complexity, delay and speedup. Comput Biol Med 2015; 58:73-84. [DOI: 10.1016/j.compbiomed.2014.12.024]
26
|
Palaniappan R, Sundaraj K, Sundaraj S. Artificial intelligence techniques used in respiratory sound analysis--a systematic review. Biomed Tech (Berl) 2015; 59:7-18. [PMID: 24114889 DOI: 10.1515/bmt-2013-0074] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2012] [Accepted: 08/30/2013] [Indexed: 11/15/2022]
Abstract
Artificial intelligence (AI) has recently been established as an alternative to many conventional analysis methods. The implementation of AI techniques for respiratory sound analysis can assist medical professionals in the diagnosis of lung pathologies. This article highlights the importance of AI techniques in the implementation of computer-based respiratory sound analysis. Articles on computer-based respiratory sound analysis using AI techniques were identified through searches of electronic resources such as the IEEE, Springer, Elsevier, PubMed, and ACM digital library databases. Brief descriptions of the types of respiratory sounds and their respective characteristics are provided. Each of the identified studies was then analyzed to determine the specific respiratory sounds or pathology examined, the number of subjects, the signal processing method used, the AI techniques applied, and their performance in the analysis of respiratory sounds. A detailed description of each of these studies is provided. In conclusion, this article offers recommendations for further advancement of respiratory sound analysis.
|
27
|
Lozano M, Fiz JA, Jané R. Automatic Differentiation of Normal and Continuous Adventitious Respiratory Sounds Using Ensemble Empirical Mode Decomposition and Instantaneous Frequency. IEEE J Biomed Health Inform 2015; 20:486-97. [PMID: 25643419 DOI: 10.1109/jbhi.2015.2396636] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Differentiating normal from adventitious respiratory sounds (RS) is a major challenge in the diagnosis of pulmonary diseases. In particular, continuous adventitious sounds (CAS) are of clinical interest because they reflect the severity of certain diseases. This study presents a new classifier that automatically distinguishes normal sounds from CAS. It is based on the multiscale analysis of the instantaneous frequency (IF) and instantaneous envelope (IE) calculated after ensemble empirical mode decomposition (EEMD). These techniques have two major advantages over previous approaches: high temporal resolution is achieved by calculating IF and IE, and EEMD requires no a priori knowledge of signal characteristics. The classifier is based on the fact that the IF dispersion of RS signals markedly decreases when CAS appear in respiratory cycles. Therefore, CAS were detected by using a moving window to calculate the dispersion of IF sequences. The study dataset contained 1494 RS segments extracted from 870 inspiratory cycles recorded from 30 patients with asthma. All cycles and their RS segments were previously classified as containing normal sounds or CAS by a highly experienced physician to obtain a gold-standard classification. A support vector machine (SVM) classifier was trained and tested using an iterative procedure in which the dataset was randomly divided into training (65%) and testing (35%) sets inside a loop. The SVM classifier was also tested on 4592 simulated CAS cycles. High total accuracy was obtained with both recorded (94.6% ± 0.3%) and simulated (92.8% ± 3.6%) signals. We conclude that the proposed method is promising for RS analysis and classification.
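As a rough illustration of the IF-dispersion cue described above, the sketch below computes the instantaneous frequency from the analytic signal and its moving-window interquartile range. The EEMD step used by the authors is omitted for brevity, so this is a simplified stand-in rather than the published method; the window length and the use of the interquartile range as the dispersion measure are assumptions.

import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    # IF (Hz) from the phase derivative of the analytic signal.
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

def if_dispersion(x, fs, win_sec=0.05):
    # Moving-window interquartile range of the IF sequence; CAS-containing
    # segments should show a much smaller dispersion than normal RS.
    inst_f = instantaneous_frequency(x, fs)
    win = int(win_sec * fs)
    disp = []
    for start in range(0, len(inst_f) - win, win):
        seg = inst_f[start:start + win]
        disp.append(np.percentile(seg, 75) - np.percentile(seg, 25))
    return np.array(disp)

if __name__ == "__main__":
    fs = 4000
    t = np.arange(0, 1.0, 1.0 / fs)
    normal = np.random.randn(len(t))  # broadband, normal-sound-like
    cas = np.sin(2 * np.pi * 250 * t) + 0.05 * np.random.randn(len(t))  # wheeze-like CAS
    print("median IF dispersion, normal:", np.median(if_dispersion(normal, fs)))
    print("median IF dispersion, CAS   :", np.median(if_dispersion(cas, fs)))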
|
28
|
Ulukaya S, Sen I, Kahya YP. Feature extraction using time-frequency analysis for monophonic-polyphonic wheeze discrimination. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:5412-5415. [PMID: 26737515 DOI: 10.1109/embc.2015.7319615] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The aim of this study is monophonic-polyphonic wheeze episode discrimination rather than conventional wheeze (versus non-wheeze) episode detection. Two different feature extraction methods were used to discriminate monophonic from polyphonic wheeze episodes: one based on frequency analysis and the other on time analysis. The frequency-based method uses ratios of quartile frequencies to exploit differences in the power spectrum, while the time-based method uses mean-crossing irregularity to exploit differences in periodicity in the time domain. Both methods are applied to the data before and after an image-processing-based preprocessing step. The calculated features are used in classification both individually and in combination. Support vector machine, k-nearest neighbor and naive Bayes classifiers are adopted in a leave-one-out scheme. A total of 121 monophonic and 110 polyphonic wheeze episodes are used in the experiments, where the best classification performances are 71.45% for time-domain features, 68.43% for frequency-domain features, and 75.78% for a combination of the best selected features.
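A loose sketch of the two feature families follows, under assumed definitions: quartile frequencies are read off the cumulative power spectrum, and mean-crossing irregularity is taken as the variability of intervals between successive mean crossings. The paper's exact definitions and its image-processing preprocessing step are not reproduced here.

import numpy as np

def quartile_frequency_ratios(x, fs):
    # Ratios of the 25th/50th and 50th/75th percentile frequencies of the
    # power spectrum (assumed form of the quartile-frequency features).
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cdf = np.cumsum(spectrum) / np.sum(spectrum)
    f25 = freqs[np.searchsorted(cdf, 0.25)]
    f50 = freqs[np.searchsorted(cdf, 0.50)]
    f75 = freqs[np.searchsorted(cdf, 0.75)]
    return f25 / f50, f50 / f75

def mean_crossing_irregularity(x):
    # Coefficient of variation of the intervals between successive crossings
    # of the signal mean (assumed form of the time-domain feature).
    centered = x - np.mean(x)
    signs = np.signbit(centered).astype(int)
    crossings = np.where(np.diff(signs) != 0)[0]
    intervals = np.diff(crossings)
    if len(intervals) < 2:
        return 0.0
    return np.std(intervals) / np.mean(intervals)

if __name__ == "__main__":
    fs = 4000
    t = np.arange(0, 1.0, 1.0 / fs)
    monophonic = np.sin(2 * np.pi * 300 * t)                          # single wheeze tone
    polyphonic = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 520 * t)
    for name, sig in [("monophonic", monophonic), ("polyphonic", polyphonic)]:
        print(name, quartile_frequency_ratios(sig, fs), mean_crossing_irregularity(sig))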
|
29
|
|
30
|
Chen MY, Chou CH. Applying cybernetic technology to diagnose human pulmonary sounds. J Med Syst 2014; 38:58. [PMID: 24878780 DOI: 10.1007/s10916-014-0058-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2014] [Accepted: 05/13/2014] [Indexed: 11/26/2022]
Abstract
Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on the physician's experience and ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) contain substantial energy below 120 Hz and the human ear is not sensitive to low frequencies, making successful diagnostic classifications is difficult. To solve this problem, we constructed PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, decomposing the PS signals into frequency subbands. Using statistical methods, we extracted 17 features that served as the input vectors of a neural network. We propose a two-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy relative to a single neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural networks. To extend traditional auscultation methods, we constructed PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
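The abstract does not enumerate the 17 features, so the sketch below substitutes a common choice of per-subband statistics (energy, mean absolute value, standard deviation) computed from a PyWavelets decomposition, and uses scikit-learn's MLPClassifier as a stand-in for the back-propagation stage; the LVQ stage and the paper's exact feature set are not reproduced. The wavelet, decomposition level, and the synthetic two-class data are illustrative assumptions.

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_statistical_features(x, wavelet="db4", level=5):
    # Per-subband energy, mean absolute value and standard deviation.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    feats = []
    for band in coeffs:
        feats.extend([np.sum(band ** 2), np.mean(np.abs(band)), np.std(band)])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 4000
    t = np.arange(0, 1.0, 1.0 / fs)
    # Two synthetic classes, only to exercise the pipeline: low-amplitude
    # broadband noise versus a wheeze-like tonal signal.
    X, y = [], []
    for _ in range(40):
        X.append(wavelet_statistical_features(0.3 * rng.standard_normal(len(t))))
        y.append(0)
        X.append(wavelet_statistical_features(np.sin(2 * np.pi * 400 * t) + 0.1 * rng.standard_normal(len(t))))
        y.append(1)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
    print("training accuracy:", clf.score(X, y))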
Affiliation(s)
- Mei-Yung Chen
- National Taiwan Normal University, 162 Heping E. Road Sec. 1, Taipei, Taiwan,
| | | |
|
31
|
Reyes BA, Charleston-Villalobos S, González-Camarena R, Aljama-Corrales T. Assessment of time-frequency representation techniques for thoracic sounds analysis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2014; 114:276-290. [PMID: 24680639 DOI: 10.1016/j.cmpb.2014.02.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/17/2013] [Revised: 02/07/2014] [Accepted: 02/24/2014] [Indexed: 06/03/2023]
Abstract
A step forward in the knowledge of the physiological phenomena underlying thoracic sounds requires a reliable estimate of their time-frequency behavior that overcomes the disadvantages of the conventional spectrogram. A more detailed time-frequency representation could lead to better feature extraction for disease classification and stratification purposes, among others. In this respect, the aim of this study was to identify an omnibus technique for obtaining the time-frequency representation (TFR) of thoracic sounds by comparing generic goodness-of-fit criteria in different simulated thoracic sound scenarios. The performance of ten TFRs for heart, normal tracheal and adventitious lung sounds was assessed using time-frequency patterns generated by mathematical functions of the thoracic sounds. To find the best TFR, performance measures such as the 2D local (ρ_mean) and global (ρ) central correlation, the normalized root-mean-square error (NRMSE), the cross-correlation coefficient (ρ_IF) and the time-frequency resolution (res_TF) were used. Simulation results indicated that the Hilbert-Huang spectrum (HHS) performed better than the other techniques and can therefore be considered a reliable TFR for thoracic sounds. Furthermore, the performance of HHS was also assessed using noisy simulated signals. Additionally, HHS was applied to first and second heart sounds from a young healthy male subject, to a tracheal sound from a middle-aged healthy male subject, and to abnormal lung sounds acquired from a male patient with diffuse interstitial pneumonia. The results of this research are expected to provide a better signature of thoracic sounds for pattern recognition purposes, among other tasks.
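The evaluation idea, scoring a candidate TFR against a simulated signal whose time-frequency law is known exactly, can be illustrated with a minimal sketch: a linear chirp with a known instantaneous-frequency law is analyzed with an ordinary spectrogram and the ridge error is reported as an NRMSE. The normalization used here is an assumption, and the Hilbert-Huang spectrum itself is not reproduced; this only shows how one of the goodness-of-fit criteria could be computed.

import numpy as np
from scipy.signal import spectrogram, chirp

fs = 4000
t = np.arange(0, 2.0, 1.0 / fs)
# Simulated component with a known instantaneous-frequency law: a linear
# chirp sweeping from 100 Hz to 600 Hz.
x = chirp(t, f0=100, f1=600, t1=t[-1], method="linear")
true_if = 100 + (600 - 100) * t / t[-1]

freqs, times, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
# Estimated ridge: frequency of maximum energy in each spectrogram column.
est_if = freqs[np.argmax(Sxx, axis=0)]
ref_if = np.interp(times, t, true_if)

# Assumed NRMSE form: RMS ridge error normalized by the reference IF range.
nrmse = np.sqrt(np.mean((est_if - ref_if) ** 2)) / (ref_if.max() - ref_if.min())
print(f"spectrogram ridge NRMSE against the known IF law: {nrmse:.3f}")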
Affiliation(s)
- B A Reyes
- Electrical Engineering Department, Universidad Autonoma Metropolitana, Mexico City 09340, Mexico
| | - S Charleston-Villalobos
- Electrical Engineering Department, Universidad Autonoma Metropolitana, Mexico City 09340, Mexico.
| | - R González-Camarena
- Health Science Department, Universidad Autonoma Metropolitana, Mexico City 09340, Mexico
| | - T Aljama-Corrales
- Electrical Engineering Department, Universidad Autonoma Metropolitana, Mexico City 09340, Mexico
| |
|
32
|
|
33
|
Xie S, Jin F, Krishnan S, Sattar F. Signal feature extraction by multi-scale PCA and its application to respiratory sound classification. Med Biol Eng Comput 2012; 50:759-68. [PMID: 22467314 DOI: 10.1007/s11517-012-0903-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2011] [Accepted: 03/21/2012] [Indexed: 10/28/2022]
Abstract
Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system through the presence of adventitious sounds. Although many studies have addressed the problem of pathological RS classification, only a limited number have focused on multi-scale analysis. This paper proposes a new classification scheme for various types of RS based on multi-scale principal component analysis, used as a signal enhancement and feature extraction method to capture the major variability of the Fourier power spectra of the signals. Because the RS signals are classified in a high-dimensional feature subspace, a new classification method, called empirical classification, is developed for further dimension reduction in the classification step; it is shown to be more robust than, and to outperform, other simple classifiers. An overall accuracy of 98.34% for the classification of 689 real RS recording segments demonstrates the promising performance of the presented method.
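In the spirit of the multi-scale PCA described above, the sketch below fits one PCA per wavelet scale of the Fourier power spectra and concatenates the per-scale scores into a feature vector. This is only one plausible reading of the method: the wavelet, decomposition level and component counts are illustrative assumptions, and the paper's empirical classification step is not reproduced.

import numpy as np
import pywt
from sklearn.decomposition import PCA

def power_spectrum(x):
    return np.abs(np.fft.rfft(x)) ** 2

def multiscale_pca_features(spectra, wavelet="db2", level=3, n_components=2):
    # Decompose each power spectrum into wavelet scales, then fit one PCA per
    # scale across the dataset so each scale contributes its dominant modes.
    per_scale_coeffs = [pywt.wavedec(s, wavelet, level=level) for s in spectra]
    features = []
    for scale in range(level + 1):
        block = np.vstack([c[scale] for c in per_scale_coeffs])
        pca = PCA(n_components=min(n_components, block.shape[1]))
        features.append(pca.fit_transform(block))
    return np.hstack(features)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fs = 4000
    t = np.arange(0, 1.0, 1.0 / fs)
    # Synthetic recordings: broadband normal-like noise and wheeze-like tones.
    signals = [0.3 * rng.standard_normal(len(t)) for _ in range(10)]
    signals += [np.sin(2 * np.pi * 350 * t) + 0.1 * rng.standard_normal(len(t)) for _ in range(10)]
    feats = multiscale_pca_features([power_spectrum(s) for s in signals])
    print("feature matrix shape:", feats.shape)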
Affiliation(s)
- Shengkun Xie
- Department of Electrical and Computer Engineering, Ryerson University, Toronto, ON, Canada.
| | | | | | | |
|