1
Papunen I, Ylänen K, Lundqvist O, Porkholm M, Rahkonen O, Mecklin M, Eerola A, Kallio M, Arola A, Niemelä J, Jaakkola I, Poutanen T. Automated analysis of heart sound signals in screening for structural heart disease in children. Eur J Pediatr 2024; 183:4951-4958. PMID: 39304593; PMCID: PMC11473634; DOI: 10.1007/s00431-024-05773-3.
Abstract
Our aim was to investigate the ability of an artificial intelligence (AI)-based algorithm to differentiate innocent murmurs from pathologic ones. An AI-based algorithm was developed using heart sound recordings collected from 1413 patients at the five university hospitals in Finland. The corresponding heart condition was verified using echocardiography. In the second phase of the study, patients referred to Helsinki New Children's Hospital due to a heart murmur were prospectively assessed with the algorithm, and the results were compared with echocardiography findings. Ninety-eight children were included in this prospective study. The algorithm classified 72 (73%) of the heart sounds as normal and 26 (27%) as abnormal. Echocardiography was normal in 63 (64%) children and abnormal in 35 (36%). The algorithm recognized abnormal heart sounds in 24 of 35 children with abnormal echocardiography, and normal heart sounds in 61 of 63 children with normal echocardiography. When the murmur was audible, the sensitivity and specificity of the algorithm were 83% (24/29; confidence interval (CI) 64-94%) and 97% (59/61; CI 89-100%), respectively.
CONCLUSION: The algorithm was able to distinguish murmurs associated with structural cardiac anomalies from innocent murmurs with good sensitivity and specificity. It was unable to identify heart defects that did not cause a murmur. Further research is needed on the use of the algorithm in screening for heart murmurs in primary health care.
WHAT IS KNOWN: • Innocent murmurs are common in children, while the incidence of moderate or severe congenital heart defects is low. Auscultation plays a significant role in assessing the need for further examinations of the murmur. The ability to differentiate innocent murmurs from those related to congenital heart defects requires clinical experience on the part of general practitioners. No AI-based auscultation algorithms have been systematically implemented in primary health care.
WHAT IS NEW: • We developed an AI-based algorithm using a large dataset of sound samples validated by echocardiography. The algorithm performed well in recognizing pathological and innocent murmurs in children from different age groups.
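The operating characteristics quoted above can be reproduced from the raw counts in the abstract. The paper does not state which interval method it used (the published 64-94% and 89-100% bounds are consistent with an exact Clopper-Pearson interval); the sketch below uses the Wilson score interval as a dependency-free stand-in, so its bounds differ slightly from the published ones.

```python
import math

def wilson_ci(k, n, z=1.959964):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    rad = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - rad) / denom, (center + rad) / denom

# Counts taken from the abstract (audible-murmur subgroup):
tp, fn = 24, 5   # abnormal echocardiography: 24 of 29 flagged abnormal
tn, fp = 59, 2   # normal echocardiography: 59 of 61 classified normal

sensitivity = tp / (tp + fn)      # ~0.83 -> reported 83%
specificity = tn / (tn + fp)      # ~0.97 -> reported 97%
sens_ci = wilson_ci(tp, tp + fn)  # roughly (0.65, 0.92)
spec_ci = wilson_ci(tn, tn + fp)  # roughly (0.89, 0.99)
```

The point estimates match the abstract exactly; the Wilson bounds land close to, but not on, the published exact bounds, which is the expected behaviour of the two methods at these sample sizes.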
Affiliation(s)
- I Papunen
- Tampere Center for Child, Adolescent and Maternal Health Research, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- K Ylänen
- Tampere Center for Child, Adolescent and Maternal Health Research, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Department of Pediatrics, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland
- O Rahkonen
- AusculThing Oy, Espoo, Finland
- Department of Pediatric Cardiology, New Children's Hospital, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- M Mecklin
- Department of Pediatrics, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland
- A Eerola
- Department of Pediatrics, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland
- M Kallio
- Department of Pediatric Cardiology, New Children's Hospital, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Department of Pediatrics and Adolescent Medicine, Oulu University Hospital and University of Oulu, Oulu, Finland
- A Arola
- Department of Pediatrics and Adolescent Medicine, Turku, Finland
- J Niemelä
- Department of Pediatrics and Adolescent Medicine, Turku, Finland
- I Jaakkola
- AusculThing Oy, Espoo, Finland
- Department of Pediatric Cardiology, New Children's Hospital, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- T Poutanen
- Tampere Center for Child, Adolescent and Maternal Health Research, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Department of Pediatrics, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, Tampere, Finland
2
Arjoune Y, Nguyen TN, Doroshow RW, Shekhar R. A Noise-Robust Heart Sound Segmentation Algorithm Based on Shannon Energy. IEEE Access 2024; 12:7747-7761. PMID: 39398361; PMCID: PMC11469632; DOI: 10.1109/access.2024.3351570.
Abstract
Heart sound segmentation has been shown to improve the performance of artificial intelligence (AI)-based auscultation decision support systems, which are increasingly viewed as a way to compensate for eroding auscultatory skills and the associated subjectivity. Various segmentation approaches with demonstrated performance exist, but their robustness can suffer in the presence of noise. A noise-robust heart sound segmentation algorithm was developed, and its accuracy was tested using two datasets: the CirCor DigiScope Phonocardiogram dataset and an in-house dataset, a heart murmur library collected at Children's National Hospital (CNH). On the CirCor dataset, our segmentation algorithm marked the boundaries of the primary heart sounds S1 and S2 with an accuracy of 0.28 ms and 0.29 ms, respectively, and correctly identified the true positive segments with a sensitivity of 97.44%. The algorithm also executed four times faster than a logistic regression hidden semi-Markov model. On the CNH dataset, the algorithm succeeded in 87.4% of cases, a 6% increase over the segmentation success rate of our original Shannon energy-based algorithm. Accurate heart sound segmentation is critical to supporting and accelerating AI research in cardiovascular diseases. The proposed algorithm increases the robustness of heart sound segmentation to noise and its viability for clinical use.
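The paper builds on the classic Shannon-energy envelope. The sketch below is not the authors' noise-robust algorithm but a minimal illustration of the underlying idea: the nonlinearity -x²·log(x²) emphasises medium-intensity samples, so the S1/S2 lobes of a phonocardiogram stand out against both faint background noise and sharp spikes. The toy signal and all parameter values are assumptions for illustration.

```python
import numpy as np

def shannon_energy_envelope(pcg, fs, frame_ms=20, eps=1e-10):
    """Shannon-energy envelope of a phonocardiogram: per-sample
    -x^2*log(x^2), smoothed over short frames and standardised."""
    x = pcg / (np.max(np.abs(pcg)) + eps)          # normalise to [-1, 1]
    e = -(x ** 2) * np.log(x ** 2 + eps)           # per-sample Shannon energy
    frame = max(1, int(fs * frame_ms / 1000))      # ~20 ms smoothing window
    env = np.convolve(e, np.ones(frame) / frame, mode="same")
    return (env - env.mean()) / (env.std() + eps)  # zero mean, unit variance

# Toy PCG: two tone bursts (stand-ins for S1) buried in low-level noise.
fs = 2000
t = np.arange(0, 2.0, 1 / fs)
pcg = 0.02 * np.random.default_rng(3).normal(size=t.size)
for onset in (0.2, 1.2):                           # assumed burst locations
    i = int(onset * fs)
    pcg[i:i + 100] += np.sin(2 * np.pi * 60 * t[:100])
env = shannon_energy_envelope(pcg, fs)
```

A segmenter would then threshold `env` and mark lobe boundaries; the paper's contribution is making that boundary-marking step robust to noise.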
Affiliation(s)
- Youness Arjoune
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
- Robin W Doroshow
- Department of Cardiology, Children's National Hospital, Washington, DC 20010, USA
- Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
3
Netto AN, Abraham L, Philip S. HBNET: A blended ensemble model for the detection of cardiovascular anomalies using phonocardiogram. Technol Health Care 2024; 32:1925-1945. PMID: 38393859; DOI: 10.3233/thc-231290.
Abstract
BACKGROUND: Cardiac diseases are highly detrimental illnesses, responsible for approximately 32% of global mortality [1]. Early diagnosis and prompt treatment can reduce deaths caused by cardiac diseases. In paediatric patients, it is challenging for paediatricians to distinguish functional murmurs from pathological murmurs in heart sounds.
OBJECTIVE: The study intends to develop a novel blended ensemble model using hybrid deep learning models and softmax regression to classify adult and paediatric heart sounds into five distinct classes, distinguishing itself as a groundbreaking work in this domain. Furthermore, the research aims to create a comprehensive 5-class paediatric phonocardiogram (PCG) dataset. The dataset includes two critical pathological classes, atrial septal defects and ventricular septal defects, along with functional murmurs, pathological and normal heart sounds.
METHODS: The work proposes a blended ensemble model (HbNet, Heartbeat Network) comprising two hybrid models, CNN-BiLSTM and CNN-LSTM, as base models and softmax regression as the meta-learner. HbNet leverages the strengths of the base models to improve overall PCG classification accuracy. Mel-frequency cepstral coefficients (MFCC) capture the audio signal characteristics crucial to the classification. The amalgamation of these two deep learning structures enhances the precision and reliability of PCG classification, leading to improved diagnostic results.
RESULTS: The HbNet model exhibited excellent results, with an average accuracy of 99.72% and sensitivity of 99.3% on an adult dataset, surpassing existing state-of-the-art work. The researchers validated the reliability of the HbNet model by testing it on a real-time paediatric dataset, on which its accuracy was 86.5%. HbNet detected functional murmurs with 100% precision.
CONCLUSION: The results indicate that the HbNet model is highly effective in the early detection of cardiac disorders. They also imply that HbNet could serve as a valuable tool in decision-support systems that help medical practitioners confirm their diagnoses. Such a tool makes it easier for medical professionals to diagnose and initiate prompt treatment during preliminary auscultation and reduces unnecessary echocardiograms.
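The blending step described in METHODS can be illustrated independently of the CNN-BiLSTM/CNN-LSTM base models: each base model emits a 5-class probability vector, the two vectors are concatenated, and a softmax-regression meta-learner is trained on the concatenation. The sketch below substitutes random probability vectors for the base-model outputs; all shapes, the learning rate, and the iteration count are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical stand-ins for the two base models: each maps a
# recording to a 5-class probability vector.
n, n_classes = 200, 5
p_base1 = softmax(rng.normal(size=(n, n_classes)))  # "CNN-BiLSTM" outputs
p_base2 = softmax(rng.normal(size=(n, n_classes)))  # "CNN-LSTM" outputs
y = rng.integers(0, n_classes, size=n)              # true labels

# Meta-learner input: concatenated base probabilities, shape (n, 10).
X = np.hstack([p_base1, p_base2])
Y = np.eye(n_classes)[y]                            # one-hot targets

# Softmax regression trained by gradient descent on cross-entropy.
W = np.zeros((X.shape[1], n_classes))
b = np.zeros(n_classes)
for _ in range(500):
    P = softmax(X @ W + b)
    grad = P - Y                                    # dL/dlogits
    W -= 0.1 * (X.T @ grad) / n
    b -= 0.1 * grad.mean(axis=0)

blended = softmax(X @ W + b).argmax(axis=1)         # final class per sample
```

The design point of stacking is that the meta-learner can learn when to trust each base model, rather than averaging them with fixed weights.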
Affiliation(s)
- Ann Nita Netto
- Department of Electronics and Communication Engineering, LBS Institute of Technology for Women, APJ Abdul Kalam Technological University, Trivandrum, India
- Lizy Abraham
- Department of Electronics and Communication Engineering, LBS Institute of Technology for Women, APJ Abdul Kalam Technological University, Trivandrum, India
- Saji Philip
- Department of Cardiology, Thiruvalla Medical Mission Hospital, Thiruvalla, India
4
Tsai YT, Liu YH, Zheng ZW, Chen CC, Lin MC. Heart Murmur Classification Using a Capsule Neural Network. Bioengineering (Basel) 2023; 10:1237. PMID: 38002361; PMCID: PMC10669720; DOI: 10.3390/bioengineering10111237.
Abstract
The healthcare industry has made significant progress in the diagnosis of heart conditions through intelligent detection systems such as electrocardiograms, cardiac ultrasound, and abnormal-sound diagnostics that use artificial intelligence (AI) technology such as convolutional neural networks (CNNs). Over the past few decades, methods for automated segmentation and classification of heart sounds have been widely studied. In many cases, both experimental and clinical data require electrocardiography (ECG)-labeled phonocardiograms (PCGs) or several feature extraction techniques applied to the mel-scale frequency cepstral coefficient (MFCC) spectrum of heart sounds to achieve good identification results with AI methods. Without good feature extraction, a CNN may struggle to classify the MFCC spectrum of heart sounds. To overcome these limitations, we propose a capsule neural network (CapsNet), which uses iterative dynamic routing to find good combinations of layer outputs that preserve the translational equivariance of MFCC spectrum features, thereby improving the prediction accuracy of heart murmur classification. The 2016 PhysioNet heart sound database was used for training and validating the prediction performance of CapsNet and other CNNs. We then collected our own dataset of clinical auscultation scenarios for fine-tuning hyperparameters and testing. CapsNet demonstrated its feasibility by achieving validation accuracies of 90.29% and 91.67% on the test dataset.
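The dynamic routing the abstract refers to is the published routing-by-agreement procedure from the original CapsNet work (Sabour et al., 2017), so it can be sketched generically. The sketch below is not this paper's implementation; the capsule counts, vector dimensions, and iteration count are arbitrary illustration values.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """CapsNet squashing nonlinearity: preserves a vector's
    orientation while mapping its length into [0, 1)."""
    sq = (s ** 2).sum(axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def dynamic_routing(u_hat, n_iter=3):
    """Routing-by-agreement. u_hat has shape (n_lower, n_upper, dim):
    each lower capsule's prediction vector for each upper capsule."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                          # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)                # (n_upper, dim)
        v = squash(s)                                         # upper outputs
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
    return v

# Toy prediction vectors: 32 lower capsules, 10 upper capsules, dim 16.
rng = np.random.default_rng(1)
u_hat = rng.normal(size=(32, 10, 16))
v = dynamic_routing(u_hat)
```

The length of each output vector acts as the class-presence probability (hence the squash into [0, 1)), which is what lets a capsule layer replace a conventional softmax classification head.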
Affiliation(s)
- Yu-Ting Tsai
- Master’s Program in Electro-Acoustics, Feng Chia University, Taichung 40724, Taiwan
- Hyper-Automation Laboratory, Feng Chia University, Taichung 40724, Taiwan
- Yu-Hsuan Liu
- Master’s Program in Electro-Acoustics, Feng Chia University, Taichung 40724, Taiwan
- Zi-Wei Zheng
- Hyper-Automation Laboratory, Feng Chia University, Taichung 40724, Taiwan
- Program of Mechanical and Aeronautical Engineering, Feng Chia University, Taichung 40724, Taiwan
- Chih-Cheng Chen
- Hyper-Automation Laboratory, Feng Chia University, Taichung 40724, Taiwan
- Department of Automatic Control Engineering, Feng Chia University, Taichung 40724, Taiwan
- Ming-Chih Lin
- Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung 402, Taiwan
- Children’s Medical Center, Taichung Veterans General Hospital, Taichung 40705, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Food and Nutrition, Providence University, Taichung 433, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung 40201, Taiwan
5
Arjoune Y, Nguyen TN, Salvador T, Telluri A, Schroeder JC, Geggel RL, May JW, Pillai DK, Teach SJ, Patel SJ, Doroshow RW, Shekhar R. StethAid: A Digital Auscultation Platform for Pediatrics. Sensors (Basel) 2023; 23:5750. PMID: 37420914; PMCID: PMC10304273; DOI: 10.3390/s23125750.
Abstract
(1) Background: Mastery of auscultation can be challenging for many healthcare providers. Artificial intelligence (AI)-powered digital support is emerging as an aid to assist with the interpretation of auscultated sounds. A few AI-augmented digital stethoscopes exist, but none are dedicated to pediatrics. Our goal was to develop a digital auscultation platform for pediatric medicine. (2) Methods: We developed StethAid, a digital platform for AI-assisted auscultation and telehealth in pediatrics that consists of a wireless digital stethoscope, mobile applications, customized patient-provider portals, and deep learning algorithms. To validate the StethAid platform, we characterized our stethoscope and used the platform in two clinical applications: (1) Still's murmur identification and (2) wheeze detection. The platform has been deployed in four children's medical centers to build the first and largest pediatric cardiopulmonary datasets, to our knowledge. We have trained and tested deep learning models using these datasets. (3) Results: The frequency response of the StethAid stethoscope was comparable to those of the commercially available Eko Core, Thinklabs One, and Littmann 3200 stethoscopes. The labels provided by our expert physician offline were in concordance with the labels of providers at the bedside using their acoustic stethoscopes for 79.3% of lung cases and 98.3% of heart cases. Our deep learning algorithms achieved high sensitivity and specificity for both Still's murmur identification (sensitivity of 91.9% and specificity of 92.6%) and wheeze detection (sensitivity of 83.7% and specificity of 84.4%). (4) Conclusions: Our team has created a technically and clinically validated pediatric digital AI-enabled auscultation platform. Use of our platform could improve the efficacy and efficiency of clinical care for pediatric patients, reduce parental anxiety, and result in cost savings.
Affiliation(s)
- Youness Arjoune
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA
- Trong N. Nguyen
- AusculTech Dx, 2601 University Blvd West #301, Silver Spring, MD 20902, USA
- Tyler Salvador
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA
- Anha Telluri
- School of Medicine and Health Sciences, George Washington University, Washington, DC 20052, USA
- Jonathan C. Schroeder
- Division of Pulmonary and Sleep Medicine, Children’s National Hospital, Washington, DC 20010, USA
- Robert L. Geggel
- Department of Cardiology, Boston Children’s Hospital, Boston, MA 02115, USA
- Joseph W. May
- Department of Pediatrics, Walter Reed National Military Medical Center, Bethesda, MD 20814, USA
- Dinesh K. Pillai
- Division of Pulmonary and Sleep Medicine, Children’s National Hospital, Washington, DC 20010, USA
- Stephen J. Teach
- Department of Pediatrics, Children’s National Hospital, Washington, DC 20010, USA
- Shilpa J. Patel
- Division of Emergency Medicine, Children’s National Hospital, Washington, DC 20010, USA
- Robin W. Doroshow
- AusculTech Dx, 2601 University Blvd West #301, Silver Spring, MD 20902, USA
- Department of Cardiology, Children’s National Hospital, Washington, DC 20010, USA
- Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA
- AusculTech Dx, 2601 University Blvd West #301, Silver Spring, MD 20902, USA
6
Arjoune Y, Nguyen TN, Doroshow RW, Shekhar R. Technical characterisation of digital stethoscopes: towards scalable artificial intelligence-based auscultation. J Med Eng Technol 2023; 47:165-178. PMID: 36794318; PMCID: PMC10753976; DOI: 10.1080/03091902.2023.2174198.
Abstract
Digital stethoscopes can enable the development of integrated artificial intelligence (AI) systems that can remove the subjectivity of manual auscultation, improve diagnostic accuracy, and compensate for diminishing auscultatory skills. Developing scalable AI systems can be challenging, especially when acquisition devices differ and thus introduce sensor bias. To address this issue, a precise knowledge of these differences, i.e., frequency responses of these devices, is needed, but the manufacturers often do not provide complete device specifications. In this study, we reported an effective methodology for determining the frequency response of a digital stethoscope and used it to characterise three common digital stethoscopes: Littmann 3200, Eko Core, and Thinklabs One. Our results show significant inter-device variability in that the frequency responses of the three studied stethoscopes were distinctly different. A moderate intra-device variability was seen when comparing two separate units of Littmann 3200. The study highlights the need for normalisation across devices for developing successful AI-assisted auscultation and provides a technical characterisation approach as a first step to accomplish it.
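The paper does not publish its measurement code. A common way to determine a device's frequency response, assumed here as an illustration, is the H1 transfer-function estimator: excite the device with broadband noise, record its output, and divide the segment-averaged cross-spectrum by the input power spectrum. The toy "device" below is a hypothetical moving-average filter standing in for a stethoscope's acoustic path.

```python
import numpy as np

def frequency_response(x, y, fs, nperseg=1024):
    """H1 transfer-function estimate between excitation x and recording y:
    H(f) = S_xy / S_xx, averaged over overlapping Hann-windowed segments."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    Sxx = np.zeros(nperseg // 2 + 1)
    Sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[start:start + nperseg])
        Y = np.fft.rfft(win * y[start:start + nperseg])
        Sxx += (X * np.conj(X)).real    # input auto-spectrum
        Sxy += np.conj(X) * Y           # input-output cross-spectrum
    f = np.fft.rfftfreq(nperseg, 1 / fs)
    return f, Sxy / Sxx

# Toy check: a known low-pass FIR "device" applied to 10 s of white noise.
rng = np.random.default_rng(2)
fs = 4000
x = rng.normal(size=fs * 10)
h = np.ones(8) / 8.0                    # hypothetical device impulse response
y = np.convolve(x, h)[:len(x)]
f, H = frequency_response(x, y, fs)     # |H| ~1 at low f, ~0 near Nyquist
```

With responses measured this way for several stethoscopes, the normalisation across devices that the paper calls for amounts to equalising recordings by the ratio of their devices' estimated responses.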
Affiliation(s)
- Youness Arjoune
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Trong N Nguyen
- Department of Research, AusculTech DX, Silver Spring, MD, USA
- Robin W Doroshow
- Department of Research, AusculTech DX, Silver Spring, MD, USA
- Department of Cardiology, Children's National Hospital, Washington, DC, USA
- Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Department of Research, AusculTech DX, Silver Spring, MD, USA