1
Ainiwaer A, Kadier K, Qin L, Rehemuding R, Ma X, Ma YT. Audiological Diagnosis of Valvular and Congenital Heart Diseases in the Era of Artificial Intelligence. Rev Cardiovasc Med 2023; 24:175. [PMID: 39077516] [PMCID: PMC11264159] [DOI: 10.31083/j.rcm2406175] [Received: 01/02/2023] [Revised: 04/04/2023] [Accepted: 04/10/2023] Open Access
Abstract
In recent years, electronic stethoscopes have been combined with artificial intelligence (AI) to digitally acquire heart sounds, intelligently identify valvular and congenital heart disease, and improve the accuracy of heart disease diagnosis. Research on AI-based intelligent stethoscopy centers on the algorithms, most commonly end-to-end deep learning and feature-extraction-based machine learning. A hot spot for future research is establishing a large, standardized heart sound database and unifying external validation of these algorithms against it. In addition, different electronic stethoscopes should be compared extensively so that algorithms remain compatible with heart sounds collected by different devices, and deploying algorithms in the cloud is a major trend in the future development of AI. Finally, heart-sound-based AI research is still at a preliminary stage: although great progress has been made in identifying valvular and congenital heart disease, most studies address algorithms for disease diagnosis, with little research on disease severity, remote monitoring, or prognosis, which will be hot spots for future work.
Affiliation(s)
- Aikeliyaer Ainiwaer
- Department of Cardiology, Xinjiang Medical University Affiliated First Hospital, 830011 Urumqi, Xinjiang, China
- Kaisaierjiang Kadier
- Department of Cardiology, Xinjiang Medical University Affiliated First Hospital, 830011 Urumqi, Xinjiang, China
- Lian Qin
- Department of Cardiology, Xinjiang Medical University Affiliated First Hospital, 830011 Urumqi, Xinjiang, China
- Rena Rehemuding
- Department of Cardiology, Xinjiang Medical University Affiliated First Hospital, 830011 Urumqi, Xinjiang, China
- Xiang Ma
- Department of Cardiology, Xinjiang Medical University Affiliated First Hospital, 830011 Urumqi, Xinjiang, China
- Yi-Tong Ma
- Department of Cardiology, Xinjiang Medical University Affiliated First Hospital, 830011 Urumqi, Xinjiang, China
2
A Computer-Aided Heart Valve Disease Diagnosis System Based on Machine Learning. J Healthc Eng 2023; 2023:7382316. [PMID: 36726774] [PMCID: PMC9886464] [DOI: 10.1155/2023/7382316] [Received: 08/12/2022] [Revised: 01/04/2023] [Accepted: 01/05/2023]
Abstract
Cardiac auscultation is a noninvasive, convenient, and low-cost diagnostic method for heart valvular disease, and it can detect valve abnormalities at an early stage. However, the accuracy of auscultation relies on the expertise of cardiologists, and doctors in remote areas may lack the experience to diagnose correctly, so a system to assist with diagnosis is needed. This study proposed a computer-aided heart valve disease diagnosis system, comprising a heart sound acquisition module, a trained diagnostic model, and software, which can diagnose four kinds of heart valve disease. A training dataset containing five categories of heart sounds was collected, including normal, mitral stenosis, mitral regurgitation, and aortic stenosis heart sounds. A GoogLeNet convolutional neural network and a weighted KNN were used to train models separately. For the GoogLeNet model, time-series heart sound signals were converted into time-frequency scalograms based on the continuous wavelet transform to fit GoogLeNet's input architecture. For the weighted KNN model, time-domain and time-frequency-domain features were extracted manually, and feature selection based on the chi-square test was performed to obtain a better group of features. Moreover, software was designed that lets doctors upload heart sounds, visualize the heart sound waveform, and obtain a diagnosis from the model. Both trained models were assessed using accuracy, sensitivity, specificity, and F1 score. The model trained with the modified GoogLeNet outperformed the others, with an overall accuracy of 97.5%; its average accuracy, sensitivity, specificity, and F1 score for diagnosing the four heart valve diseases were 98.75%, 96.88%, 99.22%, and 97.99%, respectively.
The computer-aided diagnosis system, with its heart sound acquisition module, diagnostic model, and software, can visualize the heart sound waveform and present reference diagnostic results. This can assist the diagnosis of heart valve diseases, especially in remote areas that lack skilled doctors.
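The scalogram step described in this abstract, converting a one-dimensional heart sound into a time-frequency image suitable for a CNN such as GoogLeNet, can be sketched with a pure-NumPy Morlet CWT. The synthetic PCG signal, scale grid, and wavelet parameters below are illustrative assumptions, not the paper's actual preprocessing.

```python
import numpy as np

def cwt_scalogram(signal, scales, w0=6.0):
    """Morlet continuous wavelet transform magnitude (|CWT|) of a 1-D signal."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        half = int(min(10 * s, n))                 # truncate wavelet support
        t = np.arange(-half, half + 1) / s
        wavelet = np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet[::-1]), mode="same"))
    return out

fs = 2000                                          # Hz, a common PCG sampling rate
t = np.arange(0, 1.0, 1 / fs)
# toy "heart sound": two short bursts standing in for S1 and S2
pcg = (np.exp(-(t - 0.10) ** 2 / 1e-4) * np.sin(2 * np.pi * 50 * t)
       + np.exp(-(t - 0.45) ** 2 / 1e-4) * np.sin(2 * np.pi * 80 * t))
scalogram = cwt_scalogram(pcg, np.geomspace(4, 64, 32))
print(scalogram.shape)  # (32, 2000)
```

The resulting scales-by-time magnitude array is what would be rendered as an RGB image and resized to the CNN's input resolution.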
3
Guo Y, Yang H, Guo T, Pan J, Wang W. A novel heart sound segmentation algorithm via multi-feature input and neural network with attention mechanism. Biomed Phys Eng Express 2022; 9. [PMID: 36301698] [DOI: 10.1088/2057-1976/ac9da6] [Received: 06/14/2022] [Accepted: 10/26/2022]
Abstract
Objective. Heart sound segmentation (HSS) aims to identify, within a cardiac cycle of the phonocardiogram (PCG), the exact positions of the first heart sound (S1) and second heart sound (S2) and the durations of S1, systole, S2, and diastole; it is an indispensable step in assessing heart health. Recently, some neural-network-based methods for heart sound segmentation have shown good performance. Approach. This paper proposed a novel HSS method using a one-dimensional convolution and bidirectional long short-term memory neural network with an attention mechanism (C-LSTM-A), incorporating the 0.5-order smooth Shannon entropy envelope, its instantaneous phase waveform (IPW), and the third intrinsic mode function (IMF-3) of the PCG signal to make the features easier for the network to learn. Main results. The method achieved an average F1-score of 96.85 on a clinical research dataset (the Fuwai Yunnan Cardiovascular Hospital heart sound dataset) and an average F1-score of 95.68 on the 2016 PhysioNet/CinC Challenge dataset. Significance. The experimental results show that the method is advantageous for both normal and common pathological PCG signals, and the segmented fundamental heart sounds (S1, S2), systole, and diastole components are useful for subsequent heart sound classification.
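The smoothed fractional-order Shannon entropy envelope used as one of the network inputs above can be sketched in a few lines of NumPy; the 0.5 order follows the abstract, while the window length and toy signal are illustrative assumptions.

```python
import numpy as np

def shannon_entropy_envelope(x, order=0.5, win=64):
    """Smoothed fractional-order Shannon entropy envelope of a PCG signal."""
    x = x / (np.max(np.abs(x)) + 1e-12)            # amplitude-normalize
    a = np.abs(x) ** order                         # 0.5-order magnitude
    e = -a * np.log(a + 1e-12)                     # per-sample Shannon entropy
    return np.convolve(e, np.ones(win) / win, mode="same")  # moving-average smoothing

fs = 2000
t = np.arange(0, 1.0, 1 / fs)
# toy cycle: two bursts standing in for S1 (t ~ 0.10 s) and S2 (t ~ 0.45 s)
pcg = (np.exp(-(t - 0.10) ** 2 / 1e-4) * np.sin(2 * np.pi * 50 * t)
       + np.exp(-(t - 0.45) ** 2 / 1e-4) * np.sin(2 * np.pi * 80 * t))
env = shannon_entropy_envelope(pcg)
print(env.shape)  # (2000,)
```

The envelope is elevated around S1 and S2 and near zero during systolic and diastolic silence, which is what makes it a convenient segmentation feature.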
Affiliation(s)
- Yang Guo
- School of Information Science and Technology, Yunnan University, Kunming 650504, People's Republic of China
- Hongbo Yang
- Yunnan Fuwai Cardiovascular Disease Hospital, Kunming 650102, People's Republic of China
- Tao Guo
- Yunnan Fuwai Cardiovascular Disease Hospital, Kunming 650102, People's Republic of China
- Jiahua Pan
- Yunnan Fuwai Cardiovascular Disease Hospital, Kunming 650102, People's Republic of China
- Weilian Wang
- School of Information Science and Technology, Yunnan University, Kunming 650504, People's Republic of China
4
A lightweight hybrid deep learning system for cardiac valvular disease classification. Sci Rep 2022; 12:14297. [PMID: 35995814] [PMCID: PMC9395359] [DOI: 10.1038/s41598-022-18293-7] [Received: 03/13/2022] [Accepted: 08/09/2022] Open Access
Abstract
Cardiovascular diseases (CVDs) are a prominent cause of death globally. The advent of medical big data and artificial intelligence (AI) has encouraged efforts to develop and deploy deep learning models for detecting heart sound abnormalities. Such systems employ phonocardiogram (PCG) signals because they are simple to acquire and cost-effective, and automated early diagnosis of CVDs helps avert deadly complications. In this research, a cardiac diagnostic system combining CNN and LSTM components was developed; it uses PCG signals and can be trained on either augmented or non-augmented datasets. The proposed model discriminates five heart valvular conditions: normal, aortic stenosis (AS), mitral regurgitation (MR), mitral stenosis (MS), and mitral valve prolapse (MVP). The findings demonstrate that the suggested end-to-end architecture yields outstanding performance on all important evaluation metrics. For the five-class problem on the open heart sound dataset, accuracy was 98.5%, F1-score 98.501%, and area under the curve (AUC) 0.9978 on the non-augmented dataset, and accuracy 99.87%, F1-score 99.87%, and AUC 0.9985 on the augmented dataset. Model performance was further evaluated on the PhysioNet/Computing in Cardiology 2016 challenge dataset: for the two-class problem, accuracy was 93.76%, F1-score 85.59%, and AUC 0.9505. The achieved results show that the proposed system outperforms all previous works using the same audio signal databases. In the future, these findings will help build a multimodal structure that uses both PCG and ECG signals.
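The abstract contrasts augmented and non-augmented training data. A minimal sketch of the kind of waveform-level PCG augmentation commonly used in such work follows; the specific transforms and magnitudes are hypothetical choices, not necessarily the paper's recipe.

```python
import numpy as np

def augment_pcg(pcg, rng):
    """Return simple augmented copies of one PCG recording (illustrative choices)."""
    noisy = pcg + 0.01 * rng.standard_normal(len(pcg))            # additive Gaussian noise
    shifted = np.roll(pcg, int(rng.integers(1, len(pcg) // 10)))  # circular time shift
    scaled = pcg * rng.uniform(0.8, 1.2)                          # random gain
    return [noisy, shifted, scaled]

rng = np.random.default_rng(42)
pcg = np.sin(2 * np.pi * 50 * np.arange(2000) / 2000.0)           # stand-in recording
augmented = augment_pcg(pcg, rng)
print(len(augmented))  # 3
```

Each transform preserves the recording length and rough spectral content, so labels carry over unchanged to the augmented copies.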
5
Automatic detection of heart valve disorders using Teager–Kaiser energy operator, rational-dilation wavelet transform and convolutional neural networks with PCG signals. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10184-7]
6
Assessment of Dual-Tree Complex Wavelet Transform to Improve SNR in Collaboration with Neuro-Fuzzy System for Heart-Sound Identification. Electronics 2022. [DOI: 10.3390/electronics11060938]
Abstract
This research paper proposes a novel denoising method to improve heart-sound (HS)-based heart-condition identification by applying the dual-tree complex wavelet transform (DTCWT) together with an adaptive neuro-fuzzy inference system (ANFIS) classifier. The method consists of three steps: first, preprocessing to eliminate 50 Hz noise; second, applying four successive levels of DTCWT to denoise and reconstruct the time-domain HS signal; third, evaluating ANFIS on a total of 2735 HS recordings from an international dataset (PhysioNet Challenge 2016). The results show that the signal-to-noise ratio (SNR) with DTCWT was significantly improved (p < 0.001) compared with the original HS recordings, with gains ranging from roughly 11% up to many decibels (dB), representing a significant improvement in denoising HS. In addition, ANFIS, using six time-domain features, achieved 55–86% precision, 51–98% recall, 53–86% F-score, and 54–86% MAcc compared with other attempts on the same dataset. Therefore, DTCWT is a successful technique for removing noise from biosignals such as HS recordings, and the adaptive property of ANFIS shows capability in classifying HS recordings.
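The SNR bookkeeping behind the denoising claim above can be sketched as follows. A full DTCWT needs a dedicated wavelet library, so this sketch substitutes a simple moving-average filter as a stand-in denoiser; the signal, noise level, and filter length are illustrative assumptions.

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR of an estimate against a clean reference, in decibels."""
    noise = estimate - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)                 # stand-in for a clean HS component
noisy = clean + 0.3 * rng.standard_normal(len(t))  # corrupted recording

# stand-in denoiser: 9-tap moving average (NOT the DTCWT itself)
denoised = np.convolve(noisy, np.ones(9) / 9, mode="same")

print(snr_db(clean, noisy), snr_db(clean, denoised))  # SNR before vs after, dB
```

Any real denoiser, DTCWT included, would be evaluated the same way: compute the dB difference between the pre- and post-denoising SNR against a clean reference.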
7
Radhakrishnan T, Karhade J, Ghosh SK, Muduli PR, Tripathy RK, Acharya UR. AFCNNet: Automated detection of AF using chirplet transform and deep convolutional bidirectional long short term memory network with ECG signals. Comput Biol Med 2021; 137:104783. [PMID: 34481184] [DOI: 10.1016/j.compbiomed.2021.104783] [Received: 05/19/2021] [Revised: 08/02/2021] [Accepted: 08/17/2021]
Abstract
Atrial fibrillation (AF) is the most common type of cardiac arrhythmia and is characterized by the heart's beating in an uncoordinated manner. In clinical studies, patients often do not have visible symptoms during AF, and hence it is harder to detect this cardiac ailment. Therefore, automated detection of AF using electrocardiogram (ECG) signals can reduce the risk of stroke, coronary artery disease, and other cardiovascular complications. In this paper, a novel time-frequency domain deep learning-based approach is proposed to detect AF and classify terminating and non-terminating AF episodes using ECG signals. This approach involves evaluating the time-frequency representation (TFR) of ECG signals using the chirplet transform. A two-dimensional (2D) deep convolutional bidirectional long short-term memory (BLSTM) neural network model is used to detect and classify AF episodes using the time-frequency images of ECG signals. The proposed TFR-based 2D deep learning approach is evaluated using ECG signals from three public databases. The developed approach obtained an accuracy, sensitivity, and specificity of 99.18% (confidence interval (CI) [98.86, 99.49]), 99.17% (CI [98.85, 99.49]), and 99.18% (CI [98.86, 99.49]), respectively, with a 10-fold cross-validation (CV) technique to detect AF automatically. The proposed approach also classified terminating and non-terminating AF episodes with an average accuracy of 75.86%. The average accuracy value obtained using the proposed approach is higher than that of the short-time Fourier transform (STFT), discrete-time continuous wavelet transform (DT-CWT), and Stockwell transform (ST) based time-frequency analysis methods with deep convolutional BLSTM models to detect AF. The proposed approach has better AF detection performance than the existing deep learning-based techniques using ECG signals from the MIT-BIH database.
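The chirplet-transform TFR described above can be illustrated with a simplified one-chirp-rate slice: demodulate the signal by a linear chirp, then take short-time Fourier magnitudes. This is a pedagogical sketch, not the paper's implementation; the window, hop, and toy chirp are assumptions.

```python
import numpy as np

def chirplet_tfr(x, fs, chirp_rate, nwin=128, hop=32):
    """One chirp-rate slice of a simplified chirplet transform:
    demodulate by a linear chirp, then short-time Fourier magnitudes."""
    t = np.arange(len(x)) / fs
    demod = x * np.exp(-1j * np.pi * chirp_rate * t ** 2)  # remove the chirp component
    win = np.hanning(nwin)
    frames = [np.abs(np.fft.rfft(demod[i:i + nwin] * win))
              for i in range(0, len(x) - nwin + 1, hop)]
    return np.array(frames).T                              # (freq bins, time frames)

fs = 256
t = np.arange(1024) / fs                                   # 4 s toy segment
x = np.cos(2 * np.pi * (5 * t + 0.5 * 4 * t ** 2))         # linear chirp, 4 Hz/s
tfr = chirplet_tfr(x, fs, chirp_rate=4.0)
print(tfr.shape)  # (65, 29)
```

In a full chirplet analysis, slices like this one would be computed over a grid of chirp rates and the magnitude images fed to the 2D BLSTM model.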
Affiliation(s)
- Tejas Radhakrishnan
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
- Jay Karhade
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
- S K Ghosh
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
- P R Muduli
- Department of Electronics Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, 221005, India
- R K Tripathy
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
- U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore