1. Onnis C, van Assen M, Muscogiuri E, Muscogiuri G, Gershon G, Saba L, De Cecco CN. The Role of Artificial Intelligence in Cardiac Imaging. Radiol Clin North Am 2024; 62:473-488. PMID: 38553181. DOI: 10.1016/j.rcl.2024.01.002.
Abstract
Artificial intelligence (AI) is having a significant impact on medical imaging, advancing almost every aspect of the field, from image acquisition and postprocessing to automated image analysis and, increasingly, decision support. Noninvasive cardiac imaging is among the most active and promising fields for AI development. This review describes the main applications of AI in cardiac imaging, including CT and MR imaging, and provides an overview of recent advances and available clinical applications that can improve clinical workflow, disease detection, and prognostication in cardiac disease.
Affiliation(s)
- Carlotta Onnis: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA; Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari-Polo di Monserrato, SS 554 km 4,500 Monserrato, Cagliari 09042, Italy
- Marly van Assen: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA
- Emanuele Muscogiuri: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA; Division of Thoracic Imaging, Department of Radiology, University Hospitals Leuven, Herestraat 49, Leuven 3000, Belgium
- Giuseppe Muscogiuri: Department of Diagnostic and Interventional Radiology, Papa Giovanni XXIII Hospital, Piazza OMS 1, Bergamo BG 24127, Italy
- Gabrielle Gershon: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari-Polo di Monserrato, SS 554 km 4,500 Monserrato, Cagliari 09042, Italy
- Carlo N De Cecco: Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Department of Radiology and Imaging Sciences, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA; Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University, Emory University Hospital, 1365 Clifton Road Northeast, Suite AT503, Atlanta, GA 30322, USA
2. Zhang W, Zhao N, Gao Y, Huang B, Wang L, Zhou X, Li Z. Automatic liver segmentation and assessment of liver fibrosis using deep learning with MR T1-weighted images in rats. Magn Reson Imaging 2024; 107:1-7. PMID: 38147969. DOI: 10.1016/j.mri.2023.12.006.
Abstract
OBJECTIVES To validate the performance of nnU-Net for segmentation and of a CNN for classification of liver fibrosis using T1-weighted images. MATERIALS AND METHODS In this prospective study, animal models of liver fibrosis were induced by subcutaneous injection of a mixture of carbon tetrachloride and olive oil. A total of 99 male Wistar rats were successfully induced and underwent non-contrast MR scanning to obtain T1-weighted images. Regions of interest (ROIs) covering the whole liver were delineated layer by layer along the liver edge in 3D Slicer. For the segmentation task, all T1-weighted images were randomly divided into training and test cohorts in a 7:3 ratio. For classification, images containing the maximum hepatic diameter of each rat were selected; 80% of the images in the no liver fibrosis (NLF), early liver fibrosis (ELF), and progressive liver fibrosis (PLF) stages were randomly selected for training, and the rest were used for testing. Liver segmentation was performed with the nnU-Net model, and a convolutional neural network (CNN) was used to classify liver fibrosis stage. The Dice similarity coefficient was used to evaluate segmentation performance; the confusion matrix, ROC curves, and accuracy were used to report classification performance. RESULTS A total of 2628 images were obtained from the 99 Wistar rats. For liver segmentation, nnU-Net achieved a Dice similarity coefficient of 0.8477 in the test set. The accuracies of the CNN in staging NLF, ELF, and PLF were 0.73, 0.89, and 0.84, and the AUCs were 0.76, 0.88, and 0.79, respectively. CONCLUSION The nnU-Net architecture provides highly accurate liver segmentation, and a CNN can assess liver fibrosis stage, on T1-weighted images.
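The Dice similarity coefficient used to evaluate segmentation here (and in most of the studies below) has a simple definition. As a minimal, illustrative sketch (not code from any of these studies), DSC between two binary masks can be computed as:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: flat, equal-length sequences of 0/1 voxel labels.
    DSC = 2 * |intersection| / (|pred| + |truth|); 1.0 is perfect overlap.
    """
    if len(pred) != len(truth):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 1D example: 3 overlapping voxels, 4 predicted, 4 true
pred = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(pred, truth))  # 2*3/(4+4) = 0.75
```

In practice the masks are 3D arrays flattened per volume; the formula is unchanged.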
Affiliation(s)
- Wenjing Zhang: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Nan Zhao: College of Computer Science and Technology of Qingdao University, Qingdao, China
- Yuanxiang Gao: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Baoxiang Huang: College of Computer Science and Technology of Qingdao University, Qingdao, China
- Lili Wang: Department of Pathology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Xiaoming Zhou: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zhiming Li: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
3. Choi Y, Bang J, Kim SY, Seo M, Jang J. Deep learning-based multimodal segmentation of oropharyngeal squamous cell carcinoma on CT and MRI using self-configuring nnU-Net. Eur Radiol 2024. PMID: 38243135. DOI: 10.1007/s00330-024-10585-y.
Abstract
PURPOSE To evaluate deep learning-based segmentation models for oropharyngeal squamous cell carcinoma (OPSCC) on CT and MRI using nnU-Net. METHODS This single-center retrospective study included 91 patients with OPSCC, grouped into development (n = 56), test 1 (n = 13), and test 2 (n = 22) cohorts. In the development cohort, OPSCC was manually segmented on CT, MR, and co-registered CT-MR images, which served as the ground truth. The multimodal, multichannel input images were then used to train a self-configuring nnU-Net. The Dice similarity coefficient (DSC) and mean Hausdorff distance (HD) were calculated as evaluation metrics for the test cohorts. Pearson's correlation and Bland-Altman analyses were performed between ground-truth and predicted volumes, and intraclass correlation coefficients (ICCs) of radiomic features were calculated to assess reproducibility. RESULTS All models achieved robust segmentation, with DSCs of 0.64 ± 0.33 (CT), 0.67 ± 0.27 (MR), and 0.65 ± 0.29 (CT-MR) in test cohort 1, and 0.57 ± 0.31 (CT), 0.77 ± 0.08 (MR), and 0.73 ± 0.18 (CT-MR) in test cohort 2. No significant differences in DSC were found among the models. The HDs of the CT-MR (1.57 ± 1.06 mm) and MR (1.36 ± 0.61 mm) models were significantly lower than that of the CT model (3.48 ± 5.0 mm) (p = 0.037 and p = 0.014, respectively). Correlation coefficients between ground-truth and predicted volumes were 0.88 (CT), 0.93 (MR), and 0.90 (CT-MR). The MR model demonstrated excellent mean ICCs of radiomic features (0.91-0.93). CONCLUSION The self-configuring nnU-Net demonstrated reliable and accurate segmentation of OPSCC on CT and MRI, and the multimodal CT-MR model showed promising results for simultaneous segmentation on both modalities.
CLINICAL RELEVANCE STATEMENT Deep learning-based automatic detection and segmentation of oropharyngeal squamous cell carcinoma on pre-treatment CT and MRI would facilitate radiologic response assessment and radiotherapy planning.
KEY POINTS
- The nnU-Net framework produced reliable and accurate segmentation of OPSCC on CT and MRI.
- The MR and CT-MR models showed higher DSC and lower Hausdorff distance than the CT model.
- Correlation coefficients between ground-truth and predicted segmentation volumes were high for all three models.
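The Hausdorff distance reported alongside DSC captures the worst-case deviation between two contours. A toy sketch on 2D point sets (purely illustrative, not the study's implementation, which typically works on mask surfaces) is:

```python
import math

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets.

    a, b: lists of (x, y) points, e.g. sampled segmentation contours.
    For each point, find its nearest neighbour in the other set; the HD
    is the largest such distance in either direction. Lower is better:
    it bounds how far the two contours ever drift apart.
    """
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

# Two unit squares offset by 1 unit along x: every corner is at most
# 1 unit from the other contour, so HD = 1.0
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(1, 0), (2, 0), (2, 1), (1, 1)]
print(hausdorff_distance(square, shifted))  # 1.0
```

Because the metric takes a maximum, a single stray predicted voxel can dominate it, which is why some studies report a percentile (e.g. HD95) instead.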
Affiliation(s)
- Yangsean Choi: Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea; Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Centre, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jooin Bang: Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Sang-Yeon Kim: Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Minkook Seo: Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
- Jinhee Jang: Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, College of Medicine, Seoul, Republic of Korea
4. Zhu J, Ge M, Chang Z, Dong W. CRCNet: Global-local context and multi-modality cross attention for polyp segmentation. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104593.
5. Barbaroux H, Kunze KP, Neji R, Nazir MS, Pennell DJ, Nielles-Vallespin S, Scott AD, Young AA. Automated segmentation of long and short axis DENSE cardiovascular magnetic resonance for myocardial strain analysis using spatio-temporal convolutional neural networks. J Cardiovasc Magn Reson 2023; 25:16. PMID: 36991474. PMCID: PMC10061808. DOI: 10.1186/s12968-023-00927-y.
Abstract
BACKGROUND Cine Displacement Encoding with Stimulated Echoes (DENSE) facilitates the quantification of myocardial deformation by encoding tissue displacements in the cardiovascular magnetic resonance (CMR) image phase, from which myocardial strain can be estimated with high accuracy and reproducibility. Current methods for analyzing DENSE images still rely heavily on user input, making the process time-consuming and subject to inter-observer variability. The present study sought to develop a spatio-temporal deep learning model for segmentation of the left-ventricular (LV) myocardium, as purely spatial networks often fail due to contrast-related properties of DENSE images. METHODS 2D + time nnU-Net-based models were trained to segment the LV myocardium from DENSE magnitude data in short- and long-axis images. A dataset of 360 short-axis and 124 long-axis slices, from a combination of healthy subjects and patients with various conditions (hypertrophic and dilated cardiomyopathy, myocardial infarction, myocarditis), was used to train the networks. Segmentation performance was evaluated against ground-truth manual labels, and a strain analysis using conventional methods was performed to assess strain agreement with manual segmentation. Additional validation on an externally acquired dataset compared inter- and intra-scanner reproducibility with conventional methods. RESULTS Spatio-temporal models gave consistent segmentation performance throughout the cine sequence, whereas 2D architectures often failed to segment end-diastolic frames because of the limited blood-to-myocardium contrast. Our models achieved a Dice score of 0.83 ± 0.05 and a Hausdorff distance of 4.0 ± 1.1 mm for short-axis segmentation, and 0.82 ± 0.03 and 7.9 ± 3.9 mm, respectively, for long-axis segmentation. Strain measurements obtained from automatically estimated myocardial contours showed good to excellent agreement with manual pipelines and remained within the limits of inter-user variability estimated in previous studies. CONCLUSION Spatio-temporal deep learning shows increased robustness for the segmentation of cine DENSE images and provides excellent agreement with manual segmentation for strain extraction. Deep learning will facilitate the analysis of DENSE data, bringing it one step closer to clinical routine.
Affiliation(s)
- Hugo Barbaroux: School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK; Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK
- Karl P Kunze: MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Radhouene Neji: MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Muhummad Sohaib Nazir: School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Dudley J Pennell: Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Sonia Nielles-Vallespin: Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Andrew D Scott: Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Alistair A Young: School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
6. Veiga-Canuto D, Cerdà-Alberich L, Jiménez-Pastor A, Carot Sierra JM, Gomis-Maya A, Sangüesa-Nebot C, Fernández-Patón M, Martínez de las Heras B, Taschner-Mandl S, Düster V, Pötschger U, Simon T, Neri E, Alberich-Bayarri Á, Cañete A, Hero B, Ladenstein R, Martí-Bonmatí L. Independent Validation of a Deep Learning nnU-Net Tool for Neuroblastoma Detection and Segmentation in MR Images. Cancers (Basel) 2023; 15:1622. PMID: 36900410. PMCID: PMC10000775. DOI: 10.3390/cancers15051622.
Abstract
OBJECTIVES To externally validate and assess the accuracy of a previously trained, fully automatic nnU-Net CNN algorithm for identifying and segmenting primary neuroblastoma tumors in MR images in a large pediatric cohort. METHODS An international multicenter, multivendor imaging repository of patients with neuroblastic tumors was used to validate the performance of a trained machine learning (ML) tool to identify and delineate primary neuroblastoma tumors. The dataset was heterogeneous and completely independent from the one used to train and tune the model, comprising 535 MR T2-weighted sequences from 300 children with neuroblastic tumors (486 sequences at diagnosis and 49 after completion of the first phase of chemotherapy). The automatic segmentation algorithm was based on an nnU-Net architecture developed within the PRIMAGE project. For comparison, the segmentation masks were manually edited by an expert radiologist, and the time for manual editing was recorded. Different overlap and spatial metrics were calculated to compare both sets of masks. RESULTS The median Dice similarity coefficient (DSC) was high, at 0.997 (Q1-Q3: 0.944-1.000). In 18 MR sequences (6%), the network was unable to identify or segment the tumor. No differences were found with respect to MR field strength, type of T2 sequence, or tumor location, and no significant differences in performance were found in patients whose MR was performed after chemotherapy. Visual inspection of the generated masks took 7.9 ± 7.5 seconds (mean ± standard deviation (SD)); the cases that required manual editing (136 masks) took 124 ± 120 s. CONCLUSIONS The automatic CNN was able to locate and segment the primary tumor on T2-weighted images in 94% of cases, with extremely high agreement between the automatic tool and the manually edited masks. This is the first study to validate an automatic segmentation model for neuroblastic tumor identification and segmentation on body MR images. The semi-automatic approach, with minor manual editing of the deep learning segmentation, increases the radiologist's confidence in the solution at a minor cost in workload.
Affiliation(s)
- Diana Veiga-Canuto: Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain; Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain
- Leonor Cerdà-Alberich: Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain
- Ana Jiménez-Pastor: Quantitative Imaging Biomarkers in Medicine, QUIBIM SL, 46026 Valencia, Spain
- José Miguel Carot Sierra: Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Universitat Politècnica de València, Camí de Vera s/n, 46022 Valencia, Spain
- Armando Gomis-Maya: Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain
- Cinta Sangüesa-Nebot: Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain
- Matías Fernández-Patón: Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain
- Blanca Martínez de las Heras: Unidad de Oncohematología Pediátrica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain
- Sabine Taschner-Mandl: St. Anna Children's Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria
- Vanessa Düster: St. Anna Children's Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria
- Ulrike Pötschger: St. Anna Children's Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria
- Thorsten Simon: Department of Pediatric Oncology and Hematology, University Children's Hospital of Cologne, Medical Faculty, University of Cologne, 50937 Cologne, Germany
- Emanuele Neri: Academic Radiology, Department of Translational Research, University of Pisa, Via Roma, 67, 56126 Pisa, Italy
- Adela Cañete: Unidad de Oncohematología Pediátrica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain
- Barbara Hero: Department of Pediatric Oncology and Hematology, University Children's Hospital of Cologne, Medical Faculty, University of Cologne, 50937 Cologne, Germany
- Ruth Ladenstein: St. Anna Children's Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria
- Luis Martí-Bonmatí: Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain; Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7ª planta, 46026 Valencia, Spain
- Correspondence: (D.V.-C.); (L.M.-B.)
7. Zhu Y, Chen L, Lu W, Gong Y, Wang X. The application of the nnU-Net-based automatic segmentation model in assisting carotid artery stenosis and carotid atherosclerotic plaque evaluation. Front Physiol 2022; 13:1057800. PMID: 36561211. PMCID: PMC9763590. DOI: 10.3389/fphys.2022.1057800.
Abstract
Objective: nnU-Net ("no new U-Net") is a recently developed, self-configuring deep learning framework whose advantages in medical image segmentation have attracted attention. This study aimed to investigate the value of an nnU-Net-based model for computed tomography angiography (CTA) imaging in assisting the evaluation of carotid artery stenosis (CAS) and atherosclerotic plaque. Methods: This study retrospectively enrolled 93 patients with suspected CAS who underwent head and neck CTA examination, randomly divided into a training set (N = 70) and a validation set (N = 23) in a 3:1 ratio. The radiologist-annotated images in the training set were used to develop the nnU-Net model, which was subsequently tested on the validation set. Results: In the training set, the nnU-Net already displayed good performance for CAS diagnosis and atherosclerotic plaque segmentation, and its utility was confirmed in the validation set: the Dice similarity coefficients of the nnU-Net model for segmenting background, blood vessels, calcified plaques, and dark spots reached 0.975, 0.974, 0.795, and 0.498, respectively. The nnU-Net model also showed good agreement with physicians in assessing CAS (kappa = 0.893), stenosis degree (kappa = 0.930), the number of calcified plaques (kappa = 0.922), non-calcified plaques (kappa = 0.768), and mixed plaques (kappa = 0.793), as well as the maximum thickness of calcified plaque (intraclass correlation coefficient = 0.972). In addition, the evaluation time of the nnU-Net model was much shorter than that of the physicians (27.3 ± 4.4 s vs. 296.8 ± 81.1 s, p < 0.001). Conclusion: The automatic segmentation model based on nnU-Net shows good accuracy, reliability, and efficiency in assisting CTA-based evaluation of CAS and carotid atherosclerotic plaques.
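The kappa statistics quoted above measure chance-corrected agreement between the model and the physicians. A small sketch of Cohen's kappa (illustrative only, with made-up labels, not the study's data) could look like:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for agreement between two raters on the same cases.

    rater_a, rater_b: equal-length sequences of categorical labels.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance from the marginal label rates.
    """
    n = len(rater_a)
    if n == 0 or n != len(rater_b):
        raise ValueError("need two equal-length, non-empty label lists")
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[k] * count_b.get(k, 0) for k in count_a) / (n * n)
    if p_e == 1:
        return 1.0  # both raters constant and identical
    return (p_o - p_e) / (1 - p_e)

# Model vs physician stenosis grades on 10 hypothetical vessels:
# 9/10 agree (p_o = 0.9), chance agreement p_e = 0.54
model = ["mild", "mild", "severe", "mild", "severe",
         "mild", "severe", "mild", "mild", "severe"]
physician = ["mild", "mild", "severe", "mild", "mild",
             "mild", "severe", "mild", "mild", "severe"]
print(round(cohens_kappa(model, physician), 3))  # (0.9-0.54)/0.46 = 0.783
```

Raw percent agreement would report 0.9 here; kappa discounts the agreement both raters would reach by guessing at their marginal rates.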
Affiliation(s)
- Ying Zhu: First Clinical Medical College, Soochow University, Suzhou, China
- Liwei Chen: Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China
- Wenjie Lu: Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yongjun Gong: Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China
- Ximing Wang: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Correspondence: Yongjun Gong; Ximing Wang
8. Diagnostic utility of artificial intelligence for left ventricular scar identification using cardiac magnetic resonance imaging: a systematic review. Cardiovasc Digit Health J 2021; 2:S21-S29. PMID: 35265922. PMCID: PMC8890335. DOI: 10.1016/j.cvdhj.2021.11.005.
Abstract
Background: Accurate, rapid quantification of ventricular scar using cardiac magnetic resonance imaging (CMR) is important for arrhythmia management and patient prognosis. Artificial intelligence (AI) has been applied successfully to other radiological challenges. Objective: We aimed to assess the AI methodologies used for left ventricular scar identification in CMR, the imaging sequences used for training, and their diagnostic evaluation. Methods: Following PRISMA recommendations, a systematic search of PubMed, Embase, Web of Science, CINAHL, OpenDissertations, arXiv, and IEEE Xplore was undertaken to June 2021 for full-text publications assessing left ventricular scar identification algorithms. No pre-registration was undertaken. A random-effects meta-analysis was performed to compare the Dice coefficient (DSC) overlap of learning-based versus predefined-thresholding methods. Results: Thirty-five articles were included in the final review. Supervised and unsupervised learning models had DSC similar to predefined-threshold models (0.616 vs 0.633, P = .14) but higher sensitivity, specificity, and accuracy. Meta-analysis of 4 studies revealed a standardized mean difference of 1.11 (95% confidence interval -0.16 to 2.38; P = .09; I2 = 98%) favoring learning methods. Conclusion: The feasibility of applying AI to scar detection in CMR has been demonstrated, but model evaluation remains heterogeneous. Progression toward clinical application requires detailed, transparent, standardized model comparison and increased model generalizability.