1. Chen WW, Kuo L, Lin YX, Yu WC, Tseng CC, Lin YJ, Huang CC, Chang SL, Wu JCH, Chen CK, Weng CY, Chan S, Lin WW, Hsieh YC, Lin MC, Fu YC, Chen T, Chen SA, Lu HHS. A Deep Learning Approach to Classify Fabry Cardiomyopathy from Hypertrophic Cardiomyopathy Using Cine Imaging on Cardiac Magnetic Resonance. Int J Biomed Imaging 2024; 2024:6114826. PMID: 38706878; PMCID: PMC11068448; DOI: 10.1155/2024/6114826.
Abstract
A challenge in accurately identifying and classifying left ventricular hypertrophy (LVH) is distinguishing it from hypertrophic cardiomyopathy (HCM) and Fabry disease. Reliance on imaging often requires the expertise of multiple specialists, including cardiologists, radiologists, and geneticists, and this variability in the interpretation and classification of LVH leads to inconsistent diagnoses. LVH, HCM, and Fabry cardiomyopathy can be differentiated using T1 mapping on cardiac magnetic resonance imaging (MRI); however, differentiating HCM from Fabry cardiomyopathy using echocardiography or MRI cine images is challenging for cardiologists. The proposed system, the MRI short-axis view left ventricular hypertrophy classifier (MSLVHC), is a standardized, high-accuracy AI classification model trained on MRI short-axis (SAX) view cine images to distinguish HCM from Fabry disease. The model achieved an F1-score of 0.846, an accuracy of 0.909, and an AUC of 0.914 when tested on the Taipei Veterans General Hospital (TVGH) dataset. A single-blinded study and external testing on data from Taichung Veterans General Hospital (TCVGH) yielded an F1-score of 0.727, an accuracy of 0.806, and an AUC of 0.918, demonstrating the model's reliability and usefulness. This AI model holds promise as a tool for assisting specialists in diagnosing LVH diseases.
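This abstract reports F1-score, accuracy, and AUC on an internal (TVGH) and an external (TCVGH) test set. As a brief illustration of how such binary-classification metrics are typically computed, a minimal sketch follows; it is not the authors' code, and the label/probability arrays and the 0.5 decision threshold are placeholders.

```python
# Illustrative only: computing accuracy, F1-score, and AUC for a binary
# HCM-vs-Fabry classifier. The arrays and threshold are placeholders,
# not data or settings from the paper.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                   # hypothetical labels: 0 = HCM, 1 = Fabry
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.9, 0.2, 0.6])   # hypothetical model probabilities
y_pred = (y_prob >= 0.5).astype(int)                           # threshold chosen for illustration

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred))
print("AUC:     ", roc_auc_score(y_true, y_prob))
```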
Affiliation(s)
- Wei-Wen Chen
  - Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
- Ling Kuo
  - Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
  - Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
  - Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
- Yi-Xun Lin
  - Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Wen-Chung Yu
  - Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
  - Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Chien-Chao Tseng
  - Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
- Yenn-Jiang Lin
  - Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
  - Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Ching-Chun Huang
  - Institute of Computer Science and Engineering, National Yang-Ming University, Hsinchu, Taiwan
- Shih-Lin Chang
  - Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
  - Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Jacky Chung-Hao Wu
  - Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Chun-Ku Chen
  - Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Ching-Yao Weng
  - Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Siwa Chan
  - Department of Radiology, Taichung Veterans General Hospital, Taichung, Taiwan
  - Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan
- Wei-Wen Lin
  - Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan
- Yu-Cheng Hsieh
  - Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan
- Ming-Chih Lin
  - Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan
  - Department of Pediatric Cardiology, Taichung Veterans General Hospital, Taichung, Taiwan
  - Children's Medical Center, Taichung Veterans General Hospital, Taichung, Taiwan
- Yun-Ching Fu
  - Department of Pediatric Cardiology, Taichung Veterans General Hospital, Taichung, Taiwan
  - Children's Medical Center, Taichung Veterans General Hospital, Taichung, Taiwan
  - Department of Pediatrics, School of Medicine, National Chung-Hsing University, Taichung, Taiwan
- Tsung Chen
  - Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Shih-Ann Chen
  - Faculty of Medicine and Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
  - Division of Cardiology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
  - Cardiovascular Center, Taichung Veterans General Hospital, Taichung, Taiwan
  - College of Medicine, National Chung Hsing University, Taichung, Taiwan
- Henry Horng-Shing Lu
  - Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
  - Department of Statistics and Data Science, Cornell University, Ithaca, New York, USA
2. Jiao T, Li F, Cui Y, Wang X, Li B, Shi F, Xia Y, Zhou Q, Zeng Q. Deep Learning With an Attention Mechanism for Differentiating the Origin of Brain Metastasis Using MR Images. J Magn Reson Imaging 2023; 58:1624-1635. PMID: 36965182; DOI: 10.1002/jmri.28695.
Abstract
BACKGROUND: Brain metastasis (BM) is a serious neurological complication of cancers of different origins. The value of deep learning (DL) for identifying multiple types of primary origin remains unclear.
PURPOSE: To distinguish the primary site of BM and identify the best DL models.
STUDY TYPE: Retrospective.
POPULATION: A total of 449 BM lesions from 214 patients (49.5% female; mean age 58 years), comprising 100 from small cell lung cancer (SCLC), 125 from non-small cell lung cancer (NSCLC), 116 from breast cancer (BC), and 108 from gastrointestinal cancer (GIC).
FIELD STRENGTH/SEQUENCE: 3 T; T1 turbo spin echo (T1-TSE), T2-TSE, T2-FLAIR-TSE, DWI echo-planar imaging (DWI-EPI), and contrast-enhanced T1-TSE (CE-T1-TSE).
ASSESSMENT: Lesions were divided into training (n = 285, 153 patients), testing (n = 122, 93 patients), and independent testing (n = 42, 34 patients) cohorts. Three-dimensional residual networks (3D-ResNet), named 3D ResNet6 and 3D ResNet18, were proposed for identifying the four origins based on single MRI sequences and combined MRI sequences (T1WI + T2-FLAIR + DWI, CE-T1WI + DWI, CE-T1WI + T2WI + DWI). The DL models were first used to distinguish lung cancer from non-lung cancer; SCLC vs. NSCLC (lung cancer) and BC vs. GIC (non-lung cancer) classifications were then performed. A subjective visual analysis was implemented and compared with the DL models. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the models via heatmaps.
STATISTICAL TESTS: The area under the receiver operating characteristic curve (AUC) was used to assess each classification performance.
RESULTS: 3D ResNet18 with Grad-CAM and AIC showed better performance than 3D ResNet6, 3D ResNet18, and the radiologist for distinguishing lung cancer from non-lung cancer, SCLC from NSCLC, and BC from GIC. For single MRI sequences, T1WI, DWI, and CE-T1WI performed best for the lung cancer vs. non-lung cancer, SCLC vs. NSCLC, and BC vs. GIC classifications, respectively. The AUC ranged from 0.675 to 0.876 on the testing dataset and from 0.684 to 0.800 on the independent testing dataset. For combined MRI sequences, CE-T1WI + T2WI + DWI performed better for BC vs. GIC (AUCs of 0.788 and 0.848 on the testing and independent testing datasets, respectively), whereas the combined approaches (T1WI + T2-FLAIR + DWI, CE-T1WI + DWI) did not achieve higher AUCs for lung cancer vs. non-lung cancer or SCLC vs. NSCLC. Grad-CAM heatmaps focused on tumor regions and aided model visualization.
DATA CONCLUSION: DL models may help distinguish the origins of BM based on MRI data.
EVIDENCE LEVEL: 3.
TECHNICAL EFFICACY: Stage 2.
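For orientation, a minimal sketch of a small 3D residual classifier in PyTorch is shown below. This is not the authors' 3D ResNet6/3D ResNet18 implementation; the depth, channel widths, input shape, and class ordering are assumptions for illustration only.

```python
# Illustrative sketch of a tiny 3D residual classifier for multi-sequence MRI volumes.
# NOT the authors' 3D ResNet6 / 3D ResNet18; all hyperparameters are assumptions.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut; shapes match, so no projection needed

class Tiny3DResNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=4):
        # in_channels = number of stacked MRI sequences (e.g. CE-T1WI + T2WI + DWI)
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(ResBlock3D(32), ResBlock3D(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# Hypothetical input: 2 lesions, 3 stacked sequences, 32x64x64 voxels each.
logits = Tiny3DResNet()(torch.randn(2, 3, 32, 64, 64))
print(logits.shape)  # torch.Size([2, 4]) -> one score per origin (e.g. SCLC, NSCLC, BC, GIC)
```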
Affiliation(s)
- Tianyu Jiao
  - Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
  - Shandong First Medical University, Jinan, China
- Fuyan Li
  - Department of Radiology, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, China
- Yi Cui
  - Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
- Xiao Wang
  - Department of Radiology, Jining No. 1 People's Hospital, Jining, China
- Butuo Li
  - Department of Radiation Oncology, Shandong Cancer Hospital & Institute, Jinan, China
- Feng Shi
  - Shanghai United Imaging Intelligence, Shanghai, China
- Yuwei Xia
  - Shanghai United Imaging Intelligence, Shanghai, China
- Qing Zhou
  - Shanghai United Imaging Intelligence, Shanghai, China
- Qingshi Zeng
  - Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
3. Avesta A, Hossain S, Lin M, Aboian M, Krumholz HM, Aneja S. Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation. Bioengineering (Basel) 2023; 10:181. PMID: 36829675; PMCID: PMC9952534; DOI: 10.3390/bioengineering10020181.
Abstract
Deep-learning methods for auto-segmenting brain images segment either one slice of the image (2D), five consecutive slices (2.5D), or the entire image volume (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. The 3D, 2.5D, and 2D approaches respectively gave the highest to lowest Dice scores across all models. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models required 20 times more computational memory than 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy, but require more computational memory than 2.5D or 2D models.
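The comparison above is driven largely by Dice scores. A minimal sketch of the Dice overlap metric for binary segmentation masks follows; the random masks are placeholders, not study data.

```python
# Illustrative only: Dice overlap between a predicted and a ground-truth binary mask.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(64, 64, 64))    # hypothetical predicted 3D mask
target = rng.integers(0, 2, size=(64, 64, 64))  # hypothetical ground-truth 3D mask
print(f"Dice: {dice_score(pred, target):.3f}")
```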
Affiliation(s)
- Arman Avesta
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
  - Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
  - Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Sajid Hossain
  - Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
  - Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- MingDe Lin
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
  - Visage Imaging, Inc., San Diego, CA 92130, USA
- Mariam Aboian
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Harlan M. Krumholz
  - Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
  - Division of Cardiovascular Medicine, Yale School of Medicine, New Haven, CT 06510, USA
- Sanjay Aneja
  - Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
  - Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
  - Department of Biomedical Engineering, Yale University, New Haven, CT 06510, USA
4. Wang PH, Huo TI. Outstanding research paper awards of the Journal of the Chinese Medical Association in 2021. J Chin Med Assoc 2022; 85:887-888. PMID: 36150102; DOI: 10.1097/jcma.0000000000000786.
Affiliation(s)
- Peng-Hui Wang
  - Department of Obstetrics and Gynecology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
  - Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
  - Female Cancer Foundation, Taipei, Taiwan, ROC
  - Department of Medical Research, China Medical University Hospital, Taichung, Taiwan, ROC
- Teh-Ia Huo
  - Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
  - Institute of Pharmacology, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC