1
Sorin V, Sklair-Levy M, Glicksberg BS, Konen E, Nadkarni GN, Klang E. Deep Learning for Contrast Enhanced Mammography - A Systematic Review. Acad Radiol 2025;32:2497-2508. [PMID: 39643464] [DOI: 10.1016/j.acra.2024.11.035]
Abstract
BACKGROUND/AIM Contrast-enhanced mammography (CEM) is a relatively novel imaging technique that enables both anatomical and functional breast imaging, with improved diagnostic performance compared to standard 2D mammography. The aim of this study was to systematically review the literature on deep learning (DL) applications for CEM, exploring how these models can further enhance the diagnostic potential of CEM. METHODS This systematic review was reported according to the PRISMA guidelines. We searched MEDLINE, Scopus, and Google Scholar for studies published up to April 2024. Two reviewers independently implemented the search strategy. We included all types of original studies published in English that evaluated DL algorithms for automatic analysis of CEM images. The quality of the studies was independently evaluated by two reviewers based on the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) criteria. RESULTS Sixteen relevant studies published between 2018 and 2024 were identified. All but one used convolutional neural network (CNN) models. All studies evaluated DL algorithms for lesion classification, while six studies also assessed lesion detection or segmentation. Segmentation was performed manually in three studies, both manually and automatically in two studies, and automatically in ten studies. For lesion classification on retrospective datasets, CNN models reported varied areas under the curve (AUCs), ranging from 0.53 to 0.99. Models incorporating attention mechanisms achieved accuracies of 88.1% and 89.1%. Prospective studies reported AUC values of 0.89 and 0.91. Some studies demonstrated that combining DL models with radiomics features improved classification, and integrating DL algorithms with radiologists' assessments enhanced diagnostic performance. CONCLUSION While still at an early research stage, DL can improve CEM diagnostic precision. However, relatively few studies have evaluated different DL algorithms, and most are retrospective. Further prospective testing to assess the performance of these applications in actual clinical settings is warranted.
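As orientation for the reviewed studies, the sketch below shows the general shape of a CNN lesion classifier of the kind most of the included papers applied to CEM images. It is a minimal PyTorch illustration: the class name, layer sizes, single-channel input, and 256x256 resolution are assumptions for demonstration, not any reviewed study's actual architecture.

```python
# Minimal sketch of a CNN lesion classifier of the kind the reviewed
# studies applied to CEM images. All names, shapes, and hyperparameters
# are illustrative assumptions, not a specific study's architecture.
import torch
import torch.nn as nn

class SmallCEMClassifier(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 256 -> 128
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = SmallCEMClassifier()
dummy = torch.randn(4, 1, 256, 256)  # batch of 4 single-channel CEM crops
logits = model(dummy)
print(logits.shape)  # torch.Size([4, 2]) -> benign vs. malignant scores
```

In practice the reviewed models typically start from a pretrained backbone; the same interface extends to the attention-augmented and radiomics-fused variants the review describes.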
Affiliation(s)
- Vera Sorin: Department of Radiology, Mayo Clinic, Rochester, MN (V.S.)
- Miri Sklair-Levy: Department of Diagnostic Imaging, Chaim Sheba Medical Center, affiliated to the Sackler School of Medicine, Tel-Aviv University, Israel (M.S-L., E.K.)
- Benjamin S Glicksberg: Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY (B.S.G., G.N.N., E.K.)
- Eli Konen: Department of Diagnostic Imaging, Chaim Sheba Medical Center, affiliated to the Sackler School of Medicine, Tel-Aviv University, Israel (M.S-L., E.K.)
- Girish N Nadkarni: Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY (B.S.G., G.N.N., E.K.); The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY (G.N.N., E.K.)
- Eyal Klang: Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY (B.S.G., G.N.N., E.K.); The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY (G.N.N., E.K.)
2
Li M, Fang Y, Shao J, Jiang Y, Xu G, Cui XW, Wu X. Vision transformer-based multimodal fusion network for classification of tumor malignancy on breast ultrasound: A retrospective multicenter study. Int J Med Inform 2025;196:105793. [PMID: 39862564] [DOI: 10.1016/j.ijmedinf.2025.105793]
Abstract
BACKGROUND In routine breast cancer diagnosis, precise discrimination between benign and malignant breast masses is of utmost importance. Notably, few prior investigations have concurrently explored the integration of imaging histology (radiomics) features, deep learning characteristics, and clinical parameters. The primary objective of this retrospective study was to develop a multimodal feature fusion model for predicting breast tumor malignancy from ultrasound images. METHOD We compiled a dataset that included clinical features from 1065 patients and 3315 images, and selected data from 603 patients to train the multimodal model. The experimental workflow involved identifying the optimal unimodal model, extracting unimodal features, fusing the multimodal features, learning from the fused features, and finally generating predictions with a classifier. RESULTS The multimodal feature fusion model demonstrated outstanding performance, achieving an AUC of 0.994 (95% CI: 0.988-0.999) and an F1 score of 0.971 on the primary multicenter dataset. On two independent testing cohorts (TCs), it maintained strong performance, with AUCs of 0.942 (95% CI: 0.854-0.994) for TC1 and 0.945 (95% CI: 0.857-1.000) for TC2, with corresponding F1 scores of 0.872 and 0.857. Notably, decision curve analysis revealed that the model achieved higher net benefit than alternative methods within threshold probability ranges of approximately [0.210, 0.890] (TC1) and [0.000, 0.850] (TC2), enhancing its utility in clinical decision-making. CONCLUSION The proposed multimodal model can comprehensively evaluate patients' multifaceted clinical information to predict whether breast ultrasound tumors are benign or malignant, achieving high performance indices.
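The workflow described above ends in feature-level fusion followed by a conventional classifier. Below is a minimal sketch of that last stage, assuming precomputed image embeddings and tabular clinical variables; the data are synthetic and the dimensions and logistic-regression classifier are illustrative stand-ins, not the paper's exact pipeline.

```python
# Hedged sketch of late feature-level fusion: unimodal image embeddings
# (e.g., from a vision transformer) concatenated with clinical variables,
# then passed to a conventional classifier. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
img_feat = rng.normal(size=(n, 128))   # stand-in ViT embeddings per lesion
clin_feat = rng.normal(size=(n, 8))    # stand-in clinical variables (age, size, ...)
y = rng.integers(0, 2, size=n)         # 0 = benign, 1 = malignant (synthetic)

fused = np.concatenate([img_feat, clin_feat], axis=1)  # feature-level fusion
clf = LogisticRegression(max_iter=1000).fit(fused[:150], y[:150])
probs = clf.predict_proba(fused[150:])[:, 1]
print("held-out AUC:", roc_auc_score(y[150:], probs))
```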
Affiliation(s)
- Mengying Li, Yin Fang, Jiong Shao, Yan Jiang, Guoping Xu, Xinglong Wu: School of Computer Science and Engineering, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, PR China
- Xin-Wu Cui: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, PR China
3
Bouzarjomehri N, Barzegar M, Rostami H, Keshavarz A, Asghari AN, Azad ST. Multi-modal classification of breast cancer lesions in Digital Mammography and contrast enhanced spectral mammography images. Comput Biol Med 2024;183:109266. [PMID: 39405734] [DOI: 10.1016/j.compbiomed.2024.109266]
Abstract
Breast cancer ranks as the second most prevalent cancer in women, is recognized as one of the most dangerous types of cancer, and is on the rise globally. Regular screening is essential for early-stage treatment. Digital mammography (DM) is the most recognized and widely used technique for breast cancer screening. Contrast-Enhanced Spectral Mammography (CESM or CM) is used in conjunction with DM to detect and identify hidden abnormalities, particularly in dense breast tissue where DM alone may be less effective. In this work, we explore the effectiveness of each modality (CM, DM, or both) in detecting breast cancer lesions using deep learning methods. We introduce an architecture for detecting and classifying breast cancer lesions in DM and CM images in craniocaudal (CC) and mediolateral oblique (MLO) views. The proposed architecture (JointNet) consists of a convolution module for extracting local features, a transformer module for extracting long-range features, and a feature fusion layer that fuses the local features, the global features, and the global features weighted by the local ones. This significantly enhances the accuracy of classifying DM and CM images into normal or abnormal categories and of classifying lesions as benign or malignant. Using our architecture as a backbone, three lesion classification pipelines are introduced that utilize attention mechanisms focused on lesion shape, texture, and overall breast texture, examining the features critical for effective lesion classification. The results demonstrate that our proposed methods outperform their components in classifying images as normal or abnormal and mitigate the limitations of using the transformer module or the convolution module independently. An ensemble model is also introduced to explore the effect of each modality and each view in increasing the baseline architecture's accuracy. The results demonstrate superior performance compared with other similar works. The best performance on DM images was achieved with the semi-automatic AOL Lesion Classification Pipeline, yielding an accuracy of 98.85%, AUROC of 0.9965, F1-score of 98.85%, precision of 98.85%, and specificity of 98.85%. For CM images, the highest results were obtained using the automatic AOL Lesion Classification Pipeline, with an accuracy of 97.47%, AUROC of 0.9771, F1-score of 97.34%, precision of 94.45%, and specificity of 97.23%. The semi-automatic ensemble AOL Classification Pipeline provided the best overall performance when using both DM and CM images, with an accuracy of 94.74%, F1-score of 97.67%, specificity of 93.75%, and sensitivity of 95.45%. Furthermore, we compare the effectiveness of CM and DM images in deep learning models, indicating that while CM images offer clearer insights to the human eye, our model trained on DM images yields better results using Attention on Lesion (AOL) techniques. The research also suggests that a multimodal approach using both DM and CM images together with ensemble learning could provide more robust classification outcomes.
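Below is a minimal sketch of the three-way fusion described for JointNet, assuming pooled feature vectors from the convolution and transformer modules. The sigmoid gating used to weight the global features by the local ones is an assumption standing in for the paper's unspecified weighting layer.

```python
# Hedged sketch of JointNet-style fusion: local (conv) features, global
# (transformer) features, and global features re-weighted by the local
# ones, concatenated before classification. Dimensions are illustrative.
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    def __init__(self, dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.head = nn.Linear(3 * dim, num_classes)

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor):
        weighted = global_feat * self.gate(local_feat)  # global weighted by local
        fused = torch.cat([local_feat, global_feat, weighted], dim=-1)
        return self.head(fused)

fusion = LocalGlobalFusion()
local = torch.randn(4, 256)   # pooled convolution-module features
glob = torch.randn(4, 256)    # pooled transformer-module features
print(fusion(local, glob).shape)  # torch.Size([4, 2])
```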
Affiliation(s)
- Narjes Bouzarjomehri, Mohammad Barzegar, Habib Rostami, Ahmad Navid Asghari, Saeed Talatian Azad: Department of Computer Engineering, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, 7516913817, Iran
- Ahmad Keshavarz: Department of Electrical Engineering, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, 7516913817, Iran
4
Filippone F, Boudagga Z, Frattini F, Fortuna GF, Razzini D, Tambasco A, Menardi V, Balbiano di Colcavagno A, Carriero S, Gambaro ACL, Carriero A. Contrast Enhancement in Breast Cancer: Magnetic Resonance vs. Mammography: A 10-Year Systematic Review. Diagnostics (Basel) 2024;14:2400. [PMID: 39518367] [PMCID: PMC11545212] [DOI: 10.3390/diagnostics14212400]
Abstract
PURPOSE Contrast-Enhanced Magnetic Resonance (CEMR) and Contrast-Enhanced Mammography (CEM) are important diagnostic tools for evaluating breast cancer patients, and both are subjects of interest in the literature. The purpose of this systematic review was to select publications from the last ten years in order to evaluate the literature on the frequency of contrast agents used, administration techniques, and the presence of adverse reactions. METHODS Following the PRISMA statement, we selected publications indexed on PubMed between 1 January 2012 and 31 December 2022. The search used the following keywords: "CESM", "CEM", "CEDM", and "Contrast mammography" for CEM, and "DCE-MRI" and "Contrast Enhancement MRI" for CEMR, excluding reviews, book chapters, and meta-analyses. From the total number of publications, we made a preliminary selection based on titles and abstracts and excluded all articles published in languages other than English, all experimental studies performed on surgical specimens or animal populations, and all articles for which the extended version was not available. Two readers evaluated all the articles and completed a standardized form accordingly. RESULTS From a preliminary collection of 571 CEM publications, 118 articles were selected, relating to an overall population of 21,178 patients. From a total of 3063 CEMR publications, 356 articles were selected, relating to an overall population of 45,649 patients. The most frequently used contrast agents were iohexol for CEM (39.83%) and gadopentetic acid (Gd-DTPA) for CEMR (32.5%). Regarding the CEM contrast administration protocol, in 84.7% of cases a dose of 1.5 mL/kg was used with an infusion rate of 2-3 mL/s. Regarding the CEMR infusion protocol, in 71% of cases a dose of 1 mmol/kg was used at an infusion rate of 2-4 mL/s. Twelve of the 118 CEM articles reported allergic reactions, involving 29 patients (0.13%); only one of the 356 CEMR articles reported allergic reactions, involving two patients (0.004%). No severe reactions were observed in either cohort. CONCLUSIONS CEM and CEMR are essential contrast-based methods for evaluating breast disease. However, the literature analysis shows that although there are preferences for particular contrast agents (iohexol for CESM, Gd-DTPA for CEMR), a wide range of molecules is still used, with different administration protocols. Based on the collected data, both methods appear safe, and no severe reactions were observed in our evaluation.
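The per-kilogram protocols above reduce to simple arithmetic, as in the sketch below, which computes injected contrast volume from body weight. Two caveats: the 0.5 mmol/mL concentration used to convert a gadolinium dose from mmol to mL is an assumption (typical of Gd-DTPA preparations, but product-specific), and the 0.1 mmol/kg example dose is the conventional single gadolinium dose rather than a figure taken from this review.

```python
# Worked example of weight-based contrast dosing. The 1.5 mL/kg CEM dose
# comes from the review; the gadolinium concentration (0.5 mmol/mL) and
# the 0.1 mmol/kg example dose are assumptions to be checked per protocol.
def cem_volume_ml(weight_kg: float, dose_ml_per_kg: float = 1.5) -> float:
    """Iodinated contrast volume for CEM at the commonly reported dose."""
    return weight_kg * dose_ml_per_kg

def cemr_volume_ml(weight_kg: float, dose_mmol_per_kg: float,
                   concentration_mmol_per_ml: float = 0.5) -> float:
    """Gadolinium volume for CEMR: dose in mmol/kg divided by concentration."""
    return weight_kg * dose_mmol_per_kg / concentration_mmol_per_ml

print(cem_volume_ml(70))         # 105.0 mL of iodinated agent at 1.5 mL/kg
print(cemr_volume_ml(70, 0.1))   # 14.0 mL of a 0.5 mmol/mL gadolinium agent
```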
Affiliation(s)
- Francesco Filippone, Zohra Boudagga, Francesca Frattini, Gaetano Federico Fortuna, Davide Razzini, Anna Tambasco, Veronica Menardi, Alessandro Balbiano di Colcavagno, Anna Clelia Lucia Gambaro, Alessandro Carriero: SCDU Radiology, “Maggiore della Carità” Hospital, University of Eastern Piedmont, 28100 Novara, Italy
- Serena Carriero: Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy
5
Qian N, Jiang W, Wu X, Zhang N, Yu H, Guo Y. Lesion attention guided neural network for contrast-enhanced mammography-based biomarker status prediction in breast cancer. Comput Methods Programs Biomed 2024;250:108194. [PMID: 38678959] [DOI: 10.1016/j.cmpb.2024.108194]
Abstract
BACKGROUND AND OBJECTIVE Accurate identification of molecular biomarker status is crucial in cancer diagnosis, treatment, and prognosis. Studies have demonstrated that medical images can be used for non-invasive prediction of biomarker status, and the biomarker status-associated features extracted from medical images are essential for developing such prediction models. Contrast-enhanced mammography (CEM) is a promising imaging technique for breast cancer diagnosis. This study aims to develop a neural network-based method to extract biomarker-related image features from CEM images and to evaluate the potential of CEM for non-invasive biomarker status prediction. METHODS An end-to-end convolutional neural network with whole-breast images as inputs was proposed to extract CEM features for biomarker status prediction in breast cancer. The network focused on lesion regions and flexibly extracted image features from lesion and peri-tumor regions by employing supervised learning with a smooth L1-based consistency constraint. An image-level, weakly supervised segmentation network based on a Vision Transformer, with cross-attention contrasting images of breasts with lesions against the contralateral breast images, was developed for automatic lesion segmentation. Finally, prediction models were developed following selection of significant features and random forest-based classification. Results were reported using the area under the curve (AUC), accuracy, sensitivity, and specificity. RESULTS A dataset from 1203 breast cancer patients was used to develop and evaluate the proposed method. Compared to the method without lesion attention and with only lesion regions as inputs, the proposed method performed better at biomarker status prediction. Specifically, it achieved an AUC of 0.71 (95% confidence interval [CI]: 0.65, 0.77) for Ki-67 and 0.73 (95% CI: 0.65, 0.80) for human epidermal growth factor receptor 2 (HER2). CONCLUSIONS A lesion attention-guided neural network was proposed to extract CEM image features for biomarker status prediction in breast cancer. The promising results demonstrate the potential of CEM for non-invasive prediction of biomarker status in breast cancer.
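Below is a minimal sketch of a smooth L1-based consistency constraint of the kind described, encouraging features attended from the whole-breast input to agree with features from the lesion region. The feature shapes, the choice of target branch, and the loss weighting are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a smooth-L1 consistency constraint: features pooled
# from the attended lesion region of the whole-breast input are pushed
# toward features extracted from the lesion itself. Shapes are synthetic.
import torch
import torch.nn.functional as F

def consistency_loss(whole_breast_feat: torch.Tensor,
                     lesion_feat: torch.Tensor,
                     beta: float = 1.0) -> torch.Tensor:
    """Smooth L1 between feature vectors; robust to outlier activations."""
    return F.smooth_l1_loss(whole_breast_feat, lesion_feat, beta=beta)

feat_whole = torch.randn(8, 256, requires_grad=True)   # attended whole-image features
feat_lesion = torch.randn(8, 256)                      # lesion-region features (target)
loss = consistency_loss(feat_whole, feat_lesion)
loss.backward()        # gradients flow into the whole-image branch
print(float(loss))
```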
Affiliation(s)
- Nini Qian, Ning Zhang, Hui Yu, Yu Guo: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Wei Jiang: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China; Department of Radiotherapy, Yantai Yuhuangding Hospital, Shandong 264000, China
- Xiaoqian Wu: Department of Radiation Oncology, The Affiliated Hospital of Qingdao University, Qingdao 266071, China
6
Zhang H, Lin F, Zheng T, Gao J, Wang Z, Zhang K, Zhang X, Xu C, Zhao F, Xie H, Li Q, Cao K, Gu Y, Mao N. Artificial intelligence-based classification of breast lesion from contrast enhanced mammography: a multicenter study. Int J Surg 2024;110:2593-2603. [PMID: 38748500] [PMCID: PMC11093474] [DOI: 10.1097/js9.0000000000001076]
Abstract
PURPOSE The authors aimed to establish an artificial intelligence (AI)-based method for preoperative diagnosis of breast lesions from contrast-enhanced mammography (CEM) and to explore its biological basis. MATERIALS AND METHODS This retrospective study included 1430 eligible patients who underwent CEM examination from June 2017 to July 2022, divided into a construction set (n=1101), an internal test set (n=196), and a pooled external test set (n=133). The AI model adopted RefineNet as a backbone network, with an attention sub-network, the convolutional block attention module (CBAM), built upon the backbone for adaptive feature refinement. An XGBoost classifier was used to integrate the refined deep learning features with clinical characteristics to differentiate benign and malignant breast lesions. The authors further retrained the AI model to distinguish in situ from invasive carcinoma among breast cancer candidates. RNA-sequencing data from 12 patients were used to explore the underlying biological basis of the AI prediction. RESULTS The AI model achieved an area under the curve (AUC) of 0.932 in diagnosing benign and malignant breast lesions in the pooled external test set, better than the best-performing deep learning model, the radiomics model, and the radiologists. The AI model also achieved satisfactory results (AUCs from 0.788 to 0.824) for the diagnosis of in situ versus invasive carcinoma in the test sets. Furthermore, the biological exploration revealed that the high-risk group was associated with pathways such as extracellular matrix organization. CONCLUSIONS The AI model based on CEM and clinical characteristics had good predictive performance in the diagnosis of breast lesions.
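Below is a minimal sketch of the final fusion stage described above: refined deep features concatenated with clinical characteristics and classified with XGBoost. Feature counts, hyperparameters, and the synthetic data are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch of XGBoost-based fusion of deep features and clinical
# characteristics for benign/malignant classification. Data are synthetic.
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 300
deep_feat = rng.normal(size=(n, 64))   # CBAM-refined network features (stand-in)
clinical = rng.normal(size=(n, 5))     # e.g., age, lesion size (stand-in)
y = rng.integers(0, 2, size=n)         # 0 = benign, 1 = malignant (synthetic)

X = np.hstack([deep_feat, clinical])
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
clf.fit(X[:240], y[:240])
print("AUC:", roc_auc_score(y[240:], clf.predict_proba(X[240:])[:, 1]))
```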
Affiliation(s)
- Haicheng Zhang: Big Data and Artificial Intelligence Laboratory; Department of Radiology
- Xiang Zhang: Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong
- Cong Xu: Physical Examination Center, Yantai Yuhuangding Hospital, Qingdao University
- Feng Zhao: School of Computer Science and Technology, Shandong Technology and Business University, Yantai
- Qin Li: Department of Radiology, Weifang Hospital of Traditional Chinese Medicine, Weifang, Shandong; Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai
- Kun Cao: Department of Radiology, Beijing Cancer Hospital, Beijing, P. R. China
- Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai
- Ning Mao: Big Data and Artificial Intelligence Laboratory; Department of Radiology; Shandong Provincial Key Medical and Health Laboratory of Intelligent Diagnosis and Treatment for Women's Diseases (Yantai Yuhuangding Hospital), Yantai, Shandong, P. R. China
7
Wei Z, Xv Y, Liu H, Li Y, Yin S, Xie Y, Chen Y, Lv F, Jiang Q, Li F, Xiao M. A CT-based deep learning model predicts overall survival in patients with muscle invasive bladder cancer after radical cystectomy: a multicenter retrospective cohort study. Int J Surg 2024;110:2922-2932. [PMID: 38349205] [PMCID: PMC11093481] [DOI: 10.1097/js9.0000000000001194]
Abstract
BACKGROUND Muscle-invasive bladder cancer (MIBC) has a poor prognosis even after radical cystectomy (RC). Postoperative survival stratification based on radiomics and deep learning (DL) algorithms may be useful for treatment decision-making and follow-up management. This study aimed to develop and validate a DL model based on preoperative computed tomography (CT) for predicting post-cystectomy overall survival (OS) in patients with MIBC. METHODS MIBC patients who underwent RC were retrospectively included from four centers and divided into training, internal validation, and external validation sets. A DL model incorporating the convolutional block attention module (CBAM) was built to predict OS from preoperative CT images. The authors assessed the prognostic accuracy of the DL model and compared it with a classic handcrafted radiomics model and a clinical model. A deep learning radiomics nomogram (DLRN) was then developed by combining clinicopathological factors, the radiomics score (Rad-score), and the deep learning score (DL-score). Model performance was assessed by the C-index, Kaplan-Meier curves, and time-dependent ROC curves. RESULTS A total of 405 patients with MIBC were included. The DL-score achieved a much higher C-index than the Rad-score and the clinical model (0.690 vs. 0.652 vs. 0.618 in the internal validation set, and 0.658 vs. 0.601 vs. 0.610 in the external validation set). After adjusting for clinicopathologic variables, the DL-score was identified as a significant independent risk factor for OS by multivariate Cox regression analysis in all sets (all P<0.01). The DLRN further improved performance, with a C-index of 0.713 (95% CI: 0.627-0.798) in the internal validation set and 0.685 (95% CI: 0.586-0.765) in the external validation set. CONCLUSIONS A DL model based on preoperative CT can predict the survival outcome of patients with MIBC, which may help in risk stratification and guide treatment decision-making and follow-up management.
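For reference, the C-index reported throughout is Harrell's concordance index: among comparable patient pairs, the fraction in which the model assigns the higher risk score to the patient with the earlier event. Below is a plain-Python sketch on synthetic data, with ties and censoring handled in the simplest conventional way.

```python
# Hedged sketch of Harrell's C-index on synthetic survival data. A pair
# (i, j) is comparable when patient i had an observed event before time j;
# it is concordant when the model ranks i as higher risk. Ties count half.
def c_index(times, events, risks):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # comparable pair
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times = [12, 30, 24, 8, 40]        # months to death or censoring
events = [1, 0, 1, 1, 0]           # 1 = death observed, 0 = censored
risks = [0.8, 0.2, 0.6, 0.9, 0.1]  # model risk scores (e.g., DL-score)
print(c_index(times, events, risks))  # 1.0: perfectly concordant toy data
```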
Affiliation(s)
- Siwen Yin: Department of Urology, Chongqing University Fuling Hospital
- Yong Chen: Department of Urology, Chongqing University Fuling Hospital
- Fajin Lv: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University
- Qing Jiang: Department of Urology, The Second Affiliated Hospital of Chongqing Medical University
- Feng Li: Department of Urology, Chongqing University Three Gorges Hospital, Chongqing, People's Republic of China
8
Cerekci E, Alis D, Denizoglu N, Camurdan O, Ege Seker M, Ozer C, Hansu MY, Tanyel T, Oksuz I, Karaarslan E. Quantitative evaluation of Saliency-Based Explainable artificial intelligence (XAI) methods in Deep Learning-Based mammogram analysis. Eur J Radiol 2024;173:111356. [PMID: 38364587] [DOI: 10.1016/j.ejrad.2024.111356]
Abstract
BACKGROUND Explainable artificial intelligence (XAI) is prominent in explaining the decisions of opaque deep learning (DL) models, especially in medical imaging. Saliency methods are commonly used, yet there is a lack of quantitative evidence regarding their performance. OBJECTIVES To quantitatively evaluate the performance of widely utilized saliency XAI methods in the task of breast cancer detection on mammograms. METHODS Three radiologists drew ground-truth boxes on a balanced mammogram dataset of women (n=1496 cancer-positive and negative scans) from three centers. A modified, pre-trained DL model was employed for breast cancer detection using MLO and CC images. Saliency XAI methods, including Gradient-weighted Class Activation Mapping (Grad-CAM), Grad-CAM++, and Eigen-CAM, were evaluated. We used the Pointing Game to assess these methods, determining whether the maximum value of a saliency map fell within the bounding boxes; the resulting score represents the ratio of correctly identified lesions among all cancer patients and ranges from 0 to 1. RESULTS The development sample included 2244 women (75%), with the remaining 748 women (25%) in the testing set for unbiased XAI evaluation. The model's recall, precision, accuracy, and F1-score in identifying cancer in the testing set were 69%, 88%, 80%, and 0.77, respectively. The Pointing Game scores for Grad-CAM, Grad-CAM++, and Eigen-CAM were 0.41, 0.30, and 0.35 in women with cancer, and increased marginally to 0.41, 0.31, and 0.36 when considering only true-positive samples. CONCLUSIONS While saliency-based methods provide some degree of explainability, they frequently fall short in delineating how DL models arrive at their decisions.
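Below is a minimal sketch of the Pointing Game as described above: a saliency map counts as a hit when its maximum falls inside a ground-truth box, and the score is the hit rate over evaluated cases. Array shapes and the box format are assumptions for illustration.

```python
# Hedged sketch of the Pointing Game metric for saliency evaluation.
import numpy as np

def pointing_game_hit(saliency: np.ndarray, boxes) -> bool:
    """boxes: list of (x0, y0, x1, y1) in pixel coordinates, inclusive."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in boxes)

def pointing_game_score(saliency_maps, boxes_per_case) -> float:
    hits = sum(pointing_game_hit(s, b)
               for s, b in zip(saliency_maps, boxes_per_case))
    return hits / len(saliency_maps)

# toy example: one 100x100 map whose peak lies inside the annotated box
smap = np.zeros((100, 100))
smap[40, 60] = 1.0  # saliency peak at (y=40, x=60)
print(pointing_game_score([smap], [[(50, 30, 70, 50)]]))  # 1.0 (hit)
```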
Affiliation(s)
- Esma Cerekci: Sisli Hamidiye Etfal Training and Research Hospital, Department of Radiology, Istanbul, Turkey
- Deniz Alis, Ercan Karaarslan: Acibadem Mehmet Ali Aydinlar University, School of Medicine, Department of Radiology, Istanbul, Turkey
- Nurper Denizoglu, Ozden Camurdan: Acibadem Healthcare Group, Department of Radiology, Istanbul, Turkey
- Mustafa Ege Seker: Acibadem Mehmet Ali Aydinlar University, School of Medicine, Istanbul, Turkey
- Caner Ozer, Ilkay Oksuz: Istanbul Technical University, Department of Computer Engineering, Istanbul, Turkey
- Muhammed Yusuf Hansu: Istanbul Technical University, Department of Electronics and Communication Engineering, Istanbul, Turkey
- Toygar Tanyel: Istanbul Technical University, Department of Biomedical Engineering, Istanbul, Turkey
9
Covington MF, Salmon S, Weaver BD, Fajardo LL. State-of-the-art for contrast-enhanced mammography. Br J Radiol 2024;97:695-704. [PMID: 38374651] [PMCID: PMC11027262] [DOI: 10.1093/bjr/tqae017]
Abstract
Contrast-enhanced mammography (CEM) is an emerging breast imaging technology with promise for breast cancer screening, diagnosis, and procedural guidance. However, how CEM is best used relative to other breast imaging modalities such as tomosynthesis, ultrasound, and MRI remains unsettled in many clinical settings. This review article summarizes recent peer-reviewed literature, emphasizing retrospective reviews, prospective clinical trials, and meta-analyses published from 2020 to 2023. The intent of this article is to supplement prior comprehensive reviews and summarize the current state of the art of CEM.
Affiliation(s)
- Matthew F Covington: Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, 84112, United States; Center for Quantitative Cancer Imaging, Huntsman Cancer Institute, Salt Lake City, UT, 84112, United States
- Samantha Salmon, Laurie L Fajardo: Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, 84112, United States
- Bradley D Weaver: Spencer Fox Eccles School of Medicine, University of Utah, Salt Lake City, UT, 84112, United States
|
Yim D, Khuntia J, Parameswaran V, Meyers A. Preliminary Evidence of the Use of Generative AI in Health Care Clinical Services: Systematic Narrative Review. JMIR Med Inform 2024; 12:e52073. [PMID: 38506918 PMCID: PMC10993141 DOI: 10.2196/52073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2023] [Revised: 10/12/2023] [Accepted: 01/30/2024] [Indexed: 03/21/2024] Open
Abstract
BACKGROUND Generative artificial intelligence tools and applications (GenAI) are being increasingly used in health care. Physicians, specialists, and other providers have started primarily using GenAI as an aid or tool to gather knowledge, provide information, train, or generate suggestive dialogue between physicians and patients or between physicians and patients' families or friends. However, unless the use of GenAI is oriented to be helpful in clinical service encounters that can improve the accuracy of diagnosis, treatment, and patient outcomes, the expected potential will not be achieved. As adoption continues, it is essential to validate the effectiveness of the infusion of GenAI as an intelligent technology in service encounters to understand the gap in actual clinical service use of GenAI. OBJECTIVE This study synthesizes preliminary evidence on how GenAI assists, guides, and automates clinical service rendering and encounters in health care The review scope was limited to articles published in peer-reviewed medical journals. METHODS We screened and selected 0.38% (161/42,459) of articles published between January 1, 2020, and May 31, 2023, identified from PubMed. We followed the protocols outlined in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to select highly relevant studies with at least 1 element on clinical use, evaluation, and validation to provide evidence of GenAI use in clinical services. The articles were classified based on their relevance to clinical service functions or activities using the descriptive and analytical information presented in the articles. RESULTS Of 161 articles, 141 (87.6%) reported using GenAI to assist services through knowledge access, collation, and filtering. GenAI was used for disease detection (19/161, 11.8%), diagnosis (14/161, 8.7%), and screening processes (12/161, 7.5%) in the areas of radiology (17/161, 10.6%), cardiology (12/161, 7.5%), gastrointestinal medicine (4/161, 2.5%), and diabetes (6/161, 3.7%). The literature synthesis in this study suggests that GenAI is mainly used for diagnostic processes, improvement of diagnosis accuracy, and screening and diagnostic purposes using knowledge access. Although this solves the problem of knowledge access and may improve diagnostic accuracy, it is oriented toward higher value creation in health care. CONCLUSIONS GenAI informs rather than assisting or automating clinical service functions in health care. There is potential in clinical service, but it has yet to be actualized for GenAI. More clinical service-level evidence that GenAI is used to streamline some functions or provides more automated help than only information retrieval is needed. To transform health care as purported, more studies related to GenAI applications must automate and guide human-performed services and keep up with the optimism that forward-thinking health care organizations will take advantage of GenAI.
Collapse
Affiliation(s)
- Dobin Yim
- Loyola University, Maryland, MD, United States
| | - Jiban Khuntia
- University of Colorado Denver, Denver, CO, United States
| | | | - Arlen Meyers
- University of Colorado Denver, Denver, CO, United States
| |
Collapse
|
11
Dong F, Song J, Chen B, Xie X, Cheng J, Song J, Huang Q. Improved detection of aortic dissection in non-contrast-enhanced chest CT using an attention-based deep learning model. Heliyon 2024;10:e24547. [PMID: 38304839] [PMCID: PMC10831773] [DOI: 10.1016/j.heliyon.2024.e24547]
Abstract
RATIONALE AND OBJECTIVES This study investigated the effects of implementing an attention-based deep learning model for the detection of aortic dissection (AD) on non-contrast-enhanced chest computed tomography (CT). MATERIALS AND METHODS We analysed the records of 1300 patients who underwent contrast-enhanced chest CT at two medical centres between January 2015 and February 2023. We considered an internal cohort of 200 patients with AD and 200 patients without AD, and an external test cohort of 40 patients with AD and 40 patients without AD. The internal cohort was divided into training and test sets, and a deep learning model was trained using 9600 CT images. A convolutional block attention module (CBAM) was combined with a traditional deep learning architecture (You Only Look Once version 5, YOLOv5) into an attention-based model (YOLOv5-CBAM). Its performance was measured against the unmodified YOLOv5 model, and the accuracy, sensitivity, and specificity of the algorithm were evaluated by two independent radiologists. RESULTS The CBAM-based model outperformed the traditional deep learning model. In the external testing set, YOLOv5-CBAM achieved an area under the curve (AUC) of 0.938, accuracy of 91.5%, sensitivity of 90.0%, and specificity of 92.9%, whereas the unmodified model achieved an AUC of 0.844, accuracy of 83.6%, sensitivity of 71.2%, and specificity of 96.0%. The sensitivity of the unmodified algorithm was not significantly different from that of the radiologists; however, the proposed YOLOv5-CBAM algorithm outperformed the unmodified algorithm in terms of detection. CONCLUSIONS Incorporating the CBAM attention mechanism into a deep learning model can significantly improve AD detection on non-contrast-enhanced chest CT. This approach may aid radiologists in the timely and accurate diagnosis of AD, which is important for improving patient outcomes.
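Below is a minimal sketch of a CBAM block of the kind inserted into YOLOv5 here: channel attention from pooled descriptors followed by spatial attention. The reduction ratio and 7x7 kernel follow common CBAM defaults rather than this paper's code.

```python
# Hedged sketch of a convolutional block attention module (CBAM):
# channel attention from avg-/max-pooled descriptors through a shared
# MLP, then spatial attention from stacked channel-wise pooling maps.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # channel attention: shared MLP over avg- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # spatial attention: conv over stacked channel-wise avg and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 32, 32)   # a feature map from a detector backbone
print(CBAM(64)(feat).shape)         # torch.Size([2, 64, 32, 32])
```

The block is shape-preserving, which is why it can be dropped into an existing backbone such as YOLOv5 without altering the surrounding layers.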
Affiliation(s)
- Fenglei Dong, Jiao Song, Bo Chen, Xiaoxiao Xie, Jianmin Cheng, Jiawen Song: Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, 1111 east section of Wenzhou avenue, Longwan District, Wenzhou, China
- Qun Huang: Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, No. 1 Fanhai West Road, Ouhai District, Wenzhou, China
12
Chen Y, Hua Z, Lin F, Zheng T, Zhou H, Zhang S, Gao J, Wang Z, Shao H, Li W, Liu F, Wang S, Zhang Y, Zhao F, Liu H, Xie H, Ma H, Zhang H, Mao N. Detection and classification of breast lesions using multiple information on contrast-enhanced mammography by a multiprocess deep-learning system: A multicenter study. Chin J Cancer Res 2023;35:408-423. [PMID: 37691895] [PMCID: PMC10485921] [DOI: 10.21147/j.issn.1000-9604.2023.04.07]
Abstract
Objective Accurate detection and classification of breast lesions at an early stage is crucial for the timely formulation of effective treatments. We aimed to develop a fully automatic system to detect and classify breast lesions using multiple contrast-enhanced mammography (CEM) images. Methods In this study, a total of 1,903 females who underwent CEM examination at three hospitals were enrolled and assigned to a training set, an internal testing set, a pooled external testing set, and a prospective testing set. We developed a CEM-based multiprocess detection and classification system (MDCS) to perform detection and classification of breast lesions. In this system, we introduced an innovative auxiliary feature fusion (AFF) algorithm that intelligently incorporates multiple types of information from CEM images. The average free-response receiver operating characteristic score (AFROC-Score) was used to validate the system's detection performance, and classification performance was evaluated by the area under the receiver operating characteristic curve (AUC). Furthermore, we assessed the diagnostic value of MDCS through visual analysis of disputed cases, compared its performance and efficiency with those of radiologists, and explored whether it could augment radiologists' performance. Results On the pooled external and prospective testing sets, MDCS maintained a high standalone performance, with AFROC-Scores of 0.953 and 0.963 for the detection task, and AUCs for classification of 0.909 [95% confidence interval (95% CI): 0.822-0.996] and 0.912 (95% CI: 0.840-0.985), respectively. It also achieved higher sensitivity than all senior radiologists and higher specificity than all junior radiologists on both testing sets. Moreover, MDCS showed superior diagnostic efficiency, with an average reading time of 5 seconds compared with the radiologists' average of 3.2 minutes. The average performance of all radiologists also improved to varying degrees with MDCS assistance. Conclusions MDCS demonstrated excellent performance in the detection and classification of breast lesions and greatly enhanced the overall performance of radiologists.
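Below is a minimal sketch of fusing multiple types of CEM information, in the spirit of (but not reproducing) the paper's auxiliary feature fusion: low-energy and recombined images are encoded and jointly classified. The shared encoder, dimensions, and head are assumptions, since the abstract does not specify the AFF internals.

```python
# Hedged sketch of two-stream CEM fusion: per-image features from
# low-energy (anatomical) and recombined (enhancement) views are pooled
# and jointly classified. Architecture and shapes are illustrative.
import torch
import torch.nn as nn

class MultiViewCEMFusion(nn.Module):
    def __init__(self, feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(       # shared encoder for every image
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, low_energy: torch.Tensor, recombined: torch.Tensor):
        f_le = self.encoder(low_energy)     # anatomical information
        f_rc = self.encoder(recombined)     # enhancement (functional) information
        return self.head(torch.cat([f_le, f_rc], dim=-1))

model = MultiViewCEMFusion()
le = torch.randn(4, 1, 128, 128)   # low-energy images
rc = torch.randn(4, 1, 128, 128)   # recombined (subtracted) images
print(model(le, rc).shape)         # torch.Size([4, 2])
```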
Affiliation(s)
- Yuqian Chen, Zhen Hua, Heng Zhou: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China
- Fan Lin, Shijie Zhang, Jing Gao, Zhongyi Wang, Huafei Shao, Wenjuan Li, Fengjie Liu, Haizhu Xie, Heng Ma: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai 264000, China
- Tiantian Zheng: School of Medical Imaging, Binzhou Medical University, Yantai 264003, China
- Simin Wang: Department of Radiology, Fudan University Cancer Center, Shanghai 200433, China
- Yan Zhang: Department of Radiology, Guangdong Maternal and Child Health Hospital, Guangzhou 510010, China
- Feng Zhao: School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Hao Liu: Yizhun Medical AI Co. Ltd., Beijing 100080, China
- Haicheng Zhang, Ning Mao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai 264000, China; Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai 264000, China
13
Taylor CR, Monga N, Johnson C, Hawley JR, Patel M. Artificial Intelligence Applications in Breast Imaging: Current Status and Future Directions. Diagnostics (Basel) 2023;13:2041. [PMID: 37370936] [DOI: 10.3390/diagnostics13122041]
Abstract
Attempts to use computers to aid in the detection of breast malignancies date back more than 20 years. Despite significant interest and investment, traditional computer-aided detection historically produced minimal or no significant improvement in performance and outcomes. However, recent advances in artificial intelligence and machine learning are now starting to deliver on the promise of improved performance. There are at present more than 20 FDA-approved AI applications for breast imaging, but adoption and utilization are widely variable and low overall. Breast imaging is unique and has aspects that create both opportunities and challenges for AI development and implementation. Breast cancer screening programs worldwide rely on screening mammography to reduce the morbidity and mortality of breast cancer, and many of the most exciting research projects and available AI applications focus on cancer detection for mammography. There are, however, multiple additional potential applications for AI in breast imaging, including decision support, risk assessment, breast density quantitation, workflow and triage, quality evaluation, assessment of response to neoadjuvant chemotherapy, and image enhancement. In this review, the current status, availability, and future directions of these applications are discussed, as well as the opportunities and barriers to more widespread utilization.
Affiliation(s)
- Clayton R Taylor, Natasha Monga, Candise Johnson, Jeffrey R Hawley, Mitva Patel: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA