1. Kim JM, Ha SM. Clinical Application of Artificial Intelligence in Breast MRI. Journal of the Korean Society of Radiology 2025;86:227-235. [PMID: 40201613] [PMCID: PMC11973112] [DOI: 10.3348/jksr.2025.0012]
Abstract
Breast MRI is the most sensitive imaging modality for detecting breast cancer. However, its widespread use is limited by factors such as extended examination times, the need for contrast agents, and susceptibility to motion artifacts. Artificial intelligence (AI) has emerged as a promising solution to these challenges by enhancing the efficiency and accuracy of breast MRI in multiple domains. AI-driven image reconstruction techniques have significantly reduced scan times while preserving image quality, outperforming traditional parallel imaging and compressed sensing. AI has also shown great promise for lesion classification and segmentation, with convolutional neural networks and U-Net architectures improving the differentiation between benign and malignant lesions. AI-based segmentation methods enable accurate tumor detection and characterization, thereby aiding personalized treatment planning. AI triaging systems have demonstrated the potential to streamline workflow by identifying low-suspicion cases and reducing the workload of radiologists. Another promising application is synthetic breast MR image generation, which aims to generate contrast-enhanced images from non-contrast sequences, thereby improving accessibility and patient safety. Further research is required to validate AI models across diverse populations and imaging protocols. As AI continues to evolve, it is expected to play an important role in the optimization of breast MRI.
2. Sun F, Zhang L, Tong Z. Application progress of artificial intelligence in tumor diagnosis and treatment. Front Artif Intell 2025;7:1487207. [PMID: 39845097] [PMCID: PMC11753238] [DOI: 10.3389/frai.2024.1487207]
Abstract
The rapid advancement of artificial intelligence (AI) has introduced transformative opportunities in oncology, enhancing the precision and efficiency of tumor diagnosis and treatment. This review examines recent advancements in AI applications across tumor imaging diagnostics, pathological analysis, and treatment optimization, with a particular focus on breast cancer, lung cancer, and liver cancer. By synthesizing findings from peer-reviewed studies published over the past decade, this paper analyzes the role of AI in enhancing diagnostic accuracy, streamlining therapeutic decision-making, and personalizing treatment strategies. Additionally, this paper addresses challenges related to AI integration into clinical workflows and regulatory compliance. As AI continues to evolve, its applications in oncology promise further improvements in patient outcomes, though additional research is needed to address its limitations and ensure ethical and effective deployment.
Affiliation(s)
- Fan Sun
- National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Tianjin, China
- Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Li Zhang
- National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Tianjin, China
- Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhongsheng Tong
- National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Tianjin, China
- Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
3. Gullo RL, Brunekreef J, Marcus E, Han LK, Eskreis-Winkler S, Thakur SB, Mann R, Lipman KG, Teuwen J, Pinker K. AI Applications to Breast MRI: Today and Tomorrow. J Magn Reson Imaging 2024;60:2290-2308. [PMID: 38581127] [PMCID: PMC11452568] [DOI: 10.1002/jmri.29358]
Abstract
In breast imaging, there is an unrelenting increase in the demand for breast imaging services, partly explained by continuously expanding imaging indications in breast diagnosis and treatment. As the human workforce providing these services is not growing at the same rate, the implementation of artificial intelligence (AI) in breast imaging has gained significant momentum to maximize workflow efficiency and increase productivity while concurrently improving diagnostic accuracy and patient outcomes. Thus far, the implementation of AI in breast imaging is most advanced for mammography and digital breast tomosynthesis, followed by ultrasound, whereas the implementation of AI in breast magnetic resonance imaging (MRI) is not moving along as rapidly due to the complexity of MRI examinations and fewer available datasets. Nevertheless, there is persisting interest in AI-enhanced breast MRI applications, even as the use of and indications for breast MRI continue to expand. This review presents an overview of the basic concepts of AI imaging analysis and subsequently reviews the use cases for AI-enhanced MRI interpretation, that is, breast MRI triaging and lesion detection, lesion classification, prediction of treatment response, risk assessment, and image quality. Finally, it provides an outlook on the barriers and facilitators for the adoption of AI in breast MRI. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 6.
Affiliation(s)
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Joren Brunekreef
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Eric Marcus
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Lynn K Han
- Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sunitha B Thakur
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ritse Mann
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Kevin Groot Lipman
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Jonas Teuwen
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
4. Mayfield JD, Ataya D, Abdalah M, Stringfield O, Bui MM, Raghunand N, Niell B, El Naqa I. Presurgical Upgrade Prediction of DCIS to Invasive Ductal Carcinoma Using Time-dependent Deep Learning Models with DCE MRI. Radiol Artif Intell 2024;6:e230348. [PMID: 38900042] [PMCID: PMC11427917] [DOI: 10.1148/ryai.230348]
Abstract
Purpose To determine whether time-dependent deep learning models can outperform single time point models in predicting preoperative upgrade of ductal carcinoma in situ (DCIS) to invasive malignancy at dynamic contrast-enhanced (DCE) breast MRI without a lesion segmentation prerequisite. Materials and Methods In this exploratory study, 154 cases of biopsy-proven DCIS (25 upgraded at surgery and 129 not upgraded) were selected consecutively from a retrospective cohort of preoperative DCE MRI in women with a mean age of 59 years at time of diagnosis from 2012 to 2022. Binary classification was implemented with convolutional neural network (CNN)-long short-term memory (LSTM) architectures benchmarked against traditional CNNs without manual segmentation of the lesions. Combinatorial performance analysis of ResNet50 versus VGG16-based models was performed with each contrast phase. Binary classification area under the receiver operating characteristic curve (AUC) was reported. Results VGG16-based models consistently provided better holdout test AUCs than did ResNet50 in CNN and CNN-LSTM studies (multiphase test AUC, 0.67 vs 0.59, respectively, for CNN models [P = .04] and 0.73 vs 0.62 for CNN-LSTM models [P = .008]). The time-dependent model (CNN-LSTM) provided a better multiphase test AUC over single time point (CNN) models (0.73 vs 0.67; P = .04). Conclusion Compared with single time point architectures, sequential deep learning algorithms using preoperative DCE MRI improved prediction of DCIS lesions upgraded to invasive malignancy without the need for lesion segmentation. Keywords: MRI, Dynamic Contrast-enhanced, Breast, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2024.
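The CNN-LSTM idea summarized above can be illustrated with a minimal sketch: a shared 2D backbone encodes each DCE contrast phase and an LSTM aggregates the phase sequence into a single upgrade probability. This is not the authors' code; the VGG16 backbone, the number of phases, the input size, and all other settings are assumptions for illustration only.

```python
# Minimal sketch (not the study code): a per-phase VGG16 feature extractor with an
# LSTM aggregating the DCE-MRI phase sequence for binary upgrade prediction.
import tensorflow as tf
from tensorflow.keras import layers, models

N_PHASES, H, W = 4, 224, 224  # assumed number of contrast phases and image size

backbone = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(H, W, 3), pooling="avg")
backbone.trainable = False  # transfer learning: freeze the convolutional features

inputs = layers.Input(shape=(N_PHASES, H, W, 3))
x = layers.TimeDistributed(backbone)(inputs)         # one feature vector per phase
x = layers.LSTM(64)(x)                               # temporal aggregation across phases
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # upgraded vs. not upgraded

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
```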
MESH Headings
- Humans
- Female
- Breast Neoplasms/diagnostic imaging
- Breast Neoplasms/pathology
- Breast Neoplasms/surgery
- Deep Learning
- Middle Aged
- Magnetic Resonance Imaging/methods
- Retrospective Studies
- Carcinoma, Intraductal, Noninfiltrating/diagnostic imaging
- Carcinoma, Intraductal, Noninfiltrating/pathology
- Carcinoma, Intraductal, Noninfiltrating/surgery
- Contrast Media
- Carcinoma, Ductal, Breast/diagnostic imaging
- Carcinoma, Ductal, Breast/pathology
- Carcinoma, Ductal, Breast/surgery
- Aged
- Adult
- Predictive Value of Tests
- Image Interpretation, Computer-Assisted/methods
- Breast/diagnostic imaging
- Breast/pathology
- Breast/surgery
Affiliation(s)
- John D Mayfield, Dana Ataya, Mahmoud Abdalah, Olya Stringfield, Marilyn M Bui, Natarajan Raghunand, Bethany Niell, Issam El Naqa
- From the Departments of Radiology (J.D.M.), Oncologic Sciences (D.A., M.M.B., N.R., B.N.), and Medical Engineering (J.D.M.), University of South Florida College of Medicine, 12901 Bruce B. Downs Blvd, Tampa, FL 33612; and Department of Diagnostic Imaging and Interventional Radiology (D.A., B.N.), Department of Pathology (M.M.B.), Department of Cancer Physiology (N.R.), Quantitative Imaging CORE (M.A., O.S., I.E.N.), and Department of Machine Learning (M.M.B., I.E.N.), H. Lee Moffitt Cancer Center and Research Institute, Tampa, Fla
5. Sureshkumar V, Prasad RSN, Balasubramaniam S, Jagannathan D, Daniel J, Dhanasekaran S. Breast Cancer Detection and Analytics Using Hybrid CNN and Extreme Learning Machine. J Pers Med 2024;14:792. [PMID: 39201984] [PMCID: PMC11355507] [DOI: 10.3390/jpm14080792]
Abstract
Early detection of breast cancer is essential for increasing survival rates, as it is one of the primary causes of death for women globally. Mammograms are extensively used by physicians for diagnosis, but selecting appropriate algorithms for image enhancement, segmentation, feature extraction, and classification remains a significant research challenge. This paper presents a computer-aided diagnosis (CAD)-based hybrid model combining convolutional neural networks (CNN) with a pruned ensembled extreme learning machine (HCPELM) to enhance breast cancer detection, segmentation, feature extraction, and classification. The model employs the rectified linear unit (ReLU) activation function after removing artifacts and pectoral muscles, and the HCPELM hybridized with the CNN model improves feature extraction. The hybrid elements are convolutional and fully connected layers: convolutional layers extract spatial features such as edges and textures, with more complex features in deeper layers, while the fully connected layers combine these features non-linearly to perform the final classification. The ELM carries out the classification and recognition tasks. This hybrid classifier is used for transfer learning by freezing certain layers and modifying the architecture to reduce parameters, easing cancer detection. The HCPELM classifier was trained using the MIAS database and evaluated against benchmark methods. It achieved a breast image recognition accuracy of 86%, outperforming benchmark deep learning models. HCPELM thus demonstrates superior performance in early detection and diagnosis, aiding healthcare practitioners in breast cancer diagnosis.
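As a hedged illustration of the extreme-learning-machine component described here (not the HCPELM implementation), the sketch below shows the core ELM idea: features pass through a fixed random hidden layer and only the output weights are solved in closed form with a pseudo-inverse. The feature dimension, hidden size, and stand-in data are assumptions.

```python
# Minimal ELM sketch: random hidden projection, closed-form output weights.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(features, labels, n_hidden=512):
    """Fit ELM output weights: H = relu(X W + b), beta = pinv(H) Y."""
    w = rng.normal(size=(features.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    h = np.maximum(features @ w + b, 0.0)   # ReLU hidden activations
    beta = np.linalg.pinv(h) @ labels       # closed-form least-squares solution
    return w, b, beta

def elm_predict(features, w, b, beta):
    h = np.maximum(features @ w + b, 0.0)
    return h @ beta

# Example with stand-in CNN features (in practice, pooled convolutional features).
X_train = rng.normal(size=(200, 1024))
y_train = rng.integers(0, 2, size=(200, 1))   # 0 = benign, 1 = malignant
w, b, beta = elm_fit(X_train, y_train)
pred = (elm_predict(X_train, w, b, beta) > 0.5).astype(int)
print("train accuracy:", (pred == y_train).mean())
```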
Affiliation(s)
- Vidhushavarshini Sureshkumar
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Vadapalani, Chennai 600026, India
- Dhayanithi Jagannathan
- Department of Computer Science and Engineering, Sona College of Technology, Salem 636005, India
- Jayanthi Daniel
- Department of Electronics and Communication Engineering, Rajalakshmi Engineering College, Chennai 602105, India
6. Zhao X, Liao Y, Xie J, He X, Zhang S, Wang G, Fang J, Lu H, Yu J. BreastDM: A DCE-MRI dataset for breast tumor image segmentation and classification. Comput Biol Med 2023;164:107255. [PMID: 37499296] [DOI: 10.1016/j.compbiomed.2023.107255]
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has shown high sensitivity for diagnosing breast cancer. However, few computer-aided algorithms employ DCE-MR images for breast cancer diagnosis because of the lack of publicly available DCE-MRI datasets. To address this issue, our work releases a new DCE-MRI dataset called BreastDM for breast tumor segmentation and classification. In particular, a dataset of 232 patients with DCE-MR images of benign and malignant cases is established. Each case consists of three types of sequences: pre-contrast, post-contrast, and subtraction sequences. To show the difficulty of breast DCE-MRI tumor image segmentation and classification tasks, benchmarks are provided using state-of-the-art image segmentation and classification algorithms, including conventional hand-crafted methods and recently emerged deep learning-based methods. More importantly, a local-global cross attention fusion network (LG-CAFN) is proposed to further improve the performance of breast tumor image classification. Specifically, LG-CAFN achieved the highest accuracy (88.20%, 83.93%) and AUC values (0.9154, 0.8826) in both groups of experiments. Extensive experiments are conducted to present strong baselines based on various typical image segmentation and classification algorithms, and the results also demonstrate the superiority of the proposed LG-CAFN over other breast tumor image classification methods. The related dataset and evaluation codes are publicly available at smallboy-code/Breast-cancer-dataset.
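A rough, assumption-laden sketch of the local-global fusion idea (not the LG-CAFN code): a lesion-crop branch and a whole-image branch exchange information through cross attention before a benign/malignant head. Input shapes, layer sizes, and the toy backbones are illustrative only.

```python
# Minimal stand-in for a local-global cross-attention fusion classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

def branch(name):
    inp = layers.Input(shape=(128, 128, 3), name=name)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(4)(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.Reshape((-1, 64))(x)               # flatten feature map to a token sequence
    return inp, x

local_in, local_tok = branch("local_crop")        # lesion crop
global_in, global_tok = branch("global_image")    # whole slice

# Cross attention in both directions: local tokens attend to global context and vice versa.
l2g = layers.MultiHeadAttention(num_heads=4, key_dim=16)(local_tok, global_tok)
g2l = layers.MultiHeadAttention(num_heads=4, key_dim=16)(global_tok, local_tok)
fused = layers.Concatenate()([layers.GlobalAveragePooling1D()(l2g),
                              layers.GlobalAveragePooling1D()(g2l)])
out = layers.Dense(1, activation="sigmoid")(fused)  # benign vs malignant

model = models.Model([local_in, global_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```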
Affiliation(s)
- Xiaoming Zhao
- Taizhou Central Hospital, Taizhou University, 318000, Taizhou, China; School of Computer Science, Hangzhou Dianzi University, Hangzhou, 310018, China
- Yuehui Liao
- Taizhou Central Hospital, Taizhou University, 318000, Taizhou, China; School of Computer Science, Hangzhou Dianzi University, Hangzhou, 310018, China
- Jiahao Xie
- Taizhou Central Hospital, Taizhou University, 318000, Taizhou, China; School of Computer Science, Hangzhou Dianzi University, Hangzhou, 310018, China
- Xiaxia He
- Taizhou Central Hospital, Taizhou University, 318000, Taizhou, China
- Shiqing Zhang
- Taizhou Central Hospital, Taizhou University, 318000, Taizhou, China; School of Computer Science, Hangzhou Dianzi University, Hangzhou, 310018, China
- Guoyu Wang
- Taizhou Central Hospital, Taizhou University, 318000, Taizhou, China
- Jiangxiong Fang
- Taizhou Central Hospital, Taizhou University, 318000, Taizhou, China
- Hongsheng Lu
- Taizhou Central Hospital, Taizhou University, 318000, Taizhou, China
- Jun Yu
- School of Computer Science, Hangzhou Dianzi University, Hangzhou, 310018, China
7. Jin N, Qiao B, Zhao M, Li L, Zhu L, Zang X, Gu B, Zhang H. Predicting cervical lymph node metastasis in OSCC based on computed tomography imaging genomics. Cancer Med 2023;12:19260-19271. [PMID: 37635388] [PMCID: PMC10557859] [DOI: 10.1002/cam4.6474]
Abstract
BACKGROUND To investigate the correlation between computed tomography (CT) radiomic characteristics and key genes for cervical lymph node metastasis (LNM) in oral squamous cell carcinoma (OSCC). METHODS The region of interest was annotated at the edge of the primary tumor on enhanced CT images from 140 patients with OSCC, and radiomic features were extracted. Ribonucleic acid (RNA) sequencing was performed on pathological sections from 20 patients. The DESeq software package was used to compare differential gene expression between groups. Weighted gene co-expression network analysis was used to construct co-expressed gene modules, and the KEGG and GO databases were used for pathway enrichment analysis of key gene modules. Finally, Pearson correlation coefficients were calculated between key genes of enriched pathways and radiomic features. RESULTS Four hundred and eighty radiomic features were extracted from the enhanced CT images of 140 patients; seven of these correlated significantly with cervical LNM in OSCC (p < 0.01). A total of 3527 differentially expressed RNAs were screened from the RNA sequencing data of 20 cases. original_glrlm_RunVariance showed a significant positive correlation with most long noncoding RNAs. CONCLUSIONS OSCC cervical LNM is related to the salivary hair bump signaling pathway and biological process. original_glrlm_RunVariance correlated with LNM and with most differentially expressed long noncoding RNAs.
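The final correlation step can be illustrated with a minimal, purely illustrative sketch: Pearson coefficients between one radiomic feature (named here after original_glrlm_RunVariance) and per-gene expression values. The random arrays stand in for the study's matched 20-case data; nothing below reproduces the actual analysis.

```python
# Illustrative only: correlate one radiomic feature with per-gene expression values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_cases, n_genes = 20, 100
run_variance = rng.normal(size=n_cases)            # radiomic feature value per case
expression = rng.normal(size=(n_cases, n_genes))   # cases x genes expression matrix

results = []
for g in range(n_genes):
    r, p = stats.pearsonr(run_variance, expression[:, g])
    results.append((g, r, p))

# Report the genes most strongly (positively) correlated with the radiomic feature.
top = sorted(results, key=lambda t: t[1], reverse=True)[:5]
for g, r, p in top:
    print(f"gene_{g}: r={r:+.2f}, p={p:.3f}")
```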
Affiliation(s)
- Nenghao Jin
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
- Bo Qiao
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
- Min Zhao
- Pharmaceutical Diagnostics, GE Healthcare, Beijing, China
- Research Center of Medical Big Data, Chinese PLA General Hospital, Beijing, China
- Liangbo Li
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
- Liang Zhu
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
- Xiaoyi Zang
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
- Bin Gu
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
- Haizhong Zhang
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
8. Champendal M, Marmy L, Malamateniou C, Sá Dos Reis C. Artificial intelligence to support person-centred care in breast imaging - A scoping review. J Med Imaging Radiat Sci 2023;54:511-544. [PMID: 37183076] [DOI: 10.1016/j.jmir.2023.04.001]
Abstract
AIM To overview Artificial Intelligence (AI) developments and applications in breast imaging (BI) focused on providing person-centred care in the diagnosis and treatment of breast pathologies. METHODS The scoping review was conducted in accordance with the Joanna Briggs Institute methodology. The search was conducted on MEDLINE, Embase, CINAHL, Web of Science, IEEE Xplore and arXiv during July 2022 and included only studies published after 2016, in French and English. Combinations of keywords and Medical Subject Headings (MeSH) terms related to breast imaging and AI were used. No keywords or MeSH terms related to patients or the person-centred care (PCC) concept were included. Three independent reviewers screened all abstracts and titles, and all eligible full-text publications during a second stage. RESULTS 3417 results were identified by the search and 106 studies met all inclusion criteria. Six themes relating to AI-enabled PCC in BI were identified: individualised risk prediction/growth and prediction/false-negative reduction (44.3%), treatment assessment (32.1%), tumour type prediction (11.3%), reduction of unnecessary biopsies (5.7%), patients' preferences (2.8%) and other issues (3.8%). The main BI modalities explored in the included studies were magnetic resonance imaging (MRI) (31.1%), mammography (27.4%) and ultrasound (23.6%). The studies were predominantly retrospective, and some variations (age range, data source, race, medical imaging) were present in the datasets used. CONCLUSIONS AI tools for person-centred care are mainly designed for risk and cancer prediction and for disease management to identify the most suitable treatment. However, further studies are needed on image acquisition optimisation for different patient groups, on improving and customising the patient experience, and on communicating to patients the options and pathways of disease management.
Affiliation(s)
- Mélanie Champendal
- School of Health Sciences HESAV, HES-SO; University of Applied Sciences Western Switzerland: Lausanne, CH.
- Laurent Marmy
- School of Health Sciences HESAV, HES-SO; University of Applied Sciences Western Switzerland: Lausanne, CH.
- Christina Malamateniou
- School of Health Sciences HESAV, HES-SO; University of Applied Sciences Western Switzerland: Lausanne, CH; Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, University of London, London, UK.
- Cláudia Sá Dos Reis
- School of Health Sciences HESAV, HES-SO; University of Applied Sciences Western Switzerland: Lausanne, CH.
9. Sujatha R, Chatterjee JM, Angelopoulou A, Kapetanios E, Srinivasu PN, Hemanth DJ. A transfer learning-based system for grading breast invasive ductal carcinoma. IET Image Processing 2023;17:1979-1990. [DOI: 10.1049/ipr2.12660]
Abstract
Breast carcinoma is a malignancy that begins in the breast. Breast cancer cells generally form a tumour that can often be seen on an x-ray or felt as a lump. Despite advances in screening, treatment, and surveillance that have improved patient survival rates, breast carcinoma is the most frequently diagnosed cancer and the second leading cause of cancer mortality among women. Invasive ductal carcinoma is the most widespread form of breast cancer, accounting for about 80% of all diagnosed cases. Numerous studies have shown that artificial intelligence has tremendous capabilities, which is why it is used in various sectors, especially in the healthcare domain. In the initial phase of the diagnostic workflow, mammography is used, and finding cancer in a dense breast is challenging. The evolution of deep learning and its application to these findings are helpful for earlier detection and treatment. In the present work, the authors apply deep learning concepts to grading breast invasive ductal carcinoma using transfer learning. Five transfer learning approaches were used, namely VGG16, VGG19, InceptionResNetV2, DenseNet121, and DenseNet201, trained for 50 epochs on the Google Colab platform, which provides a single 12 GB NVIDIA Tesla K80 graphical processing unit (GPU) that can be used for up to 12 h continuously. The dataset used for this work can be openly accessed from http://databiox.com. The accuracies obtained were: VGG16, 92.5%; VGG19, 89.77%; InceptionResNetV2, 84.46%; DenseNet121, 92.64%; DenseNet201, 85.22%. From these results, it is clear that DenseNet121 gives the maximum accuracy for cancer grading, whereas InceptionResNetV2 has the lowest.
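A minimal sketch of the transfer-learning setup described above (not the authors' Colab notebook): several ImageNet-pretrained backbones are reused as frozen feature extractors with a small softmax head for three-grade classification. The input size, head design, and three-class assumption are illustrative.

```python
# Minimal sketch: compare several pretrained backbones as frozen feature extractors.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

BACKBONES = {
    "VGG16": applications.VGG16,
    "VGG19": applications.VGG19,
    "InceptionResNetV2": applications.InceptionResNetV2,
    "DenseNet121": applications.DenseNet121,
    "DenseNet201": applications.DenseNet201,
}

def build_classifier(name, input_shape=(224, 224, 3), n_classes=3):
    base = BACKBONES[name](include_top=False, weights="imagenet",
                           input_shape=input_shape, pooling="avg")
    base.trainable = False                       # reuse ImageNet features as-is
    inputs = layers.Input(shape=input_shape)
    x = base(inputs, training=False)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)   # grades I-III
    model = models.Model(inputs, outputs, name=name)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

for name in BACKBONES:
    print(name, build_classifier(name).count_params(), "parameters")
```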
Affiliation(s)
- Epaminondas Kapetanios
- School of Physics, Engineering and Computer Science, University of Hertfordshire, Hertfordshire, UK
10. Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023;1878:188864. [PMID: 36822377] [DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
11. Do LN, Lee HJ, Im C, Park JH, Lim HS, Park I. Predicting Underestimation of Invasive Cancer in Patients with Core-Needle-Biopsy-Diagnosed Ductal Carcinoma In Situ Using Deep Learning Algorithms. Tomography 2022;9:1-11. [PMID: 36648988] [PMCID: PMC9844271] [DOI: 10.3390/tomography9010001]
Abstract
The prediction of an occult invasive component in ductal carcinoma in situ (DCIS) before surgery is of clinical importance because the treatment strategies are different between pure DCIS without invasive component and upgraded DCIS. We demonstrated the potential of using deep learning models for differentiating between upgraded versus pure DCIS in DCIS diagnosed by core-needle biopsy. Preoperative axial dynamic contrast-enhanced magnetic resonance imaging (MRI) data from 352 lesions were used to train, validate, and test three different types of deep learning models. The highest performance was achieved by Recurrent Residual Convolutional Neural Network using Regions of Interest (ROIs) with an accuracy of 75.0% and area under the receiver operating characteristic curve (AUC) of 0.796. Our results suggest that the deep learning approach may provide an assisting tool to predict the histologic upgrade of DCIS and provide personalized treatment strategies to patients with underestimated invasive disease.
Affiliation(s)
- Luu-Ngoc Do
- Department of Radiology, Chonnam National University, 42 Jebong-ro, Dong-gu, Gwangju 61469, Republic of Korea
- Hyo-Jae Lee
- Department of Radiology, Chonnam National University Hospital, 42 Jebong-ro, Dong-gu, Gwangju 61469, Republic of Korea
- Chaeyeong Im
- Department of Medicine, Chonnam National University, Gwangju 61469, Republic of Korea
- Jae Hyeok Park
- Department of Medicine, Chonnam National University, Gwangju 61469, Republic of Korea
- Hyo Soon Lim
- Department of Radiology, Chonnam National University, 42 Jebong-ro, Dong-gu, Gwangju 61469, Republic of Korea
- Department of Radiology, Chonnam National University Hwasun Hospital, Gwangju 58128, Republic of Korea
- Ilwoo Park
- Department of Radiology, Chonnam National University, 42 Jebong-ro, Dong-gu, Gwangju 61469, Republic of Korea
- Department of Radiology, Chonnam National University Hospital, 42 Jebong-ro, Dong-gu, Gwangju 61469, Republic of Korea
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Republic of Korea
- Department of Data Science, Chonnam National University, Gwangju 61186, Republic of Korea
12. Lee HJ, Park JH, Nguyen AT, Do LN, Park MH, Lee JS, Park I, Lim HS. Prediction of the histologic upgrade of ductal carcinoma in situ using a combined radiomics and machine learning approach based on breast dynamic contrast-enhanced magnetic resonance imaging. Front Oncol 2022;12:1032809. [PMID: 36408141] [PMCID: PMC9667063] [DOI: 10.3389/fonc.2022.1032809]
Abstract
Objective To investigate whether support vector machine (SVM) trained with radiomics features based on breast magnetic resonance imaging (MRI) could predict the upgrade of ductal carcinoma in situ (DCIS) diagnosed by core needle biopsy (CNB) after surgical excision. Materials and methods This retrospective study included a total of 349 lesions from 346 female patients (mean age, 54 years) diagnosed with DCIS by CNB between January 2011 and December 2017. Based on histological confirmation after surgery, the patients were divided into pure (n = 198, 56.7%) and upgraded DCIS (n = 151, 43.3%). The entire dataset was randomly split to training (80%) and test sets (20%). Radiomics features were extracted from the intratumor region-of-interest, which was semi-automatically drawn by two radiologists, based on the first subtraction images from dynamic contrast-enhanced T1-weighted MRI. A least absolute shrinkage and selection operator (LASSO) was used for feature selection. A 4-fold cross validation was applied to the training set to determine the combination of features used to train SVM for classification between pure and upgraded DCIS. Sensitivity, specificity, accuracy, and area under the receiver-operating characteristic curve (AUC) were calculated to evaluate the model performance using the hold-out test set. Results The model trained with 9 features (Energy, Skewness, Surface Area to Volume ratio, Gray Level Non Uniformity, Kurtosis, Dependence Variance, Maximum 2D diameter Column, Sphericity, and Large Area Emphasis) demonstrated the highest 4-fold mean validation accuracy and AUC of 0.724 (95% CI, 0.619-0.829) and 0.742 (0.623-0.860), respectively. Sensitivity, specificity, accuracy, and AUC using the test set were 0.733 (0.575-0.892) and 0.7 (0.558-0.842), 0.714 (0.608-0.820) and 0.767 (0.651-0.882), respectively. Conclusion Our study suggested that the combined radiomics and machine learning approach based on preoperative breast MRI may provide an assisting tool to predict the histologic upgrade of DCIS.
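A minimal sketch of the described pipeline (not the study code): LASSO-based feature ranking feeds an SVM that is evaluated with 4-fold cross-validation. The random feature table, the LASSO alpha, and the choice to keep the nine highest-weighted features (matching the nine-feature model reported above) are assumptions.

```python
# Minimal sketch: LASSO feature selection + SVM, 4-fold cross-validated ROC AUC.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(279, 107))       # stand-in radiomics table: ~80% training split, assumed feature count
y = rng.integers(0, 2, size=279)      # 0 = pure DCIS, 1 = upgraded DCIS

pipeline = make_pipeline(
    StandardScaler(),
    # Rank features by LASSO coefficient magnitude and keep the 9 strongest.
    SelectFromModel(Lasso(alpha=0.01), threshold=-np.inf, max_features=9),
    SVC(kernel="rbf", probability=True),
)
aucs = cross_val_score(pipeline, X, y, cv=4, scoring="roc_auc")
print("4-fold validation AUC: %.3f +/- %.3f" % (aucs.mean(), aucs.std()))
```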
Affiliation(s)
- Hyo-jae Lee
- Department of Radiology, Chonnam National University Hospital, Gwangju, South Korea
- Jae Hyeok Park
- Department of Medicine, Chonnam National University, Gwangju, South Korea
- Anh-Tien Nguyen
- Department of Radiology, Chonnam National University Hospital, Gwangju, South Korea
- Luu-Ngoc Do
- Department of Radiology, Chonnam National University, Gwangju, South Korea
- Min Ho Park
- Department of Medicine, Chonnam National University, Gwangju, South Korea
- Department of Surgery, Chonnam National University Hwasun Hospital, Hwasun, South Korea
- Ji Shin Lee
- Department of Medicine, Chonnam National University, Gwangju, South Korea
- Department of Pathology, Chonnam National University Hwasun Hospital, Hwasun, South Korea
- Ilwoo Park
- Department of Radiology, Chonnam National University Hospital, Gwangju, South Korea
- Department of Radiology, Chonnam National University, Gwangju, South Korea
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, South Korea
- Department of Data Science, Chonnam National University, Gwangju, South Korea
- Hyo Soon Lim
- Department of Radiology, Chonnam National University, Gwangju, South Korea
- Department of Radiology, Chonnam National University Hwasun Hospital, Hwasun, South Korea
13.
14. Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022;22:69. [PMID: 35418051] [PMCID: PMC9007400] [DOI: 10.1186/s12880-022-00793-7]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English, up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The remaining studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
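The two most common configurations discussed in this review can be contrasted in a short sketch, assuming a ResNet50 backbone and Keras-style freezing: the "feature extractor" approach keeps the pretrained backbone fixed, while "fine-tuning" unfreezes part of it with a smaller learning rate. The frozen depth and learning rates are arbitrary illustrative choices.

```python
# Minimal sketch: "feature extractor" vs "fine-tuning" transfer-learning setups.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

def build_tl_model(mode, n_classes=2, input_shape=(224, 224, 3)):
    base = applications.ResNet50(include_top=False, weights="imagenet",
                                 input_shape=input_shape, pooling="avg")
    if mode == "feature_extractor":
        base.trainable = False                 # reuse ImageNet features unchanged
        lr = 1e-3
    elif mode == "fine_tuning":
        base.trainable = True                  # adapt pretrained weights
        for layer in base.layers[:-30]:        # keep early layers frozen (assumed depth)
            layer.trainable = False
        lr = 1e-5                              # small LR to avoid destroying pretrained features
    else:
        raise ValueError(mode)

    inputs = layers.Input(shape=input_shape)
    x = base(inputs, training=False)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

for mode in ("feature_extractor", "fine_tuning"):
    m = build_tl_model(mode)
    n_trainable = sum(int(tf.size(w)) for w in m.trainable_weights)
    print(f"{mode}: {n_trainable:,} trainable parameters")
```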
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany.
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
15. Tomographic Ultrasound Imaging in the Diagnosis of Breast Tumors under the Guidance of Deep Learning Algorithms. Comput Intell Neurosci 2022;2022:9227440. [PMID: 35265119] [PMCID: PMC8901319] [DOI: 10.1155/2022/9227440]
Abstract
This study aimed to assess the feasibility of distinguishing benign from malignant breast tumors with tomographic ultrasound imaging (TUI) segmented by a deep learning algorithm. The deep learning algorithm was used to segment the images, and 120 patients with breast tumors were included in this study, all of whom underwent routine ultrasound examinations. Subsequently, TUI was used to assist in guiding the positioning, and a light scattering tomography system was used to further measure the lesions. A deep learning model was established to process the imaging results, and the pathological test results were taken as the gold standard for evaluating the diagnostic performance of the different imaging methods for breast tumors. The results showed that, among the 120 patients with breast tumors, 56 had benign lesions and 64 had malignant lesions. The average total amount of hemoglobin (HBT) of malignant lesions was significantly higher than that of benign lesions (P < 0.05). The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of TUI in the diagnosis of breast cancer were 90.4%, 75.6%, 81.4%, 84.7%, and 80.6%, respectively; the corresponding values for ultrasound were 81.7%, 64.9%, 70.5%, 75.9%, and 80.6%. In addition, for suspected malignant breast lesions, the combined application of ultrasound and tomography increased the diagnostic specificity to 82.1% and the accuracy to 83.8%. Based on these results, it was concluded that TUI combined with ultrasound has a significant effect on the benign/malignant diagnosis of breast cancer and can markedly improve the specificity and accuracy of diagnosis. The findings also indicate that deep learning technology plays a useful auxiliary role in disease examination and is worth promoting in clinical application.
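A worked illustration of how the reported diagnostic metrics relate to a 2x2 confusion matrix against the pathological gold standard. The cell counts below are hypothetical (they only respect the 64 malignant / 56 benign split) and are not taken from the study.

```python
# Worked example: sensitivity, specificity, accuracy, PPV and NPV from a 2x2 table.
# Hypothetical counts; rows = pathology truth, columns = imaging call.
tp, fn = 58, 6      # malignant lesions called positive / negative
fp, tn = 14, 42     # benign lesions called positive / negative

sensitivity = tp / (tp + fn)               # proportion of malignant lesions detected
specificity = tn / (tn + fp)               # proportion of benign lesions correctly cleared
accuracy    = (tp + tn) / (tp + tn + fp + fn)
ppv         = tp / (tp + fp)               # positive predictive value
npv         = tn / (tn + fn)               # negative predictive value

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("accuracy", accuracy), ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {value:.1%}")
```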
16. Parida PK, Dora L, Swain M, Agrawal S, Panda R. Data science methodologies in smart healthcare: a review. Health and Technology 2022. [DOI: 10.1007/s12553-022-00648-9]
17. Yin XX, Hadjiloucas S, Zhang Y, Tian Z. MRI radiogenomics for intelligent diagnosis of breast tumors and accurate prediction of neoadjuvant chemotherapy responses-a review. Comput Methods Programs Biomed 2022;214:106510. [PMID: 34852935] [DOI: 10.1016/j.cmpb.2021.106510]
Abstract
BACKGROUND AND OBJECTIVE This paper aims to overview multidimensional mining algorithms in relation to Magnetic Resonance Imaging (MRI) radiogenomics for computer-aided detection and diagnosis of breast tumours. The work also addresses a new problem in radiogenomics mining: how to combine structural radiomics information with non-structural genomics information to improve the accuracy and efficacy of neoadjuvant chemotherapy (NAC) prediction. METHODS This requires the automated extraction of parameters from non-structural breast radiomics data, and finding feature vectors with diagnostic value, which are then combined with genomics data. To address the problem of weakly labelled tumour images, a Generative Adversarial Network (GAN)-based deep learning strategy is proposed for the classification of tumour types; this has significant potential for providing accurate real-time identification of tumorous regions from MRI scans. To efficiently integrate, within a deep learning framework, different features from radiogenomics datasets at multiple spatio-temporal resolutions, pyramid-structured and multi-scale densely connected U-Nets are proposed. A bidirectional gated recurrent unit (BiGRU) combined with an attention-based deep learning approach is also proposed. RESULTS The aim is to accurately predict NAC responses by combining imaging and genomic datasets. The approaches discussed incorporate some of the latest developments in signal processing and artificial intelligence and have significant potential to advance the field and to provide a development platform for future cutting-edge biomedical radiogenomics analysis. CONCLUSIONS The association of genotypic and phenotypic features is at the core of the emergent field of Precision Medicine. It makes use of advances in biomedical big data analysis, which enable the correlation between disease-associated phenotypic characteristics, genetic polymorphisms, and gene activation to be revealed.
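A minimal sketch of the BiGRU-with-attention idea mentioned above (an illustration, not a published model): a bidirectional GRU encodes a sequence of per-phase feature vectors and an additive attention weighting pools the sequence before predicting NAC response. Sequence length, feature size, and all hyperparameters are assumptions.

```python
# Minimal sketch: BiGRU encoder with additive attention pooling for NAC response.
import tensorflow as tf
from tensorflow.keras import layers, models

N_PHASES, FEAT_DIM = 6, 128                     # assumed sequence length / feature size

inputs = layers.Input(shape=(N_PHASES, FEAT_DIM))
h = layers.Bidirectional(layers.GRU(64, return_sequences=True))(inputs)

# Additive attention: score each time step, softmax-normalise, take the weighted sum.
scores = layers.Dense(1)(h)                     # (batch, phases, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Dot(axes=1)([weights, h])      # weighted sum over the phase axis
context = layers.Flatten()(context)

outputs = layers.Dense(1, activation="sigmoid")(context)  # responder vs non-responder
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```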
Affiliation(s)
- Xiao-Xia Yin
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China.
- Sillas Hadjiloucas
- Department of Biomedical Engineering, The University of Reading, RG6 6AY, UK
- Yanchun Zhang
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
- Zhihong Tian
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
18. Li H, Qiu L, Wang M. Informed Attentive Predictors: A Generalisable Architecture for Prior Knowledge-Based Assisted Diagnosis of Cancers. Sensors 2021;21:6484. [PMID: 34640802] [PMCID: PMC8512568] [DOI: 10.3390/s21196484]
Abstract
Due to the high mortality of many cancers and their related diseases, the prediction and prognosis techniques of cancers are being extensively studied to assist doctors in making diagnoses. Many machine-learning-based cancer predictors have been put forward, but many of them have failed to become widely utilised due to some crucial problems. For example, most methods require too much training data, which is not always applicable to institutes, and the complicated genetic mutual effects of cancers are generally ignored in many proposed methods. Moreover, a majority of these assist models are actually not safe to use, as they are generally built on black-box machine learners that lack references from related field knowledge. We observe that few machine-learning-based cancer predictors are capable of employing prior knowledge (PrK) to mitigate these issues. Therefore, in this paper, we propose a generalisable informed machine learning architecture named the Informed Attentive Predictor (IAP) to make PrK available to the predictor’s decision-making phases and apply it to the field of cancer prediction. Specifically, we make several implementations of the IAP and evaluate its performance on six TCGA datasets to demonstrate the effectiveness of our architecture as an assist system framework for actual clinical usage. The experimental results show a noticeable improvement in IAP models on accuracies, f1-scores and recall rates compared to their non-IAP counterparts (i.e., basic predictors).
19. Franceschini G, Mason EJ, Orlandi A, D'Archi S, Sanchez AM, Masetti R. How will artificial intelligence impact breast cancer research efficiency? Expert Rev Anticancer Ther 2021;21:1067-1070. [PMID: 34214007] [DOI: 10.1080/14737140.2021.1951240]
Affiliation(s)
- Gianluca Franceschini
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Elena Jane Mason
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Armando Orlandi
- Division of Medical Oncology, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Sabatino D'Archi
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Alejandro Martin Sanchez
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Riccardo Masetti
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
20. Grimm LJ. Radiomics: A Primer for Breast Radiologists. J Breast Imaging 2021;3:276-287. [PMID: 38424774] [DOI: 10.1093/jbi/wbab014]
Abstract
Radiomics has a long-standing history in breast imaging with computer-aided detection (CAD) for screening mammography developed in the late 20th century. Although conventional CAD had widespread adoption, the clinical benefits for experienced breast radiologists were debatable due to high false-positive marks and subsequent increased recall rates. The dramatic growth in recent years of artificial intelligence-based analysis, including machine learning and deep learning, has provided numerous opportunities for improved modern radiomics work in breast imaging. There has been extensive radiomics work in mammography, digital breast tomosynthesis, MRI, ultrasound, PET-CT, and combined multimodality imaging. Specific radiomics outcomes of interest have been diverse, including CAD, prediction of response to neoadjuvant therapy, lesion classification, and survival, among other outcomes. Additionally, the radiogenomics subfield that correlates radiomics features with genetics has been very proliferative, in parallel with the clinical validation of breast cancer molecular subtypes and gene expression assays. Despite the promise of radiomics, there are important challenges related to image normalization, limited large unbiased data sets, and lack of external validation. Much of the radiomics work to date has been exploratory using single-institution retrospective series for analysis, but several promising lines of investigation have made the leap to clinical practice with commercially available products. As a result, breast radiologists will increasingly be incorporating radiomics-based tools into their daily practice in the near future. Therefore, breast radiologists must have a broad understanding of the scope, applications, and limitations of radiomics work.
Affiliation(s)
- Lars J Grimm
- Duke University, Department of Radiology, Durham, NC, USA
21
Eskreis-Winkler S, Onishi N, Pinker K, Reiner JS, Kaplan J, Morris EA, Sutton EJ. Using Deep Learning to Improve Nonsystematic Viewing of Breast Cancer on MRI. JOURNAL OF BREAST IMAGING 2021; 3:201-207. [PMID: 38424820 DOI: 10.1093/jbi/wbaa102] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Indexed: 03/02/2024]
Abstract
OBJECTIVE To investigate the feasibility of using deep learning to identify tumor-containing axial slices on breast MRI images. METHODS This IRB-approved retrospective study included consecutive patients with operable invasive breast cancer undergoing pretreatment breast MRI between January 1, 2014, and December 31, 2017. Axial tumor-containing slices from the first postcontrast phase were extracted. Each axial image was subdivided into two subimages: one of the ipsilateral cancer-containing breast and one of the contralateral healthy breast. Cases were randomly divided into training, validation, and testing sets. A convolutional neural network was trained to classify subimages into "cancer" and "no cancer" categories. Accuracy, sensitivity, and specificity of the classification system were determined using pathology as the reference standard. A two-reader study was performed to measure the time savings of the deep learning algorithm using descriptive statistics. RESULTS Two hundred and seventy-three patients with unilateral breast cancer met study criteria. On the held-out test set, accuracy of the deep learning system for tumor detection was 92.8% (648/706; 95% confidence interval: 89.7%-93.8%). Sensitivity and specificity were 89.5% and 94.3%, respectively. Readers spent 3 to 45 seconds to scroll to the tumor-containing slices without use of the deep learning algorithm. CONCLUSION In breast MR exams containing breast cancer, deep learning can be used to identify the tumor-containing slices. This technology may be integrated into the picture archiving and communication system to bypass scrolling when viewing stacked images, which can be helpful during nonsystematic image viewing, such as during interdisciplinary tumor board meetings.
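As a rough illustration of this kind of per-slice triage, the sketch below defines a deliberately small PyTorch CNN that emits one "cancer" logit per subimage, plus a helper that computes accuracy, sensitivity, and specificity from binary predictions. The architecture, the 128 x 128 input size, and the 0.5 threshold are assumptions made for this example and do not reproduce the published model.

```python
import torch
import torch.nn as nn

# A small CNN for illustrating per-slice "cancer" vs. "no cancer" classification
# of breast-MRI subimages; the published model and training details differ.
class SliceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)               # single logit for the "cancer" class

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity from 0/1 tensors."""
    tp = ((y_pred == 1) & (y_true == 1)).sum().item()
    tn = ((y_pred == 0) & (y_true == 0)).sum().item()
    fp = ((y_pred == 1) & (y_true == 0)).sum().item()
    fn = ((y_pred == 0) & (y_true == 1)).sum().item()
    return {
        "accuracy": (tp + tn) / max(tp + tn + fp + fn, 1),
        "sensitivity": tp / max(tp + fn, 1),
        "specificity": tn / max(tn + fp, 1),
    }

# Smoke test on random data (stands in for held-out subimages)
model = SliceClassifier()
x = torch.randn(8, 1, 128, 128)
y_true = torch.randint(0, 2, (8,))
y_pred = (torch.sigmoid(model(x)) > 0.5).long()
print(binary_metrics(y_true, y_pred))
```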
Affiliation(s)
- Natsuko Onishi
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
- University of California, Department of Radiology, San Francisco, CA
- Katja Pinker
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
- Jeffrey S Reiner
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
- Jennifer Kaplan
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
- Elizabeth A Morris
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
- Elizabeth J Sutton
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
22
Li J, Wang W, Liao L, Liu X. Analysis of the nonperfused volume ratio of adenomyosis from MRI images based on fewshot learning. Phys Med Biol 2021; 66:045019. [PMID: 33361557 DOI: 10.1088/1361-6560/abd66b] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
The nonperfused volume (NPV) ratio is key to the success of high-intensity focused ultrasound (HIFU) ablation treatment of adenomyosis. However, there are no qualitative interpretation standards for predicting the NPV ratio of adenomyosis using magnetic resonance imaging (MRI) before HIFU ablation treatment, which leads to inter-reader variability. Convolutional neural networks (CNNs) have achieved state-of-the-art performance in automatic disease diagnosis from MRI. Because HIFU treatment of adenomyosis is a novel therapy, there are not enough MRI data to support conventional CNN training. We proposed a novel few-shot learning framework that extends CNNs to predict the NPV ratio of HIFU ablation treatment for adenomyosis. We collected a dataset from 208 patients with adenomyosis who underwent MRI examination before and after HIFU treatment. Our proposed method was trained and evaluated by fourfold cross-validation. The framework obtained sensitivities of 85.6%, 89.6%, and 92.8% at 0.799, 0.980, and 1.180 false positives per patient. In the receiver operating characteristic analysis of the NPV ratio of adenomyosis, our proposed method achieved areas under the curve of 0.8233, 0.8289, 0.8412, 0.8319, 0.7010, 0.7637, 0.8375, 0.8219, 0.8207, and 0.9812 for the classification of the NPV ratio intervals [0%-10%), [10%-20%), …, [90%-100%], respectively. The present study demonstrates that few-shot learning for NPV ratio prediction of HIFU ablation treatment of adenomyosis may contribute to the selection of eligible patients and the pre-judgment of clinical efficacy.
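The abstract does not spell out the framework, so the sketch below shows one common few-shot building block, nearest-prototype classification as used in prototypical networks, applied to generic embedding vectors. The function name prototype_classify, the 16-dimensional embeddings, and the toy episode are assumptions for illustration; the published method operates on MR images and predicts NPV-ratio intervals.

```python
import torch

def prototype_classify(support_emb, support_lbl, query_emb):
    """Nearest-prototype classification, the core step of prototypical networks,
    a common few-shot learning approach.

    support_emb: (n_support, d) embeddings with labels support_lbl: (n_support,)
    query_emb:   (n_query, d) embeddings to classify.
    """
    classes = support_lbl.unique()
    # Class prototype = mean embedding of that class's support examples
    protos = torch.stack([support_emb[support_lbl == c].mean(0) for c in classes])
    # Assign each query to the nearest prototype (Euclidean distance)
    dists = torch.cdist(query_emb, protos)          # (n_query, n_classes)
    return classes[dists.argmin(dim=1)]

# Toy episode: 2 classes, 5 support shots each, 3 queries, 16-d embeddings
torch.manual_seed(0)
support_emb = torch.randn(10, 16)
support_lbl = torch.tensor([0] * 5 + [1] * 5)
query_emb = torch.randn(3, 16)
print(prototype_classify(support_emb, support_lbl, query_emb))
```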
Affiliation(s)
- Jiaqi Li
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, People's Republic of China
- Wei Wang
- Department of Ultrasound, Chinese PLA General Hospital, Beijing, People's Republic of China
- Lejian Liao
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, People's Republic of China
- Xin Liu
- Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China
23
Qian L, Lv Z, Zhang K, Wang K, Zhu Q, Zhou S, Chang C, Tian J. Application of deep learning to predict underestimation in ductal carcinoma in situ of the breast with ultrasound. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:295. [PMID: 33708922 PMCID: PMC7944276 DOI: 10.21037/atm-20-3981] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Background To develop an ultrasound-based deep learning model to predict postoperative upgrading of pure ductal carcinoma in situ (DCIS) diagnosed by core needle biopsy (CNB) before surgery. Methods Of the 360 patients with DCIS diagnosed by CNB and identified retrospectively, 180 had lesions upstaged to ductal carcinoma in situ with microinvasion (DCISM) or invasive ductal carcinoma (IDC) postoperatively. Ultrasound images obtained from the hospital database were divided into a training set (n=240) and a validation set (n=120), a 2:1 ratio in chronological order. Four deep learning models, based on the ResNet and VGGNet architectures, were established to classify the ultrasound images into postoperative upgrade and pure DCIS. We calculated the area under the receiver operating characteristic curve (AUROC), specificity, sensitivity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) to estimate the performance of the predictive models. The robustness of the models was evaluated by threefold cross-validation. Results Clinical features were not significantly different between the training set and the test set (P value >0.05). The AUROC of our models ranged from 0.724 to 0.804. The sensitivity, specificity, and accuracy of the optimal model were 0.733, 0.750, and 0.742, respectively. The threefold cross-validation results showed that the model was robust. Conclusions The ultrasound-based deep learning prediction model is effective in predicting DCIS that will be upgraded postoperatively.
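The evaluation described here reduces to standard binary-classification metrics. A minimal scikit-learn sketch follows; the function name upgrade_prediction_metrics, the 0.5 operating threshold, and the toy labels are assumptions made for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def upgrade_prediction_metrics(y_true, y_score, threshold=0.5):
    """AUROC, sensitivity, specificity, accuracy, PPV, and NPV for a binary
    'upgraded vs. pure DCIS' classifier; y_score holds predicted probabilities."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auroc": roc_auc_score(y_true, y_score),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy example: 1 = upgraded to DCISM/IDC at surgery, 0 = pure DCIS
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.6, 0.4, 0.3, 0.7, 0.8, 0.1]
print(upgrade_prediction_metrics(y_true, y_score))
```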
Affiliation(s)
- Lang Qian
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University, Shanghai Medical College, Shanghai, China
- Zhikun Lv
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Kai Zhang
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University, Shanghai Medical College, Shanghai, China
- Kun Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Qian Zhu
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University, Shanghai Medical College, Shanghai, China
- Shichong Zhou
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University, Shanghai Medical College, Shanghai, China
- Cai Chang
- Department of Ultrasonography, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University, Shanghai Medical College, Shanghai, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
24
Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. MOBILE NETWORKS AND APPLICATIONS 2021; 26:351-380. [DOI: 10.1007/s11036-020-01672-7] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 10/20/2020] [Indexed: 08/30/2023]
25
Shui L, Ren H, Yang X, Li J, Chen Z, Yi C, Zhu H, Shui P. The Era of Radiogenomics in Precision Medicine: An Emerging Approach to Support Diagnosis, Treatment Decisions, and Prognostication in Oncology. Front Oncol 2021; 10:570465. [PMID: 33575207 PMCID: PMC7870863 DOI: 10.3389/fonc.2020.570465] [Citation(s) in RCA: 65] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 12/08/2020] [Indexed: 02/05/2023] Open
Abstract
With the rapid development of new technologies, including artificial intelligence and genome sequencing, radiogenomics has emerged as a state-of-the-art science in the field of individualized medicine. Radiogenomics combines a large volume of quantitative data extracted from medical images with individual genomic phenotypes and constructs a prediction model through deep learning to stratify patients, guide therapeutic strategies, and evaluate clinical outcomes. Recent studies of various tumor types demonstrate the predictive value of radiogenomics, and some of the issues in radiogenomic analysis, together with solutions from prior work, are presented. Although workflow criteria and internationally agreed guidelines for statistical methods have yet to be confirmed, radiogenomics represents a repeatable and cost-effective approach for the detection of continuous changes and is a promising surrogate for invasive interventions. Therefore, radiogenomics could facilitate computer-aided diagnosis, treatment, and prediction of prognosis in patients with tumors in the routine clinical setting. Here, we summarize the integrated process of radiogenomics and introduce the crucial strategies and statistical algorithms involved in current studies.
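One minimal way to picture the radiogenomic idea of combining imaging-derived and genomic data in a single prediction model is an early-fusion classifier, sketched below with scikit-learn on synthetic data. The feature counts, labels, and logistic-regression model are assumptions made for illustration; as the abstract notes, current radiogenomics studies typically use deep learning rather than a linear model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for the two data sources described: quantitative imaging
# (radiomic) features and genomic/molecular features for the same patients.
rng = np.random.default_rng(42)
n_patients = 120
radiomic = rng.normal(size=(n_patients, 30))      # e.g., texture/shape features
genomic = rng.normal(size=(n_patients, 50))       # e.g., expression signatures
label = rng.integers(0, 2, size=n_patients)       # e.g., mutation status or outcome

# Early-fusion radiogenomic model: concatenate features, standardize, classify.
X = np.concatenate([radiomic, genomic], axis=1)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, label, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```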
Affiliation(s)
- Lin Shui
- Department of Medical Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Haoyu Ren
- Department of General, Visceral and Transplantation Surgery, University Hospital, LMU Munich, Munich, Germany
- Xi Yang
- Department of Medical Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Jian Li
- Department of Pharmacy, The Affiliated Traditional Chinese Medicine Hospital of Southwest Medical University, Luzhou, China
- Ziwei Chen
- Department of Nephrology, Chengdu Integrated TCM and Western Medicine Hospital, Chengdu, China
- Cheng Yi
- Department of Medical Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Hong Zhu
- Department of Medical Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Pixian Shui
- School of Pharmacy, Southwest Medical University, Luzhou, China
26
Ou WC, Polat D, Dogan BE. Deep learning in breast radiology: current progress and future directions. Eur Radiol 2021; 31:4872-4885. [PMID: 33449174 DOI: 10.1007/s00330-020-07640-9] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 10/30/2020] [Accepted: 12/17/2020] [Indexed: 12/13/2022]
Abstract
This review provides an overview of current applications of deep learning methods within breast radiology. The diagnostic capabilities of deep learning in breast radiology continue to improve, giving rise to the prospect that these methods may be integrated not only into detection and classification of breast lesions, but also into areas such as risk estimation and prediction of tumor responses to therapy. Remaining challenges include limited availability of high-quality data with expert annotations and ground truth determinations, the need for further validation of initial results, and unresolved medicolegal considerations. KEY POINTS: • Deep learning (DL) continues to push the boundaries of what can be accomplished by artificial intelligence (AI) in breast imaging with distinct advantages over conventional computer-aided detection. • DL-based AI has the potential to augment the capabilities of breast radiologists by improving diagnostic accuracy, increasing efficiency, and supporting clinical decision-making through prediction of prognosis and therapeutic response. • Remaining challenges to DL implementation include a paucity of prospective data on DL utilization and yet unresolved medicolegal questions regarding increasing AI utilization.
Affiliation(s)
- William C Ou
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA.
- Dogan Polat
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA
- Basak E Dogan
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA
27
Morid MA, Borjali A, Del Fiol G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 2021; 128:104115. [PMID: 33227578 DOI: 10.1016/j.compbiomed.2020.104115] [Citation(s) in RCA: 162] [Impact Index Per Article: 40.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2020] [Revised: 10/19/2020] [Accepted: 11/09/2020] [Indexed: 02/06/2023]
Abstract
OBJECTIVE Employing transfer learning (TL) with convolutional neural networks (CNNs) well trained on the non-medical ImageNet dataset has shown promising results for medical image analysis in recent years. We aimed to conduct a scoping review to identify these studies and summarize their characteristics in terms of the problem description, input, methodology, and outcome. MATERIALS AND METHODS To identify relevant studies, MEDLINE, IEEE, and the ACM digital library were searched for studies published between June 1st, 2012 and January 2nd, 2020. Two investigators independently reviewed articles to determine eligibility and to extract data according to a study protocol defined a priori. RESULTS After screening of 8421 articles, 102 met the inclusion criteria. Of 22 anatomical areas, eye (18%), breast (14%), and brain (12%) were the most commonly studied. Data augmentation was performed in 72% of fine-tuning TL studies versus 15% of the feature-extracting TL studies. Inception models were the most commonly used in breast-related studies (50%), while VGGNet was the most common in eye (44%), skin (50%), and tooth (57%) studies. AlexNet for brain (42%) and DenseNet for lung studies (38%) were the most frequently used models. Inception models were the most frequently used for studies that analyzed ultrasound (55%), endoscopy (57%), and skeletal-system X-rays (57%). VGGNet was the most common for fundus (42%) and optical coherence tomography images (50%). AlexNet was the most frequent model for brain MRIs (36%) and breast X-rays (50%). Of the included studies, 35% compared their model with other well-trained CNN models and 33% provided visualization for interpretation. DISCUSSION This study identified the most prevalent tracks of implementation in the literature for data preparation, methodology selection, and output evaluation for various medical image analysis tasks. We also identified several critical research gaps in TL studies on medical image analysis. The findings of this scoping review can be used in future TL studies to guide the selection of appropriate research approaches, as well as to identify research gaps and opportunities for innovation.
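The two transfer-learning modes the review contrasts, feature extraction versus fine-tuning, can be sketched with a torchvision ResNet-18 as below. The choice of ResNet-18, the helper name build_transfer_model, and the weight-loading string are assumptions for this example; it presumes torchvision 0.13 or later (older releases load ImageNet weights with pretrained=True instead).

```python
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes: int, mode: str = "feature_extract"):
    """Two ImageNet transfer-learning modes applied to a medical-imaging task.

    'feature_extract': freeze the pretrained backbone and train only a new head.
    'fine_tune':       keep all layers trainable (often with a small learning rate).
    """
    # Downloads ImageNet weights on first use (torchvision >= 0.13 weight string).
    model = models.resnet18(weights="IMAGENET1K_V1")
    if mode == "feature_extract":
        for param in model.parameters():
            param.requires_grad = False        # backbone acts as a fixed feature extractor
    # Replace the 1000-class ImageNet head with a task-specific classifier;
    # its parameters are trainable in both modes.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# e.g., a 2-class task such as benign vs. malignant
frozen = build_transfer_model(2, mode="feature_extract")
tuned = build_transfer_model(2, mode="fine_tune")
print(sum(p.requires_grad for p in frozen.parameters()),
      "trainable tensors when feature extracting;",
      sum(p.requires_grad for p in tuned.parameters()), "when fine-tuning")
```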
Affiliation(s)
- Mohammad Amin Morid
- Department of Information Systems and Analytics, Leavey School of Business, Santa Clara University, Santa Clara, CA, USA.
- Alireza Borjali
- Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA; Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA
- Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah, Salt Lake City, UT, USA
28
Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020; 6:121. [PMID: 34460565 PMCID: PMC8321208 DOI: 10.3390/jimaging6110121] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/19/2020] [Accepted: 10/26/2020] [Indexed: 02/08/2023] Open
Abstract
Deep learning algorithms have become a first-choice approach for medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. Deep learning has been applied to almost all of the imaging modalities used for cervical and breast cancers, and to MRI for brain tumors. The result of the review process indicated that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, the deep learning approaches were used in three different modes: training from scratch, transfer learning with some layers of the network frozen, and modification of the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancers has been studied mainly by researchers affiliated with academic and medical institutes in economically developed countries, whereas the topic has received little attention in Africa despite the dramatic rise in cancer risk on the continent.
Affiliation(s)
- Taye Girma Debelee
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; (S.R.K.); (Z.M.S.)
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
- Samuel Rahimeto Kebede
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; (S.R.K.); (Z.M.S.)
- Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
- Friedhelm Schwenker
- Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany;
29
Breast Cancer Mass Detection in DCE–MRI Using Deep-Learning Features Followed by Discrimination of Infiltrative vs. In Situ Carcinoma through a Machine-Learning Approach. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10176109] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Breast cancer is the leading cause of cancer deaths in women worldwide. This aggressive tumor can be categorized into two main groups, in situ and infiltrative, with the latter being the more common malignant lesion. Magnetic resonance imaging (MRI) has been shown to provide the highest sensitivity for detecting lesions and discriminating benign from malignant findings when interpreted by expert radiologists. In this article, we present the prototype of a computer-aided detection/diagnosis (CAD) system that could provide valuable assistance to radiologists in discriminating between in situ and infiltrating tumors. The system consists of two main processing levels: (1) localization of possibly tumoral regions of interest (ROIs) through an iterative procedure based on intensity values (ROI Hunter), followed by deep-feature extraction and classification for false-positive rejection; and (2) characterization of the selected ROIs and discrimination between in situ and invasive tumors, consisting of radiomics feature extraction and classification through a machine-learning algorithm. The CAD system was developed and evaluated using a DCE–MRI image database containing at least one confirmed mass per image, as diagnosed by an expert radiologist. When the accuracy of the ROI Hunter procedure was evaluated with respect to the radiologist-drawn boundaries, sensitivity to mass detection was found to be 75%. The AUC of the ROC curve for discrimination between in situ and infiltrative tumors was 0.70.
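To give a flavor of the first processing level, the sketch below proposes candidate ROIs by thresholding bright voxels and keeping sufficiently large connected components with SciPy. The quantile threshold, minimum cluster size, and synthetic volume are assumptions made for illustration; the published ROI Hunter iterates on intensity values and then rejects false positives with deep features, which is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def propose_rois(volume: np.ndarray, threshold_quantile: float = 0.99, min_voxels: int = 30):
    """Intensity-based ROI proposal in the spirit of an 'ROI Hunter' stage:
    keep connected clusters of bright voxels as candidate lesions."""
    mask = volume > np.quantile(volume, threshold_quantile)
    labeled, _ = ndimage.label(mask)
    rois = []
    for sl in ndimage.find_objects(labeled):
        if mask[sl].sum() >= min_voxels:       # discard tiny, likely-noise clusters
            rois.append(sl)                    # slice objects bounding each candidate
    return rois

# Toy DCE-MRI-like volume with one synthetic bright "mass"
rng = np.random.default_rng(1)
volume = rng.normal(0, 1, size=(40, 64, 64))
volume[18:24, 30:38, 30:38] += 6.0
print(f"{len(propose_rois(volume))} candidate ROI(s) found")
```

In a full pipeline of this kind, each returned bounding box would then be cropped and passed to the deep-feature rejection and radiomics classification stages described above.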
30
Hou R, Mazurowski MA, Grimm LJ, Marks JR, King LM, Maley CC, Hwang ESS, Lo JY. Prediction of Upstaged Ductal Carcinoma In Situ Using Forced Labeling and Domain Adaptation. IEEE Trans Biomed Eng 2020; 67:1565-1572. [PMID: 31502960 PMCID: PMC7757748 DOI: 10.1109/tbme.2019.2940195] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE The goal of this study is to use adjunctive classes to improve a predictive model whose performance is limited by the common problems of small numbers of primary cases, high feature dimensionality, and poor class separability. Specifically, our clinical task is to use mammographic features to predict whether ductal carcinoma in situ (DCIS) identified at needle core biopsy will later be upstaged or shown to contain invasive breast cancer. METHODS To improve the prediction of pure DCIS (negative) versus upstaged DCIS (positive) cases, this study considers the adjunctive roles of two related classes: atypical ductal hyperplasia (ADH), a non-cancerous type of breast abnormality, and invasive ductal carcinoma (IDC), with 113 computer-vision-based mammographic features extracted from each case. To improve the baseline Model A's classification of pure vs. upstaged DCIS, we designed three different strategies (Models B, C, D) with different ways of embedding features or inputs. RESULTS Based on ROC analysis, the baseline Model A performed with an AUC of 0.614 (95% CI, 0.496-0.733). All three new models performed better than the baseline, with domain adaptation (Model D) performing best with an AUC of 0.697 (95% CI, 0.595-0.797). CONCLUSION We improved the prediction performance of DCIS upstaging by embedding two related pathology classes in different training phases. SIGNIFICANCE The three new strategies of embedding related class data all outperformed the baseline model, demonstrating not only feature similarities among these different classes but also the potential for improving classification by using other related classes.
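A generic way to picture the use of adjunctive classes is a two-phase scheme: learn a representation of the 113 hand-crafted features on a related task first, then fine-tune the same network on the primary pure-versus-upstaged DCIS task. The sketch below, with synthetic data and an arbitrary small multilayer perceptron, illustrates only that generic idea; the paper's Models B-D embed the adjunct ADH and IDC classes in different, specific ways that are not reproduced here.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Small MLP over hand-crafted mammographic features (113 per case)."""
    def __init__(self, n_features=113, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)       # single logit for a binary task

    def forward(self, x):
        return self.head(self.encoder(x)).squeeze(1)

def train_phase(model, x, y, epochs=50, lr=1e-3):
    """One training phase with a binary cross-entropy objective."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y.float())
        loss.backward()
        opt.step()
    return model

torch.manual_seed(0)
model = FeatureNet()
# Phase 1: auxiliary task on adjunct classes (synthetic stand-in data)
x_aux, y_aux = torch.randn(200, 113), torch.randint(0, 2, (200,))
train_phase(model, x_aux, y_aux)
# Phase 2: fine-tune on the primary pure-DCIS vs. upstaged-DCIS task
x_dcis, y_dcis = torch.randn(120, 113), torch.randint(0, 2, (120,))
train_phase(model, x_dcis, y_dcis, epochs=30, lr=5e-4)
with torch.no_grad():
    acc = ((torch.sigmoid(model(x_dcis)) > 0.5).long() == y_dcis).float().mean().item()
print(f"training accuracy on the primary task: {acc:.2f}")
```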
31
Oseni TO, Smith BL, Lehman CD, Vijapura CA, Pinnamaneni N, Bahl M. Do Eligibility Criteria for Ductal Carcinoma In Situ (DCIS) Active Surveillance Trials Identify Patients at Low Risk for Upgrade to Invasive Carcinoma? Ann Surg Oncol 2020; 27:4459-4465. [PMID: 32418079 DOI: 10.1245/s10434-020-08576-6] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2020] [Indexed: 12/11/2022]
Abstract
BACKGROUND Clinical trials are currently ongoing to determine the safety and efficacy of active surveillance (AS) versus usual care (surgical and radiation treatment) for women with ductal carcinoma in situ (DCIS). This study aimed to determine upgrade rates of DCIS at needle biopsy to invasive carcinoma at surgery among women who meet the eligibility criteria for AS trials. METHODS A retrospective review was performed of consecutive women at an academic medical center with a diagnosis of DCIS at needle biopsy from 2007 to 2016. Medical records were reviewed for mode of presentation, imaging findings, biopsy pathology results, and surgical outcomes. Each patient with DCIS was evaluated for AS trial eligibility based on published criteria for the COMET, LORD, and LORIS trials. RESULTS During a 10-year period, DCIS was diagnosed in 858 women (mean age 58 years; range 28-89 years). Of the 858 women, 498 (58%) were eligible for the COMET trial, 101 (11.8%) for the LORD trial, and 343 (40%) for the LORIS trial. The rates of upgrade to invasive carcinoma were 12% (60/498) for the COMET trial, 5% (5/101) for the LORD trial, and 11.1% (38/343) for the LORIS trial. The invasive carcinomas ranged from 0.2 to 20 mm, and all were node-negative. CONCLUSIONS Women who meet the eligibility criteria for DCIS AS trials remain at risk for occult invasive carcinoma at presentation, with upgrade rates ranging from 5 to 12%. These findings suggest that more precise criteria are needed to ensure that women with invasive carcinoma are excluded from AS trials.
Affiliation(s)
- Tawakalitu O Oseni
- Division of Surgical Oncology, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Barbara L Smith
- Division of Surgical Oncology, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Constance D Lehman
- Division of Breast Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Charmi A Vijapura
- Division of Breast Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Niveditha Pinnamaneni
- Division of Breast Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Manisha Bahl
- Division of Breast Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA.
32
Trivizakis E, Papadakis GZ, Souglakos I, Papanikolaou N, Koumakis L, Spandidos DA, Tsatsakis A, Karantanas AH, Marias K. Artificial intelligence radiogenomics for advancing precision and effectiveness in oncologic care (Review). Int J Oncol 2020; 57:43-53. [PMID: 32467997 PMCID: PMC7252460 DOI: 10.3892/ijo.2020.5063] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2020] [Accepted: 05/05/2020] [Indexed: 12/11/2022] Open
Abstract
The new era of artificial intelligence (AI) has introduced revolutionary data-driven analysis paradigms that have led to significant advancements in information processing techniques in the context of clinical decision-support systems. These advances have created unprecedented momentum in computational medical imaging applications and have given rise to new precision medicine research areas. Radiogenomics is a novel research field focusing on establishing associations between radiological features and genomic or molecular expression in order to shed light on the underlying disease mechanisms and enhance diagnostic procedures towards personalized medicine. The aim of the current review was to elucidate recent advances in radiogenomics research, focusing on deep learning with emphasis on radiology and oncology applications. The main deep learning radiogenomics architectures, together with the clinical questions addressed, and the achieved genetic or molecular correlations are presented, while a performance comparison of the proposed methodologies is conducted. Finally, current limitations, potentially understudied topics and future research directions are discussed.
Affiliation(s)
- Eleftherios Trivizakis
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Georgios Z Papadakis
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Ioannis Souglakos
- Laboratory of Translational Oncology, Medical School, University of Crete, 71003 Heraklion, Greece
- Nikolaos Papanikolaou
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Lefteris Koumakis
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Demetrios A Spandidos
- Laboratory of Clinical Virology, Medical School, University of Crete, 71003 Heraklion, Greece
- Aristidis Tsatsakis
- Laboratory of Forensic Sciences and Toxicology, Medical School, University of Crete, 71003 Heraklion, Greece
- Apostolos H Karantanas
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Kostas Marias
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
33
Matsuzaka Y, Uesawa Y. DeepSnap-Deep Learning Approach Predicts Progesterone Receptor Antagonist Activity With High Performance. Front Bioeng Biotechnol 2020; 7:485. [PMID: 32039185 PMCID: PMC6987043 DOI: 10.3389/fbioe.2019.00485] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2019] [Accepted: 12/30/2019] [Indexed: 12/16/2022] Open
Abstract
The progesterone receptor (PR) is an important therapeutic target for many malignancies and endocrine disorders because of its role in controlling ovulation and pregnancy via the reproductive cycle. Modulation of PR activity using agonists and antagonists is therefore receiving increasing interest as a novel treatment strategy. However, clinical trials of PR modulators have not yet provided conclusive evidence. Increasing evidence from several fields shows that the classification of chemical compounds, including agonists and antagonists, can be performed with recent improvements in deep learning (DL) using deep neural networks. We therefore recently proposed a novel DL-based quantitative structure-activity relationship (QSAR) strategy using transfer learning to build prediction models for agonists and antagonists. By employing this approach, referred to as the DeepSnap-DL method, which uses images captured from three-dimensional (3D) chemical structures at multiple angles as input to the DL classifier, we constructed prediction models of PR antagonists in this study. The DeepSnap-DL method showed high prediction performance for PR antagonists after optimization of several parameters and adjustment of the images generated from the 3D structures. Furthermore, comparison of the prediction models from this approach with conventional machine learning (ML) methods indicated that DeepSnap-DL outperformed them. Models built with DeepSnap-DL could therefore be a powerful tool, not only in the QSAR field for predicting physiological and agonist/antagonist activities, toxicity, and molecular binding, but also for identifying biological or pathological phenomena.
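The core DeepSnap idea, turning one 3D structure into many 2D snapshots taken from different viewing angles, can be illustrated with a toy renderer as below. The rotation axis, image size, and random "atom" coordinates are assumptions made for this example; the actual DeepSnap tool renders chemically meaningful depictions of conformers, which in practice would come from a chemistry toolkit such as RDKit, before feeding them to the CNN.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                          # render off-screen, no display needed
import matplotlib.pyplot as plt

def rotation_y(angle_deg):
    """Rotation matrix about the y axis."""
    a = np.deg2rad(angle_deg)
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def snapshot_views(coords, angles=(0, 90, 180, 270), prefix="view"):
    """Render a 3D point set from several viewing angles and save one PNG per
    angle; a toy stand-in for generating multi-angle snapshots of a molecular
    structure as CNN input."""
    for angle in angles:
        rotated = coords @ rotation_y(angle).T
        fig, ax = plt.subplots(figsize=(2, 2))
        ax.scatter(rotated[:, 0], rotated[:, 1], s=40)   # orthographic x-y projection
        ax.set_axis_off()
        fig.savefig(f"{prefix}_{angle:03d}.png", dpi=64)
        plt.close(fig)

# Synthetic "molecule": a few random 3D atom positions
coords = np.random.default_rng(0).normal(size=(12, 3))
snapshot_views(coords)
```

Each saved snapshot would then be treated as an input image that shares the parent compound's activity label during training.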
Affiliation(s)
- Yoshihiro Uesawa
- Department of Medical Molecular Informatics, Meiji Pharmaceutical University, Tokyo, Japan