1. Guillaumin JB, Nadjem A, Vigouroux L, Sibleyras A, Tanter M, Aubry JF, Berthon B. 3D multiparametric ultrasound of spontaneous murine tumors for non-invasive tumor characterization. Phys Med Biol 2025; 70:095006. PMID: 40179940. DOI: 10.1088/1361-6560/adc8f4.
Abstract
Objective. Non-invasive tumor diagnosis and characterization is limited today by the cost and availability of state-of-the-art imaging techniques. Thanks to recent developments, ultrasound (US) imaging can now provide quantitative volumetric maps of different tissue characteristics. This study applied the first fully concurrent 3D ultrasound imaging set-up including B-mode imaging, shear wave elastography (SWE), tissue structure imaging with backscatter tensor imaging (BTI), vascular mapping with ultrasensitive Doppler (uDoppler), and ultrasound localization microscopy (ULM) in vivo. Subsequent analysis aimed to evaluate its benefits for non-invasive tumor diagnosis. Approach. A total of 26 PyMT-MMTV transgenic mice and 6 control mice were imaged weekly during tumor growth. First-order statistics and radiomic features were extracted from the quantitative maps obtained, and used to build predictive models differentiating healthy from cancerous mammary pads. Imaging features were also compared to histology obtained during the last week of imaging. Main results. High-quality co-registered quantitative maps were obtained, for which SWE speed, BTI tissue organization, ULM blood vessel count, and uDoppler blood vessel density were correlated with histopathology. Significant changes in uDoppler sensitivity and BTI tissue structure were measured during tumor evolution. Predictive models inferring the cancerous state from the multiparametric imaging reached 99% accuracy, relying mainly on radiomic measures of the BTI maps. Significance. This work indicates the relevance of a multiparametric characterization of lesions, and highlights the strong predictive power of BTI-derived parameters for differentiating tumors from healthy tissue, both before and after the tumor can be detected by palpation.
Affiliation(s)
- Jean-Baptiste Guillaumin
- Physics for Medicine Paris Institute, ESPCI Paris, PSL Research University, Inserm U1273, CNRS UMR 8063, Paris, France
- Aymeric Nadjem
- Physics for Medicine Paris Institute, ESPCI Paris, PSL Research University, Inserm U1273, CNRS UMR 8063, Paris, France
- Léa Vigouroux
- Physics for Medicine Paris Institute, ESPCI Paris, PSL Research University, Inserm U1273, CNRS UMR 8063, Paris, France
- Ana Sibleyras
- Physics for Medicine Paris Institute, ESPCI Paris, PSL Research University, Inserm U1273, CNRS UMR 8063, Paris, France
- Mickaël Tanter
- Physics for Medicine Paris Institute, ESPCI Paris, PSL Research University, Inserm U1273, CNRS UMR 8063, Paris, France
- Jean-François Aubry
- Physics for Medicine Paris Institute, ESPCI Paris, PSL Research University, Inserm U1273, CNRS UMR 8063, Paris, France
- Béatrice Berthon
- Physics for Medicine Paris Institute, ESPCI Paris, PSL Research University, Inserm U1273, CNRS UMR 8063, Paris, France
2. Hou C, Huang T, Hu K, Ye Z, Guo J, Zhou H. Artificial intelligence-assisted multimodal imaging for the clinical applications of breast cancer: a bibliometric analysis. Discov Oncol 2025; 16:537. PMID: 40237900. PMCID: PMC12003249. DOI: 10.1007/s12672-025-02329-1.
Abstract
BACKGROUND Breast cancer (BC) remains a leading cause of cancer-related mortality among women globally, with increasing incidence rates posing significant public health challenges. Recent advancements in artificial intelligence (AI) have revolutionized medical imaging, particularly in enhancing diagnostic accuracy and prognostic capabilities for BC. While multimodal imaging combined with AI has shown remarkable potential, a comprehensive analysis is needed to synthesize current research and identify emerging trends and hotspots in AI-assisted multimodal imaging for BC. METHODS This study analyzed literature on AI-assisted multimodal imaging in BC published from January 2010 to November 2024 in the Web of Science Core Collection (WoSCC). Bibliometric and visualization tools, including VOSviewer, CiteSpace, and the Bibliometrix R package, were employed to assess countries, institutions, authors, journals, and keywords. RESULTS A total of 80 publications were included, revealing a steady increase in annual publications and citations, with a notable surge post-2021. China led in productivity and citations, while Germany exhibited the highest citation average. The United States demonstrated the strongest international collaboration. The most productive institution and author were Radboud University Nijmegen and Xi, Xiaoming, respectively. Publications appeared predominantly in Computerized Medical Imaging and Graphics, with Qian, XJ's 2021 study on BC risk prediction under deep learning frameworks being the most influential. Keyword analysis highlighted themes such as "breast cancer", "classification", and "deep learning". CONCLUSIONS AI-assisted multimodal imaging has significantly advanced BC diagnosis and management, with promising future developments. This study offers researchers a comprehensive overview of current frameworks and emerging research directions. Future efforts are expected to focus on improving diagnostic precision and refining therapeutic strategies through optimized imaging techniques and AI algorithms, emphasizing international collaboration to drive innovation and clinical translation.
Affiliation(s)
- Chenke Hou
- Hangzhou TCM Hospital of Zhejiang Chinese Medical University (Hangzhou Hospital of Traditional Chinese Medicine), Hangzhou, 310007, Zhejiang, China
- Ting Huang
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Keke Hu
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Zhifeng Ye
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Junhua Guo
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Heran Zhou
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
3. Wu C, Andaloussi MA, Hormuth DA, Lima EABF, Lorenzo G, Stowers CE, Ravula S, Levac B, Dimakis AG, Tamir JI, Brock KK, Chung C, Yankeelov TE. A critical assessment of artificial intelligence in magnetic resonance imaging of cancer. NPJ Imaging 2025; 3:15. PMID: 40226507. PMCID: PMC11981920. DOI: 10.1038/s44303-025-00076-0.
Abstract
Given the enormous output and pace of development of artificial intelligence (AI) methods in medical imaging, it can be challenging to identify the true success stories and determine the state of the art of the field. This report seeks to provide the magnetic resonance imaging (MRI) community with an initial guide to the major areas in which AI methods are contributing to MRI in oncology. After a general introduction to artificial intelligence, we discuss the successes and current limitations of AI in MRI when used for image acquisition, reconstruction, registration, and segmentation, as well as its utility for assisting in diagnostic and prognostic settings. Within each section, we attempt to present a balanced summary covering common techniques, state of readiness, current clinical needs, and barriers to practical deployment in the clinical setting. We conclude by presenting areas in which new advances must be realized to address questions regarding generalizability, quality assurance and control, and uncertainty quantification when applying MRI to cancer, so as to maintain patient safety and practical utility.
Affiliation(s)
- Chengyue Wu
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Institute for Data Science in Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX USA
- David A. Hormuth
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX USA
- Livestrong Cancer Institutes, The University of Texas at Austin, Austin, TX USA
- Ernesto A. B. F. Lima
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX USA
- Texas Advanced Computing Center, The University of Texas at Austin, Austin, TX USA
- Guillermo Lorenzo
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX USA
- Health Research Institute of Santiago de Compostela, Santiago de Compostela, Spain
- Casey E. Stowers
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX USA
- Sriram Ravula
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX USA
- Brett Levac
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX USA
- Alexandros G. Dimakis
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX USA
- Jonathan I. Tamir
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX USA
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX USA
- Department of Diagnostic Medicine, The University of Texas at Austin, Austin, TX USA
- Kristy K. Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Institute for Data Science in Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Caroline Chung
- Institute for Data Science in Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Department of Neuroradiology, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Thomas E. Yankeelov
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX USA
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX USA
- Livestrong Cancer Institutes, The University of Texas at Austin, Austin, TX USA
- Department of Diagnostic Medicine, The University of Texas at Austin, Austin, TX USA
- Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX USA
- Department of Oncology, The University of Texas at Austin, Austin, TX USA
4. Zheng S, Li J, Qiao L, Gao X. Multi-task interaction learning for accurate segmentation and classification of breast tumors in ultrasound images. Phys Med Biol 2025; 70:065006. PMID: 39854844. DOI: 10.1088/1361-6560/adae4d.
Abstract
Objective. In breast diagnostic imaging, the morphological variability of breast tumors and the inherent ambiguity of ultrasound images pose significant challenges. Moreover, multi-task computer-aided diagnosis systems in breast imaging may overlook the inherent relationships between pixel-wise segmentation and categorical classification tasks. Approach. In this paper, we propose a multi-task learning network with deep inter-task interactions that exploits the inherent relations between the two tasks. First, we fuse self-task attention and cross-task attention mechanisms to explore the two types of interaction information, location and semantic, between tasks. In addition, a feature aggregation block is developed based on the channel attention mechanism, which reduces the semantic differences between the decoder and the encoder. To exploit inter-task interactions further, our network uses a circular training strategy to refine heterogeneous features with the help of segmentation maps obtained from previous training. Main results. The experimental results show that our method achieved excellent performance on the BUSI and BUS-B datasets, with DSCs of 81.95% and 86.41% for segmentation, and F1 scores of 82.13% and 69.01% for classification, respectively. Significance. The proposed multi-task interaction learning not only enhances the performance of all tasks related to breast tumor segmentation and classification but also promotes research in multi-task learning, providing further insights for clinical applications.
Affiliation(s)
- Shenhai Zheng
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
- Jianfei Li
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
- Lihong Qiao
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
- Xi Gao
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
5. Wang X, Lv L, Tang Q, Wang G, Shang E, Zheng H, Zhang L. A feature fusion method based on radiomic features and revised deep features for improving tumor prediction in ultrasound images. Comput Biol Med 2025; 185:109605. PMID: 39721417. DOI: 10.1016/j.compbiomed.2024.109605.
Abstract
BACKGROUND Radiomic features and deep features are both highly valuable for the accurate prediction of tumor information in breast ultrasound. However, whether integrating radiomic features and deep features can improve prediction performance remains unclear. METHODS A feature fusion method based on radiomic features and revised deep features was proposed to predict tumor information. Radiomic features were extracted from the tumor region on ultrasound images, and the optimal radiomic features were subsequently selected based on the Gini score. Revised deep features, extracted using revised CNN models that integrate prior information, were combined with the radiomic features to build a logistic regression classifier for tumor prediction. Performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). RESULTS The results showed that the proposed feature fusion method (AUC = 0.9845) obtained better prediction performance than radiomic features (AUC = 0.9796) or deep features (AUC = 0.9342) alone. CONCLUSIONS Our results demonstrate that the proposed feature fusion framework integrating radiomic features and revised deep features is an efficient method to improve the prediction performance of tumor information.
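The fusion scheme this abstract describes (concatenate a radiomic feature vector with a deep feature vector, then train a logistic regression classifier evaluated by AUC) can be sketched as follows. This is a minimal illustration on synthetic stand-in features, not the authors' pipeline: the feature counts, the class-signal strengths, and the omission of the Gini-score selection and revised-CNN extraction steps are all assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, size=n)  # 0 = benign, 1 = malignant (synthetic)

# Synthetic stand-ins: 10 "radiomic" and 32 "deep" features, each shifted
# for the malignant class so that both carry some predictive signal.
radiomic = rng.normal(size=(n, 10)) + labels[:, None] * 0.8
deep = rng.normal(size=(n, 32)) + labels[:, None] * 0.5

# Early fusion: simple concatenation of the two feature vectors.
fused = np.hstack([radiomic, deep])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"fused-feature AUC: {auc:.3f}")
```

Because both feature families carry complementary signal, the fused classifier's AUC on held-out data should exceed what either family achieves alone, which is the effect the study reports on real data.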
Affiliation(s)
- Xianyang Wang
- School of Computer and Information, Anqing Normal University, Anqing, 246133, People's Republic of China
- Linlin Lv
- School of Computer and Information, Anqing Normal University, Anqing, 246133, People's Republic of China
- Qingfeng Tang
- School of Computer and Information, Anqing Normal University, Anqing, 246133, People's Republic of China
- Guangjun Wang
- School of Computer and Information, Anqing Normal University, Anqing, 246133, People's Republic of China
- Enci Shang
- School of Computer and Information, Anqing Normal University, Anqing, 246133, People's Republic of China
- Hang Zheng
- School of Computer and Information, Anqing Normal University, Anqing, 246133, People's Republic of China
- Liangliang Zhang
- School of Computer and Information, Anqing Normal University, Anqing, 246133, People's Republic of China
6. Ahluwalia VS, Schapira MM, Weissman GE, Parikh RB. Primary Care Provider Preferences Regarding Artificial Intelligence in Point-of-Care Cancer Screening. MDM Policy Pract 2025; 10:23814683251329007. PMID: 40191273. PMCID: PMC11970086. DOI: 10.1177/23814683251329007.
Abstract
Background. It is unclear how to optimize the user interface and user experience of cancer screening artificial intelligence (AI) tools for clinical decision-making in primary care. Methods. We developed an electronic survey for US primary care clinicians to assess 1) general attitudes toward AI in cancer screening and 2) preferences for various aspects of AI model deployment in the context of colorectal, breast, and lung cancer screening. We descriptively analyzed the responses. Results. Ninety-nine surveys met criteria for analysis out of 733 potential respondents (response rate 14%). Ninety (>90%) somewhat or strongly agreed that their medical education did not provide adequate AI training. A plurality (52%, 39%, and 37% for colon, breast, and lung cancers, respectively) preferred that AI tools recommend the interval to the next screening as compared with the 5-y probability of future cancer diagnosis, a binary recommendation of "screen now," or identification of suspicious imaging findings. In terms of workflow, respondents preferred generating a flag in the electronic health record to communicate an AI prediction versus an interactive smartphone application or the delegation of findings to another healthcare professional. No majority preference emerged for an explainability method for breast cancer screening. Limitations. The sample was primarily obtained from a single health care system in the Northeast. Conclusions. Providers indicated that AI models can be most helpful in cancer screening by providing prescriptive outputs, such as recommended intervals until next screening, and by integrating with the electronic health record. Implications. A preliminary framework for AI model development in cancer screening may help ensure effective integration into clinical workflow. These findings can better inform how healthcare systems govern and receive reimbursement for services that use AI. 
Highlights. Clinicians do not feel their undergraduate or graduate medical education has properly prepared them to engage with AI in patient care. We provide a preliminary framework for deploying AI models in primary care-based cancer screening. This framework may have implications for health system governance and provider reimbursement in the age of AI.
Affiliation(s)
- Vinayak S. Ahluwalia
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Marilyn M. Schapira
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Department of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Gary E. Weissman
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Department of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Palliative and Advanced Illness Research (PAIR) Center, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Ravi B. Parikh
- Emory University School of Medicine, Atlanta, GA, USA
- Emory Winship Cancer Institute, Atlanta, GA, USA
7. Singh S, Healy NA. The top 100 most-cited articles on artificial intelligence in breast radiology: a bibliometric analysis. Insights Imaging 2024; 15:297. PMID: 39666106. PMCID: PMC11638451. DOI: 10.1186/s13244-024-01869-4.
Abstract
INTRODUCTION Artificial intelligence (AI) in radiology is a rapidly evolving field. In breast imaging, AI has already been applied in real-world settings and multiple studies have been conducted in the area. The aim of this analysis is to identify the most influential publications on the topic of artificial intelligence in breast imaging. METHODS A retrospective bibliometric analysis was conducted on artificial intelligence in breast radiology using the Web of Science database. The search strategy involved searching for the keywords 'breast radiology' or 'breast imaging' together with the various keywords associated with AI, such as 'deep learning', 'machine learning', and 'neural networks'. RESULTS From the top 100 list, the number of citations per article ranged from 30 to 346 (average 85). The highest-cited article, titled 'Artificial neural networks in mammography: application to decision-making in the diagnosis of breast cancer', was published in Radiology in 1993. Eighty-three of the articles were published in the last 10 years. The journal with the greatest number of articles was Radiology (n = 22). The most common country of origin was the United States (n = 51). Commonly occurring topics were the use of deep learning models for breast cancer detection in mammography or ultrasound, radiomics in breast cancer, and the use of AI for breast cancer risk prediction. CONCLUSION This study provides a comprehensive analysis of the top 100 most-cited papers on the subject of artificial intelligence in breast radiology and discusses the current most influential papers in the field. CLINICAL RELEVANCE STATEMENT This article provides a concise summary of the top 100 most-cited articles in the field of artificial intelligence in breast radiology, discussing the most impactful articles and exploring recent trends and topics of research in the field. KEY POINTS Multiple studies have been conducted on AI in breast radiology. The most-cited article was published in the journal Radiology in 1993. This study highlights influential articles and topics on AI in breast radiology.
Affiliation(s)
- Sneha Singh
- Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland.
- Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland.
- Nuala A Healy
- Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland
- Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland
- Department of Radiology, University of Cambridge, Cambridge, United Kingdom
8. Elahi R, Nazari M. An updated overview of radiomics-based artificial intelligence (AI) methods in breast cancer screening and diagnosis. Radiol Phys Technol 2024; 17:795-818. PMID: 39285146. DOI: 10.1007/s12194-024-00842-6.
Abstract
Current imaging methods for diagnosing breast cancer (BC) are associated with limited sensitivity and specificity and modest positive predictive power. The recent progress in image analysis using artificial intelligence (AI) holds great promise for improving BC diagnosis and subtype differentiation. To this end, novel quantitative computational methods, such as radiomics, have been developed to enhance the sensitivity and specificity of early BC diagnosis and classification. The potential of radiomics to improve the diagnostic efficacy of imaging studies has been shown in several studies. In this review article, we discuss the radiomics workflow and current handcrafted radiomics methods for the diagnosis and classification of BC based on the most recent studies on different imaging modalities, e.g., MRI, mammography, contrast-enhanced spectral mammography (CESM), ultrasound imaging, and digital breast tomosynthesis (DBT). We also discuss current challenges and potential strategies to improve the specificity and sensitivity of radiomics in breast cancer, to help achieve a higher level of BC classification and diagnosis in the clinical setting. The growing incorporation of AI with imaging information has opened a great opportunity to provide a higher level of care for BC patients.
Affiliation(s)
- Reza Elahi
- Department of Radiology, Zanjan University of Medical Sciences, Zanjan, Iran.
- Mahdis Nazari
- School of Medicine, Zanjan University of Medical Sciences, Zanjan, Iran
9. Ferro A, Bottosso M, Dieci MV, Scagliori E, Miglietta F, Aldegheri V, Bonanno L, Caumo F, Guarneri V, Griguolo G, Pasello G. Clinical applications of radiomics and deep learning in breast and lung cancer: A narrative literature review on current evidence and future perspectives. Crit Rev Oncol Hematol 2024; 203:104479. PMID: 39151838. DOI: 10.1016/j.critrevonc.2024.104479.
Abstract
Radiomics, the analysis of quantitative features from medical imaging, has rapidly become an emerging field in translational oncology. Radiomics has been investigated in several neoplastic malignancies, as it might allow for non-invasive tumour characterization and for the identification of predictive and prognostic biomarkers. Over the last few years, evidence has been accumulating regarding potential clinical applications of machine learning at many crucial moments in cancer patients' history. However, the incorporation of radiomics into the clinical decision-making process is still limited by low data reproducibility and study variability. Moreover, the need for prospective validation and standardization is emerging. In this narrative review, we summarize current evidence regarding radiomic applications in high-incidence cancers (breast and lung) for screening, diagnosis, staging, treatment choice, response, and clinical outcome evaluation. We also discuss the pros and cons of the radiomic approach, suggesting possible solutions to critical issues that might invalidate radiomics studies, and propose future perspectives.
Affiliation(s)
- Alessandra Ferro
- Division of Medical Oncology 2, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy
- Michele Bottosso
- Division of Medical Oncology 2, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy; Department of Surgery, Oncology and Gastroenterology, University of Padova, via Giustiniani 2, Padova 35128, Italy
- Maria Vittoria Dieci
- Division of Medical Oncology 2, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy; Department of Surgery, Oncology and Gastroenterology, University of Padova, via Giustiniani 2, Padova 35128, Italy
- Elena Scagliori
- Radiology Unit, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy
- Federica Miglietta
- Division of Medical Oncology 2, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy; Department of Surgery, Oncology and Gastroenterology, University of Padova, via Giustiniani 2, Padova 35128, Italy
- Vittoria Aldegheri
- Radiology Unit, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy
- Laura Bonanno
- Division of Medical Oncology 2, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy
- Francesca Caumo
- Unit of Breast Radiology, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy
- Valentina Guarneri
- Division of Medical Oncology 2, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy; Department of Surgery, Oncology and Gastroenterology, University of Padova, via Giustiniani 2, Padova 35128, Italy
- Gaia Griguolo
- Division of Medical Oncology 2, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy; Department of Surgery, Oncology and Gastroenterology, University of Padova, via Giustiniani 2, Padova 35128, Italy
- Giulia Pasello
- Division of Medical Oncology 2, Veneto Institute of Oncology IOV - IRCCS, via Gattamelata 64, Padua 35128, Italy; Department of Surgery, Oncology and Gastroenterology, University of Padova, via Giustiniani 2, Padova 35128, Italy
10. Avanzo M, Stancanello J, Pirrone G, Drigo A, Retico A. The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning. Cancers (Basel) 2024; 16:3702. PMID: 39518140. PMCID: PMC11545079. DOI: 10.3390/cancers16213702.
Abstract
Artificial intelligence (AI), the wide spectrum of technologies aiming to give machines or computers the ability to perform human-like cognitive functions, began in the 1940s with the first abstract models of intelligent machines. Soon after, in the 1950s and 1960s, machine learning algorithms such as neural networks and decision trees ignited significant enthusiasm. More recent advancements include the refinement of learning algorithms, the development of convolutional neural networks to efficiently analyze images, and methods to synthesize new images. This renewed enthusiasm was also due to the increase in computational power with graphical processing units and the availability of large digital databases to be mined by neural networks. AI soon began to be applied in medicine, first through expert systems designed to support the clinician's decisions and later with neural networks for the detection, classification, or segmentation of malignant lesions in medical images. A recent prospective clinical trial demonstrated the non-inferiority of AI alone compared with a double reading by two radiologists on screening mammography. Natural language processing, recurrent neural networks, transformers, and generative models have both improved the capability for automated reading of medical images and moved AI into new domains, including the text analysis of electronic health records, image self-labeling, and self-reporting. The availability of open-source and free libraries, as well as powerful computing resources, has greatly facilitated the adoption of deep learning by researchers and clinicians. Key concerns surrounding AI in healthcare include the need for clinical trials to demonstrate efficacy, the perception of AI tools as 'black boxes' that require greater interpretability and explainability, and ethical issues related to ensuring fairness and trustworthiness in AI systems. Thanks to its versatility and impressive results, AI is one of the most promising resources for frontier research and applications in medicine, in particular for oncological applications.
Affiliation(s)
- Michele Avanzo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Giovanni Pirrone
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Annalisa Drigo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
11
Lin ZY, Chen K, Chen JR, Chen WX, Li JF, Li CG, Song GQ, Liu YZ, Wang J, Liu R, Hu MG. Deep Neural Network and Radiomics-based Magnetic Resonance Imaging System for Predicting Microvascular Invasion in Hepatocellular Carcinoma. J Cancer 2024; 15:6223-6231. [PMID: 39513126] [PMCID: PMC11540505] [DOI: 10.7150/jca.93712]
Abstract
Background: Accurate preoperative evaluation of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) is crucial for surgeons to make informed decisions regarding appropriate treatment strategies. However, it continues to pose a significant challenge for radiologists. The integration of computer-aided diagnosis utilizing deep learning technology emerges as a promising approach to enhance the prediction accuracy. Methods: This experiment incorporated magnetic resonance imaging (MRI) scans with six different sequences. After a cross-sequence registration preprocess, a deep neural network was employed for the segmentation of hepatocellular carcinoma. The final prediction model was constructed by combining radiomics features with clinical features. The selection of clinical features for the final model was determined through univariate analysis. Results: In this study, we analyzed MRI scans obtained from a cohort of 420 patients diagnosed with HCC. Among them, 140 cases exhibited MVI, while the remaining 280 cases comprised the non-MVI group. The radiomics features demonstrated strong predictive capability for MVI. By extracting radiomic features from each MRI sequence and subsequently integrating them, we achieved the highest area under the curve (AUC) value of 0.794±0.033. Specifically, for tumor sizes ranging from 3 to 5 cm, the AUC reached 0.860±0.065. Conclusions: In this study, we present a fully automatic system for predicting MVI in HCC based on preoperative MRI. Our approach leverages the fusion of radiomics and clinical features to achieve accurate MVI prediction. The system demonstrates robust performance in predicting MVI, particularly in the 3-5 cm tumor group.
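The fusion strategy this abstract describes — radiomic features combined with univariately selected clinical features, scored by AUC — can be sketched on synthetic data. Everything below (feature counts, the logistic-regression classifier, the effect sizes) is an illustrative stand-in, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 420  # cohort size from the abstract; feature dimensions are illustrative
y = np.array([1] * 140 + [0] * 280)  # 140 MVI, 280 non-MVI
radiomics = rng.normal(size=(n, 50)) + y[:, None] * 0.4  # stand-in radiomic features
clinical = rng.normal(size=(n, 10)) + y[:, None] * 0.2   # stand-in clinical features

# Univariate analysis selects the clinical features, as the abstract describes
selector = SelectKBest(f_classif, k=4).fit(clinical, y)
fused = np.hstack([radiomics, selector.transform(clinical)])

X_tr, X_te, y_tr, y_te = train_test_split(fused, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```

The same shape of pipeline applies per MRI sequence; the paper's gain comes from integrating radiomic features across all six sequences before fusion.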
Affiliation(s)
- Zhao-Yi Lin
- Medical School of Chinese PLA, 100853, China
- Faculty of Hepato-Biliary-Pancreatic Surgery, The First Medical Center of Chinese PLA General Hospital, 100853, China
- Kuang Chen
- Medical School of Chinese PLA, 100853, China
- Faculty of Hepato-Biliary-Pancreatic Surgery, The First Medical Center of Chinese PLA General Hospital, 100853, China
- Jia-Rui Chen
- Medical School of Chinese PLA, 100853, China
- Faculty of Hepato-Biliary-Pancreatic Surgery, The First Medical Center of Chinese PLA General Hospital, 100853, China
- Wei-Xiang Chen
- Department of Automation, Tsinghua University, 10084, China
- Jin-Feng Li
- Department of Radiology, The First Medical Center of Chinese PLA General Hospital, 28 Fuxing Road, Beijing, 100853, China
- Cheng-Gang Li
- Faculty of Hepato-Biliary-Pancreatic Surgery, The First Medical Center of Chinese PLA General Hospital, 100853, China
- Guo-Quan Song
- Department of Radiology, The First Medical Center of Chinese PLA General Hospital, 28 Fuxing Road, Beijing, 100853, China
- Yan-Zhe Liu
- Faculty of Hepato-Biliary-Pancreatic Surgery, The First Medical Center of Chinese PLA General Hospital, 100853, China
- Jin Wang
- Medical School of Chinese PLA, 100853, China
- Faculty of Hepato-Biliary-Pancreatic Surgery, The First Medical Center of Chinese PLA General Hospital, 100853, China
- Rong Liu
- Medical School of Chinese PLA, 100853, China
- Faculty of Hepato-Biliary-Pancreatic Surgery, The First Medical Center of Chinese PLA General Hospital, 100853, China
- Ming-Gen Hu
- Medical School of Chinese PLA, 100853, China
- Faculty of Hepato-Biliary-Pancreatic Surgery, The First Medical Center of Chinese PLA General Hospital, 100853, China
12
Maniaci A, Lavalle S, Gagliano C, Lentini M, Masiello E, Parisi F, Iannella G, Cilia ND, Salerno V, Cusumano G, La Via L. The Integration of Radiomics and Artificial Intelligence in Modern Medicine. Life (Basel) 2024; 14:1248. [PMID: 39459547] [PMCID: PMC11508875] [DOI: 10.3390/life14101248]
Abstract
With profound effects on patient care, artificial intelligence (AI) in radiomics has become a disruptive force in contemporary medicine. Radiomics, the quantitative extraction and analysis of features from medical images, offers useful imaging biomarkers that can reveal important information about the nature of diseases, how well patients respond to treatment, and patient outcomes. The use of AI techniques in radiomics, such as machine learning and deep learning, has made it possible to create sophisticated computer-aided diagnostic systems, predictive models, and decision support tools. This review examines the many uses of AI in radiomics: quantitative feature extraction from medical images; machine learning, deep learning, and computer-aided diagnostic (CAD) approaches; and the effect of radiomics and AI on workflow automation and efficiency, clinical trial optimization, and patient stratification. It also covers the improvements machine learning brings to predictive modeling in radiomics, multimodal integration and enhanced deep learning architectures, and regulatory and clinical adoption considerations for radiomics-based CAD. Particular emphasis is given to the enormous potential for enhancing diagnostic precision, treatment personalization, and overall patient outcomes.
Affiliation(s)
- Antonino Maniaci
- Faculty of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy
- Salvatore Lavalle
- Faculty of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy
- Caterina Gagliano
- Faculty of Medicine and Surgery, University of Enna “Kore”, 94100 Enna, Italy
- Mario Lentini
- ASP Ragusa, Hospital Giovanni Paolo II, 97100 Ragusa, Italy
- Edoardo Masiello
- Radiology Unit, Department Clinical and Experimental, Experimental Imaging Center, Vita-Salute San Raffaele University, 20132 Milan, Italy
- Federica Parisi
- Department of Medical and Surgical Sciences and Advanced Technologies “GF Ingrassia”, ENT Section, University of Catania, Via S. Sofia, 78, 95125 Catania, Italy
- Giannicola Iannella
- Department of ‘Organi di Senso’, University “Sapienza”, Viale dell’Università, 33, 00185 Rome, Italy
- Nicole Dalia Cilia
- Department of Computer Engineering, University of Enna “Kore”, 94100 Enna, Italy
- Institute for Computing and Information Sciences, Radboud University Nijmegen, 6544 Nijmegen, The Netherlands
- Valerio Salerno
- Department of Engineering and Architecture, Kore University of Enna, 94100 Enna, Italy
- Giacomo Cusumano
- University Hospital Policlinico “G. Rodolico—San Marco”, 95123 Catania, Italy
- Department of General Surgery and Medical-Surgical Specialties, University of Catania, 95123 Catania, Italy
- Luigi La Via
- University Hospital Policlinico “G. Rodolico—San Marco”, 95123 Catania, Italy
13
K Rajan B, G V, Harshan M H, Swaminathan R. Augmenting interpretation of vaginoscopy observations in cycling bitches with deep learning model. BMC Vet Res 2024; 20:401. [PMID: 39245728] [PMCID: PMC11382409] [DOI: 10.1186/s12917-024-04242-1]
Abstract
Successful identification of estrus or other stages in a cycling bitch often requires a combination of methods, including assessment of its behavior, exfoliative vaginal cytology, vaginoscopy, and hormonal assays. Vaginoscopy is a handy and inexpensive tool for the assessment of the breeding period. The present study introduces an innovative method for identifying the stages in the estrous cycle of female canines. With a dataset of 210 vaginoscopic images covering four reproductive stages, this approach extracts deep features using the Inception v3 and Residual Networks (ResNet) 152 models. Binary gray wolf optimization (BGWO) is applied for feature optimization, and classification is performed with the extreme gradient boosting (XGBoost) algorithm. Both models are compared with the support vector machine (SVM) with Gaussian and linear kernels, k-nearest neighbors (KNN), and a convolutional neural network (CNN), based on performance metrics such as accuracy, specificity, F1 score, sensitivity, precision, Matthews correlation coefficient (MCC), and runtime. The outcomes demonstrate the superiority of the deep ResNet 152 model with the XGBoost classifier, achieving an average model accuracy of 90.37%. The method gave a specific accuracy of 90.91%, 96.38%, 88.37%, and 88.24% in predicting the proestrus, estrus, diestrus, and anestrus stages, respectively. When performing deep feature analysis using Inception v3 with the same classifiers, the model achieved an accuracy of 89.41%, comparable to the results obtained with the ResNet model. The proposed model offers a reliable system for identifying the optimal mating period, providing breeders and veterinarians with an efficient tool to enhance the success of their breeding programs.
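The pipeline summarized here — deep features from a pretrained CNN fed to a gradient-boosted classifier over four estrous stages — can be sketched as follows. Random class-separated vectors stand in for ResNet-152 embeddings, and scikit-learn's GradientBoostingClassifier stands in for XGBoost, so all of it is illustrative rather than the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
stages = ["proestrus", "estrus", "diestrus", "anestrus"]
n = 210  # dataset size from the abstract
y = rng.integers(0, len(stages), size=n)
# Stand-in for 2048-D ResNet-152 deep features: one Gaussian cluster per stage
centers = rng.normal(size=(len(stages), 64))
X = centers[y] + rng.normal(size=(n, 64))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# GradientBoostingClassifier as a stand-in for the XGBoost classifier
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy: {acc:.3f}")
```

The paper additionally applies binary gray wolf optimization between feature extraction and classification; any feature-selection step could be slotted in at that point.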
Affiliation(s)
- Bindhu K Rajan
- Department of Instrumentation and Control Engineering, NSS College of Engineering Palakkad, Kerala, India (affiliated to APJ Abdul Kalam Technological University, Kerala, India)
- Venugopal G
- Department of Instrumentation and Control Engineering, NSS College of Engineering Palakkad, Kerala, India (affiliated to APJ Abdul Kalam Technological University, Kerala, India)
- Hiron Harshan M
- Department of Gynaecology, College of Veterinary and Animal Sciences, Mannuthy, Kerala, India
- Ramakrishnan Swaminathan
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
14
Bahl M. Combining AI and Radiomics to Improve the Accuracy of Breast US. Radiology 2024; 312:e241795. [PMID: 39254454] [PMCID: PMC11427849] [DOI: 10.1148/radiol.241795]
Affiliation(s)
- Manisha Bahl
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, WAC 240, Boston, MA 02114
15
Magnuska ZA, Roy R, Palmowski M, Kohlen M, Winkler BS, Pfeil T, Boor P, Schulz V, Krauss K, Stickeler E, Kiessling F. Combining Radiomics and Autoencoders to Distinguish Benign and Malignant Breast Tumors on US Images. Radiology 2024; 312:e232554. [PMID: 39254446] [DOI: 10.1148/radiol.232554]
Abstract
Background US is clinically established for breast imaging, but its diagnostic performance depends on operator experience. Computer-assisted (real-time) image analysis may help in overcoming this limitation. Purpose To develop precise real-time-capable US-based breast tumor categorization by combining classic radiomics and autoencoder-based features from automatically localized lesions. Materials and Methods A total of 1619 B-mode US images of breast tumors were retrospectively analyzed between April 2018 and January 2024. nnU-Net was trained for lesion segmentation. Features were extracted from tumor segments, bounding boxes, and whole images using classic radiomics, an autoencoder, or both. Feature selection was performed to generate radiomics signatures, which were used to train machine learning algorithms for tumor categorization. Models were evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity and were statistically compared with histopathologically or follow-up-confirmed diagnosis. Results The model was developed on 1191 female patients (mean age, 61 years ± 14 [SD]) and externally validated on 50 (mean age, 55 years ± 15). The development data set was divided into two parts: testing and training lesion segmentation (419 and 179 examinations) and lesion categorization (503 and 90 examinations). nnU-Net demonstrated precision and reproducibility in lesion segmentation in the test set of data set 1 (median Dice score [DS]: 0.90 [IQR, 0.84-0.93]; P = .01) and data set 2 (median DS: 0.89 [IQR, 0.80-0.92]; P = .001). The best model, trained with 23 mixed features from tumor bounding boxes, achieved an AUC of 0.90 (95% CI: 0.83, 0.97), sensitivity of 81% (46 of 57; 95% CI: 70, 91), and specificity of 87% (39 of 45; 95% CI: 77, 87). No evidence of difference was found between model and human readers (AUC = 0.90 [95% CI: 0.83, 0.97] vs 0.83 [95% CI: 0.76, 0.90]; P = .55 and 0.90 vs 0.82 [95% CI: 0.75, 0.90]; P = .45) in tumor classification or between model and histopathologically or follow-up-confirmed diagnosis (AUC = 0.90 [95% CI: 0.83, 0.97] vs 1.00 [95% CI: 1.00, 1.00]; P = .10). Conclusion Precise real-time US-based breast tumor categorization was developed by mixing classic radiomics and autoencoder-based features from tumor bounding boxes. ClinicalTrials.gov identifier: NCT04976257. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Bahl in this issue.
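The feature-mixing idea in this abstract — classic radiomics plus an autoencoder's latent code, fed to a single classifier — can be sketched with a PCA bottleneck standing in for the autoencoder. The data, dimensions, and SVM settings below are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 600  # illustrative sample count
y = rng.integers(0, 2, size=n)
pixels = rng.normal(size=(n, 256)) + y[:, None] * 0.15   # flattened tumor crops (stand-in)
radiomics = rng.normal(size=(n, 20)) + y[:, None] * 0.3  # classic radiomic features (stand-in)

# PCA bottleneck as a linear stand-in for the autoencoder's latent code
latent = PCA(n_components=16, random_state=0).fit_transform(pixels)
mixed = np.hstack([radiomics, latent])  # the "mixed features" of the abstract

X_tr, X_te, y_tr, y_te = train_test_split(mixed, y, stratify=y, random_state=0)
clf = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```

A trained autoencoder would replace the PCA step, and a feature-selection pass would reduce the mixed set (to 23 features in the paper) before classification.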
Affiliation(s)
- Zuzanna Anna Magnuska, Rijo Roy, Moritz Palmowski, Matthias Kohlen, Brigitte Sophia Winkler, Tatjana Pfeil, Peter Boor, Volkmar Schulz, Katja Krauss, Elmar Stickeler, Fabian Kiessling
- From the Institute for Experimental Molecular Imaging (Z.A.M., R.R., M.P., V.S., F.K.), Institute of Pathology (P.B.), and Department of Obstetrics and Gynecology (M.K., B.S.W., T.P., K.K., E.S.), University Clinic Aachen, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Physics Institute III B, RWTH Aachen University, Aachen, Germany (V.S.); Comprehensive Diagnostic Center Aachen, Uniklinik RWTH Aachen, Aachen, Germany (P.B., V.S., E.S., F.K.); and Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany (P.B., V.S., F.K.)
16
Kudus K, Wagner MW, Namdar K, Bennett J, Nobre L, Tabori U, Hawkins C, Ertl-Wagner BB, Khalvati F. Beyond hand-crafted features for pretherapeutic molecular status identification of pediatric low-grade gliomas. Sci Rep 2024; 14:19102. [PMID: 39154039] [PMCID: PMC11330469] [DOI: 10.1038/s41598-024-69870-x]
Abstract
The use of targeted agents in the treatment of pediatric low-grade gliomas (pLGGs) relies on the determination of molecular status. It has been shown that genetic alterations in pLGG can be identified non-invasively using MRI-based radiomic features or convolutional neural networks (CNNs). We aimed to build and assess a combined radiomics and CNN non-invasive pLGG molecular status identification model. This retrospective study used the tumor regions, manually segmented from T2-FLAIR MR images, of 336 patients treated for pLGG between 1999 and 2018. We designed a CNN and Random Forest radiomics model, along with a model relying on a combination of CNN and radiomic features, to predict the genetic status of pLGG. Additionally, we investigated whether CNNs could predict radiomic feature values from MR images. The combined model (mean AUC: 0.824) outperformed the radiomics model (0.802) and CNN (0.764). The differences in model performance were statistically significant (p-values < 0.05). The CNN was able to learn predictive radiomic features such as surface-to-volume ratio (average correlation: 0.864) and dependence non-uniformity normalized (0.924) well but was unable to learn others such as run-length matrix variance (-0.017) and non-uniformity normalized (-0.042). Our results show that a model relying on both CNN and radiomic-based features performs better than either approach separately in differentiating the genetic status of pLGGs, and that CNNs are unable to express all handcrafted features.
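The comparison performed in this study — radiomics-only vs CNN-only vs a combined feature set, scored by cross-validated AUC — can be sketched on synthetic class-separated features. The random-forest classifier stands in for both models, and all dimensions and effect sizes are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n = 336  # cohort size from the abstract
y = rng.integers(0, 2, size=n)
# Stand-ins: a CNN embedding and handcrafted radiomic features, each
# carrying partly complementary signal about molecular status
cnn_feats = rng.normal(size=(n, 32)) + y[:, None] * 0.35
radiomic_feats = rng.normal(size=(n, 16)) + y[:, None] * 0.35

def cv_auc(X):
    """Cross-validated AUC of a random forest on feature matrix X."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    proba = cross_val_predict(rf, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, proba)

auc_cnn = cv_auc(cnn_feats)
auc_rad = cv_auc(radiomic_feats)
auc_combined = cv_auc(np.hstack([cnn_feats, radiomic_feats]))
print(auc_cnn, auc_rad, auc_combined)
```

The paper's secondary question — whether the CNN can reproduce handcrafted features — amounts to correlating CNN-predicted feature values against the computed radiomic values (e.g. with `np.corrcoef`).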
Affiliation(s)
- Kareem Kudus
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
- Matthias W Wagner
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada
- Department of Diagnostic and Interventional Neuroradiology, University Hospital Augsburg, Augsburg, Germany
- Khashayar Namdar
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
- Julie Bennett
- Division of Hematology and Oncology, The Hospital for Sick Children, Toronto, Canada
- Division of Medical Oncology and Hematology, Princess Margaret Cancer Centre, Toronto, Canada
- Department of Pediatrics, University of Toronto, Toronto, Canada
- Liana Nobre
- Department of Paediatrics, University of Alberta, Edmonton, Canada
- Division of Immunology, Hematology/Oncology and Palliative Care, Stollery Children's Hospital, Edmonton, Canada
- Uri Tabori
- Division of Hematology and Oncology, The Hospital for Sick Children, Toronto, Canada
- Cynthia Hawkins
- Paediatric Laboratory Medicine, Division of Pathology, The Hospital for Sick Children, Toronto, Canada
- Birgit Betina Ertl-Wagner
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Canada
- Farzad Khalvati
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Canada
- Department of Computer Science, University of Toronto, Toronto, Canada
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
17
Tian R, Lu G, Zhao N, Qian W, Ma H, Yang W. Constructing the Optimal Classification Model for Benign and Malignant Breast Tumors Based on Multifeature Analysis from Multimodal Images. J Imaging Inform Med 2024; 37:1386-1400. [PMID: 38381383] [PMCID: PMC11300407] [DOI: 10.1007/s10278-024-01036-7]
Abstract
The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. For the experiments, we extracted conventional radiomic and deep features from tumors from DBT-CC and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors via DBT-CC and US images. Features were fused separately from the intratumoral and peritumoral regions. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of deep features and three peritumoral radiomic features from tumors in DBT-CC and US images achieved the optimal performance, with an accuracy and AUC of 0.953 and 0.959 [CI: 0.886-0.996], a sensitivity and specificity of 0.952 [CI: 0.888-0.992] and 0.955 [CI: 0.868-0.985], and a precision of 0.976. The experimental results indicate that the fusion model of deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.
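The late (stacking) fusion strategy this abstract reports — one base learner per imaging modality with a meta-learner on top — can be sketched with scikit-learn's StackingClassifier. The per-modality SVMs, feature sizes, and synthetic data are illustrative assumptions, not the study's tuned models:

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 487  # total patients in the abstract; the feature layout is illustrative
y = rng.integers(0, 2, size=n)
dbt = rng.normal(size=(n, 30)) + y[:, None] * 0.3  # DBT-CC features (stand-in)
us = rng.normal(size=(n, 30)) + y[:, None] * 0.3   # US features (stand-in)
X = np.hstack([dbt, us])

def cols(sl):
    # Select one modality's columns from the fused matrix
    return FunctionTransformer(lambda a: a[:, sl])

# Stacking fusion: one SVM per modality, logistic-regression meta-learner
stack = StackingClassifier(
    estimators=[
        ("dbt", make_pipeline(cols(slice(0, 30)), SVC(probability=True, random_state=0))),
        ("us", make_pipeline(cols(slice(30, 60)), SVC(probability=True, random_state=0))),
    ],
    final_estimator=LogisticRegression(),
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```

Early fusion, by contrast, would train a single classifier directly on the concatenated matrix `X`.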
Affiliation(s)
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Department of Nuclear Medicine, General Hospital of Northern Theatre Command, No. 83 Wenhua Road, Shenhe District, Shenyang, 110016, Liaoning Province, China
- Nannan Zhao
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, No. 44 Xiaoheyan Road, Dadong District, Shenyang, 110042, Liaoning Province, China
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- He Ma
- College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Wei Yang
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, No. 44 Xiaoheyan Road, Dadong District, Shenyang, 110042, Liaoning Province, China
18
Jiao J, Zhou J, Li X, Xia M, Huang Y, Huang L, Wang N, Zhang X, Zhou S, Wang Y, Guo Y. USFM: A universal ultrasound foundation model generalized to tasks and organs towards label efficient image analysis. Med Image Anal 2024; 96:103202. [PMID: 38788326] [DOI: 10.1016/j.media.2024.103202]
Abstract
Inadequate generality across different organs and tasks constrains the application of ultrasound (US) image analysis methods in smart healthcare. Building a universal US foundation model holds the potential to address these issues. Nevertheless, the development of such foundation models encounters intrinsic challenges in US analysis, i.e., insufficient databases, low image quality, and ineffective features. In this paper, we present a universal US foundation model, named USFM, generalized to diverse tasks and organs towards label-efficient US image analysis. First, a large-scale Multi-organ, Multi-center, and Multi-device US database was built, comprehensively containing over two million US images. Organ-balanced sampling was employed for unbiased learning. Then, USFM was self-supervised pre-trained on this database. To extract effective features from low-quality US images, we proposed a spatial-frequency dual masked image modeling method. A productive spatial noise addition-recovery approach was designed to learn meaningful US information robustly, while a novel frequency band-stop masking learning approach was employed to extract complex, implicit grayscale distributions and textural variations. Extensive experiments were conducted on tasks of segmentation, classification, and image enhancement across diverse organs and diseases. Comparisons with representative US image analysis models illustrate the universality and effectiveness of USFM. The label-efficiency experiments suggest that USFM obtains robust performance with only 20% annotation, laying the groundwork for the rapid development of US models in clinical practice.
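The frequency band-stop masking pretext task described here can be illustrated with a minimal NumPy sketch, assuming a simple annular mask in the 2D Fourier domain; the radii, the random test image, and the mask shape are illustrative choices, not USFM's actual implementation:

```python
import numpy as np

def band_stop_mask(img, r_lo, r_hi):
    """Zero out a ring of spatial frequencies (a band-stop mask).

    In the pretext task sketched here, a network would be trained to
    reconstruct the original image from this band-suppressed input."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)  # radial frequency index per pixel
    F[(r >= r_lo) & (r < r_hi)] = 0       # suppress the chosen band
    return np.fft.ifft2(np.fft.ifftshift(F)).real

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))  # stand-in for a grayscale US frame
masked = band_stop_mask(img, 8, 16)
print(masked.shape)
```

The companion spatial pretext task (noise addition-recovery) follows the same recipe: corrupt in the spatial domain, then train the model to restore the clean image.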
Affiliation(s)
- Jing Jiao
  - Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Jin Zhou
  - Fudan University Shanghai Cancer Center, Shanghai, China
- Xiaokang Li
  - Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Menghua Xia
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Yi Huang
  - Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Lihong Huang
  - Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Na Wang
  - Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; SenseTime Research, Shanghai, China
- Xiaofan Zhang
  - Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Shichong Zhou
  - Fudan University Shanghai Cancer Center, Shanghai, China
- Yuanyuan Wang
  - Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Yi Guo
  - Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
19. Lu G, Tian R, Yang W, Liu R, Liu D, Xiang Z, Zhang G. Deep learning radiomics based on multimodal imaging for distinguishing benign and malignant breast tumours. Front Med (Lausanne) 2024; 11:1402967. [PMID: 39036101] [PMCID: PMC11257849] [DOI: 10.3389/fmed.2024.1402967]
Abstract
Objectives This study aimed to develop a deep learning radiomic model using multimodal imaging to differentiate benign and malignant breast tumours. Methods Multimodal imaging data, including ultrasonography (US), mammography (MG), and magnetic resonance imaging (MRI), from 322 patients (112 with benign and 210 with malignant breast tumours) with histopathologically confirmed breast tumours were retrospectively collected between December 2018 and May 2023. Based on multimodal imaging, the experiment was divided into three parts: traditional radiomics, deep learning radiomics, and feature fusion. We tested the performance of seven classifiers, namely SVM, KNN, random forest, extra trees, XGBoost, LightGBM, and logistic regression (LR), on the different feature models. Through feature fusion using ensemble and stacking strategies, we obtained the optimal classification model for benign and malignant breast tumours. Results For traditional radiomics, the ensemble fusion strategy achieved the highest accuracy, AUC, and specificity: 0.892, 0.942 [0.886-0.996], and 0.956 [0.873-1.000], respectively. The early fusion strategy with US, MG, and MRI achieved the highest sensitivity of 0.952 [0.887-1.000]. For deep learning radiomics, the stacking fusion strategy achieved the highest accuracy, AUC, and sensitivity: 0.937, 0.947 [0.887-1.000], and 1.000 [0.999-1.000], respectively. The early fusion strategies of US + MRI and US + MG achieved the highest specificity of 0.954 [0.867-1.000]. For feature fusion, the ensemble and stacking approaches of the late fusion strategy achieved the highest accuracy of 0.968. In addition, stacking achieved the highest AUC and specificity: 0.997 [0.990-1.000] and 1.000 [0.999-1.000], respectively. The combined traditional radiomic and deep features of US + MG + MRI achieved the highest sensitivity of 1.000 [0.999-1.000] under the early fusion strategy.
Conclusion This study demonstrated the potential of integrating deep learning and radiomic features with multimodal images. As a single modality, MRI based on radiomic features achieved greater accuracy than US or MG. The US and MG models achieved higher accuracy with transfer learning than the single-mode or radiomic models. The combined traditional radiomic and deep features of US + MG + MRI achieved the highest sensitivity under the early fusion strategy, showed higher diagnostic performance, and provided more valuable information for differentiation between benign and malignant breast tumours.
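As a hedged illustration of the ensemble (late-fusion) strategy described above, per-modality malignancy probabilities can be combined by weighted soft voting; the modality names, probabilities, and equal weights below are hypothetical, and the stacking variant would instead feed these per-modality outputs to a trained meta-classifier.

```python
def soft_vote(probs_by_modality, weights=None):
    # Weighted average of per-patient probabilities across modalities.
    modalities = list(probs_by_modality)
    n_patients = len(probs_by_modality[modalities[0]])
    if weights is None:
        weights = {m: 1.0 for m in modalities}  # equal weighting by default
    total = sum(weights[m] for m in modalities)
    return [sum(weights[m] * probs_by_modality[m][i] for m in modalities) / total
            for i in range(n_patients)]

# Hypothetical per-modality malignancy probabilities for two lesions:
probs = {"US": [0.9, 0.2], "MG": [0.7, 0.4], "MRI": [0.8, 0.3]}
fused = soft_vote(probs)
labels = ["malignant" if p >= 0.5 else "benign" for p in fused]
print(fused, labels)  # fused probabilities ~[0.8, 0.3]
```

The design choice between soft voting and stacking mirrors the paper's comparison: voting needs no extra training data, while stacking can learn that one modality deserves more trust than another.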
Affiliation(s)
- Guoxiu Lu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
  - Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
- Ronghui Tian
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Wei Yang
  - Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, Shenyang, Liaoning, China
- Ruibo Liu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Dongmei Liu
  - Department of Ultrasound, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Zijie Xiang
  - Biomedical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Guoxu Zhang
  - Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
20. Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. [PMID: 38912238] [PMCID: PMC11188703] [DOI: 10.1055/s-0043-1775737]
Abstract
Background Although abundant literature is currently available on the use of deep learning for breast cancer detection in mammography, the quality of such literature is widely variable. Purpose To evaluate published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or classify images into cancer or noncancer. A modification of the Quality Assessment of Diagnostic Accuracy Studies tool (mQUADAS-2) was developed for this review and applied to the included studies. Results of the reported studies (area under the receiver operating characteristic curve [AUC], sensitivity, specificity) were recorded. Results A total of 12,123 records were screened, of which 107 met the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on the mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality, of which three trained their own model and one used a commercial network; ensemble models were used in two of these. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, rising to 0.945 (0.919-0.968) on an enriched subset. Higher values of AUC (0.955) and specificity (98.5%) were reached when combined radiologist and artificial intelligence (AI) readings were used than with either alone. None of the studies provided explainability beyond localization accuracy, and none studied the interaction between AI and radiologists in a real-world setting.
Conclusion While deep learning holds much promise in mammography interpretation, reproducible evaluation in clinical settings and explainable networks are the need of the hour.
Affiliation(s)
- Deeksha Bhalla
  - Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
  - Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra
  - Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee
  - Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora
  - Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
21. Hu B, Xu Y, Gong H, Tang L, Wang L, Li H. Nomogram Utilizing ABVS Radiomics and Clinical Factors for Predicting ≤3 Positive Axillary Lymph Nodes in HR+/HER2- Breast Cancer with 1-2 Positive Sentinel Nodes. Acad Radiol 2024; 31:2684-2694. [PMID: 38383259] [DOI: 10.1016/j.acra.2024.01.026]
Abstract
BACKGROUND In HR+/HER2- breast cancer patients with ≤3 positive axillary lymph nodes (ALNs), genomic tests can streamline chemotherapy decisions. Current studies, centered on tumor metrics, miss broader patient insights. Automated Breast Volume Scanning (ABVS) provides advanced 3D imaging, and its potential synergy with radiomics for ALN evaluation is untapped. OBJECTIVE This study sought to combine ABVS radiomics and clinical characteristics in a nomogram to predict ≤3 positive ALNs in HR+/HER2- breast cancer patients with 1-2 positive sentinel lymph nodes (SLNs), guiding clinicians in selecting candidates for genomic testing. METHODS We enrolled 511 early-stage breast cancer patients: 362 from Hospital A for training and 149 from Hospital B for validation. Primary features were identified using LASSO logistic regression, and a clinical-radiomics nomogram was developed to predict the likelihood of ≤3 positive ALNs in HR+/HER2- patients with 1-2 positive SLNs. We assessed the discriminative capability of the nomogram using the ROC curve, confirmed its calibration with a calibration curve, evaluated its fit with the Hosmer-Lemeshow (HL) test, and determined clinical net benefit with decision curve analysis (DCA). RESULTS In the training group, 81.2% of patients had ≤3 metastatic ALNs, as did 83.2% in the validation group. A clinical-radiomics nomogram was developed from clinical characteristics and rad-scores. Factors such as the number of positive SLNs (OR = 0.077), absence of negative SLNs (OR = 11.138), lymphovascular invasion (OR = 0.248), and rad-score (OR = 0.003) correlated significantly with ≤3 positive ALNs. The clinical-radiomics nomogram, with an AUC of 0.910 in training and 0.882 in validation, outperformed the rad-score-free clinical nomogram (AUCs of 0.796 and 0.782). Calibration curves and the HL test (P values 0.688 and 0.691) confirmed its robustness, and DCA showed that the clinical-radiomics nomogram provided superior net benefit in predicting ALN burden across specific threshold probabilities. CONCLUSION We developed a clinical-radiomics nomogram that integrates radiomics from ABVS images with clinical data to predict ≤3 positive ALNs in HR+/HER2- patients with 1-2 positive SLNs, aiding oncologists in identifying candidates for genomic tests and bypassing ALND. In the era of precision medicine, combining genomic tests with SLN biopsy refines both surgical and systemic treatment.
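A nomogram of this kind is a visual front-end for a logistic regression model whose coefficients are the natural logs of the reported odds ratios. A minimal sketch follows; the intercept, feature coding, and feature values are hypothetical and not taken from the paper.

```python
import math

def nomogram_probability(features, odds_ratios, intercept):
    # Logistic model: p = sigmoid(intercept + sum(ln(OR_i) * x_i)).
    logit = intercept + sum(math.log(odds_ratios[name]) * value
                            for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-logit))

# Odds ratios from the abstract; intercept and inputs are illustrative only.
odds_ratios = {"positive_slns": 0.077, "no_negative_sln": 11.138,
               "lvi": 0.248, "rad_score": 0.003}
p = nomogram_probability({"positive_slns": 1, "no_negative_sln": 0,
                          "lvi": 1, "rad_score": 0.2},
                         odds_ratios, intercept=3.0)
print(round(p, 3))  # predicted probability of <=3 positive ALNs for this patient
```

On a printed nomogram, each `ln(OR) * x` term is rendered as a points axis, and the summed points are read off against the probability scale, which is exactly the sigmoid above.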
Affiliation(s)
- Bin Hu
  - Department of Ultrasound, Minhang Hospital, Fudan University, 170 Xinsong Rd, Shanghai 201199, China
- Yanjun Xu
  - Department of Ultrasonography, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China; Shanghai Institute of Ultrasound in Medicine, Shanghai, China
- Huiling Gong
  - Department of Ultrasound, Minhang Hospital, Fudan University, 170 Xinsong Rd, Shanghai 201199, China
- Lang Tang
  - Department of Ultrasound, Minhang Hospital, Fudan University, 170 Xinsong Rd, Shanghai 201199, China
- Lihong Wang
  - Department of Ultrasound, Minhang Hospital, Fudan University, 170 Xinsong Rd, Shanghai 201199, China
- Hongchang Li
  - Department of General Surgery, Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, China
22. Mitrea D, Brehar R, Itu R, Nedevschi S, Socaciu M, Badea R. Pancreatic Tumor Recognition from CT Images through Advanced Deep Learning Techniques. 2024 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR) 2024:1-6. [DOI: 10.1109/aqtr61889.2024.10554139]
Affiliation(s)
- Delia Mitrea
  - Technical University of Cluj-Napoca, Faculty of Automation and Computer Science, Department of Computer Science, Cluj-Napoca, Romania
- Raluca Brehar
  - Technical University of Cluj-Napoca, Faculty of Automation and Computer Science, Department of Computer Science, Cluj-Napoca, Romania
- Razvan Itu
  - Technical University of Cluj-Napoca, Faculty of Automation and Computer Science, Department of Computer Science, Cluj-Napoca, Romania
- Sergiu Nedevschi
  - Technical University of Cluj-Napoca, Faculty of Automation and Computer Science, Department of Computer Science, Cluj-Napoca, Romania
- Mihai Socaciu
  - I. Hatieganu University of Medicine and Pharmacy, Department of Medical Imaging, Cluj-Napoca, Romania
- Radu Badea
  - Octavian Fodor Regional Institute of Gastroenterology and Hepatology, Cluj-Napoca, Romania
23. Yang Y, Zhong Y, Li J, Feng J, Gong C, Yu Y, Hu Y, Gu R, Wang H, Liu F, Mei J, Jiang X, Wang J, Yao Q, Wu W, Liu Q, Yao H. Deep learning combining mammography and ultrasound images to predict the malignancy of BI-RADS US 4A lesions in women with dense breasts: a diagnostic study. Int J Surg 2024; 110:2604-2613. [PMID: 38348891] [DOI: 10.1097/js9.0000000000001186]
Abstract
OBJECTIVES The authors aimed to assess the performance of a deep learning (DL) model, based on a combination of ultrasound (US) and mammography (MG) images, for predicting malignancy in breast lesions categorized as Breast Imaging Reporting and Data System (BI-RADS) US 4A in diagnostic patients with dense breasts. METHODS A total of 992 patients were randomly allocated to the training and test cohorts at a ratio of 4:1. Another 218 patients were enrolled to form a prospective validation cohort. The DL model was developed by incorporating both US and MG images. The predictive performance of the combined DL model for malignancy was evaluated by sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The combined DL model was then compared with a clinical nomogram model and with DL models trained on US images only or MG images only. RESULTS The combined DL model showed satisfactory diagnostic performance for predicting malignancy in breast lesions, with an AUC of 0.940 (95% CI: 0.874-1.000) in the test cohort and an AUC of 0.906 (95% CI: 0.817-0.995) in the validation cohort, significantly higher than the clinical nomogram model and the DL models for US or MG alone (P < 0.05). CONCLUSIONS The study developed an objective DL model combining both US and MG imaging features, which proved more accurate for predicting malignancy in BI-RADS US 4A breast lesions of patients with dense breasts. This model may be used to more accurately guide clinicians' decisions about whether to perform biopsies in breast cancer diagnosis.
Affiliation(s)
- Ying Zhong
  - Department of Medical Oncology, Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou
- Junwei Li
  - Department of Medical Oncology, Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou
- Jiahao Feng
  - Cellsvision (Guangzhou) Medical Technology Inc., People's Republic of China
- Yunfang Yu
  - Department of Medical Oncology, Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou
- Jin Wang
  - Cellsvision (Guangzhou) Medical Technology Inc., People's Republic of China
- Qinyue Yao
  - Cellsvision (Guangzhou) Medical Technology Inc., People's Republic of China
- Herui Yao
  - Breast Tumor Center
  - Department of Medical Oncology, Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou
24. Zarghami A, Mirmalek SA. Differentiating Primary and Recurrent Lesions in Patients with a History of Breast Cancer: A Comprehensive Review. Galen Med J 2024; 13:e3340. [PMID: 39224544] [PMCID: PMC11368482] [DOI: 10.31661/gmj.v13i.3340]
Abstract
Breast cancer (BC) recurrence remains a concerning issue, requiring accurate identification and differentiation from primary lesions for optimal patient management. This comprehensive review summarizes and evaluates the current evidence on methods to distinguish primary breast tumors from recurrent lesions in patients with a history of BC. We provide an overview of the different imaging techniques, including mammography, ultrasound, magnetic resonance imaging, and positron emission tomography, highlighting their diagnostic accuracy, limitations, and potential integration. The role of various biopsy modalities and molecular markers is also explored, and the potential of liquid biopsy, circulating tumor cells, and circulating tumor DNA in differentiating between primary and recurrent BC is emphasized. Finally, the review addresses emerging diagnostic modalities, such as radiomic analysis and artificial intelligence, which show promise in enhancing diagnostic accuracy. Through comprehensive analysis of the available literature, this study provides an up-to-date understanding of the current state of knowledge, challenges, and future directions in accurately distinguishing between primary and recurrent breast lesions in patients with a history of BC.
Affiliation(s)
- Anita Zarghami
  - Department of Surgery, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Seyed Abbas Mirmalek
  - Department of Surgery, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
25. Kapsner LA, Folle L, Hadler D, Eberle J, Balbach EL, Liebert A, Ganslandt T, Wenkel E, Ohlmeyer S, Uder M, Bickelhaupt S. Lesion-conditioning of synthetic MRI-derived subtraction-MIPs of the breast using a latent diffusion model. Sci Rep 2024; 14:6391. [PMID: 38493266] [PMCID: PMC10944528] [DOI: 10.1038/s41598-024-56853-1]
Abstract
The purpose of this feasibility study was to investigate whether latent diffusion models (LDMs) are capable of generating contrast-enhanced (CE) MRI-derived subtraction maximum intensity projections (MIPs) of the breast that are conditioned by lesions. We trained an LDM with n = 2832 CE-MIPs from breast MRI examinations of n = 1966 patients (median age: 50 years) acquired between 2015 and 2020. The LDM was subsequently conditioned with n = 756 segmented lesions from n = 407 examinations, indicating their location and BI-RADS scores. By applying the LDM, synthetic images were generated from the segmentations of an independent validation dataset. Lesions, anatomical correctness, and realistic impression of synthetic and real MIP images were further assessed in a multi-rater study with five independent raters, each evaluating n = 204 MIPs (50% real, 50% synthetic). The detection of synthetic MIPs by the raters was akin to random guessing, with an AUC of 0.58. Interrater reliability of the lesion assessment was high for both real (Kendall's W = 0.77) and synthetic images (W = 0.85). A higher AUC was observed for the detection of suspicious lesions (BI-RADS ≥ 4) in synthetic MIPs (0.88 vs. 0.77; p = 0.051). Our results show that LDMs can generate lesion-conditioned MRI-derived CE subtraction MIPs of the breast; however, they also indicate that the LDM tended to generate rather typical or 'textbook' representations of lesions.
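The reported AUC of 0.58 for detecting synthetic images is close to the 0.5 of pure guessing. As a hedged sketch of how such an AUC can be computed from rater confidence scores (the Mann-Whitney formulation; the scores below are made up, not the study's data):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    # AUC = P(score_pos > score_neg) + 0.5 * P(tie), over all pos/neg pairs.
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Rater confidence that an image is synthetic (positives = truly synthetic):
synthetic = [0.7, 0.6, 0.5, 0.4]
real      = [0.6, 0.5, 0.4, 0.3]
print(auc_mann_whitney(synthetic, real))  # 0.71875; chance level would be 0.5
```

An AUC near 0.5 on this statistic means the score distributions for real and synthetic images almost completely overlap, which is the study's evidence that raters could not tell them apart.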
Affiliation(s)
- Lorenz A Kapsner
  - Institute of Radiology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Maximiliansplatz 3, 91054, Erlangen, Germany
  - Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Wetterkreuz 15, 91058, Erlangen-Tennenlohe, Germany
- Lukas Folle
  - Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Martensstraße 3, 91058, Erlangen, Germany
- Dominique Hadler
  - Institute of Radiology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Maximiliansplatz 3, 91054, Erlangen, Germany
- Jessica Eberle
  - Institute of Radiology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Maximiliansplatz 3, 91054, Erlangen, Germany
- Eva L Balbach
  - Institute of Radiology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Maximiliansplatz 3, 91054, Erlangen, Germany
- Andrzej Liebert
  - Institute of Radiology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Maximiliansplatz 3, 91054, Erlangen, Germany
- Thomas Ganslandt
  - Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Wetterkreuz 15, 91058, Erlangen-Tennenlohe, Germany
- Evelyn Wenkel
  - Radiologie München, Burgstraße 7, 80331, Munich, Germany
- Sabine Ohlmeyer
  - Institute of Radiology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Maximiliansplatz 3, 91054, Erlangen, Germany
- Michael Uder
  - Institute of Radiology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Maximiliansplatz 3, 91054, Erlangen, Germany
- Sebastian Bickelhaupt
  - Institute of Radiology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Maximiliansplatz 3, 91054, Erlangen, Germany
26. Liang Y, Tang W, Wang T, Ng WWY, Chen S, Jiang K, Wei X, Jiang X, Guo Y. HRadNet: A Hierarchical Radiomics-Based Network for Multicenter Breast Cancer Molecular Subtypes Prediction. IEEE Trans Med Imaging 2024; 43:1225-1236. [PMID: 37938946] [DOI: 10.1109/tmi.2023.3331301]
Abstract
Breast cancer is a heterogeneous disease whose molecular subtypes are closely related to treatment and prognosis. The goal of this work is therefore to differentiate between luminal and non-luminal subtypes of breast cancer. A hierarchical radiomics network (HRadNet) is proposed for predicting breast cancer molecular subtypes from dynamic contrast-enhanced magnetic resonance imaging. HRadNet fuses multilayer features with image metadata to combine the advantages of conventional radiomics methods and general convolutional neural networks. A two-stage training mechanism is adopted to improve the generalization capability of the network for multicenter breast cancer data. An ablation study shows the effectiveness of each component of HRadNet, and the influence of features from different layers and of metadata fusion is also analyzed; it reveals that selecting certain layers of features for a specific domain can yield further performance improvements. Experimental results on three datasets from different devices demonstrate the effectiveness of the proposed network. HRadNet also performs well when transferred to other domains without fine-tuning.
27. Bhattacharya K, Rastogi S, Mahajan A. Post-treatment imaging of gliomas: challenging the existing dogmas. Clin Radiol 2024; 79:e376-e392. [PMID: 38123395] [DOI: 10.1016/j.crad.2023.11.017]
Abstract
Gliomas are the commonest malignant central nervous system tumours in adults, and imaging is the cornerstone of diagnosis, treatment, and post-treatment follow-up of these patients. With ever-evolving treatment strategies, post-treatment imaging and interpretation in glioma remain challenging, all the more so with the advent of anti-angiogenic drugs and immunotherapy, which can significantly alter the appearance in this setting, making routine imaging findings such as contrast enhancement, oedema, and mass effect difficult to interpret. This review details the various methods of glioma management, including upcoming novel therapies and their impact on imaging findings, with a comprehensive description of findings on conventional and advanced imaging techniques. It also systematically appraises existing and emerging imaging techniques in these settings and their clinical application, including various response assessment guidelines and artificial intelligence-based response assessment.
Affiliation(s)
- K Bhattacharya
  - Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- S Rastogi
  - Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India
- A Mahajan
  - Department of Imaging, The Clatterbridge Cancer Centre NHS Foundation Trust, Pembroke Place, Liverpool L7 8YA, UK; University of Liverpool, Liverpool L69 3BX, UK
28. Irannejad M, Abedi I, Lonbani VD, Hassanvand M. Deep-neural network approaches for predicting 3D dose distribution in intensity-modulated radiotherapy of the brain tumors. J Appl Clin Med Phys 2024; 25:e14197. [PMID: 37933891] [PMCID: PMC10962483] [DOI: 10.1002/acm2.14197]
Abstract
PURPOSE The aim of this study is to reduce treatment planning time by predicting the 3D dose distribution of intensity-modulated radiotherapy (IMRT) using deep learning for brain cancer patients. For this purpose, two approaches to dose prediction are employed and their results compared: first, only the planning target volume (PTV) as input to the U-net model, and second, the PTV together with organs at risk (OARs). METHODS AND MATERIALS The data of 99 patients with glioma tumors referred for IMRT treatment were used; the images of 90 patients served as the training dataset and the rest as the test set. All patients were manually planned and treated with six-field IMRT at a photon energy of 6 MV. Treatment plans were generated with the Collapsed Cone Convolution algorithm to deliver 60 Gy in 30 fractions. RESULTS The accuracy and similarity of the predicted doses compared with the clinical dose distributions on test patients, measured by MSE, Dice metric, and SSIM, averaged (0.05, 0.851, 0.83) for the PTV-only method and (0.056, 0.842, 0.82) for the PTV-OARs method. Dose prediction was also extremely fast. CONCLUSION The near-identical results of the two methods show that adding OARs to the PTV provides no new knowledge to the network: once the PTV and its location in the imaging slices are defined, the dose distribution becomes predictable. The PTV-only method, by eliminating the process of introducing OARs, can therefore reduce the overall IMRT planning time for patients with glioma tumors.
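The agreement metrics used above can be sketched in a few lines. This is a hedged toy on flattened dose arrays; the paper's exact Dice definition (for instance, which isodose threshold it applies) is an assumption here.

```python
def mse(pred, ref):
    # Mean squared error between two flattened dose distributions.
    return sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)

def dice(pred, ref, threshold):
    # Dice overlap of the regions receiving at least `threshold` dose.
    a = [p >= threshold for p in pred]
    b = [r >= threshold for r in ref]
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

# Toy voxel doses (Gy): a predicted plan vs. the clinical reference.
clinical  = [0.0, 60.0, 60.0, 0.0]
predicted = [0.0, 60.0, 30.0, 0.0]
print(mse(predicted, clinical))         # 225.0
print(dice(predicted, clinical, 50.0))  # 2*1/(1+2) = 0.666...
```

In practice these would run over full 3D volumes (and SSIM would be computed with a windowed image-quality library), but the per-voxel logic is the same.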
Affiliation(s)
- Maziar Irannejad
  - Department of Electrical Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
- Iraj Abedi
  - Medical Physics Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
29. Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114] [PMCID: PMC10894909] [DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in sensitivity and specificity. With rapid advancements in deep learning techniques, it is becoming possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification, highlighting the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, the challenges of implementing this technology in clinical settings must be addressed. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis with high sensitivity and specificity.
Affiliation(s)
- Lulu Wang
  - Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
30. Gao Y, Wang W, Yang Y, Xu Z, Lin Y, Lang T, Lei S, Xiao Y, Yang W, Huang W, Li Y. An integrated model incorporating deep learning, hand-crafted radiomics and clinical and US features to diagnose central lymph node metastasis in patients with papillary thyroid cancer. BMC Cancer 2024; 24:69. [PMID: 38216936] [PMCID: PMC10787418] [DOI: 10.1186/s12885-024-11838-1]
Abstract
OBJECTIVE To evaluate the value of an integrated model incorporating deep learning (DL), hand-crafted radiomics, and clinical and US imaging features for diagnosing central lymph node metastasis (CLNM) in patients with papillary thyroid cancer (PTC). METHODS This retrospective study reviewed 613 patients with clinicopathologically confirmed PTC from two institutions. The DL model and the hand-crafted radiomics model were developed using primary lesion images and then integrated with clinical and US features selected by multivariate analysis to generate an integrated model. Performance was compared with junior and senior radiologists on the independent test set. SHapley Additive exPlanations (SHAP) plots and Gradient-weighted Class Activation Mapping (Grad-CAM) were used for visual explanation of the model. RESULTS The integrated model yielded the best performance, with an AUC of 0.841, surpassing that of the hand-crafted radiomics model (0.706, p < 0.001) and the DL model (0.819, p = 0.26). Compared to junior and senior radiologists, the integrated model reduced the missed-CLNM rate from 57.89% and 44.74%, respectively, to 27.63%, and decreased the rate of unnecessary central lymph node dissection (CLND) from 29.87% and 27.27% to 18.18%. SHAP analysis revealed that the DL features played a primary role in the diagnosis of CLNM, while clinical and US features (such as extrathyroidal extension, tumour size, age, gender, and multifocality) provided additional support. Grad-CAM indicated that the model exhibited a stronger focus on the thyroid capsule in patients with CLNM. CONCLUSION The integrated model can effectively decrease the incidence of missed CLNM and unnecessary CLND, and its application can help improve the acceptance of AI-assisted US diagnosis among radiologists.
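The "integrated model" pattern described in this abstract — DL features, hand-crafted radiomics, and clinical/US variables feeding one classifier — amounts to feature-level fusion before classification. A minimal, hypothetical sketch of that step (not code from the cited study; all names invented):

```python
import numpy as np

def fuse_features(dl_feats, radiomics_feats, clinical_feats):
    """Feature-level fusion: z-score each modality block, then concatenate,
    so that no single block dominates the downstream classifier by scale."""
    def zscore(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.hstack([zscore(dl_feats), zscore(radiomics_feats), zscore(clinical_feats)])
```

The fused matrix would then be passed to any ordinary classifier; the per-block standardisation is the design choice that keeps thousand-dimensional DL features from drowning out a handful of clinical variables.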
Affiliation(s)
- Yang Gao, Weizhen Wang, Ziting Xu, Yue Lin, Ting Lang, Yingjia Li
- Department of Ultrasound, Nanfang Hospital, Southern Medical University, 1838 Guangzhou Avenue North, Baiyun District, Guangzhou, Guangdong Province, P. R. China
- Yuan Yang, Wei Yang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, 1838 Guangzhou Avenue North, Baiyun District, Guangzhou, Guangdong Province, P. R. China
- Shangtong Lei
- Department of General Surgery, Nanfang Hospital, Southern Medical University, Guangzhou, P. R. China
- Yisheng Xiao, Weijun Huang
- Department of Ultrasound, the First People's Hospital of Foshan, Lingnan Avenue North No.81, Foshan, Guangdong Province, P. R. China

31
Zhang L, Xu R, Zhao J. Learning technology for detection and grading of cancer tissue using tumour ultrasound images. Journal of X-Ray Science and Technology 2024; 32:157-171. PMID: 37424493; DOI: 10.3233/xst-230085.
Abstract
BACKGROUND Early diagnosis of breast cancer is crucial for effective therapy. Many medical imaging modalities, including MRI, CT, and ultrasound, are used to diagnose cancer. OBJECTIVE This study aims to investigate the feasibility of applying transfer learning techniques to train convolutional neural networks (CNNs) to automatically diagnose breast cancer from ultrasound images. METHODS Transfer learning techniques were used to train CNNs to recognise breast cancer in ultrasound images. The models were trained and tested on an ultrasound image dataset, and each model's training and validation accuracies were assessed. RESULTS MobileNet achieved the highest accuracy during training and DenseNet121 during validation, showing that transfer learning algorithms can detect breast cancer in ultrasound images. CONCLUSIONS Based on these results, transfer learning models may be useful for automated breast cancer diagnosis in ultrasound images. However, only a trained medical professional should diagnose cancer; computational approaches should only be used to support rapid decision-making.
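The transfer-learning recipe this abstract alludes to — reuse a pretrained backbone (e.g. MobileNet or DenseNet121) as a frozen feature extractor and train only a small classification head — can be caricatured without any deep-learning framework. In this toy sketch a fixed random projection stands in for the pretrained backbone; everything here is illustrative, not the cited paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
W_BACKBONE = rng.normal(size=(16, 8))   # stand-in for frozen pretrained weights

def extract_features(x):
    """'Frozen backbone': a fixed nonlinear projection that is never updated."""
    return np.maximum(np.asarray(x, dtype=float) @ W_BACKBONE, 0.0)

def train_head(x, y, lr=0.01, steps=300):
    """Train only a logistic-regression head on the frozen features --
    the essence of the transfer-learning setup described above."""
    f, y = extract_features(x), np.asarray(y, dtype=float)
    w, b = np.zeros(f.shape[1]), 0.0
    losses = []
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(f @ w + b)))            # sigmoid output
        losses.append(-np.mean(y * np.log(p + 1e-12)
                               + (1 - y) * np.log(1 - p + 1e-12)))
        grad = p - y                                      # dLoss/dlogit
        w -= lr * f.T @ grad / len(y)                     # update head only
        b -= lr * grad.mean()
    return w, b, losses
```

In a real pipeline the backbone would be a pretrained CNN and the head a small dense layer, but the division of labour — frozen features, trainable head — is the same.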
Affiliation(s)
- Liyan Zhang
- Department of Ultrasound, Sunshine Union Hospital, Weifang, China
- Ruiyan Xu
- College of Health, Binzhou Polytechnical College, Binzhou, China
- Jingde Zhao
- Department of Imaging, Qingdao Hospital of Traditional Chinese Medicine (Qingdao HaiCi Hospital), Qingdao, China

32
Majumder S, Katz S, Kontos D, Roshkovan L. State of the art: radiomics and radiomics-related artificial intelligence on the road to clinical translation. BJR Open 2024; 6:tzad004. PMID: 38352179; PMCID: PMC10860524; DOI: 10.1093/bjro/tzad004.
Abstract
Radiomics and artificial intelligence carry the promise of increased precision in oncologic imaging assessments due to their ability to harness thousands of occult digital imaging features embedded in conventional medical imaging data. While powerful, these technologies suffer from a number of sources of variability that currently impede clinical translation. To overcome this impediment, these sources of variability must be controlled through harmonization of imaging data acquisition across institutions, construction of standardized imaging protocols that maximize the acquisition of these features, harmonization of post-processing techniques, and big-data resources to properly power studies for hypothesis testing. Accomplishing this will require multidisciplinary and multi-institutional collaboration.
Affiliation(s)
- Shweta Majumder, Sharyn Katz, Despina Kontos, Leonid Roshkovan
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States

33
Chen Z, Yu Y, Liu S, Du W, Hu L, Wang C, Li J, Liu J, Zhang W, Peng X. A deep learning and radiomics fusion model based on contrast-enhanced computed tomography improves preoperative identification of cervical lymph node metastasis of oral squamous cell carcinoma. Clin Oral Investig 2023; 28:39. PMID: 38151672; DOI: 10.1007/s00784-023-05423-2.
Abstract
OBJECTIVES In this study, we constructed and validated models based on deep learning and radiomics to facilitate preoperative diagnosis of cervical lymph node metastasis (LNM) using contrast-enhanced computed tomography (CECT). MATERIALS AND METHODS CECT scans of 100 patients with oral squamous cell carcinoma (OSCC) (217 metastatic and 1973 non-metastatic cervical lymph nodes; development set, 76 patients; internally independent test set, 24 patients) who received treatment at the Peking University School and Hospital of Stomatology between 2012 and 2016 were retrospectively collected. Clinical diagnoses and pathological findings were used to establish the gold standard for metastatic cervical LNs. A reader study with two clinicians was also performed to evaluate lymph node status in the test set. The performance of the proposed models and the clinicians was evaluated and compared using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). RESULTS A fusion model combining deep learning with radiomics showed the best performance (ACC, 89.2%; SEN, 92.0%; SPE, 88.9%; AUC, 0.950 [95% confidence interval: 0.908-0.993, p < 0.001]) in the test set. In comparison with the clinicians, the fusion model showed higher sensitivity (92.0% vs. 72.0% and 60.0%) but lower specificity (88.9% vs. 97.5% and 98.8%). CONCLUSION A fusion model combining radiomics and deep learning approaches outperformed single-technique models and showed great potential to accurately predict cervical LNM in patients with OSCC. CLINICAL RELEVANCE The fusion model can complement clinicians' preoperative identification of LNM in OSCC.
Affiliation(s)
- Zhen Chen, Yao Yu, Shuo Liu, Wen Du, Leihao Hu, Congwei Wang, Jiaqi Li, Wenbo Zhang, Xin Peng
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Jianbo Liu
- Huafang Hanying Medical Technology Co., Ltd, No.19, West Bridge Road, Miyun District, Beijing, 101520, People's Republic of China

34
Yang H, Xu Y, Dong M, Zhang Y, Gong J, Huang D, He J, Wei L, Huang S, Zhao L. Automated Prediction of Neoadjuvant Chemoradiotherapy Response in Locally Advanced Cervical Cancer Using Hybrid Model-Based MRI Radiomics. Diagnostics (Basel) 2023; 14:5. PMID: 38201314; PMCID: PMC10795804; DOI: 10.3390/diagnostics14010005.
Abstract
BACKGROUND This study aimed to develop a model that automatically predicts the neoadjuvant chemoradiotherapy (nCRT) response of patients with locally advanced cervical cancer (LACC) based on T2-weighted MR images and clinical parameters. METHODS A total of 138 patients were enrolled, and pre-treatment T2-weighted MR images and clinical information were collected. Clinical information included age, stage, pathological type, squamous cell carcinoma (SCC) level, and lymph node status. A hybrid model extracted domain-specific features from a computational radiomics system, abstract features from a deep learning network, and the clinical parameters. It then employed an ensemble learning classifier, weighting a logistic regression (LR) classifier, support vector machine (SVM) classifier, K-Nearest Neighbor (KNN) classifier, and Bayesian classifier, to predict pathologic complete response (pCR). The area under the receiver operating characteristic curve (AUC), accuracy (ACC), true positive rate (TPR), true negative rate (TNR), and precision were used as evaluation metrics. RESULTS Among the 138 LACC patients, 74 were in the pCR group and 64 in the non-pCR group. There was no significant difference between the two cohorts in tumor diameter (p = 0.787), lymph node status (p = 0.068), or stage before radiotherapy (p = 0.846). The 109-dimensional domain-specific features and 1472-dimensional abstract features extracted from the MRI images were used to form the hybrid model. Its average AUC, ACC, TPR, TNR, and precision were about 0.80, 0.71, 0.75, 0.66, and 0.71, while the AUCs obtained using clinical parameters, domain-specific features, and abstract features alone were 0.61, 0.67, and 0.76, respectively. The AUC of the model without the ensemble learning classifier was 0.76. CONCLUSIONS The proposed hybrid model can predict the radiotherapy response of patients with LACC, which might help radiation oncologists create personalized treatment plans.
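The ensemble step described in this abstract — LR, SVM, KNN, and Bayesian members combined by a weighted classifier — reduces to a weighted average of the members' predicted class probabilities. A minimal generic sketch (weights and probabilities here are invented for illustration, not taken from the study):

```python
import numpy as np

def weighted_soft_vote(member_probs, weights):
    """Weighted soft voting over ensemble members.

    member_probs : list of (n_samples, n_classes) probability arrays,
                   one per base classifier (e.g. LR, SVM, KNN, Bayes).
    weights      : one non-negative weight per member.
    Returns (predicted labels, averaged probabilities).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                     # normalise weights
    stacked = np.stack([np.asarray(p, dtype=float) for p in member_probs])
    avg = np.tensordot(w, stacked, axes=1)              # (n_samples, n_classes)
    return avg.argmax(axis=1), avg
```

For example, two members voting [[0.8, 0.2]] and [[0.2, 0.8]] with weights [3, 1] average to [[0.65, 0.35]] and predict class 0.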
Affiliation(s)
- Hua Yang
- Department of Radiation Oncology, Xijing Hospital of Air Force Medical University, Xi’an 710032, China
- Department of Radiation Oncology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Yinan Xu
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi’an 710071, China
- Mohan Dong
- Department of Medical Education, Xijing Hospital of Air Force Medical University, Xi’an 710032, China
- Ying Zhang, Jie Gong, Lichun Wei, Shigao Huang, Lina Zhao
- Department of Radiation Oncology, Xijing Hospital of Air Force Medical University, Xi’an 710032, China
- Dong Huang
- Department of Military Biomedical Engineering, Air Force Medical University, Xi’an 710012, China
- Junhua He
- Department of Radiation Oncology, 986 Hospital of Air Force Medical University, Xi’an 710054, China

35
Wang Q, Jia X, Luo T, Yu J, Xia S. Deep learning algorithm using bispectrum analysis energy feature maps based on ultrasound radiofrequency signals to detect breast cancer. Front Oncol 2023; 13:1272427. PMID: 38179175; PMCID: PMC10766103; DOI: 10.3389/fonc.2023.1272427.
Abstract
Background Ultrasonography is an important imaging method for clinical breast cancer screening. As the original echo signals of ultrasonography, ultrasound radiofrequency (RF) signals provide abundant macroscopic and microscopic tissue information and have important development and utilization value in breast cancer detection. Methods In this study, we proposed a deep learning method based on bispectrum analysis feature maps to process RF signals and detect breast cancer. Bispectrum analysis energy feature maps with frequency subdivision were first proposed and applied to breast cancer detection in this study. Our deep learning network was based on a weight-sharing framework for the input of multiple feature maps. A feature-map attention module was designed to adaptively learn both the feature maps and the features conducive to classification, and a similarity constraint factor was designed to learn the similarity and difference between feature maps via cosine distance. Results The areas under the receiver operating characteristic curves of our proposed method for benign versus malignant breast tumor classification were 0.913, 0.900, and 0.885 in the validation set and two independent test sets, respectively. The model combining four ultrasound bispectrum analysis energy feature maps outperformed both the model using an ultrasound grayscale image and the model using a single bispectrum analysis energy feature map. Conclusion The combination of deep learning and the proposed ultrasound bispectrum analysis energy feature maps effectively detects breast cancer and is an efficient way to extract and utilize information from ultrasound RF signals.
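For readers unfamiliar with the signal-processing core here: the bispectrum of a signal segment is B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)], where X is the segment's Fourier transform, and magnitude maps of B serve as "energy feature maps". A bare-bones direct estimator (a generic sketch, not the authors' implementation):

```python
import numpy as np

def bispectrum(segments):
    """Direct bispectrum estimate averaged over signal segments.

    segments : iterable of equal-length 1D arrays (e.g. RF signal windows).
    Returns an (n, n) complex array B with
    B[f1, f2] = mean over segments of X[f1] * X[f2] * conj(X[(f1 + f2) % n]).
    """
    segments = [np.asarray(s, dtype=float) for s in segments]
    n = len(segments[0])
    idx = np.add.outer(np.arange(n), np.arange(n)) % n   # f1 + f2 (mod n)
    acc = np.zeros((n, n), dtype=complex)
    for seg in segments:
        X = np.fft.fft(seg)
        acc += np.outer(X, X) * np.conj(X[idx])
    return acc / len(segments)
```

Because B(f1, f2) = B(f2, f1) by construction, the estimate is symmetric, and |B| would be the quantity fed to a downstream network as a feature map.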
Affiliation(s)
- Qingmin Wang, Jinhua Yu
- School of Information Science and Engineering, Fudan University, Shanghai, China
- Xiaohong Jia, Ting Luo, Shujun Xia
- Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China

36
Jones MA, Sadeghipour N, Chen X, Islam W, Zheng B. A multi-stage fusion framework to classify breast lesions using deep learning and radiomics features computed from four-view mammograms. Med Phys 2023; 50:7670-7683. PMID: 37083190; PMCID: PMC10589387; DOI: 10.1002/mp.16419.
Abstract
BACKGROUND Developing computer-aided diagnosis (CAD) schemes of mammograms to classify between malignant and benign breast lesions has attracted a lot of research attention over the last several decades. However, unlike radiologists, who make diagnostic decisions based on the fusion of image features extracted from multi-view mammograms, most CAD schemes are single-view-based, which limits CAD performance and clinical utility. PURPOSE This study aims to develop and test a novel CAD framework that optimally fuses information extracted from ipsilateral views of bilateral mammograms using both deep transfer learning (DTL) and radiomics feature extraction methods. METHODS An image dataset containing 353 benign and 611 malignant cases was assembled. Each case contains four images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breast. First, we extract four matching regions of interest (ROIs) from images that surround the centers of two suspicious lesion regions seen in the CC and MLO views, as well as matching ROIs in the contralateral breasts. Next, hand-crafted radiomics (HCR) features and VGG16 model-generated automated features are extracted from each ROI, resulting in eight feature vectors. Then, after reducing feature dimensionality and quantifying the bilateral and ipsilateral asymmetry of the four ROIs to yield four new feature vectors, we test four fusion methods to build three support vector machine (SVM) classifiers by optimally fusing the asymmetrical image features extracted from the four view images. RESULTS Using a 10-fold cross-validation method, an SVM classifier trained using the optimal fusion of four view images yields the highest classification performance (AUC = 0.876 ± 0.031), significantly outperforming SVM classifiers trained using one projection view alone (AUC = 0.817 ± 0.026 and 0.792 ± 0.026 for the CC and MLO views of bilateral mammograms, respectively; p < 0.001).
CONCLUSIONS The study demonstrates that the shift from single-view to four-view CAD and the inclusion of both DTL and radiomics features significantly increases CAD performance in distinguishing between malignant and benign breast lesions.
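The 10-fold cross-validation protocol used to evaluate the fused classifiers is easy to get subtly wrong (overlapping train/test indices, samples appearing in several test folds). A generic sketch of the index bookkeeping, not tied to the paper's code:

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.
    Every sample lands in exactly one test fold, and train/test never overlap."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, test
```

Per-fold metrics (e.g. AUC) would then be averaged and reported with their standard deviation, matching the "AUC = mean ± sd" style quoted above.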
Affiliation(s)
- Meredith A. Jones
- School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Negar Sadeghipour, Xuxin Chen, Warid Islam, Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA

37
Sufyan M, Shokat Z, Ashfaq UA. Artificial intelligence in cancer diagnosis and therapy: Current status and future perspective. Comput Biol Med 2023; 165:107356. PMID: 37688994; DOI: 10.1016/j.compbiomed.2023.107356.
Abstract
Artificial intelligence (AI) in healthcare plays a pivotal role in combating many fatal diseases, such as skin, breast, and lung cancer. AI is an advanced form of technology that uses mathematics-based algorithmic principles, similar to those of the human mind, to reason about complex challenges in healthcare. Cancer is a lethal disease with many etiologies, including numerous genetic and epigenetic mutations, and as a multifactorial disease it is difficult to diagnose at an early stage. Genetic variations and other leading factors could therefore be identified in due time through AI and machine learning (ML). AI is a synergetic approach for mining drug targets, their mechanisms of action, and drug-organism interactions from massive raw data. This approach still faces several challenges in data mining, but computational algorithms developed by different scientific communities for multi-target drug discovery are highly helpful in overcoming these bottlenecks. AI and ML could become the epicenter of the medical world for the diagnosis, treatment, and evaluation of almost any disease in the near future. In this comprehensive review, we explore the immense potential of AI and ML when integrated with the biological sciences, specifically in the context of cancer research. Our goal is to illuminate the many ways in which AI and ML are being applied to the study of cancer, from diagnosis to individualized treatment. By examining the intersection of AI and cancer control, we highlight the prospective role of AI in supporting oncologists and other medical professionals in making informed decisions and improving patient outcomes. Although AI-based medical therapies show great potential, many challenges must be overcome before they can be implemented in clinical practice. We critically assess the current hurdles and provide insights into the future directions of AI-driven approaches, aiming to pave the way for enhanced cancer interventions and improved patient care.
Affiliation(s)
- Muhammad Sufyan, Zeeshan Shokat, Usman Ali Ashfaq
- Department of Bioinformatics and Biotechnology, Government College University Faisalabad, Pakistan

38
Furtney I, Bradley R, Kabuka MR. Patient Graph Deep Learning to Predict Breast Cancer Molecular Subtype. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:3117-3127. PMID: 37379184; PMCID: PMC10623656; DOI: 10.1109/tcbb.2023.3290394.
Abstract
Breast cancer is a heterogeneous disease consisting of a diverse set of genomic mutations and clinical characteristics. The molecular subtypes of breast cancer are closely tied to prognosis and therapeutic treatment options. We investigate using deep graph learning on a collection of patient factors from multiple diagnostic disciplines to better represent breast cancer patient information and predict molecular subtype. Our method models breast cancer patient data into a multi-relational directed graph with extracted feature embeddings to directly represent patient information and diagnostic test results. We develop a radiographic image feature extraction pipeline to produce vector representation of breast cancer tumors in DCE-MRI and an autoencoder-based genomic variant embedding method to map variant assay results to a low-dimensional latent space. We leverage related-domain transfer learning to train and evaluate a Relational Graph Convolutional Network to predict the probabilities of molecular subtypes for individual breast cancer patient graphs. Our work found that utilizing information from multiple multimodal diagnostic disciplines improved the model's prediction results and produced more distinct learned feature representations for breast cancer patients. This research demonstrates the capabilities of graph neural networks and deep learning feature representation to perform multimodal data fusion and representation in the breast cancer domain.
39
Chang R, Qi S, Wu Y, Yue Y, Zhang X, Guan Y, Qian W. Deep radiomic model based on the sphere-shell partition for predicting treatment response to chemotherapy in lung cancer. Transl Oncol 2023; 35:101719. PMID: 37320871; PMCID: PMC10277572; DOI: 10.1016/j.tranon.2023.101719.
Abstract
BACKGROUND The prognosis of chemotherapy is important in clinical decision-making for non-small cell lung cancer (NSCLC) patients. OBJECTIVES To develop a model for predicting treatment response to chemotherapy in NSCLC patients from pre-chemotherapy CT images. MATERIALS AND METHODS This retrospective multicenter study enrolled 485 patients with NSCLC who received chemotherapy alone as a first-line treatment. Two integrated models were developed using radiomic and deep-learning-based features. First, we partitioned pre-chemotherapy CT images into spheres and shells with different radii around the tumor (0-3, 3-6, 6-9, 9-12, 12-15 mm) containing intratumoral and peritumoral regions. Second, we extracted radiomic and deep-learning-based features from each partition. Third, using radiomic features, five sphere-shell models, one feature fusion model, and one image fusion model were developed. Finally, the model with the best performance was validated in two cohorts. RESULTS Among the five partitions, the model of 9-12 mm achieved the highest area under the curve (AUC) of 0.87 (95% confidence interval: 0.77-0.94). The AUC was 0.94 (0.85-0.98) for the feature fusion model and 0.91 (0.82-0.97) for the image fusion model. For the model integrating radiomic and deep-learning-based features, the AUC was 0.96 (0.88-0.99) for the feature fusion method and 0.94 (0.85-0.98) for the image fusion method. The best-performing model had an AUC of 0.91 (0.81-0.97) and 0.89 (0.79-0.93) in two validation sets, respectively. CONCLUSIONS This integrated model can predict the response to chemotherapy in NSCLC patients and assist physicians in clinical decision-making.
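The intra/peritumoral "sphere-shell partition" (0-3, 3-6, ... mm around the tumor) described in this abstract is, at bottom, a set of distance-banded voxel masks. A simplified sketch assuming a spherical tumor surrogate centered on a point (a real pipeline would use the signed distance to the segmented tumor surface, not a fixed radius):

```python
import numpy as np

def shell_masks(shape, center, tumor_radius_mm, edges_mm, voxel_mm=1.0):
    """Boolean masks for concentric shells outside a spherical tumor.

    edges_mm : shell boundaries beyond the tumor surface, e.g. [0, 3, 6, 9].
    Returns {"0-3mm": mask, "3-6mm": mask, ...} over a voxel grid of `shape`.
    """
    grid = np.indices(shape).astype(float)
    dist = np.sqrt(sum((g - c) ** 2 for g, c in zip(grid, center))) * voxel_mm
    surface_dist = dist - tumor_radius_mm            # > 0 outside the tumor
    return {
        f"{lo}-{hi}mm": (surface_dist >= lo) & (surface_dist < hi)
        for lo, hi in zip(edges_mm[:-1], edges_mm[1:])
    }
```

Radiomic and deep features would then be computed per mask, which is what lets a study compare the predictive value of, say, the 9-12 mm peritumoral band against the others.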
Affiliation(s)
- Runsheng Chang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shouliang Qi: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yanan Wu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yong Yue: Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Xiaoye Zhang: Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, China
- Yubao Guan: Department of Radiology, The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Wei Qian: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
40.
Han C, Zhang J, Yu B, Zheng H, Wu Y, Lin Z, Ning B, Yi J, Xie C, Jin X. Integrating plan complexity and dosiomics features with deep learning in patient-specific quality assurance for volumetric modulated arc therapy. Radiat Oncol 2023; 18:116. [PMID: 37434171 DOI: 10.1186/s13014-023-02311-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Accepted: 06/30/2023] [Indexed: 07/13/2023] Open
Abstract
PURPOSE To investigate the feasibility and performance of deep learning (DL) models combined with plan complexity (PC) and dosiomics features for patient-specific quality assurance (PSQA) in patients who underwent volumetric modulated arc therapy (VMAT). METHODS A total of 201 VMAT plans with measured PSQA results were retrospectively enrolled and randomly divided into training and testing sets at a 7:3 ratio. PC metrics were calculated using an in-house algorithm implemented in MATLAB. Dosiomics features were extracted from the planning target volume (PTV) and overlap regions of the 3D dose distributions and selected using Random Forest (RF). The top 50 dosiomics and top 5 PC features were selected based on feature-importance screening. A DL DenseNet was adapted and trained for the PSQA prediction. RESULTS The measured average gamma passing rate (GPR) of these VMAT plans was 97.94% ± 1.87%, 94.33% ± 3.22%, and 87.27% ± 4.81% at the 3%/3 mm, 3%/2 mm, and 2%/2 mm criteria, respectively. Models with PC features alone demonstrated the lowest area under the curve (AUC). The AUC and sensitivity of the combined PC and dosiomics (D) model at 2%/2 mm were 0.915 and 0.833, respectively. The AUCs of the DL models improved from 0.943, 0.849, and 0.841 to 0.948, 0.890, and 0.942 in the combined models (PC + D + DL) at 3%/3 mm, 3%/2 mm, and 2%/2 mm, respectively. The best AUC of 0.942, with a sensitivity, specificity, and accuracy of 100%, 81.8%, and 83.6%, was achieved with the combined model (PC + D + DL) at 2%/2 mm. CONCLUSIONS Integrating DL with dosiomics and PC metrics is promising for predicting GPRs in PSQA for patients who underwent VMAT.
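The gamma passing rates quoted above combine a dose-difference criterion (e.g. 3% of the global maximum) with a distance-to-agreement criterion (e.g. 3 mm): a reference point passes when some evaluated point agrees within both tolerances. A brute-force 1D sketch of a global gamma analysis (illustrative only; not the clinical QA software used in the study, and the global normalization choice is our assumption):

```python
import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, spacing_mm=1.0,
                       dose_tol=0.03, dta_mm=3.0):
    """Brute-force 1D global gamma analysis.

    A reference point passes when its gamma index is <= 1, i.e. some
    evaluated point is simultaneously close in dose (relative to the
    global max) and in position (relative to the DTA criterion).
    """
    x = np.arange(len(dose_ref)) * spacing_mm
    dd = dose_tol * np.max(dose_ref)          # global dose criterion
    gammas = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        dist_term = ((x - x[i]) / dta_mm) ** 2
        dose_term = ((dose_eval - dose_ref[i]) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist_term + dose_term))
    return 100.0 * np.mean(gammas <= 1.0)
```

Identical distributions pass at 100%; a uniform dose error much larger than the tolerance fails everywhere. Production implementations interpolate between grid points and work in 3D, which this sketch omits.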
Affiliation(s)
- Ce Han: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Ji Zhang: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Bing Yu: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Haoze Zheng: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yibo Wu: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Zhixi Lin: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Boda Ning: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jinling Yi: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Congying Xie: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China; Department of Medical and Radiation Oncology, 2nd Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xiance Jin: Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China; School of Basic Medical Science, Wenzhou Medical University, Wenzhou, China
41.
Baughan N, Li H, Lan L, Embury M, Yim I, Whitman GJ, El-Zein R, Bedrosian I, Giger ML. Radiomic and deep learning characterization of breast parenchyma on full field digital mammograms and specimen radiographs: a pilot study of a potential cancer field effect. J Med Imaging (Bellingham) 2023; 10:044501. [PMID: 37426053 PMCID: PMC10329416 DOI: 10.1117/1.jmi.10.4.044501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 06/11/2023] [Accepted: 06/20/2023] [Indexed: 07/11/2023] Open
Abstract
Purpose In women with biopsy-proven breast cancer, histologically normal areas of the parenchyma have shown molecular similarity to the tumor, supporting a potential cancer field effect. The purpose of this work was to investigate relationships of human-engineered radiomic and deep learning features between regions across the breast in mammographic parenchymal patterns and specimen radiographs. Approach This study included mammograms from 74 patients with at least one identified malignant tumor, of whom 32 also had intraoperative radiographs of mastectomy specimens. Mammograms were acquired with a Hologic system and specimen radiographs with a Fujifilm imaging system. All images were retrospectively collected under an Institutional Review Board-approved protocol. Regions of interest (ROIs) of 128 × 128 pixels were selected from three regions: within the identified tumor, near the tumor, and far from the tumor. Radiographic texture analysis was used to extract 45 radiomic features, and transfer learning was used to extract 20 deep learning features in each region. Kendall's Tau-b and Pearson correlation tests were performed to assess relationships between features in each region. Results Statistically significant correlations between the tumor, near-tumor, and far-from-tumor ROIs were identified for select subgroups of features in both mammograms and specimen radiographs. Intensity-based features showed significant correlations with ROI regions across both modalities. Conclusions The results support our hypothesis of a potential cancer field effect, accessible radiographically, across tumor and non-tumor regions, indicating the potential for computerized analysis of mammographic parenchymal patterns to predict breast cancer risk.
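The region-to-region comparison above reduces to correlating paired feature values across patients. A minimal sketch with SciPy (the variable names and toy values are illustrative; `scipy.stats.kendalltau` computes the tau-b variant by default, which accounts for ties):

```python
from scipy import stats

# Hypothetical paired values of one feature, measured in near-tumor
# vs. far-from-tumor ROIs for six patients (toy data).
near_roi = [0.82, 0.55, 0.91, 0.60, 0.73, 0.66]
far_roi  = [0.79, 0.50, 0.88, 0.58, 0.70, 0.61]

tau, p_tau = stats.kendalltau(near_roi, far_roi)   # Kendall tau-b
r, p_r = stats.pearsonr(near_roi, far_roi)         # Pearson r

print(f"Kendall tau-b = {tau:.2f} (p = {p_tau:.3f})")
print(f"Pearson r     = {r:.2f} (p = {p_r:.3f})")
```

For this perfectly concordant toy ranking, tau-b comes out to 1; Pearson r is slightly below 1 because it measures linearity of the values, not just their ordering.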
Affiliation(s)
- Natalie Baughan: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Hui Li: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Li Lan: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Matthew Embury: The University of Texas MD Anderson Cancer Center, Department of Breast Surgical Oncology, Houston, Texas, United States
- Isaiah Yim: The University of Texas MD Anderson Cancer Center, Department of Breast Surgical Oncology, Houston, Texas, United States
- Gary J. Whitman: The University of Texas MD Anderson Cancer Center, Department of Breast Imaging, Houston, Texas, United States
- Randa El-Zein: The Houston Methodist Research Institute, Houston, Texas, United States
- Isabelle Bedrosian: The University of Texas MD Anderson Cancer Center, Department of Breast Surgical Oncology, Houston, Texas, United States
- Maryellen L. Giger: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
42.
Li H, Drukker K, Hu Q, Whitney HM, Fuhrman JD, Giger ML. Predicting intensive care need for COVID-19 patients using deep learning on chest radiography. J Med Imaging (Bellingham) 2023; 10:044504. [PMID: 37608852 PMCID: PMC10440543 DOI: 10.1117/1.jmi.10.4.044504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 07/12/2023] [Accepted: 08/01/2023] [Indexed: 08/24/2023] Open
Abstract
Purpose Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means to address the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' needs for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients, as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus, with a training/validation/test split of 64%/16%/20% at the patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' needs for intensive care within 24, 48, 72, and 96 h following the CXR exams. The classification performance was evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between those COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance, using predictions based on the AI prognostic marker derived from CXR images. Conclusions This AI/ML prediction model for patients' needs for intensive care has the potential to support both clinical decision-making and resource management.
Affiliation(s)
- Hui Li: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Karen Drukker: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Qiyuan Hu: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Heather M. Whitney: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Jordan D. Fuhrman: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Maryellen L. Giger: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
43.
Afrin H, Larson NB, Fatemi M, Alizad A. Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis. Cancers (Basel) 2023; 15:3139. [PMID: 37370748 PMCID: PMC10296633 DOI: 10.3390/cancers15123139] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 06/02/2023] [Accepted: 06/08/2023] [Indexed: 06/29/2023] Open
Abstract
Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but it shows increased false negativity due to its high operator dependency. Underserved areas do not have sufficient US expertise to diagnose breast lesions, resulting in delayed management. Deep learning neural networks may have the potential to facilitate early decision-making by physicians by rapidly yet accurately diagnosing breast lesions and monitoring patient prognosis. This article reviews recent research trends on neural networks for breast mass ultrasound, including and beyond diagnosis. We discuss recently conducted original research to analyze which modes of ultrasound and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that lesion classification showed the highest performance compared to other purposes. We also found that fewer studies were performed for prognosis than for diagnosis. Finally, we discuss the limitations and future directions of ongoing research on neural networks for breast ultrasound.
Affiliation(s)
- Humayra Afrin: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nicholas B. Larson: Department of Quantitative Health Sciences, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Mostafa Fatemi: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA; Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
44.
Burrai GP, Gabrieli A, Polinas M, Murgia C, Becchere MP, Demontis P, Antuofermo E. Canine Mammary Tumor Histopathological Image Classification via Computer-Aided Pathology: An Available Dataset for Imaging Analysis. Animals (Basel) 2023; 13:ani13091563. [PMID: 37174600 PMCID: PMC10177203 DOI: 10.3390/ani13091563] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 04/27/2023] [Accepted: 05/04/2023] [Indexed: 05/15/2023] Open
Abstract
Histopathology, the gold-standard technique for classifying canine mammary tumors (CMTs), is a time-consuming process affected by high inter-observer variability. Digital pathology (DP) and computer-aided pathology (CAD) are emerging fields that can improve overall classification accuracy. In this study, the ability of CAD systems to distinguish benign from malignant CMTs was explored on a dataset, namely CMTD, of 1056 hematoxylin and eosin JPEG images from 20 benign and 24 malignant CMTs, with three different CAD systems based on the combination of a convolutional neural network (VGG16, Inception v3, EfficientNet), which acts as a feature extractor, and a classifier (support vector machine (SVM) or stochastic gradient boosting (SGB)) placed on top of the neural net. After benchmarking on a human breast cancer dataset (BreakHis; accuracy from 0.86 to 0.91), our models were applied to the CMT dataset, showing accuracy from 0.63 to 0.85 across all architectures. The EfficientNet framework coupled with SVM yielded the best performance, with an accuracy from 0.82 to 0.85. The encouraging results obtained by the use of DP and CAD systems in CMTs provide an interesting perspective on the integration of artificial intelligence and machine learning technologies in cancer-related research.
Affiliation(s)
- Giovanni P Burrai: Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy; Mediterranean Center for Disease Control (MCDC), University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Andrea Gabrieli: Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Marta Polinas: Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Claudio Murgia: Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Pierfranco Demontis: Department of Chemical, Physical, Mathematical and Natural Sciences, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Elisabetta Antuofermo: Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy; Mediterranean Center for Disease Control (MCDC), University of Sassari, Via Vienna 2, 07100 Sassari, Italy
45.
Zhang Z, Wei X. Artificial intelligence-assisted selection and efficacy prediction of antineoplastic strategies for precision cancer therapy. Semin Cancer Biol 2023; 90:57-72. [PMID: 36796530 DOI: 10.1016/j.semcancer.2023.02.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 01/12/2023] [Accepted: 02/13/2023] [Indexed: 02/16/2023]
Abstract
The rapid development of artificial intelligence (AI) technologies, in the context of the vast amount of data obtainable from high-throughput sequencing, has led to an unprecedented understanding of cancer and accelerated the advent of a new era of clinical oncology marked by precision treatment and personalized medicine. However, the gains achieved by a variety of AI models in clinical oncology practice are far from what one would expect, and in particular, there are still many uncertainties in the selection of clinical treatment options that pose significant challenges to the application of AI in clinical oncology. In this review, we summarize emerging AI approaches, relevant datasets, and open-source software, and show how to integrate them to address problems in clinical oncology and cancer research. We focus on the principles and procedures for identifying different antitumor strategies with the assistance of AI, including targeted cancer therapy, conventional cancer therapy, and cancer immunotherapy. In addition, we highlight the current challenges and directions of AI in clinical oncology translation. Overall, we hope this article will provide researchers and clinicians with a deeper understanding of the role and implications of AI in precision cancer therapy and help AI move more quickly into accepted cancer guidelines.
Affiliation(s)
- Zhe Zhang: Laboratory of Aging Research and Cancer Drug Target, State Key Laboratory of Biotherapy and Cancer Center, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu 610041, PR China; State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, and Collaborative Innovation Center for Biotherapy, Chengdu 610041, PR China
- Xiawei Wei: Laboratory of Aging Research and Cancer Drug Target, State Key Laboratory of Biotherapy and Cancer Center, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu 610041, PR China
46.
Yu S, Jin M, Wen T, Zhao L, Zou X, Liang X, Xie Y, Pan W, Piao C. Accurate breast cancer diagnosis using a stable feature ranking algorithm. BMC Med Inform Decis Mak 2023; 23:64. [PMID: 37024893 PMCID: PMC10080822 DOI: 10.1186/s12911-023-02142-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 03/14/2023] [Indexed: 04/08/2023] Open
Abstract
BACKGROUND Breast cancer (BC) is one of the most common cancers among women. Since diverse features can be collected, how to stably select powerful ones for accurate BC diagnosis remains challenging. METHODS A hybrid framework is designed to successively investigate both feature ranking (FR) stability and cancer diagnosis effectiveness. Specifically, on 4 BC datasets (BCDR-F03, WDBC, GSE10810, and GSE15852), the stability of 23 FR algorithms is evaluated via an advanced estimator (S), and the predictive power of the stable feature ranks is further tested using different machine learning classifiers. RESULTS Experimental results identify 3 algorithms achieving good stability ([Formula: see text]) on the four datasets, with the generalized Fisher score (GFS) leading to state-of-the-art performance. Moreover, the GFS ranks suggest that shape features are crucial in BC image analysis (BCDR-F03 and WDBC) and that a few genes can well differentiate benign and malignant tumor cases (GSE10810 and GSE15852). CONCLUSIONS The proposed framework recognizes a stable FR algorithm for accurate BC diagnosis. Stable and effective features could deepen the understanding of BC diagnosis and related decision-making applications.
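Feature-ranking stability of the kind evaluated above can be illustrated with a simple proxy: the mean pairwise Jaccard overlap of top-k feature sets across resampled rankings. This is a common stand-in, not the estimator S used in the paper, and the function name is ours:

```python
from itertools import combinations

def topk_jaccard_stability(rankings, k):
    """Mean pairwise Jaccard overlap of top-k selections.

    rankings : list of feature rankings (best feature first),
               e.g. one ranking per bootstrap resample
    Returns a value in [0, 1]; 1 means the top-k set never changes.
    """
    tops = [set(r[:k]) for r in rankings]
    pairs = list(combinations(tops, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)
```

Identical top-k sets across resamples give 1.0; completely disjoint sets give 0.0, so an unstable FR algorithm scores low even if each individual ranking looks reasonable.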
Affiliation(s)
- Shaode Yu: School of Information and Communication Engineering, Communication University of China, Beijing, China
- Mingxue Jin: School of Information and Communication Engineering, Communication University of China, Beijing, China
- Tianhang Wen: Department of Radiology, The Second Affiliated Hospital of Shenyang Medical College, Shenyang, China
- Linlin Zhao: Department of Radiology, The Second Affiliated Hospital of Shenyang Medical College, Shenyang, China
- Xuechao Zou: Department of Radiology, The Second Affiliated Hospital of Shenyang Medical College, Shenyang, China
- Xiaokun Liang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yaoqin Xie: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wanlong Pan: Experimental Teaching Center for Pathogen Biology and Immunology, North Sichuan Medical College, Nanchong, China
- Chenghao Piao: Department of Radiology, The Second Affiliated Hospital of Shenyang Medical College, Shenyang, China
47.
Li H, Robinson K, Lan L, Baughan N, Chan CW, Embury M, Whitman GJ, El-Zein R, Bedrosian I, Giger ML. Temporal Machine Learning Analysis of Prior Mammograms for Breast Cancer Risk Prediction. Cancers (Basel) 2023; 15:2141. [PMID: 37046802 PMCID: PMC10093086 DOI: 10.3390/cancers15072141] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 03/24/2023] [Accepted: 03/29/2023] [Indexed: 04/09/2023] Open
Abstract
The identification of women at risk for sporadic breast cancer remains a clinical challenge. We hypothesize that the temporal analysis of annual screening mammograms, using a long short-term memory (LSTM) network, could accurately identify women at risk of future breast cancer. Women with an imaging abnormality, which had been biopsy-confirmed to be cancer or benign, who also had antecedent imaging available were included in this case-control study. Sequences of antecedent mammograms were retrospectively collected under HIPAA-approved guidelines. Radiomic and deep-learning-based features were extracted on regions of interest placed posterior to the nipple in antecedent images. These features were input to LSTM recurrent networks to classify whether the future lesion would be malignant or benign. Classification performance was assessed using all available antecedent time-points and using a single antecedent time-point in the task of lesion classification. Classifiers incorporating multiple time-points with LSTM, based either on deep-learning-extracted features or on radiomic features, tended to perform statistically better than chance, whereas those using only a single time-point failed to show improved performance compared to chance, as judged by area under the receiver operating characteristic curves (AUC: 0.63 ± 0.05, 0.65 ± 0.05, 0.52 ± 0.06 and 0.54 ± 0.06, respectively). Lastly, similar classification performance was observed when using features extracted from the affected versus the contralateral breast in predicting future unilateral malignancy (AUC: 0.63 ± 0.05 vs. 0.59 ± 0.06 for deep-learning-extracted features; 0.65 ± 0.05 vs. 0.62 ± 0.06 for radiomic features). The results of this study suggest that the incorporation of temporal information into radiomic analyses may improve the overall classification performance through LSTM, as demonstrated by the improved discrimination of future lesions as malignant or benign. Further, our data suggest that a potential field effect, changes in the breast extending beyond the lesion itself, is present in both the affected and contralateral breasts in antecedent imaging, and, thus, the evaluation of either breast might inform on the future risk of breast cancer.
Affiliation(s)
- Hui Li: Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Kayla Robinson: Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Li Lan: Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Natalie Baughan: Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Chun-Wai Chan: Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Matthew Embury: Department of Breast Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Gary J. Whitman: Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Randa El-Zein: Department of Radiology, Houston Methodist Research Institute, Houston, TX 77030, USA
- Isabelle Bedrosian: Department of Breast Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Maryellen L. Giger: Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
48.
Feature generation and multi-sequence fusion based deep convolutional network for breast tumor diagnosis with missing MR sequences. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
49.
Ong W, Zhu L, Tan YL, Teo EC, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A, Hallinan JTPD. Application of Machine Learning for Differentiating Bone Malignancy on Imaging: A Systematic Review. Cancers (Basel) 2023; 15:cancers15061837. [PMID: 36980722 PMCID: PMC10047175 DOI: 10.3390/cancers15061837] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 03/07/2023] [Accepted: 03/16/2023] [Indexed: 03/22/2023] Open
Abstract
An accurate diagnosis of bone tumours on imaging is crucial for appropriate and successful treatment. The advent of artificial intelligence (AI) and machine learning methods to characterize and assess bone tumours on various imaging modalities may assist in the diagnostic workflow. The purpose of this review article is to summarise the most recent evidence for AI techniques using imaging for differentiating benign from malignant lesions, the characterization of various malignant bone lesions, and their potential clinical application. A systematic search of electronic databases (PubMed, MEDLINE, Web of Science, and clinicaltrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 34 articles were retrieved from the databases and their key findings compiled and summarised. These articles reported the use of AI techniques to distinguish between benign and malignant bone lesions: 12 (35.3%) focused on radiographs, 12 (35.3%) on MRI, 5 (14.7%) on CT, and 5 (14.7%) on PET/CT. The overall reported accuracy, sensitivity, and specificity of AI in distinguishing between benign and malignant bone lesions range from 0.44–0.99, 0.63–1.00, and 0.73–0.96, respectively, with AUCs of 0.73–0.96. In conclusion, the use of AI to discriminate bone lesions on imaging has achieved relatively good performance across imaging modalities, with high sensitivity, specificity, and accuracy for distinguishing between benign and malignant lesions in several cohort studies. However, further research is necessary to test the clinical performance of these algorithms before they can be integrated into routine clinical practice.
Affiliation(s)
- Wilson Ong: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (correspondence; Tel.: +65-67725207)
- Lei Zhu: Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
- Yi Liang Tan: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Ee Chin Teo: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Jiong Hao Tan: University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar: University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Balamurugan A. Vellayappan: Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, 5 Lower Kent Ridge Road, Singapore 119074, Singapore
- Beng Chin Ooi: Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
- Swee Tian Quek: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Andrew Makmur: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- James Thomas Patrick Decourcy Hallinan: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
50.
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377 DOI: 10.1016/j.bbcan.2023.188864] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Revised: 01/05/2023] [Accepted: 01/17/2023] [Indexed: 02/25/2023]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China