1. Joshi RC, Srivastava P, Mishra R, Burget R, Dutta MK. Biomarker profiling and integrating heterogeneous models for enhanced multi-grade breast cancer prognostication. Comput Methods Programs Biomed 2024; 255:108349. [PMID: 39096573] [DOI: 10.1016/j.cmpb.2024.108349]
Abstract
BACKGROUND: Breast cancer remains a leading cause of female mortality worldwide, exacerbated by limited awareness, inadequate screening resources, and restricted treatment options. Accurate, early diagnosis is crucial for effective treatment and improved survival rates.
OBJECTIVES: This study aims to develop an artificial intelligence (AI)-based model that predicts breast cancer and its histopathological grades by integrating multiple biomarkers with subject age, thereby enhancing diagnostic accuracy and prognostication.
METHODS: A novel ensemble-based machine learning (ML) framework is introduced that integrates three distinct biomarkers - beta-human chorionic gonadotropin (β-hCG), programmed cell death ligand 1 (PD-L1), and alpha-fetoprotein (AFP) - alongside subject age. Hyperparameters were optimized with the Particle Swarm Optimization (PSO) algorithm, and minority oversampling was employed to mitigate overfitting. Model performance was validated through rigorous five-fold cross-validation.
RESULTS: The proposed model achieved 97.93% accuracy and a 98.06% F1-score on meticulously labeled test data spanning diverse age groups. Comparative analysis showed that it outperforms state-of-the-art approaches, highlighting its robustness and generalizability.
CONCLUSION: By analyzing multiple biomarkers jointly and effectively predicting tumor grades, this study offers a significant advance in breast cancer screening, particularly for regions with limited medical resources. The proposed framework has the potential to reduce breast cancer mortality and to support early intervention and personalized treatment strategies.
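The PSO-based hyperparameter search this abstract describes can be sketched in a few dozen lines of pure Python. This is a generic illustration on a toy two-parameter objective, not the authors' implementation; the objective, bounds, and swarm settings are all assumptions:

```python
import random

def pso(objective, bounds, n_particles=20, n_iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over the box `bounds` with a basic particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a cross-validated loss over two hyperparameters.
best, best_val = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
                     bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In a real pipeline such as the one reported, `objective` would be the cross-validated validation loss of the ensemble as a function of its hyperparameters.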
Affiliation(s)
- Rakesh Chandra Joshi: Amity Centre for Artificial Intelligence, Amity University, Noida, Uttar Pradesh, India; Centre for Advanced Studies, Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh, India
- Pallavi Srivastava: Department of Biotechnology, Noida Institute of Engineering & Technology, Greater Noida, Uttar Pradesh, India
- Rashmi Mishra: Department of Biotechnology, Noida Institute of Engineering & Technology, Greater Noida, Uttar Pradesh, India
- Radim Burget: Department of Telecommunications, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
- Malay Kishore Dutta: Amity Centre for Artificial Intelligence, Amity University, Noida, Uttar Pradesh, India
2. Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. [PMID: 38912238] [PMCID: PMC11188703] [DOI: 10.1055/s-0043-1775737]
Abstract
Background: Although abundant literature is available on the use of deep learning for breast cancer detection in mammography, its quality varies widely.
Purpose: To evaluate the published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design.
Methods: The PubMed and Scopus databases were searched for records describing the use of deep learning to detect lesions or to classify images as cancer or noncancer. A modified Quality Assessment of Diagnostic Accuracy Studies (mQUADAS-2) tool was developed for this review and applied to the included studies. Reported results (area under the receiver operating characteristic [ROC] curve [AUC], sensitivity, and specificity) were recorded.
Results: A total of 12,123 records were screened, of which 107 met the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. On mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality; three trained their own model and one used a commercial network, with ensemble models used in two. Common training strategies included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, rising to 0.945 (0.919-0.968) on an enriched subset. Combined radiologist and artificial intelligence readings reached a higher AUC (0.955) and specificity (98.5%) than either alone. None of the studies provided explainability beyond localization accuracy, and none examined the interaction between AI and radiologists in a real-world setting.
Conclusion: While deep learning holds much promise for mammography interpretation, evaluation in reproducible clinical settings and explainable networks are the need of the hour.
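The AUC values compared throughout this review can be computed without plotting a curve at all: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney rank statistic). A minimal stdlib sketch with made-up scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg); ties count as 0.5."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for cancer (positive) and noncancer cases.
auc = roc_auc([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.7, 0.2])  # -> 0.90625
```

The O(n·m) pairwise loop is fine for illustration; production code would sort once and use rank sums instead.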
Affiliation(s)
- Deeksha Bhalla: Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan: Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra: Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee: Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora: Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
3. Wang SH, Chen G, Zhong X, Lin T, Shen Y, Fan X, Cao L. Global development of artificial intelligence in cancer field: a bibliometric analysis range from 1983 to 2022. Front Oncol 2023; 13:1215729. [PMID: 37519796] [PMCID: PMC10382324] [DOI: 10.3389/fonc.2023.1215729]
Abstract
Background: Artificial intelligence (AI) is now widely applied in the cancer field. The aim of this study was to explore the hotspots and trends of AI in cancer research.
Methods: The search query combined four topic words ("tumor," "cancer," "carcinoma," and "artificial intelligence") and was run against the Web of Science database from January 1983 to December 2022. All data, including country, continent, Journal Impact Factor, and so on, were then documented and processed using bibliometric software.
Results: A total of 6,920 papers were collected and analyzed. We present the annual publications and citations, the most productive countries/regions, the most influential scholars, the collaborations among journals and institutions, and the research foci and hotspots of AI-based cancer research.
Conclusion: This study systematically summarizes the current landscape of AI in cancer research, laying a foundation for future work.
Affiliation(s)
- Sui-Han Wang: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Guoqiao Chen: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Xin Zhong: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Tianyu Lin: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Yan Shen: Department of General Surgery, The First People's Hospital of Yu Hang District, Hangzhou, China
- Xiaoxiao Fan: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Liping Cao: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
4. Guo F, Li Q, Gao F, Huang C, Zhang F, Xu J, Xu Y, Li Y, Sun J, Jiang L. Evaluation of the peritumoral features using radiomics and deep learning technology in non-spiculated and noncalcified masses of the breast on mammography. Front Oncol 2022; 12:1026552. [PMID: 36479079] [PMCID: PMC9721450] [DOI: 10.3389/fonc.2022.1026552]
Abstract
OBJECTIVE: To assess the significance of peritumoral features, modeled with deep learning, for classifying non-spiculated and noncalcified masses (NSNCM) on mammography.
METHODS: We retrospectively screened digital mammography data from 2254 patients who underwent surgery for breast lesions at Harbin Medical University Cancer Hospital from January to December 2018. Deep learning and radiomics models were constructed, and their classification performance (AUC, accuracy, sensitivity, and specificity) was compared at the ROI and patient levels. Stratified analysis examined how primary factors influenced the AUC of the deep learning model. Image filters and class activation maps (CAM) were used to visualize the radiomics and deep features.
RESULTS: Of the 1298 included patients, 771 (59.4%) had benign and 527 (40.6%) had malignant lesions. The best model was the combined deep learning model (2 mm), with an AUC of 0.884 (P < 0.05); for breast composition type B the AUC reached 0.941. All deep learning models were superior to the radiomics models (P < 0.05), and the CAM showed strong activation around the tumor in the deep learning model. The deep learning model achieved a higher AUC for large tumor size, age > 60 years, and breast composition type B (P < 0.05).
CONCLUSION: Combining tumoral and peritumoral features improved identification of malignant NSNCM on mammography, and the deep learning model outperformed the radiomics model. Age, tumor size, and breast composition type are essential for diagnosis.
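A peritumoral region such as the 2 mm band studied above is commonly obtained by dilating the tumor ROI by a fixed physical margin converted to pixels. The following is a hedged sketch, not the paper's method; the bounding-box representation, pixel spacing, and function name are illustrative assumptions:

```python
def expand_roi(box, margin_mm, pixel_spacing_mm, image_shape):
    """Grow an (x0, y0, x1, y1) bounding box by a physical margin, clipped to the image."""
    x0, y0, x1, y1 = box
    m = round(margin_mm / pixel_spacing_mm)  # physical margin converted to pixels
    h, w = image_shape                       # (rows, cols) of the mammogram
    return (max(x0 - m, 0), max(y0 - m, 0), min(x1 + m, w), min(y1 + m, h))

# A 2 mm margin on a mammogram with 0.1 mm pixels -> 20-pixel dilation.
peri = expand_roi((100, 120, 220, 260), margin_mm=2.0, pixel_spacing_mm=0.1,
                  image_shape=(3000, 2400))  # -> (80, 100, 240, 280)
```

The peritumoral features would then come from the expanded box (or the ring between the two boxes), alongside features from the original tumor ROI.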
Affiliation(s)
- Fei Guo: Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Qiyang Li: Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Fei Gao: Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Chencui Huang: Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Fandong Zhang: Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Jingxu Xu: Deepwise Artificial Intelligence Lab, Beijing Deepwise and League of PHD Technology Co., Ltd, Beijing, China
- Ye Xu: Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Yuanzhou Li: Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Jianghong Sun: Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
- Li Jiang: Department of Oncology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China
5. Mao YJ, Lim HJ, Ni M, Yan WH, Wong DWC, Cheung JCW. Breast Tumour Classification Using Ultrasound Elastography with Machine Learning: A Systematic Scoping Review. Cancers (Basel) 2022; 14:367. [PMID: 35053531] [PMCID: PMC8773731] [DOI: 10.3390/cancers14020367]
Abstract
Ultrasound elastography quantifies the stiffness distribution of tissue lesions and complements conventional B-mode ultrasound for breast cancer screening. Computer-aided diagnosis has improved the reliability of such systems, while machine learning, including deep learning, has further extended their power by enabling automated segmentation and tumour classification. The objective of this review was to summarize applications of machine learning models in ultrasound elastography systems for breast tumour classification. The databases reviewed were PubMed, Web of Science, CINAHL, and EMBASE. Thirteen (n = 13) articles were eligible for review. Shear-wave elastography was investigated in six articles, whereas seven studies focused on strain elastography (five freehand and two acoustic radiation force). A traditional computer vision workflow was common in strain elastography, with separate image segmentation, feature extraction, and classification stages using algorithm-based methods, neural networks, or support vector machines (SVM). Shear-wave elastography studies often adopted a deep learning model, the convolutional neural network (CNN), which integrates these tasks. All of the reviewed articles achieved sensitivity ≥ 80%, while only half attained acceptable specificity ≥ 95%. Deep learning models did not necessarily outperform the traditional computer vision workflow. Nevertheless, there were inconsistencies and gaps in reporting and calculation, for example regarding the testing dataset, cross-validation, and methods to avoid overfitting, and most of the studies reported neither loss nor hyperparameters. Future studies may consider deep networks with an attention layer to locate the target object automatically, and online training to facilitate efficient re-training on sequential data.
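The review's benchmarks (sensitivity ≥ 80%, specificity ≥ 95%) fall directly out of the confusion matrix. A minimal sketch with hypothetical labels and predictions (the example data is invented for illustration):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP). Label 1 = malignant."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ground truth vs. classifier output for eight lesions.
sens, spec = sens_spec([1, 1, 1, 1, 0, 0, 0, 0],
                       [1, 1, 1, 0, 0, 0, 0, 1])  # -> (0.75, 0.75)
```

Because the two metrics trade off against each other through the decision threshold, reporting both (or the full ROC curve) matters for comparing the reviewed classifiers.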
Affiliation(s)
- Ye-Jiao Mao: Department of Bioengineering, Imperial College, London SW7 2AZ, UK
- Hyo-Jung Lim: Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Ming Ni: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Department of Orthopaedics, Pudong New Area People's Hospital Affiliated to Shanghai University of Medicine and Health Science, Shanghai 201299, China
- Wai-Hin Yan: Department of Economics, The Chinese University of Hong Kong, Hong Kong 999077, China
- Duo Wai-Chi Wong: Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China
- James Chung-Wai Cheung: Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China; Research Institute of Smart Ageing, The Hong Kong Polytechnic University, Hong Kong 999077, China
6. Dunnmon J. Separating Hope from Hype: Artificial Intelligence Pitfalls and Challenges in Radiology. Radiol Clin North Am 2021; 59:1063-1074. [PMID: 34689874] [DOI: 10.1016/j.rcl.2021.07.006]
Abstract
Although recent scientific studies suggest that artificial intelligence (AI) could provide value in many radiology applications, much of the hard engineering work required to consistently realize this value in practice remains to be done. In this article, we summarize the various ways in which AI can benefit radiology practice, identify key challenges that must be overcome for those benefits to be delivered, and discuss promising avenues by which these challenges can be addressed.
Affiliation(s)
- Jared Dunnmon: Department of Biomedical Data Science, Stanford University, 1265 Welch Rd, Stanford, CA 94305, USA
7. Viegas L, Domingues I, Mendes M. Study on Data Partition for Delimitation of Masses in Mammography. J Imaging 2021; 7:174. [PMID: 34564100] [PMCID: PMC8470756] [DOI: 10.3390/jimaging7090174]
Abstract
Mammography is the primary medical imaging method for routine screening and early detection of breast cancer in women. However, manually inspecting, detecting, and delimiting tumoral masses in 2D images is very time-consuming and subject to human error due to fatigue. Integrated computer-aided detection systems based on modern computer vision and machine learning methods have therefore been proposed. In the present work, mammogram images from the publicly available INbreast dataset are first converted to pseudo-color and then used to train and test a Mask R-CNN deep neural network. The most common approach is to split a dataset's images into training and test sets at random. However, since there are often two or more images of the same case in a dataset, the way it is split can affect the results. Our experiments show that random partition of the data can produce unreliable training, so the dataset must be split case-wise for more stable results. The method achieves an average true positive rate of 0.936 with a standard deviation of 0.063 using random partition, and 0.908 with a standard deviation of 0.002 using case-wise partition, showing that case-wise partition must be used for more reliable results.
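The case-wise partition this study advocates keeps every image of a given case on the same side of the split, so no patient leaks from training into testing. A minimal stdlib sketch (scikit-learn's `GroupShuffleSplit` does the same job; the case IDs and split fraction below are illustrative):

```python
import random

def case_wise_split(case_ids, test_fraction=0.2, seed=0):
    """Split image indices so that no case appears in both train and test."""
    cases = sorted(set(case_ids))          # unique cases, order fixed for reproducibility
    rng = random.Random(seed)
    rng.shuffle(cases)
    n_test = max(1, round(len(cases) * test_fraction))
    test_cases = set(cases[:n_test])       # whole cases go to the test side
    train = [i for i, c in enumerate(case_ids) if c not in test_cases]
    test = [i for i, c in enumerate(case_ids) if c in test_cases]
    return train, test

# Two views (e.g. CC and MLO) per case: each case's images stay together.
ids = ["c1", "c1", "c2", "c2", "c3", "c3", "c4", "c4", "c5", "c5"]
train_idx, test_idx = case_wise_split(ids)
```

A purely random split over the ten indices above could easily put one view of a case in training and the other in testing, which is exactly the leakage the study's results warn about.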
Affiliation(s)
- Luís Viegas: Polytechnic of Coimbra—ISEC, Rua Pedro Nunes, Quinta da Nora, 3030-199 Coimbra, Portugal
- Inês Domingues: Medical Physics, Radiobiology and Radiation Protection Group, IPO Porto Research Centre (CI-IPOP), 4200-072 Porto, Portugal
- Mateus Mendes: Polytechnic of Coimbra—ISEC, Rua Pedro Nunes, Quinta da Nora, 3030-199 Coimbra, Portugal; ISR (Instituto de Sistemas e Robótica), Departamento de Engenharia Electrotécnica e de Computadores da UC, University of Coimbra, 3004-531 Coimbra, Portugal