1
Patel AN, Srinivasan K. Deep learning paradigms in lung cancer diagnosis: A methodological review, open challenges, and future directions. Phys Med 2025;131:104914. [PMID: 39938402] [DOI: 10.1016/j.ejmp.2025.104914]
Abstract
Lung cancer is the leading cause of cancer-related deaths worldwide, which emphasizes the critical importance of early diagnosis in enhancing patient outcomes. Deep learning has demonstrated significant promise in lung cancer diagnosis, excelling in nodule detection, classification, and prognosis prediction. This methodological review comprehensively explores the application of deep learning models in lung cancer diagnosis, uncovering their integration across various imaging modalities. Deep learning consistently achieves state-of-the-art performance, occasionally surpassing human expert accuracy. Notably, deep neural networks excel in detecting lung nodules, distinguishing between benign and malignant nodules, and predicting patient prognosis. They have also led to the development of computer-aided diagnosis systems that enhance diagnostic accuracy for radiologists. This review follows the article-selection criteria outlined by the PRISMA framework. Despite challenges such as limited data quality and interpretability, this review emphasizes the potential of deep learning to significantly improve the precision and efficiency of lung cancer diagnosis and encourages continued research to overcome these obstacles and fully harness neural networks' transformative impact in this field.
Affiliation(s)
- Aryan Nikul Patel
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India.
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India.
2
Kabir MM, Mridha MF, Rahman A, Hamid MA, Monowar MM. Detection of COVID-19, pneumonia, and tuberculosis from radiographs using AI-driven knowledge distillation. Heliyon 2024;10:e26801. [PMID: 38444490] [PMCID: PMC10912466] [DOI: 10.1016/j.heliyon.2024.e26801]
Abstract
Chest radiography is an essential diagnostic tool for respiratory diseases such as COVID-19, pneumonia, and tuberculosis because it accurately depicts the structures of the chest. However, accurate detection of these diseases from radiographs is a complex task that requires the availability of medical imaging equipment and trained personnel. Conventional deep learning models offer a viable automated solution for this task. However, the high complexity of these models often poses a significant obstacle to their practical deployment within automated medical applications, including mobile apps, web apps, and cloud-based platforms. This study addresses and resolves this dilemma by reducing the complexity of neural networks using knowledge distillation techniques (KDT). The proposed technique trains a neural network on an extensive collection of chest X-ray images and propagates the knowledge to a smaller network capable of real-time detection. To create a comprehensive dataset, we have integrated three popular chest radiograph datasets with chest radiographs for COVID-19, pneumonia, and tuberculosis. Our experiments show that this knowledge distillation approach outperforms conventional deep learning methods in terms of computational complexity and performance for real-time respiratory disease detection. Specifically, our system achieves an impressive average accuracy of 0.97, precision of 0.94, and recall of 0.97.
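The abstract does not give the training details, but the core of response-based knowledge distillation can be sketched as follows. This is a minimal illustration, not the authors' implementation; `teacher`, `student`, `loader`, and `optimizer` are hypothetical placeholders for a pretrained large network, the smaller real-time network, a DataLoader of chest X-ray batches, and its optimizer.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-target KL divergence."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes stay comparable to the hard loss
    return alpha * hard + (1 - alpha) * soft

def train_student(student, teacher, loader, optimizer, device="cpu"):
    """One epoch of distillation: the frozen teacher provides soft targets."""
    teacher.eval()
    student.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        with torch.no_grad():
            teacher_logits = teacher(images)
        student_logits = student(images)
        loss = distillation_loss(student_logits, teacher_logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The temperature T softens both distributions so the student also learns the teacher's relative class rankings, which is what lets a much smaller network approach the teacher's accuracy.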
Affiliation(s)
- Md Mohsin Kabir
- Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka-1216, Bangladesh
- M.F. Mridha
- Department of Computer Science, American International University-Bangladesh, Dhaka-1229, Bangladesh
- Ashifur Rahman
- Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka-1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah-21589, Kingdom of Saudi Arabia
- Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah-21589, Kingdom of Saudi Arabia
3
Bakasa W, Viriri S. VGG16 Feature Extractor with Extreme Gradient Boost Classifier for Pancreas Cancer Prediction. J Imaging 2023;9:138. [PMID: 37504815] [PMCID: PMC10381878] [DOI: 10.3390/jimaging9070138]
Abstract
The prognosis of patients with pancreatic ductal adenocarcinoma (PDAC) is greatly improved by an early and accurate diagnosis. Several studies have created automated methods to forecast PDAC development utilising various medical imaging modalities. These papers give a general overview of the classification, segmentation, or grading of many cancer types, including pancreatic cancer, utilising conventional machine learning techniques and hand-engineered features. This study uses cutting-edge deep learning techniques to identify PDAC from computed tomography (CT) images. This work proposes the hybrid model VGG16-XGBoost (a VGG16 backbone as feature extractor and an Extreme Gradient Boosting classifier) for PDAC images. According to the experiments, the proposed hybrid model performs better, obtaining an accuracy of 0.97 and a weighted F1 score of 0.97 on the dataset under study. The experimental validation of the VGG16-XGBoost model uses the public-access Cancer Imaging Archive (TCIA) dataset of pancreas CT images. The results of this study can be extremely helpful for PDAC diagnosis from CT pancreas images, categorising them into five tumour (T) stage labels of the tumour, node, metastasis (TNM) staging system: T0, T1, T2, T3, and T4.
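The authors' preprocessing and hyperparameters are not reproduced here; the general pattern of a frozen VGG16 backbone feeding an XGBoost classifier can be sketched as below. This uses torchvision's VGG16 rather than the paper's exact setup, and random tensors stand in for the TCIA pancreas CT slices and T-stage labels.

```python
import numpy as np
import torch
from torchvision.models import vgg16, VGG16_Weights
from xgboost import XGBClassifier

# Frozen ImageNet-pretrained VGG16 used purely as a feature extractor.
weights = VGG16_Weights.IMAGENET1K_V1
backbone = vgg16(weights=weights)
backbone.classifier = torch.nn.Identity()  # drop the fully connected head
backbone.eval()
preprocess = weights.transforms()

@torch.no_grad()
def extract_features(images):
    """images: float tensor (N, 3, H, W); returns (N, 25088) deep features."""
    return backbone(preprocess(images)).numpy()

# Dummy stand-ins for preprocessed pancreas CT slices and TNM T-stage labels (T0-T4);
# in the paper these would come from the TCIA public-access dataset.
X_train, y_train = torch.rand(20, 3, 224, 224), np.arange(20) % 5
X_test, y_test = torch.rand(10, 3, 224, 224), np.arange(10) % 5

clf = XGBClassifier(n_estimators=100, max_depth=6, learning_rate=0.1)
clf.fit(extract_features(X_train), y_train)
print("Held-out accuracy:", clf.score(extract_features(X_test), y_test))
```

Splitting the pipeline this way lets the gradient-boosted trees handle the small-sample tabular problem while the convolutional backbone supplies rich image descriptors without any fine-tuning.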
4
Chen T, Hu L, Lu Q, Xiao F, Xu H, Li H, Lu L. A computer-aided diagnosis system for brain tumors based on artificial intelligence algorithms. Front Neurosci 2023;17:1120781. [PMID: 37483342] [PMCID: PMC10360168] [DOI: 10.3389/fnins.2023.1120781]
Abstract
The choice of treatment and prognosis evaluation depend on the accurate early diagnosis of brain tumors. Many brain tumors go undiagnosed or are overlooked by clinicians as a result of the challenges associated with manually evaluating magnetic resonance imaging (MRI) images in clinical practice. In this study, we built a computer-aided diagnosis (CAD) system for glioma detection, grading, segmentation, and knowledge discovery based on artificial intelligence algorithms. Neuroimages are represented using a type of visual feature known as the histogram of oriented gradients (HOG). Then, through a two-level classification framework, the HOG features are employed to distinguish between healthy controls and patients, or between different glioma grades. This CAD system also offers tumor visualization using a semi-automatic segmentation tool for better patient management and treatment monitoring. Finally, a knowledge base is created to offer additional advice for the diagnosis of brain tumors. Based on our proposed two-level classification framework, we train models for glioma detection and grading, achieving areas under the curve (AUC) of 0.921 and 0.806, respectively. Unlike other systems, we integrate these diagnostic tools with a web-based interface, which provides flexibility for system deployment.
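The abstract describes HOG features feeding a two-level framework (healthy control vs. patient, then glioma grade). A minimal sketch of that pattern follows; the SVM classifiers, HOG parameters, and class names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(slice_2d):
    """HOG descriptor for one grayscale MRI slice (2D array)."""
    return hog(slice_2d, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

class TwoLevelGliomaClassifier:
    """Level 1: healthy control vs. glioma. Level 2: grade, applied to gliomas only."""

    def __init__(self):
        self.detector = SVC(probability=True)  # 0 = healthy, 1 = glioma
        self.grader = SVC(probability=True)    # 0 = lower grade, 1 = higher grade

    def fit(self, X, y_detect, y_grade):
        X = np.asarray(X)
        self.detector.fit(X, y_detect)
        glioma = np.asarray(y_detect) == 1
        self.grader.fit(X[glioma], np.asarray(y_grade)[glioma])
        return self

    def predict(self, X):
        X = np.asarray(X)
        has_glioma = self.detector.predict(X)
        grade = self.grader.predict(X)
        return np.where(has_glioma == 1, grade, -1)  # -1 means "no tumor detected"
```

Here X would be a matrix of stacked `hog_features` vectors, one row per image; the second-level model only ever sees cases the first level would route to grading.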
Affiliation(s)
- Tao Chen
- School of Information Technology, Shangqiu Normal University, Shangqiu, China
- Lianting Hu
- Medical Big Data Center, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Guangdong Cardiovascular Institute, Guangzhou, China
- Quan Lu
- School of Information Management, Wuhan University, Wuhan, China
- Feng Xiao
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Haibo Xu
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Hongjun Li
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Long Lu
- School of Information Management, Wuhan University, Wuhan, China
- Big Data Institute, Wuhan University, Wuhan, China
- School of Public Health, Wuhan University, Wuhan, China
- Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
5
Patel SK. Improving intrusion detection in cloud-based healthcare using neural network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104680]
6
A Series-Based Deep Learning Approach to Lung Nodule Image Classification. Cancers (Basel) 2023;15:843. [PMID: 36765801] [PMCID: PMC9913559] [DOI: 10.3390/cancers15030843]
Abstract
Although many studies have shown that deep learning approaches yield better results than traditional methods based on manual features, CAD methods still have several limitations. These arise from the diversity of imaging modalities and clinical pathologies, which creates difficulties because of variation within and similarity between classes. In this context, the new approach in our study is a hybrid method that performs classification using both medical image analysis and radial-scanning series features. The areas of interest obtained from the images are subjected to a radial scan, with their centers as poles, in order to obtain series. A U-shaped convolutional neural network model is then used for the 4D data classification problem. We therefore present a novel approach to the classification of 4D data obtained from lung nodule images. With radial scanning, the characteristic values of nodule images are captured, and a powerful classification is performed. According to our results, an accuracy of 92.84% was obtained, and considerably better classification scores resulted as compared to recent classifiers.
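The authors' exact radial-scan formulation is not given in the abstract; one plausible reading, treating the ROI centre as the pole and sampling an intensity series along each ray, is sketched below. The function name and parameters are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_scan(roi, n_angles=360, n_samples=64):
    """Sample intensity profiles along rays from the ROI centre (the 'pole').

    roi: 2D array containing a segmented nodule region of interest.
    Returns an (n_angles, n_samples) array: one radial series per angle,
    which can then be stacked across slices/channels for a series-based classifier.
    """
    cy, cx = (np.asarray(roi.shape) - 1) / 2.0
    max_r = min(cy, cx)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0.0, max_r, n_samples)
    series = np.empty((n_angles, n_samples), dtype=np.float32)
    for i, a in enumerate(angles):
        ys = cy + radii * np.sin(a)
        xs = cx + radii * np.cos(a)
        # Bilinear interpolation of pixel intensities along the ray.
        series[i] = map_coordinates(roi, np.vstack([ys, xs]), order=1)
    return series
```

Converting each nodule into such a polar series is what turns the image-classification task into the series-based 4D input the U-shaped network is described as consuming.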
7
Mao J, Yin X, Zhang G, Chen B, Chang Y, Chen W, Yu J, Wang Y. Pseudo-labeling generative adversarial networks for medical image classification. Comput Biol Med 2022;147:105729. [PMID: 35752115] [DOI: 10.1016/j.compbiomed.2022.105729]
Abstract
Semi-supervised learning has become a popular technology in recent years. In this paper, we propose a novel semi-supervised medical image classification algorithm, called Pseudo-Labeling Generative Adversarial Networks (PLGAN), which uses only a small number of real images with few labels to generate fake or masked images that enlarge the labeled training set. First, we incorporate MixMatch to generate pseudo labels for the fake and unlabeled images for classification. Second, contrastive learning and self-attention mechanisms are introduced into PLGAN to exclude the influence of unimportant details. Third, the problem of mode collapse in contrastive learning is well addressed by a cyclic consistency loss. Finally, we design global and local classifiers that complement each other with the key information needed for classification. The experimental results on four medical image datasets show that PLGAN can achieve relatively high performance using few labeled and unlabeled data. For example, the classification accuracy of PLGAN is 11% higher than that of MixMatch with 100 labeled images and 1000 unlabeled images on the OCT dataset. In addition, we conduct further experiments to verify the effectiveness of our algorithm.
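PLGAN's full objective (adversarial, contrastive, and cycle-consistency terms) is more than a short example can cover, but the MixMatch-style label guessing it uses for unlabeled or GAN-generated images can be sketched as follows; the function and argument names are hypothetical.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_labels(model, augmented_views, temperature=0.5):
    """MixMatch-style label guessing for unlabeled (or GAN-generated) images.

    augmented_views: list of K tensors, each (N, C, H, W), holding K different
    augmentations of the same N images.
    Returns (N, num_classes) sharpened soft labels.
    """
    model.eval()
    probs = torch.stack([F.softmax(model(v), dim=1) for v in augmented_views])
    mean = probs.mean(dim=0)                 # average predictions over augmentations
    sharpened = mean ** (1.0 / temperature)  # temperature sharpening reduces entropy
    return sharpened / sharpened.sum(dim=1, keepdim=True)
```

The sharpened soft labels then supervise the unlabeled and generated images alongside the handful of truly labeled ones, which is how the method stretches a very small labeled set.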
Affiliation(s)
- Jiawei Mao
- Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China.
- Xuesong Yin
- Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China.
- Guodao Zhang
- Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China.
- Bowen Chen
- School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, 518000, China.
- Yuanqi Chang
- Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China.
- Weibin Chen
- Department of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325035, China.
- Jieyue Yu
- Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China.
- Yigang Wang
- Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China.
8
Zhang B, Yao K, Xu M, Wu J, Cheng C. Deep Learning Predicts EBV Status in Gastric Cancer Based on Spatial Patterns of Lymphocyte Infiltration. Cancers (Basel) 2021;13:6002. [PMID: 34885112] [PMCID: PMC8656870] [DOI: 10.3390/cancers13236002]
Abstract
Epstein-Barr virus (EBV) infection occurs in around 10% of gastric cancer cases and represents a distinct subtype, characterized by a unique mutation profile, hypermethylation, and overexpression of PD-L1. Moreover, EBV-positive gastric cancer tends to have higher immune infiltration and a better prognosis. EBV infection status in gastric cancer is most commonly determined using PCR and in situ hybridization, but such methods require good nucleic acid preservation. Detection of EBV status from histopathology images may complement PCR and in situ hybridization as a first step of EBV infection assessment. Here, we developed a deep learning-based algorithm to directly predict EBV infection in gastric cancer from H&E-stained histopathology slides. Our model can predict EBV infection not only from tumor regions but also from normal regions with potential changes induced by adjacent EBV+ regions within each H&E slide. Furthermore, in cohorts with zero EBV abundance, a significant difference in immune infiltration between high and low EBV score samples was observed, consistent with the immune infiltration difference observed between EBV-positive and EBV-negative samples. Therefore, we hypothesized that our model's prediction of EBV infection is partially driven by the spatial information of immune cell composition, which was supported by mostly positive local correlations between the EBV score and immune infiltration in both tumor and normal regions across all H&E slides. Finally, EBV scores calculated from our model were found to be significantly associated with prognosis. This framework can be readily applied to develop interpretable models for the prediction of virus infection across cancers.
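The abstract does not specify how tile-level predictions are combined into the slide-level EBV score; purely as an illustration of one common aggregation scheme for whole-slide H&E models (not necessarily the authors'), a minimal sketch:

```python
import torch

@torch.no_grad()
def slide_ebv_score(tile_model, tiles, batch_size=32, device="cpu"):
    """Aggregate tile-level EBV probabilities into one slide-level score.

    tile_model: CNN mapping an H&E tile batch (N, 3, H, W) to logits (N, 2),
                where index 1 is the EBV-positive class (assumed layout).
    tiles:      tensor (T, 3, H, W) of tiles extracted from a single slide.
    Returns the mean EBV-positive probability across tiles.
    """
    tile_model.eval().to(device)
    probs = []
    for start in range(0, tiles.shape[0], batch_size):
        batch = tiles[start:start + batch_size].to(device)
        probs.append(torch.softmax(tile_model(batch), dim=1)[:, 1].cpu())
    return torch.cat(probs).mean().item()
```

A continuous per-slide score of this kind is what allows the downstream correlation and survival analyses described in the abstract.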
Affiliation(s)
- Baoyi Zhang
- Department of Chemical and Biomolecular Engineering, Rice University, Houston, TX 77030, USA;
- Kevin Yao
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA;
- Min Xu
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA;
- Computer Vision Department, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi 144534, United Arab Emirates
- Jia Wu
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA;
- Chao Cheng
- Department of Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Dan L. Duncan Comprehensive Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Institute for Clinical and Translational Research, Baylor College of Medicine, Houston, TX 77030, USA