1
Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y. UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training. Comput Med Imaging Graph 2025; 122:102516. [PMID: 40073706 DOI: 10.1016/j.compmedimag.2025.102516] [Received: 09/08/2024] [Revised: 01/09/2025] [Accepted: 02/18/2025] [Indexed: 03/14/2025]
Abstract
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
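The hierarchical alignment mechanism builds on contrastive vision-language pre-training. As a minimal sketch (not UniBrain's actual implementation, whose sequence- and case-level details are in the paper and code repository), a symmetric InfoNCE-style loss that aligns matched image-report embedding pairs can be written as:

```python
import numpy as np

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/report embeddings.

    img_emb, txt_emb: (N, D) arrays; row i of each is a matched pair.
    Function and variable names are illustrative, not from the paper's code.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # Cross-entropy with the diagonal (matched pairs) as targets
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_softmax = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_softmax))

    # Average image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In a hierarchical scheme, a loss of this form would be applied at each granularity (e.g., per MRI sequence and per case) and the terms summed.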
Affiliation(s)
- Jiayu Lei
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, 230026, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Lisong Dai
- Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
- Haoyun Jiang
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Chaoyi Wu
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Xiaoman Zhang
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Yao Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Jiangchao Yao
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China.
- Weidi Xie
- School of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, 200230, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Yanyong Zhang
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Yuehua Li
- Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
- Ya Zhang
- School of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, 200230, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Yanfeng Wang
- School of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, 200230, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China.
2
Wang J, Cai J, Tang W, Dudurych I, van Tuinen M, Vliegenthart R, van Ooijen P. A comparison of an integrated and image-only deep learning model for predicting the disappearance of indeterminate pulmonary nodules. Comput Med Imaging Graph 2025; 123:102553. [PMID: 40239430 DOI: 10.1016/j.compmedimag.2025.102553] [Received: 09/12/2024] [Revised: 03/18/2025] [Accepted: 04/03/2025] [Indexed: 04/18/2025]
Abstract
BACKGROUND Indeterminate pulmonary nodules (IPNs) require follow-up CT to assess potential growth; however, benign nodules may disappear. Accurately predicting whether IPNs will resolve is a challenge for radiologists. Therefore, we aim to utilize deep-learning (DL) methods to predict the disappearance of IPNs. MATERIAL AND METHODS This retrospective study utilized data from the Dutch-Belgian Randomized Lung Cancer Screening Trial (NELSON) and Imaging in Lifelines (ImaLife) cohort. Participants underwent follow-up CT to determine the evolution of baseline IPNs. The NELSON data were used for model training, and external validation was performed in ImaLife. We developed integrated DL-based models that incorporated CT images and demographic data (age, sex, smoking status, and pack-years). We compared the performance of the integrated models with that of models limited to CT images only and calculated sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). From a clinical perspective, ensuring high specificity is critical, as it minimizes false predictions of non-resolving nodules that should be monitored for evolution on follow-up CTs. Feature importance was calculated using SHapley Additive exPlanations (SHAP) values. RESULTS The training dataset included 840 IPNs (134 resolving) in 672 participants. The external validation dataset included 111 IPNs (46 resolving) in 65 participants. On the external validation set, the performance of the integrated model (sensitivity, 0.50; 95% CI, 0.35-0.65; specificity, 0.91; 95% CI, 0.80-0.96; AUC, 0.82; 95% CI, 0.74-0.90) was comparable to that of the model trained solely on CT images (sensitivity, 0.41; 95% CI, 0.27-0.57; specificity, 0.89; 95% CI, 0.78-0.95; AUC, 0.78; 95% CI, 0.69-0.86; P = 0.39). The top 10 most important features were all image-related. CONCLUSION Deep learning-based models can predict the disappearance of IPNs with high specificity. Integrated models using CT scans and clinical data had performance comparable to that of models using only CT images.
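The reported sensitivity, specificity, and AUC can all be computed directly from predicted probabilities and ground-truth labels; a small self-contained sketch (variable names illustrative, not from the paper's code):

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and AUC for a binary classifier.

    y_true: 0/1 labels (e.g., 1 = resolving nodule); y_score: predicted probabilities.
    """
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # AUC via the Mann-Whitney U statistic: P(score_pos > score_neg)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    auc = (greater + 0.5 * ties) / (len(pos) * len(neg))
    return sensitivity, specificity, auc
```

Note that sensitivity and specificity depend on the chosen threshold, while AUC is threshold-free, which is why the two kinds of numbers can move independently.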
Affiliation(s)
- Jingxuan Wang
- Department of Radiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Jiali Cai
- Department of Epidemiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Wei Tang
- Department of Neurology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Ivan Dudurych
- Department of Radiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Marcel van Tuinen
- Department of Radiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Rozemarijn Vliegenthart
- Department of Radiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Peter van Ooijen
- Department of Radiation Oncology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands.
3
Wang X, Zhao Z, Pan D, Zhou H, Hou J, Sun H, Shen X, Mehta S, Wang W. Deep cross entropy fusion for pulmonary nodule classification based on ultrasound imagery. Front Oncol 2025; 15:1514779. [PMID: 40255427 PMCID: PMC12005990 DOI: 10.3389/fonc.2025.1514779] [Received: 10/21/2024] [Accepted: 03/18/2025] [Indexed: 04/22/2025]
Abstract
Introduction Accurate differentiation of benign and malignant pulmonary nodules in ultrasound remains a clinical challenge due to insufficient diagnostic precision. We propose the Deep Cross-Entropy Fusion (DCEF) model to enhance classification accuracy. Methods A retrospective dataset of 135 patients (27 benign, 68 malignant training; 11 benign, 29 malignant testing) was analyzed. Manually annotated ultrasound ROIs were preprocessed and input into DCEF, which integrates ResNet, DenseNet, VGG, and InceptionV3 via entropy-based fusion. Performance was evaluated using AUC, accuracy, sensitivity, specificity, precision, and F1-score. Results DCEF achieved an AUC of 0.873 (training) and 0.792 (testing), outperforming traditional methods. Test metrics included 71.5% accuracy, 70.69% sensitivity, 70.58% specificity, 72.55% precision, and 71.13% F1-score, demonstrating robust diagnostic capability. Discussion DCEF's multi-architecture fusion enhances diagnostic reliability for ultrasound-based nodule assessment. While promising, validation in larger multi-center cohorts is needed to address single-center data limitations. Future work will explore next-generation architectures and multi-modal integration.
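The abstract does not spell out the entropy-based fusion rule, so the following is only one plausible reading: weight each backbone's softmax output by the inverse of its predictive entropy, so that confident models dominate the fused prediction. All names here are hypothetical:

```python
import numpy as np

def entropy_weighted_fusion(prob_list, eps=1e-12):
    """Fuse per-model class probabilities, down-weighting uncertain models.

    prob_list: list of (C,) probability vectors, one per backbone
    (e.g., ResNet, DenseNet, VGG, InceptionV3). A model whose output
    has high entropy (near-uniform) receives a smaller weight.
    """
    probs = np.asarray(prob_list, dtype=float)            # (M, C)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)  # (M,) per-model entropy
    weights = 1.0 / (entropy + eps)                       # confident -> large weight
    weights = weights / weights.sum()
    fused = (weights[:, None] * probs).sum(axis=0)
    return fused / fused.sum()                            # renormalize
```

With one confident model ([0.99, 0.01]) and one maximally uncertain model ([0.5, 0.5]), the fused output stays close to the confident prediction.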
Affiliation(s)
- Xian Wang
- Department of Ultrasound, Affiliated People’s Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Medical College of Yangzhou University, Yangzhou, Jiangsu, China
- Ziou Zhao
- Department of Ultrasound, Affiliated People’s Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Donggang Pan
- Department of Radiology, Affiliated People’s Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Hui Zhou
- Department of Ultrasound, Affiliated People’s Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Jie Hou
- Department of Ultrasound, Affiliated People’s Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Hui Sun
- Department of Pathology, Affiliated People’s Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Xiangjun Shen
- School of Computer Science & Communication Engineering, Jiangsu University, Zhenjiang, Jiangsu, China
- Sumet Mehta
- School of Computer Science & Communication Engineering, Jiangsu University, Zhenjiang, Jiangsu, China
- Wei Wang
- Department of Radiology, Affiliated Hospital of Yangzhou University, Yangzhou, Jiangsu, China
4
Xue P, Lu H, Fu Y, Ji H, Ren M, Xiao T, Zhang Z, Dong E. Prior knowledge-based multi-task learning network for pulmonary nodule classification. Comput Med Imaging Graph 2025; 121:102511. [PMID: 39970821 DOI: 10.1016/j.compmedimag.2025.102511] [Received: 07/04/2023] [Revised: 03/20/2024] [Accepted: 02/07/2025] [Indexed: 02/21/2025]
Abstract
The morphological characteristics of a pulmonary nodule, also known as its attributes, are crucial for the classification of benign and malignant nodules. In clinical practice, radiologists usually conduct a comprehensive analysis of the correlations between different attributes to accurately judge whether pulmonary nodules are benign or malignant. However, most pulmonary nodule classification models ignore the inherent correlations between different attributes, leading to unsatisfactory classification performance. To address these problems, we propose a prior knowledge-based multi-task learning (PK-MTL) network for pulmonary nodule classification. Specifically, the correlations between different attributes are treated as prior knowledge and established through multi-order task transfer learning. The complex correlations between different attributes are then encoded into a hypergraph structure, and a hypergraph neural network is leveraged to learn the correlation representation. On the other hand, a multi-task learning framework is constructed for joint segmentation, benign-malignant classification, and attribute scoring of pulmonary nodules, aiming to comprehensively improve the classification performance. To embed prior knowledge into the multi-task learning framework, a feature fusion block is designed to organically integrate image-level features with attribute prior knowledge. In addition, a channel-wise cross-attention block is constructed to fuse the features of the encoder and decoder, further improving segmentation performance. Extensive experiments on the LIDC-IDRI dataset show that our proposed method achieves 91.04% accuracy in diagnosing malignant nodules, obtaining state-of-the-art results.
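A joint segmentation/classification/attribute-scoring objective is typically a weighted sum of per-task losses. A minimal sketch, with illustrative weights and standard loss choices (Dice, cross-entropy, MSE) that may differ from the paper's exact formulation:

```python
import numpy as np

def multitask_loss(seg_pred, seg_gt, cls_pred, cls_gt, attr_pred, attr_gt,
                   w_seg=1.0, w_cls=1.0, w_attr=0.5, eps=1e-7):
    """Weighted sum of three task losses in a joint framework:
    Dice loss (segmentation), cross-entropy (benign/malignant classification),
    and MSE (attribute scores). The w_* weights are illustrative, not the paper's.
    """
    seg_pred, seg_gt = np.asarray(seg_pred, float), np.asarray(seg_gt, float)
    inter = (seg_pred * seg_gt).sum()
    dice = 1.0 - (2 * inter + eps) / (seg_pred.sum() + seg_gt.sum() + eps)
    # cls_pred: class-probability vector; cls_gt: integer class index
    ce = -np.log(np.clip(np.asarray(cls_pred)[cls_gt], eps, 1.0))
    mse = np.mean((np.asarray(attr_pred) - np.asarray(attr_gt)) ** 2)
    return w_seg * dice + w_cls * ce + w_attr * mse
```

Perfect predictions on all three tasks drive the combined loss to zero, while errors in any single task raise it.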
Affiliation(s)
- Peng Xue
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, China.
- Hang Lu
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, China
- Yu Fu
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, China
- Huizhong Ji
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, China
- Meirong Ren
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, China
- Taohui Xiao
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, China
- Zhili Zhang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, China
- Enqing Dong
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, China; Shandong Intelligent Sensing Electronic Technology Co., Ltd. Weihai, 264209, China.
5
Liu W, Zhang L, Li X, Liu H, Feng M, Li Y. A semisupervised knowledge distillation model for lung nodule segmentation. Sci Rep 2025; 15:10562. [PMID: 40148406 PMCID: PMC11950440 DOI: 10.1038/s41598-025-94132-9] [Received: 10/23/2024] [Accepted: 03/11/2025] [Indexed: 03/29/2025]
Abstract
Early screening of lung nodules is mainly done manually by reading the patient's lung CT. This approach is time-consuming, laborious, and prone to missed diagnoses and misdiagnoses. Current methods for lung nodule detection face limitations such as the high cost of obtaining large-scale, high-quality annotated datasets and poor robustness when dealing with data of varying quality. The challenges include accurately detecting small and irregular nodules, as well as ensuring model generalization across different data sources. Therefore, this paper proposes a lung nodule detection model based on semi-supervised learning and knowledge distillation (SSLKD-UNet). A feature encoder with a hybrid CNN-Transformer architecture is designed to fully extract the features of lung nodule images. In addition, a distillation training strategy is designed in which a teacher model instructs the student model to learn features more relevant to nodule regions in CT images. Finally, with the help of semi-supervised learning, coarse annotations of lung nodules in the LUNA16 and LC183 datasets are combined with accurate annotations to complete the model training process. Experiments show that, guided by the semi-supervised learning and knowledge distillation training strategies, the proposed model can be trained with a small amount of inexpensive, easy-to-obtain coarse-grained annotation (i.e., inaccurate or incomplete annotations, such as nodule coordinates instead of pixel-level segmentation masks) and realize early recognition of lung nodules. The segmentation results further corroborate the model's efficacy, with SSLKD-UNet demonstrating superior delineation of lung nodules, even in cases with complex anatomical structures and varying nodule sizes.
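The teacher-student distillation step usually minimizes a temperature-softened KL divergence between the two models' output distributions (Hinton-style distillation). A minimal sketch, with the caveat that SSLKD-UNet's exact distillation target (feature maps vs. logits) is detailed in the paper, not here:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0, eps=1e-12):
    """KL divergence between temperature-softened teacher and student
    distributions; the T**2 factor keeps gradient magnitudes comparable
    across temperatures."""
    p = softmax(teacher_logits, T)   # teacher: frozen soft targets
    q = softmax(student_logits, T)   # student: being trained
    return T * T * np.sum(p * np.log((p + eps) / (q + eps)))
```

The loss is zero when the student reproduces the teacher's distribution and grows as the two diverge; in practice it is combined with a supervised loss on the available accurate annotations.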
Affiliation(s)
- Wenjuan Liu
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, 116021, China
- Limin Zhang
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, 116021, China
- Xiangrui Li
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, 116021, China
- Haoran Liu
- Clinical Medicine, Dalian Medical University, Dalian, 116000, China
- Min Feng
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, 116021, China.
- Yanxia Li
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, 116021, China.
6
Hossain MS, Basak N, Mollah MA, Nahiduzzaman M, Ahsan M, Haider J. Ensemble-based multiclass lung cancer classification using hybrid CNN-SVD feature extraction and selection method. PLoS One 2025; 20:e0318219. [PMID: 40106514 PMCID: PMC11922248 DOI: 10.1371/journal.pone.0318219] [Received: 07/02/2024] [Accepted: 01/10/2025] [Indexed: 03/22/2025]
Abstract
Lung cancer (LC) is a leading cause of cancer-related fatalities worldwide, underscoring the urgency of early detection for improved patient outcomes. The main objective of this research is to harness novel strategies of artificial intelligence to identify and classify lung cancers more precisely from CT scan images at an early stage. This study introduces a novel lung cancer detection method, mainly based on Convolutional Neural Networks (CNN) and later customized for binary and multiclass classification, utilizing a publicly available dataset of chest CT scan images of lung cancer. The main contribution of this research lies in its hybrid CNN-SVD (Singular Value Decomposition) method and its robust voting ensemble approach, which result in superior accuracy and effectiveness in mitigating potential errors. By employing contrast-limited adaptive histogram equalization (CLAHE), contrast-enhanced images were generated with minimal noise and prominent distinctive features. Subsequently, a CNN-SVD-Ensemble model was implemented to extract important features and reduce dimensionality. The extracted features were then processed by a set of ML algorithms along with a voting ensemble approach. Additionally, Gradient-weighted Class Activation Mapping (Grad-CAM) was integrated as an explainable AI (XAI) technique to enhance model transparency by highlighting key influencing regions in the CT scans, which improved interpretability and ensured reliable and trustworthy results for clinical applications. This research offered state-of-the-art results, achieving remarkable performance metrics with an accuracy, AUC, precision, recall, F1 score, Cohen's Kappa and Matthews Correlation Coefficient (MCC) of 99.49%, 99.73%, 100%, 99%, 99%, 99.15% and 99.16%, respectively, addressing prior research gaps and setting a new benchmark in the field. Furthermore, in binary classification, all performance indicators attained a perfect score of 100%. The robustness of the suggested approach offers more reliable and impactful insights for the medical field, improving existing knowledge and setting the stage for future innovations.
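The SVD reduction and voting ensemble can be sketched as two standard building blocks: projection of deep features onto their top singular vectors, followed by hard majority voting over the classifiers' predictions. Names and the exact pipeline wiring are illustrative, not the paper's:

```python
import numpy as np

def svd_reduce(features, k):
    """Project a feature matrix onto its top-k right singular vectors,
    a simple stand-in for the CNN-SVD dimensionality-reduction step."""
    features = np.asarray(features, dtype=float)
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return (features - mean) @ vt[:k].T        # (N, k) reduced features

def majority_vote(predictions):
    """Hard voting across classifier predictions: (M, N) int labels -> (N,) labels."""
    predictions = np.asarray(predictions)
    return np.array([np.bincount(col).argmax() for col in predictions.T])
```

In a pipeline of this shape, `svd_reduce` would feed the compact features to several ML classifiers, and `majority_vote` would combine their per-sample label predictions.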
Affiliation(s)
- Md Sabbir Hossain
- Department of Electronics & Telecommunication Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Niloy Basak
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Md Aslam Mollah
- Department of Electronics & Telecommunication Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, York, United Kingdom
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Manchester, United Kingdom
7
Zhu Q, Fei L. Cross-ViT based benign and malignant classification of pulmonary nodules. PLoS One 2025; 20:e0318670. [PMID: 39908279 DOI: 10.1371/journal.pone.0318670] [Received: 08/10/2024] [Accepted: 01/21/2025] [Indexed: 02/07/2025]
Abstract
The discrimination of benign and malignant pulmonary nodules plays a very important role in diagnosing the extent of lung cancer lesions. Many methods use convolutional neural networks (CNNs) for benign-malignant classification of pulmonary nodules, but traditional CNN models focus more on the local features of pulmonary nodules and lack the extraction of their global features. To solve this problem, a Cross fusion attention ViT (Cross-ViT) network is proposed that fuses local features extracted by a CNN and global features extracted by a Transformer. The network first extracts different features independently through two branches and then performs feature fusion through the Cross fusion attention module. Cross-ViT can effectively capture and process both local and global information of lung nodules, which improves the accuracy of classifying the benign and malignant nature of pulmonary nodules. Experimental validation was performed on the LUNA16 dataset: the accuracy, precision, recall and F1 score reached 91.04%, 91.42%, 92.45% and 91.92%, respectively, and with SENet as the CNN branch they reached 92.43%, 94.27%, 91.68% and 92.96%, respectively. The results show that the accuracy, precision, recall and F1 score of the proposed method are 0.3%, 0.11%, 4.52% and 3.03% higher, respectively, than those of the best-performing comparison method, and that the Cross-ViT network performs benign-malignant classification better than most classification methods.
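A cross-fusion attention module of this kind typically exchanges information between branches via cross-attention: one branch's tokens act as queries over the other branch's tokens. A single-head sketch without learned projections (a simplification; real implementations learn Wq, Wk, Wv matrices, and the paper's module may differ):

```python
import numpy as np

def softmax_rows(z):
    """Row-wise softmax with numerical stabilization."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(queries, keys_values):
    """Single-head cross-attention: one branch's tokens (queries) attend
    to the other branch's tokens (keys/values), so CNN features can be
    enriched with Transformer context and vice versa."""
    q = np.asarray(queries, dtype=float)        # (Nq, d)
    kv = np.asarray(keys_values, dtype=float)   # (Nkv, d)
    d = q.shape[-1]
    attn = softmax_rows(q @ kv.T / np.sqrt(d))  # (Nq, Nkv) attention weights
    return attn @ kv                            # (Nq, d) fused features
```

Each output row is a convex combination of the other branch's tokens; running the operation in both directions and concatenating (or summing) the results gives a symmetric fusion of local and global features.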
Affiliation(s)
- Qinfang Zhu
- Geriatric Hospital Affiliated to Wuhan University of Science and Technology, Wuhan, Hubei, China
- Liangyan Fei
- Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan, Hubei, China
8
Deng H, Li Y, Liu X, Cheng K, Fang T, Min X. Multi-scale dual attention embedded U-shaped network for accurate segmentation of coronary vessels in digital subtraction angiography. Med Phys 2025. [PMID: 39899182 DOI: 10.1002/mp.17618] [Received: 04/01/2024] [Revised: 11/29/2024] [Accepted: 12/23/2024] [Indexed: 02/04/2025]
Abstract
BACKGROUND Most attention-based networks fall short in effectively integrating spatial and channel-wise information across different scales, which results in suboptimal performance for segmenting coronary vessels in x-ray digital subtraction angiography (DSA) images. This limitation becomes particularly evident when attempting to identify tiny sub-branches. PURPOSE To address this limitation, a multi-scale dual attention embedded network (named MDA-Net) is proposed to consolidate contextual spatial and channel information across contiguous levels and scales. METHODS MDA-Net employs five cascaded double-convolution blocks within its encoder to adeptly extract multi-scale features. It incorporates skip connections that facilitate the retention of low-level feature details throughout the decoding phase, thereby enhancing the reconstruction of detailed image information. Furthermore, MDA modules, which take in features from neighboring scales and hierarchical levels, are tasked with discerning subtle distinctions between foreground elements, such as coronary vessels of diverse morphologies and dimensions, and the complex background, which includes structures like catheters or other tissues with analogous intensities. To sharpen the segmentation accuracy, the network utilizes a composite loss function that integrates intersection over union (IoU) loss with binary cross-entropy loss, ensuring the precision of the segmentation outcomes and maintaining an equilibrium between positive and negative classifications. RESULTS Experimental results demonstrate that MDA-Net not only performs more robustly and effectively on DSA images under various image conditions, but also achieves significant advantages over state-of-the-art methods, achieving the optimal scores in terms of IoU, Dice, accuracy, and the 95th-percentile Hausdorff distance (HD95). CONCLUSIONS MDA-Net has high robustness for coronary vessel segmentation, providing an active strategy for early diagnosis of cardiovascular diseases. The code is publicly available at https://github.com/30410B/MDA-Net.git.
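A composite objective combining a (soft) IoU loss with binary cross-entropy can be sketched as follows; the weights are illustrative, since the abstract does not give the paper's exact weighting:

```python
import numpy as np

def composite_seg_loss(pred, target, w_iou=1.0, w_bce=1.0, eps=1e-7):
    """Soft IoU loss plus binary cross-entropy for binary segmentation.

    pred: predicted foreground probabilities in [0, 1]; target: 0/1 mask.
    The IoU term optimizes region overlap directly, while the BCE term
    provides dense per-pixel gradients, balancing positives and negatives.
    """
    pred = np.clip(np.asarray(pred, dtype=float).ravel(), eps, 1 - eps)
    target = np.asarray(target, dtype=float).ravel()
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    iou_loss = 1.0 - (inter + eps) / (union + eps)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return w_iou * iou_loss + w_bce * bce
```

Near-perfect predictions keep both terms, and hence the total, close to zero; inverted predictions inflate both.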
Affiliation(s)
- He Deng
- School of Computer Science and Technology, Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Yuqing Li
- School of Computer Science and Technology, Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Xu Liu
- School of Computer Science and Technology, Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Kai Cheng
- School of Computer Science and Technology, Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Tong Fang
- School of Computer Science and Technology, Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, Wuhan, Hubei, China
- Xiangde Min
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
9
Zhu H, Liu W, Gao Z, Zhang H. Explainable Classification of Benign-Malignant Pulmonary Nodules With Neural Networks and Information Bottleneck. IEEE Trans Neural Netw Learn Syst 2025; 36:2028-2039. [PMID: 37843998 DOI: 10.1109/tnnls.2023.3303395] [Indexed: 10/18/2023]
Abstract
Computerized tomography (CT) is a clinically primary technique to differentiate benign from malignant pulmonary nodules for lung cancer diagnosis. Early classification of pulmonary nodules is essential to slow down the degenerative process and reduce mortality. The interactive paradigm assisted by neural networks is considered an effective means for early lung cancer screening in large populations. However, some inherent characteristics of pulmonary nodules in high-resolution CT images, e.g., diverse shapes and sparse distribution over the lung fields, have been inducing inaccurate results. On the other hand, most existing methods with neural networks are unsatisfactory owing to a lack of transparency. To overcome these obstacles, a unified framework is proposed, including classification and feature visualization stages, to learn distinctive features and provide visual results. Specifically, a bilateral scheme is employed to synchronously extract and aggregate global-local features in the classification stage, where the global branch is constructed to perceive deep-level features and the local branch is built to focus on refined details. Furthermore, an encoder is built to generate features and a decoder is constructed to simulate decision behavior, followed by an information bottleneck viewpoint to optimize the objective. Extensive experiments are performed to evaluate our framework on two publicly available datasets, namely, 1) the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and 2) the Lung and Colon Histopathological Image Dataset (LC25000). For instance, our framework achieves 92.98% accuracy and presents additional visualizations on the LIDC. The experimental results show that our framework obtains outstanding performance and is effective in facilitating explainability. They also demonstrate that this unified framework is a serviceable tool that has the scalability to be introduced into clinical research.
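The information bottleneck viewpoint mentioned in the abstract is conventionally formalized as a trade-off between compressing the input and preserving information about the label; in its standard generic form (not necessarily the paper's exact notation):

```latex
% Information bottleneck: learn a representation Z of input X that is
% maximally informative about label Y while compressing away the rest of X.
\min_{p(z \mid x)} \; \beta \, I(X; Z) \; - \; I(Z; Y)
```

Here I(·;·) denotes mutual information, Z the learned representation, and β the compression-prediction trade-off coefficient; the paper's concrete objective may differ in parameterization.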
10
Ji G, Liu F, Chen Z, Peng J, Deng H, Xiao S, Li Y. Application value of CT three-dimensional reconstruction technology in the identification of benign and malignant lung nodules and the characteristics of nodule distribution. BMC Med Imaging 2025; 25:7. [PMID: 39762736 PMCID: PMC11702159 DOI: 10.1186/s12880-024-01505-z] [Received: 05/28/2024] [Accepted: 11/18/2024] [Indexed: 01/11/2025]
Abstract
OBJECTIVE The study aimed to evaluate the application value of computed tomography (CT) three-dimensional (3D) reconstruction technology in identifying benign and malignant lung nodules and characterizing the distribution of the nodules. METHODS CT 3D reconstruction was performed for lung nodules. Pathological results were used as the gold standard to compare the detection rates of various lung nodule signs between conventional chest CT scanning and CT 3D reconstruction techniques. Additionally, the differences in mean diffusion coefficient values and partial anisotropy index values between male and female patients were analyzed. RESULTS Pathologic confirmation identified 30 patients with benign lesions and 45 patients with malignant lesions. CT 3D reconstruction demonstrated higher diagnostic accuracy for lung nodule imaging signs compared to conventional CT scanning (P < 0.05). The mean diffusion coefficient values and partial anisotropy index values were lower in female patients compared to male patients in the lung nodule lesion area, lung perinodular edema area, and normal lung tissue (P < 0.05). Conventional CT scanning showed a benign accuracy rate of 63.33% and a malignant accuracy rate of 60.00%, whereas CT 3D imaging achieved a benign and malignant accuracy rate of 86.67% for both. The accuracy rates for CT 3D imaging were significantly higher than those for conventional CT scanning (P < 0.05). CONCLUSION CT 3D imaging technology demonstrates high diagnostic accuracy in differentiating benign from malignant lung nodules.
Affiliation(s)
- Guanghai Ji: Department of Radiology, The First Affiliated Hospital of Yangtze University, No. 40 Jinlong Road, Shashi District, Jingzhou, Hubei, 434000, China
- Fei Liu: Department of Radiology, The First Affiliated Hospital of Yangtze University, No. 40 Jinlong Road, Shashi District, Jingzhou, Hubei, 434000, China
- Zhiqiang Chen: Department of Radiology, The First Affiliated Hospital of Hainan Medical University, Haikou, Hainan, 570102, China
- Jie Peng: Department of Radiology, The First Affiliated Hospital of Yangtze University, No. 40 Jinlong Road, Shashi District, Jingzhou, Hubei, 434000, China
- Hao Deng: Department of Urology, The First Affiliated Hospital of Yangtze University, Jingzhou, Hubei, 434000, China
- Sheng Xiao: Department of Radiology, The First Affiliated Hospital of Yangtze University, No. 40 Jinlong Road, Shashi District, Jingzhou, Hubei, 434000, China
- Yun Li: Department of Radiology, The First Affiliated Hospital of Yangtze University, No. 40 Jinlong Road, Shashi District, Jingzhou, Hubei, 434000, China
11
Miao S, Dong Q, Liu L, Xuan Q, An Y, Qi H, Wang Q, Liu Z, Wang R. Dual biomarkers CT-based deep learning model incorporating intrathoracic fat for discriminating benign and malignant pulmonary nodules in multi-center cohorts. Phys Med 2025; 129:104877. [PMID: 39689571] [DOI: 10.1016/j.ejmp.2024.104877]
Abstract
BACKGROUND Recent studies in the field of lung cancer have emphasized the important role of body composition, particularly fatty tissue, as a prognostic factor. However, fatty tissue has rarely been incorporated into models that discriminate benign from malignant pulmonary nodules. PURPOSE This study proposes a deep learning (DL) approach to explore the potential predictive value of dual imaging markers, including intrathoracic fat (ITF), in patients with pulmonary nodules. METHODS We enrolled 1321 patients with pulmonary nodules from three centers. Image features were extracted by DL from computed tomography (CT) images of the pulmonary nodules and ITF, and the multimodal information was used to discriminate benign from malignant nodules. RESULTS The areas under the receiver operating characteristic curve (AUC) of the model combining ITF with pulmonary nodules were 0.910 (95% confidence interval [CI]: 0.870-0.950, P = 0.016), 0.922 (95% CI: 0.883-0.960, P = 0.037), and 0.899 (95% CI: 0.849-0.949, P = 0.033) in the internal test cohort, external test cohort 1, and external test cohort 2, respectively, significantly better than the nodule-only model. The intrathoracic fat index (ITFI) emerged as an independent factor for malignancy in patients with pulmonary nodules, with each additional unit associated with a 9.4% decrease in the risk of malignancy. CONCLUSION This study demonstrates the potential auxiliary predictive value of ITF as a noninvasive imaging biomarker in assessing pulmonary nodules.
Affiliation(s)
- Shidi Miao: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Qi Dong: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Le Liu: Department of Internal Medicine, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, China
- Qifan Xuan: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Yunfei An: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Hongzhuo Qi: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Qiujun Wang: Department of General Practice, the Second Affiliated Hospital, Harbin Medical University, Harbin, China
- Zengyao Liu: Department of Interventional Medicine, the First Affiliated Hospital, Harbin Medical University, Harbin, China
- Ruitao Wang: Department of Internal Medicine, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, China
12
Xie L, Xu Y, Zheng M, Chen Y, Sun M, Archer MA, Mao W, Tong Y, Wan Y. An anthropomorphic diagnosis system of pulmonary nodules using weak annotation-based deep learning. Comput Med Imaging Graph 2024; 118:102438. [PMID: 39426342] [PMCID: PMC11620937] [DOI: 10.1016/j.compmedimag.2024.102438]
Abstract
The accurate categorization of lung nodules in CT scans is an essential aspect of the prompt detection and diagnosis of lung cancer. The categorization of grade and texture for nodules is particularly significant, since it can help radiologists and clinicians make better-informed decisions concerning the management of nodules. However, currently existing nodule classification techniques serve the single function of nodule classification and rely on an extensive amount of high-quality annotation data, which does not meet the requirements of clinical practice. To address this issue, we develop an anthropomorphic diagnosis system for pulmonary nodules (PN) based on deep learning (DL) that is trained on weak annotation data and has performance comparable to full-annotation-based diagnosis systems. The proposed system uses DL models to classify PNs (benign vs. malignant) with weak annotations, which eliminates the need for time-consuming and labor-intensive manual annotations of PNs. Moreover, the PN classification networks, augmented with handcrafted shape features acquired through the ball-scale transform technique, demonstrate the capability to differentiate PNs with diverse labels, including pure ground-glass opacities, part-solid nodules, and solid nodules. Through 5-fold cross-validation on two datasets, the system achieved the following results: (1) an Area Under the Curve (AUC) of 0.938 for PN localization and an AUC of 0.912 for PN differential diagnosis on the LIDC-IDRI dataset of 814 testing cases, and (2) an AUC of 0.943 for PN localization and an AUC of 0.815 for PN differential diagnosis on the in-house dataset of 822 testing cases. In summary, our system demonstrates efficient localization and differential diagnosis of PNs in a resource-limited environment, and thus could be translated into clinical use in the future.
Affiliation(s)
- Lipeng Xie: School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou, China
- Yongrui Xu: Department of Cardio-thoracic Surgery, Nanjing Medical University Affiliated Wuxi People's Hospital, Wuxi, Jiangsu, China; Nanjing Medical University, Nanjing, Jiangsu, China
- Mingfeng Zheng: Department of Cardio-thoracic Surgery, Nanjing Medical University Affiliated Wuxi People's Hospital, Wuxi, Jiangsu, China; Nanjing Medical University, Nanjing, Jiangsu, China
- Yundi Chen: Department of Biomedical Engineering, Binghamton University, Binghamton, NY, USA
- Min Sun: Division of Oncology, University of Pittsburgh Medical Center Hillman Cancer Center at St. Margaret, Pittsburgh, PA, USA
- Michael A Archer: Division of Thoracic Surgery, SUNY Upstate Medical University, USA
- Wenjun Mao: Department of Cardio-thoracic Surgery, Nanjing Medical University Affiliated Wuxi People's Hospital, Wuxi, Jiangsu, China; Nanjing Medical University, Nanjing, Jiangsu, China
- Yubing Tong: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA 19104, USA
- Yuan Wan: Department of Biomedical Engineering, Binghamton University, Binghamton, NY, USA
13
Lv E, Kang X, Wen P, Tian J, Zhang M. A novel benign and malignant classification model for lung nodules based on multi-scale interleaved fusion integrated network. Sci Rep 2024; 14:27506. [PMID: 39528563] [PMCID: PMC11555393] [DOI: 10.1038/s41598-024-79058-y]
Abstract
One of the precursors of lung cancer is the presence of lung nodules, and accurate identification of their benign or malignant nature is important for the long-term survival of patients. With the development of artificial intelligence, deep learning has become the main method for lung nodule classification. However, successful deep learning models usually require a large number of parameters and carefully annotated data. In the field of medical images, the availability of such data is usually limited, which means deep networks often perform poorly on new test data. In addition, models based on a linearly stacked, single-branch structure hinder the extraction of multi-scale features and reduce classification performance. In this paper, to address these problems, we propose a lightweight interleaved fusion integration network with multi-scale feature learning modules, called MIFNet. MIFNet consists of a series of MIF blocks that efficiently combine multiple convolutional layers containing 1 × 1 and 3 × 3 convolutional kernels with shortcut links to extract multi-scale features at different levels and preserve them throughout the block. The model has only 0.7 M parameters and requires little computational cost and memory space compared to many ImageNet-pretrained CNN architectures. MIFNet was evaluated in exhaustive experiments on the reconstructed LUNA16 dataset, achieving impressive results with 94.82% accuracy, a 97.34% F1 score, 96.74% precision, 97.10% sensitivity, and 84.75% specificity. The results show that our proposed deep integrated network achieves higher performance than pre-trained deep networks and state-of-the-art methods. This provides an objective and efficient auxiliary method for accurately classifying the type of lung nodule in medical images.
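As a side note for readers comparing entries in this list: the metrics quoted in these abstracts (accuracy, precision, sensitivity, specificity, F1) all derive from the binary confusion matrix. A minimal sketch, with purely hypothetical counts that are not taken from any cited study:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# Hypothetical counts for illustration only (not from the cited study).
acc, prec, sens, spec, f1 = classification_metrics(tp=90, fp=10, tn=85, fn=15)
```

Note that a model can report high accuracy while specificity lags (as in the MIFNet results above) whenever the benign class is small or harder to separate.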
Affiliation(s)
- Enhui Lv: School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou, Jiangsu, China
- Xingxing Kang: School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou, Jiangsu, China
- Pengbo Wen: School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou, Jiangsu, China
- Jiaqi Tian: School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou, Jiangsu, China
- Mengying Zhang: School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou, Jiangsu, China
14
Huang Q, Li G. Knowledge graph based reasoning in medical image analysis: A scoping review. Comput Biol Med 2024; 182:109100. [PMID: 39244959] [DOI: 10.1016/j.compbiomed.2024.109100]
Abstract
Automated computer-aided diagnosis (CAD) is becoming more significant in the field of medicine due to advancements in computer hardware performance and the progress of artificial intelligence. The knowledge graph is a structure for visually representing knowledge facts. In the last decade, a large body of work based on knowledge graphs has effectively improved the organization and interpretability of large-scale complex knowledge. Introducing knowledge graph inference into CAD is a research direction with significant potential. In this review, we first briefly cover the basic principles and application methods of knowledge graphs. Then, we systematically organize and analyze the research and application of knowledge graphs in medical imaging-assisted diagnosis. We also summarize the shortcomings of the current research, such as medical data barriers and deficiencies, low utilization of multimodal information, and weak interpretability. Finally, we propose future research directions with the potential to address the shortcomings of current approaches.
Affiliation(s)
- Qinghua Huang: School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, 127 West Youyi Road, Beilin District, Xi'an, 710072, Shaanxi, China
- Guanghui Li: School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, 127 West Youyi Road, Beilin District, Xi'an, 710072, Shaanxi, China; School of Computer Science, Northwestern Polytechnical University, 1 Dongxiang Road, Chang'an District, Xi'an, 710129, Shaanxi, China
15
Esha JF, Islam T, Pranto MAM, Borno AS, Faruqui N, Yousuf MA, Azad AKM, Al-Moisheer AS, Alotaibi N, Alyami SA, Moni MA. Multi-View Soft Attention-Based Model for the Classification of Lung Cancer-Associated Disabilities. Diagnostics (Basel) 2024; 14:2282. [PMID: 39451604] [PMCID: PMC11506595] [DOI: 10.3390/diagnostics14202282]
Abstract
Background: The detection of lung nodules at their early stages may significantly enhance the survival rate and prevent progression to severe disability caused by advanced lung cancer, but it often requires manual and laborious efforts for radiologists, with limited success. To alleviate it, we propose a Multi-View Soft Attention-Based Convolutional Neural Network (MVSA-CNN) model for multi-class lung nodular classifications in three stages (benign, primary, and metastatic). Methods: Initially, patches from each nodule are extracted into three different views, each fed to our model to classify the malignancy. A dataset, namely the Lung Image Database Consortium Image Database Resource Initiative (LIDC-IDRI), is used for training and testing. The 10-fold cross-validation approach was used on the database to assess the model's performance. Results: The experimental results suggest that MVSA-CNN outperforms other competing methods with 97.10% accuracy, 96.31% sensitivity, and 97.45% specificity. Conclusions: We hope the highly predictive performance of MVSA-CNN in lung nodule classification from lung Computed Tomography (CT) scans may facilitate more reliable diagnosis, thereby improving outcomes for individuals with disabilities who may experience disparities in healthcare access and quality.
Affiliation(s)
- Jannatul Ferdous Esha: Department of Information and Communication Technology, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka 1216, Bangladesh
- Tahmidul Islam: Department of Information and Communication Technology, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka 1216, Bangladesh
- Md. Appel Mahmud Pranto: Department of Information and Communication Technology, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka 1216, Bangladesh
- Abrar Siam Borno: Department of Information and Communication Technology, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka 1216, Bangladesh
- Nuruzzaman Faruqui: Department of Software Engineering, Daffodil International University, Daffodil Smart City, Birulia 1216, Bangladesh
- Mohammad Abu Yousuf: Institute of Information Technology, Jahangirnagar University, Dhaka 1342, Bangladesh
- AKM Azad: Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Asmaa Soliman Al-Moisheer: Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Naif Alotaibi: Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Salem A. Alyami: Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Mohammad Ali Moni: AI & Digital Health Technology, AI and Cyber Futures Institute, Charles Sturt University, Bathurst, NSW 2795, Australia; AI & Digital Health Technology, Rural Health Research Institute, Charles Sturt University, Orange, NSW 2800, Australia
16
Ashames MMA, Demir A, Gerek ON, Fidan M, Gulmezoglu MB, Ergin S, Edizkan R, Koc M, Barkana A, Calisir C. Are deep learning classification results obtained on CT scans fair and interpretable? Phys Eng Sci Med 2024; 47:967-979. [PMID: 38573489] [PMCID: PMC11408573] [DOI: 10.1007/s13246-024-01419-8]
Abstract
Following the great success of various deep learning methods in image and object classification, the biomedical image processing community is also overwhelmed with their applications to various automatic diagnosis cases. Unfortunately, most of the deep learning-based classification attempts in the literature focus solely on extreme accuracy scores, without considering interpretability or patient-wise separation of training and test data. For example, most lung nodule classification papers using deep learning randomly shuffle data and split it into training, validation, and test sets, so that certain images from the Computed Tomography (CT) scan of a person fall into the training set while other images of the same person fall into the validation or test sets. This can result in misleading accuracy rates and the learning of irrelevant features, ultimately reducing the real-life usability of these models. When deep neural networks trained with this traditional, unfair data shuffling are challenged with new patient images, the trained models perform poorly. In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when new patient images are tested. Heat map visualizations of the activations of networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules. We argue that the research question posed in the title has a positive answer only if the deep neural networks are trained with images of patients that are strictly isolated from the validation and testing patient sets.
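The patient-level leakage this abstract describes can be avoided by splitting on patient IDs rather than on individual images. A minimal standard-library sketch with hypothetical data (the cited study does not publish code; library helpers such as scikit-learn's GroupShuffleSplit do the same job):

```python
import random

def patient_level_split(sample_patient_ids, test_frac=0.4, seed=0):
    """Split sample indices so that no patient appears in both train and test."""
    patients = sorted(set(sample_patient_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)                      # randomize at the patient level
    n_test = max(1, int(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    # Every slice follows its patient to one side of the split.
    train_idx = [i for i, p in enumerate(sample_patient_ids) if p not in test_patients]
    test_idx = [i for i, p in enumerate(sample_patient_ids) if p in test_patients]
    return train_idx, test_idx

# Toy example: 20 CT slices drawn from 5 patients, 4 slices each.
ids = [p for p in range(5) for _ in range(4)]
train_idx, test_idx = patient_level_split(ids)
assert {ids[i] for i in train_idx}.isdisjoint({ids[i] for i in test_idx})
```

A naive `random.shuffle` over the 20 slices would, with near certainty, place slices of the same patient on both sides, which is exactly the unfair protocol the paper criticizes.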
Affiliation(s)
- Mohamad M A Ashames: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ahmet Demir: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Omer N Gerek: Department of Electrical and Electronics Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Mehmet Fidan: Vocational School of Transportation, Eskisehir Technical University, Eskisehir, Turkey
- M Bilginer Gulmezoglu: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Semih Ergin: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Rifat Edizkan: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Mehmet Koc: Department of Computer Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Atalay Barkana: Department of Electrical and Electronics Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Cuneyt Calisir: Department of Radiology, Manisa Celal Bayar University, Manisa, Turkey
17
Gunawan R, Tran Y, Zheng J, Nguyen H, Carrigan A, Mills MK, Chai R. Combining Multistaged Filters and Modified Segmentation Network for Improving Lung Nodules Classification. IEEE J Biomed Health Inform 2024; 28:5519-5527. [PMID: 38805332] [DOI: 10.1109/jbhi.2024.3405907]
Abstract
Advancements in computational technology have led to a shift towards automated detection processes in lung cancer screening, particularly through nodule segmentation techniques. These techniques employ thresholding to distinguish between soft and firm tissues, including cancerous nodules. The challenge of accurately detecting nodules close to critical lung structures such as blood vessels, bronchi, and the pleura highlights the necessity for more sophisticated methods to enhance diagnostic accuracy. This paper proposes combined processing filters for data preparation before using one of the modified Convolutional Neural Networks (CNNs) as the classifier. With refined filters, the nodule targets are solid, semi-solid, and ground glass, ranging from low-stage cancer (cancer screening data) to high-stage cancer. Furthermore, two additional steps were added to address juxta-pleural nodules, while both pre-processing and classification are performed in the 3-dimensional domain, as opposed to the usual 2D image classification. The accuracy results indicate that even a simple segmentation network, if modified correctly, can improve the classification result compared to the other eight models. The proposed sequence reached a total accuracy of 99.7%, with 99.71% cancer-class accuracy and 99.82% non-cancer accuracy, much higher than in any previous research, which can support the detection efforts of radiologists.
18
Jin Y, Mu W, Shi Y, Qi Q, Wang W, He Y, Sun X, Yang B, Cui P, Li C, Liu F, Liu Y, Wang G, Zhao J, Zhang Y, Zhang S, Cao C, Sun C, Hong N, Cai S, Tian J, Yang F, Chen K. Development and validation of an integrated system for lung cancer screening and post-screening pulmonary nodules management: a proof-of-concept study (ASCEND-LUNG). EClinicalMedicine 2024; 75:102769. [PMID: 39165498] [PMCID: PMC11334824] [DOI: 10.1016/j.eclinm.2024.102769]
Abstract
Background In order to address the low compliance and unsatisfactory specificity of low-dose computed tomography (LDCT), efficient and non-invasive approaches are needed to complement its limitations for lung cancer screening and management. The ASCEND-LUNG study is a prospective two-stage case-control study designed to evaluate the performance of a liquid biopsy-based comprehensive lung cancer screening and post-screening pulmonary nodules management system. Methods We aimed to develop a comprehensive lung cancer system called the Peking University Lung Cancer Screening and Management System (PKU-LCSMS), which comprises a lung cancer screening model to identify specific populations requiring LDCT and an artificial intelligence-aided (AI-aided) pulmonary nodules diagnostic model to classify pulmonary nodules following LDCT. A dataset of 465 participants (216 cancer, 47 benign, 202 non-cancer control) was used for the two models' development phase. For the lung cancer screening model development, cancer participants were randomly split at a ratio of 1:1 into the train and validation cohorts, and non-cancer controls were age-matched to the cancer cases in a 1:1 ratio. Similarly, for the AI-aided pulmonary nodules model, cancer and benign participants were randomly divided at a ratio of 2:1 into the train and validation cohorts. Subsequently, during the model validation phase, sensitivity and specificity were validated using an independent validation cohort consisting of 291 participants (140 cancer, 25 benign, 126 non-cancer control). Prospectively collected blood samples were analyzed for multi-omics including cell-free DNA (cfDNA) methylation, mutation, and serum protein. Computed tomography (CT) image data were also obtained. Paired tissue samples were additionally analyzed for DNA methylation, DNA mutation, and messenger RNA (mRNA) expression to further explore the potential biological mechanisms.
This study is registered with ClinicalTrials.gov, NCT04817046. Findings Baseline blood samples were evaluated across the whole screening and diagnostic process. The cfDNA methylation-based lung cancer screening model exhibited the highest area under the curve (AUC) of 0.910 (95% CI, 0.869-0.950), followed by the protein model (0.891 [95% CI, 0.845-0.938]) and lastly the mutation model (0.577 [95% CI, 0.482-0.672]). Further, the final screening model, which incorporated cfDNA methylation and protein features, achieved an AUC of 0.963 (95% CI, 0.942-0.984). In the independent validation cohort, the multi-omics screening model showed a sensitivity of 99.2% (95% CI, 0.957-1.000) at a specificity of 56.3% (95% CI, 0.472-0.652). The AI-aided pulmonary nodules diagnostic model, which incorporated cfDNA methylation and CT image features, yielded a sensitivity of 81.1% (95% CI, 0.732-0.875) and a specificity of 76.0% (95% CI, 0.549-0.906) in the independent validation cohort. Furthermore, four differentially methylated regions (DMRs) were shared between the lung cancer screening model and the AI-aided pulmonary nodules diagnostic model. Interpretation We developed and validated a liquid biopsy-based comprehensive lung cancer screening and management system, PKU-LCSMS, which combines a blood multi-omics lung cancer screening model incorporating cfDNA methylation and protein features with an AI-aided pulmonary nodules diagnostic model integrating CT image and cfDNA methylation features in sequence, streamlining the entire process of lung cancer screening and post-screening pulmonary nodules management. It might provide a promising, applicable solution for lung cancer screening and management.
Funding This work was supported by the Science, Technology & Innovation Project of Xiongan New Area; Beijing Natural Science Foundation; CAMS Innovation Fund for Medical Sciences (CIFMS); Clinical Medicine Plus X-Young Scholars Project of Peking University; the Fundamental Research Funds for the Central Universities; the Research Unit of Intelligence Diagnosis and Treatment in Early Non-small Cell Lung Cancer, Chinese Academy of Medical Sciences; the National Natural Science Foundation of China; Peking University People's Hospital Research and Development Funds; and the National Key Research and Development Program of China.
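The sequential design described in this abstract, a high-sensitivity screening model whose positives are passed to a more specific nodule-diagnosis model, can be sketched as a simple decision chain. The function, thresholds, and output labels below are hypothetical illustrations, not the study's actual models or cut-offs:

```python
def two_stage_triage(screen_score, nodule_score,
                     screen_threshold=0.2, nodule_threshold=0.5):
    """Sequential triage: only screen-positive cases reach the second model.

    Stage 1 uses a low threshold (high sensitivity) so few cancers are missed;
    stage 2 raises specificity among the screen-positive cases.
    All thresholds and labels are illustrative, not from the cited study.
    """
    if screen_score < screen_threshold:
        return "no further workup"      # screen-negative
    if nodule_score < nodule_threshold:
        return "surveillance"           # screen-positive, likely benign
    return "refer for biopsy"           # both stages positive

assert two_stage_triage(0.1, 0.9) == "no further workup"
assert two_stage_triage(0.8, 0.3) == "surveillance"
assert two_stage_triage(0.8, 0.9) == "refer for biopsy"
```

The design choice mirrors the reported operating points: the screening stage trades specificity (56.3%) for near-perfect sensitivity (99.2%), and the second stage restores specificity (76.0%) only for cases that actually need it.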
Affiliation(s)
- Yichen Jin: Department of Thoracic Oncology Institute & Research Unit of Intelligence Diagnosis and Treatment in Early Non-small Cell Lung Cancer, Peking University People's Hospital, Beijing, 100044, China; Department of Thoracic Surgery, Peking University People's Hospital, Beijing, 100044, China
- Wei Mu: School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
- Yezhen Shi: Burning Rock Biotech, Guangzhou, 510300, China
- Qingyi Qi: Department of Radiology, Peking University People's Hospital, Beijing, 100044, China
- Wenxiang Wang: Department of Thoracic Oncology Institute & Research Unit of Intelligence Diagnosis and Treatment in Early Non-small Cell Lung Cancer, Peking University People's Hospital, Beijing, 100044, China; Department of Thoracic Surgery, Peking University People's Hospital, Beijing, 100044, China
- Yue He: Department of Thoracic Oncology Institute & Research Unit of Intelligence Diagnosis and Treatment in Early Non-small Cell Lung Cancer, Peking University People's Hospital, Beijing, 100044, China; Department of Thoracic Surgery, Peking University People's Hospital, Beijing, 100044, China
- Xiaoran Sun: Burning Rock Biotech, Guangzhou, 510300, China
- Bo Yang: Burning Rock Biotech, Guangzhou, 510300, China
- Peng Cui: Burning Rock Biotech, Guangzhou, 510300, China
- Fang Liu: Burning Rock Biotech, Guangzhou, 510300, China
- Yuxia Liu: Burning Rock Biotech, Guangzhou, 510300, China
- Jing Zhao: Burning Rock Biotech, Guangzhou, 510300, China
- Yuzi Zhang: Burning Rock Biotech, Guangzhou, 510300, China
- Shuaitong Zhang: School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, China
- Caifang Cao: School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
- Chao Sun: Department of Radiology, Peking University People's Hospital, Beijing, 100044, China
- Nan Hong: Department of Radiology, Peking University People's Hospital, Beijing, 100044, China
- Shangli Cai: Burning Rock Biotech, Guangzhou, 510300, China
- Jie Tian: School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100191, China
- Fan Yang: Department of Thoracic Oncology Institute & Research Unit of Intelligence Diagnosis and Treatment in Early Non-small Cell Lung Cancer, Peking University People's Hospital, Beijing, 100044, China; Department of Thoracic Surgery, Peking University People's Hospital, Beijing, 100044, China
- Kezhong Chen: Department of Thoracic Oncology Institute & Research Unit of Intelligence Diagnosis and Treatment in Early Non-small Cell Lung Cancer, Peking University People's Hospital, Beijing, 100044, China; Department of Thoracic Surgery, Peking University People's Hospital, Beijing, 100044, China; Institute of Advanced Clinical Medicine, Peking University, Beijing, 100191, China
19
Gao Y, Yang X, Li H, Ding DW. A knowledge-enhanced interpretable network for early recurrence prediction of hepatocellular carcinoma via multi-phase CT imaging. Int J Med Inform 2024; 189:105509. [PMID: 38851131] [DOI: 10.1016/j.ijmedinf.2024.105509]
Abstract
BACKGROUND Accurately predicting early recurrence (ER) of hepatocellular carcinoma (HCC) can guide treatment decisions and further enhance survival. Computed tomography (CT) imaging, analyzed by deep learning (DL) models that incorporate domain knowledge, has been employed for this prediction. However, these DL models used late fusion, restricting the interaction between domain knowledge and images during feature extraction, which limited prediction performance and compromised the interpretability of decision-making. METHODS We propose a novel Vision Transformer (ViT)-based DL network, referred to as Dual-Style ViT (DSViT), to augment both the interaction between domain knowledge and images and the fusion among multi-phase CT images, improving predictive performance and interpretability alike. We apply DSViT to develop pre-/post-operative models for predicting ER. Within DSViT, we propose an adaptive self-attention mechanism to balance the utilization of domain knowledge and images. Moreover, we present an attention-guided supervised learning module that balances the contributions of multi-phase CT images to the prediction, and a domain-knowledge self-supervision module that enhances the fusion between domain knowledge and images, further improving predictive performance. Finally, we provide interpretability for DSViT's decision-making. RESULTS Experiments on our multi-phase data demonstrate that DSViT surpasses existing models across multiple performance metrics while providing decision-making interpretability. Additional validation on a publicly available dataset underscores the generalizability of DSViT. CONCLUSIONS The proposed DSViT can significantly improve the performance and interpretability of ER prediction, thereby strengthening the trustworthiness of artificial intelligence tools for HCC ER prediction in clinical settings.
Affiliation(s)
- Yu Gao
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China; Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education, Beijing 100083, China
- Xue Yang
- First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450052, China; Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Hongjun Li
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Da-Wei Ding
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China; Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education, Beijing 100083, China
20
Jian M, Chen H, Zhang Z, Yang N, Zhang H, Ma L, Xu W, Zhi H. A Lung Nodule Dataset with Histopathology-based Cancer Type Annotation. Sci Data 2024; 11:824. [PMID: 39068171 PMCID: PMC11283520 DOI: 10.1038/s41597-024-03658-6]
Abstract
Recently, Computer-Aided Diagnosis (CAD) systems have emerged as indispensable tools in clinical diagnostic workflows, significantly alleviating the burden on radiologists. Nevertheless, despite their integration into clinical settings, CAD systems have limitations: while they can achieve high performance in detecting lung nodules, they struggle to accurately predict multiple cancer types. This limitation can be attributed to the scarcity of publicly available datasets annotated with expert-level cancer-type information. This research aims to bridge the gap by providing publicly accessible datasets and reliable tools for medical diagnosis, facilitating a finer categorization of lung diseases so as to offer precise treatment recommendations. To this end, we curated a diverse dataset of lung Computed Tomography (CT) images comprising 330 annotated nodules (labeled as bounding boxes) from 95 distinct patients. The quality of the dataset was evaluated using a variety of classical classification and detection models, and the promising results demonstrate that the dataset is practically applicable and can further facilitate intelligent auxiliary diagnosis.
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China.
- School of Information Science and Technology, Linyi University, Linyi, China.
- Hongyu Chen
- School of Information Science and Technology, Linyi University, Linyi, China
- Zaiyong Zhang
- Thoracic Surgery Department of Linyi Central Hospital, Linyi, China
- Nan Yang
- School of Information Science and Technology, Linyi University, Linyi, China
- Haorang Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- Lifu Ma
- Personnel Department of Linyi Central Hospital, Linyi, China
- Wenjing Xu
- School of Information Science and Technology, Linyi University, Linyi, China
- Huixiang Zhi
- School of Information Science and Technology, Linyi University, Linyi, China
21
Chowdary S, Purushotaman SB. An Improved Archimedes Optimization-aided Multi-scale Deep Learning Segmentation with dilated ensemble CNN classification for detecting lung cancer using CT images. Network (Bristol, England) 2024:1-39. [PMID: 38975771 DOI: 10.1080/0954898x.2024.2373127]
Abstract
Early detection of lung cancer is necessary to prevent deaths caused by the disease, but identifying cancer in the lungs from Computed Tomography (CT) scans with existing deep learning algorithms does not yield accurate results. We therefore develop a novel adaptive deep learning framework with heuristic improvement. The proposed framework comprises three stages: (a) image acquisition, (b) lung nodule segmentation, and (c) lung cancer classification. Raw CT images are collected from standard data sources, followed by nodule segmentation with an Adaptive Multi-Scale Dilated Trans-Unet3+. To increase segmentation accuracy, the parameters of this model are optimized by the proposed Modified Transfer Operator-based Archimedes Optimization (MTO-AO). Finally, the segmented images are classified by Advanced Dilated Ensemble Convolutional Neural Networks (ADECNN), constructed from Inception, ResNet, and MobileNet, whose hyperparameters are also tuned by MTO-AO. The final result is estimated from the three networks by ranking-based classification. Performance is investigated using multiple measures and compared among different approaches; the findings demonstrate the system's efficiency in detecting cancer, helping patients receive appropriate treatment.
Affiliation(s)
- Shalini Chowdary
- ECE, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India
22
Zhu W, Jin Y, Ma G, Chen G, Egger J, Zhang S, Metaxas DN. Classification of lung cancer subtypes on CT images with synthetic pathological priors. Med Image Anal 2024; 95:103199. [PMID: 38759258 DOI: 10.1016/j.media.2024.103199]
Abstract
Accurate diagnosis of pathological subtypes of lung cancer is of significant importance for follow-up treatment and prognosis management. In this paper, we propose the self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies showing that cross-scale associations exist in the image patterns between a case's CT images and its pathological images, we developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks to derive, from CT images, the "gold standard" information contained in the corresponding pathological images. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathologically related features and eventually output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information from a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (829 cases from three hospitals), comparing our model with a series of state-of-the-art (SOTA) classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV), and F1-score.
Affiliation(s)
- Wentao Zhu
- College of Information Engineering, Zhejiang University of Technology, Hangzhou 310014, China; Zhejiang Lab, Hangzhou 311121, China
- Yuan Jin
- Zhejiang Lab, Hangzhou 311121, China; Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria
- Gege Ma
- Zhejiang Lab, Hangzhou 311121, China
- Geng Chen
- School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria
- Shaoting Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai 200120, China
- Dimitris N Metaxas
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
23
Zhai P, Cong H, Zhu E, Zhao G, Yu Y, Li J. MVCNet: Multiview Contrastive Network for Unsupervised Representation Learning for 3-D CT Lesions. IEEE Trans Neural Netw Learn Syst 2024; 35:7376-7390. [PMID: 36150004 DOI: 10.1109/tnnls.2022.3203412]
Abstract
With the renaissance of deep learning, automatic diagnostic algorithms for computed tomography (CT) have achieved many successful applications. However, they heavily rely on lesion-level annotations, which are often scarce due to the high cost of collecting pathological labels. On the other hand, the annotated CT data, especially the 3-D spatial information, may be underutilized by approaches that model a 3-D lesion with its 2-D slices, although such approaches have been proven effective and computationally efficient. This study presents a multiview contrastive network (MVCNet), which enhances the representations of 2-D views contrastively against other views of different spatial orientations. Specifically, MVCNet views each 3-D lesion from different orientations to collect multiple 2-D views; it learns to minimize a contrastive loss so that the 2-D views of the same 3-D lesion are aggregated, whereas those of different lesions are separated. To alleviate the issue of false negative examples, the uninformative negative samples are filtered out, which results in more discriminative features for downstream tasks. By linear evaluation, MVCNet achieves state-of-the-art accuracies on the lung image database consortium and image database resource initiative (LIDC-IDRI) (88.62%), lung nodule database (LNDb) (76.69%), and TianChi (84.33%) datasets for unsupervised representation learning. When fine-tuned on 10% of the labeled data, the accuracies are comparable to the supervised learning models (89.46% versus 85.03%, 73.85% versus 73.44%, 83.56% versus 83.34% on the three datasets, respectively), indicating the superiority of MVCNet in learning representations with limited annotations. Our findings suggest that contrasting multiple 2-D views is an effective approach to capturing the original 3-D information, which notably improves the utilization of the scarce and valuable annotated CT data.
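The contrastive objective summarized in the abstract above can be sketched generically: embeddings of two 2-D views of the same 3-D lesion are pulled together while views of other lesions in the batch act as negatives. This is an illustrative NT-Xent-style sketch in NumPy, not the MVCNet release code, and it omits the paper's false-negative filtering step.

```python
import numpy as np

def multiview_contrastive_loss(views_a, views_b, temperature=0.1):
    """NT-Xent-style contrastive loss over paired 2-D views.
    views_a[i] and views_b[i] are embeddings of two orientations of the
    SAME 3-D lesion (positives); every other row in the batch is a negative.
    Minimizing the loss aggregates views of one lesion and separates lesions."""
    a = views_a / np.linalg.norm(views_a, axis=1, keepdims=True)
    b = views_b / np.linalg.norm(views_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                 # cosine similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # matching pairs sit on the diagonal

# two toy lesions, two views each: matched views give a low loss
views = np.array([[1.0, 0.0], [0.0, 1.0]])
print(multiview_contrastive_loss(views, views))
```

In practice the embeddings would come from a shared 2-D encoder applied to slices of different spatial orientations; the loss shown here only illustrates the aggregation/separation mechanism.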
24
Zheng X, Liu K, Shen N, Gao Y, Zhu C, Li C, Rong C, Li S, Qian B, Li J, Wu X. Predicting overall survival and prophylactic cranial irradiation benefit in small-cell lung cancer with CT-based deep learning: A retrospective multicenter study. Radiother Oncol 2024; 195:110221. [PMID: 38479441 DOI: 10.1016/j.radonc.2024.110221]
Abstract
BACKGROUND AND PURPOSE To develop a computed tomography (CT)-based deep learning model to predict overall survival (OS) among small-cell lung cancer (SCLC) patients and identify patients who could benefit from prophylactic cranial irradiation (PCI) based on OS signature risk stratification. MATERIALS AND METHODS This study retrospectively included 556 SCLC patients from three medical centers. The training, internal validation, and external validation cohorts comprised 309, 133, and 114 patients, respectively. The OS signature was built using a unified fully connected neural network. A deep learning model was developed based on the OS signature. Clinical and combined models were developed and compared with a deep learning model. Additionally, the benefits of PCI were evaluated after stratification using an OS signature. RESULTS Within the internal and external validation cohorts, the deep learning model (concordance index [C-index] 0.745, 0.733) was far superior to the clinical model (C-index: 0.635, 0.630) in predicting OS, but slightly worse than the combined model (C-index: 0.771, 0.770). Additionally, the deep learning model had excellent calibration, clinical usefulness, and improved accuracy in classifying survival outcomes. Remarkably, patients at high risk had a survival benefit from PCI in both the limited and extensive stages (all P < 0.05), whereas no significant association was observed in patients at low risk. CONCLUSIONS The CT-based deep learning model exhibited promising performance in predicting the OS of SCLC patients. The OS signature may aid in individualized treatment planning to select patients who may benefit from PCI.
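The concordance index (C-index) reported above measures how well predicted risk scores order patients by observed survival time. A minimal Harrell's C-index implementation, shown as a generic sketch (not the authors' code), with toy data for illustration:

```python
import itertools

def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risk ordering agrees with their observed survival ordering.
    times:  observed survival/censoring times
    events: 1 if the event (death) was observed, 0 if censored
    risks:  model-predicted risk scores (higher = worse prognosis)"""
    concordant, comparable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue
        first = i if times[i] < times[j] else j   # patient with the earlier time
        if not events[first]:
            continue  # pair is comparable only if the earlier time is an observed event
        comparable += 1
        other = j if first == i else i
        if risks[first] > risks[other]:
            concordant += 1       # higher risk died earlier: concordant
        elif risks[first] == risks[other]:
            concordant += 0.5     # ties in predicted risk count half
    return concordant / comparable

# toy example: risk scores perfectly ordered with survival times
print(concordance_index([2, 5, 9], [1, 1, 1], [0.9, 0.5, 0.1]))  # → 1.0
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering, which is why the deep learning model's 0.745/0.733 is a substantial gain over the clinical model's 0.635/0.630.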
Affiliation(s)
- Xiaomin Zheng
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China; Department of Radiation Oncology, Anhui Provincial Cancer Hospital, Hefei 230031, China
- Kaicai Liu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China; Department of Radiology, The First Affiliated Hospital of University of Science and Technology of China, Hefei 230001, China
- Na Shen
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China
- Yankun Gao
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China
- Chao Zhu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China
- Cuiping Li
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China
- Chang Rong
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China
- Shuai Li
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China
- Baoxin Qian
- Huiying Medical Technology, Beijing 100192, China
- Jianying Li
- CT Advanced Application, GE HealthCare China, Beijing 100186, China
- Xingwang Wu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230031, China
25
Sun L, Zhang M, Lu Y, Zhu W, Yi Y, Yan F. Nodule-CLIP: Lung nodule classification based on multi-modal contrastive learning. Comput Biol Med 2024; 175:108505. [PMID: 38688129 DOI: 10.1016/j.compbiomed.2024.108505]
Abstract
The latest developments in deep learning have demonstrated the importance of CT medical imaging for classifying pulmonary nodules. However, challenges remain in fully leveraging the relevant medical annotations of pulmonary nodules and in distinguishing the benign and malignant labels of adjacent nodules. This paper therefore proposes the Nodule-CLIP model, which uses contrastive learning to mine the potential relationships among CT images, the complex attributes of lung nodules, and their benign or malignant status, and exploits these similarities and differences to optimize the image feature extraction network and improve its ability to distinguish similar lung nodules. First, we segment the 3D lung nodule information with U-Net to reduce interference from the nodule background and focus on the nodule images. Second, image features, class features, and complex attribute features are aligned by contrastive learning and a loss function in Nodule-CLIP to optimize the lung nodule image representation and improve classification ability. A series of testing and ablation experiments was conducted on the public LIDC-IDRI dataset, yielding a final benign/malignant classification rate of 90.6% and a recall of 92.81%. The experimental results show the advantages of this method in terms of both lung nodule classification and interpretability.
Affiliation(s)
- Lijing Sun
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Mengyi Zhang
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Yu Lu
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Wenjun Zhu
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Yang Yi
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, 211800, Jiangsu, China
- Fei Yan
- Jiangsu Institute of Cancer Research & The Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Nanjing, 210009, Jiangsu, China
26
Zeng M, Wang X, Chen W. Worldwide research landscape of artificial intelligence in lung disease: A scientometric study. Heliyon 2024; 10:e31129. [PMID: 38826704 PMCID: PMC11141367 DOI: 10.1016/j.heliyon.2024.e31129]
Abstract
Purpose To perform a comprehensive bibliometric analysis of the application of artificial intelligence (AI) to lung disease and to understand the current status and emerging trends of the field. Materials and methods AI-based lung disease research publications were selected from the Web of Science Core Collection. CiteSpace, VOSviewer, and Excel were used to analyze and visualize co-authorship, co-citation, and co-occurrence of authors, keywords, countries/regions, references, and institutions in this field. Results Our study included a total of 5210 papers. The number of publications on AI in lung disease has shown explosive growth since 2017. China and the United States lead in publication numbers. The most productive authors were Li Weimin and Qian Wei, and the most productive institution was Shanghai Jiaotong University. Radiology was the most co-cited journal. Lung cancer and COVID-19 emerged as the most studied diseases. Deep learning, convolutional neural networks, lung cancer, and radiomics will be the focus of future research. Conclusions AI-based diagnosis and treatment of lung disease has become a research hotspot in recent years, yielding significant results. Future work should focus on establishing multimodal AI models that incorporate clinical, imaging, and laboratory information. Enhanced visualization of deep learning, AI-driven differential diagnosis models for lung disease, and the creation of international large-scale lung disease databases should also be considered.
Affiliation(s)
- Wei Chen
- Department of Radiology, Southwest Hospital, Third Military Medical University, Chongqing, China
27
Zou J, Lyu Y, Lin Y, Chen Y, Lai S, Wang S, Zhang X, Zhang X, Wu R, Kang W. A multi-view fusion lightweight network for CRSwNPs prediction on CT images. BMC Med Imaging 2024; 24:112. [PMID: 38755567 PMCID: PMC11100041 DOI: 10.1186/s12880-024-01296-3]
Abstract
Accurate preoperative differentiation of the chronic rhinosinusitis (CRS) endotype, eosinophilic CRS (eCRS) versus non-eosinophilic CRS (non-eCRS), is important for predicting postoperative outcomes and administering personalized treatment. To this end, we constructed a sinus CT dataset comprising CT scans and pathological biopsy results from 192 patients with chronic rhinosinusitis with nasal polyps (CRSwNP) treated at the Second Affiliated Hospital of Shantou University Medical College between 2020 and 2022. To differentiate the CRSwNP endotype on preoperative CT while maintaining efficiency, we developed a multi-view fusion model built on a mini-architecture, in which each network has 10 layers obtained by modifying the deep residual neural network. The proposed model was trained on a training set and evaluated on a test set, achieving an area under the receiver-operating characteristic curve (AUC) of 0.991, an accuracy of 0.965, and an F1-score of 0.970. We also compared the mini-architecture with other lightweight networks on the same sinus CT dataset. The experimental results demonstrate that the developed ResMini architecture delivers competitive CRSwNP endotype identification in terms of both accuracy and parameter count.
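The AUC reported in the abstract above has a direct probabilistic reading: the chance that a randomly chosen positive case (eCRS) receives a higher model score than a randomly chosen negative case (non-eCRS). A minimal Mann-Whitney-style sketch, with illustrative labels and scores (not the study's data):

```python
import itertools

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative,
    with ties in score counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p, n in itertools.product(pos, neg))
    return wins / (len(pos) * len(neg))

# toy example: every positive outranks every negative
print(auc_score([1, 1, 0, 0], [0.9, 0.6, 0.4, 0.1]))  # → 1.0
```

An AUC near 0.99, as reported, means the model's scores almost perfectly separate the two endotypes on the test set.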
Affiliation(s)
- Jisheng Zou
- College of Engineering, Shantou University, Shantou, 515063, China
- Yi Lyu
- Department of Otolaryngology, the Second Affiliated Hospital of Shantou University Medical College, Shantou, 515041, China
- Yu Lin
- Department of Otolaryngology, the Second Affiliated Hospital of Shantou University Medical College, Shantou, 515041, China
- Yaowen Chen
- College of Engineering, Shantou University, Shantou, 515063, China
- Shixin Lai
- College of Engineering, Shantou University, Shantou, 515063, China
- Siqi Wang
- College of Engineering, Shantou University, Shantou, 515063, China
- Xuan Zhang
- College of Engineering, Shantou University, Shantou, 515063, China
- Xiaolei Zhang
- Department of Radiology, the Second Affiliated Hospital of Shantou University Medical College, Shantou, 515041, China
- Renhua Wu
- Department of Radiology, the Second Affiliated Hospital of Shantou University Medical College, Shantou, 515041, China
- Weipiao Kang
- Department of Otolaryngology, the Second Affiliated Hospital of Shantou University Medical College, Shantou, 515041, China
28
Shyamala Bharathi P, Shalini C. Advanced hybrid attention-based deep learning network with heuristic algorithm for adaptive CT and PET image fusion in lung cancer detection. Med Eng Phys 2024; 126:104138. [PMID: 38621836 DOI: 10.1016/j.medengphy.2024.104138]
Abstract
Lung cancer is one of the deadliest diseases in the world, and its early detection can save patients' lives. Although Computed Tomography (CT) is the leading imaging tool in the medical sector, clinicians find it challenging to interpret CT scan data and detect cancer from it. Positron Emission Tomography (PET) imaging is one of the most effective modalities for diagnosing malignancies such as lung tumours. Because early identification is critical for predicting disease severity in cancer patients, we propose an image fusion-based detection model for lung cancer that combines a deep learning model with an improved heuristic algorithm. First, PET and CT images are gathered from the internet. The two collected images are then fused by an Adaptive Dilated Convolutional Neural Network (AD-CNN), whose hyperparameters are tuned by the Modified Initial Velocity-based Capuchin Search Algorithm (MIV-CapSA). Subsequently, abnormal regions are segmented using TransUnet3+. Finally, the segmented images are fed into a Hybrid Attention-based Deep Network (HADN) built on MobileNet and ShuffleNet. The effectiveness of the novel detection model is analyzed using various metrics and compared with traditional approaches; the outcomes indicate that it supports early detection and helps patients receive appropriate treatment.
Affiliation(s)
- P Shyamala Bharathi
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India.
- C Shalini
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
29
Zahari R, Cox J, Obara B. Uncertainty-aware image classification on 3D CT lung. Comput Biol Med 2024; 172:108324. [PMID: 38508053 DOI: 10.1016/j.compbiomed.2024.108324]
Abstract
Early detection is crucial in lung cancer to prolong patient survival. Existing model architectures used in such systems have shown promising results, but they lack reliability and robustness in their predictions, and the models are typically evaluated on a single dataset, making them overconfident when a new class is present. When uncertainty is quantified, uncertain images can be referred to medical experts for a second opinion. We therefore propose an uncertainty-aware framework with three phases: data preprocessing with model selection and evaluation, uncertainty quantification (UQ), and uncertainty measurement with data referral, for classifying benign and malignant nodules in 3D CT images. To quantify uncertainty, we employed three approaches: Monte Carlo Dropout (MCD), Deep Ensemble (DE), and Ensemble Monte Carlo Dropout (EMCD). We evaluated eight deep learning models from the ResNet, DenseNet, and Inception families, all of which achieved average F1 scores above 0.832; the highest average value of 0.845 was obtained by InceptionResNetV2. Furthermore, incorporating UQ demonstrated significant improvement in overall model performance. Upon evaluation of the uncertainty estimates, MCD outperforms the other UQ models except on the URecall metric, where DE and EMCD excel, implying that they are better at flagging incorrect predictions with higher uncertainty levels, which is vital in the medical field. Finally, we show that applying a threshold for data referral can improve performance further, increasing accuracy up to 0.959.
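Of the three UQ approaches named in the abstract above, Monte Carlo Dropout is the simplest to sketch: dropout stays active at inference, several stochastic forward passes are averaged, and the spread of the averaged prediction (here, predictive entropy) drives the referral decision. Everything below is a generic illustration, not the paper's code; `noisy_model` is a toy stand-in for a real network and the referral threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(forward, x, passes=20):
    """Monte Carlo Dropout: average the class probabilities from several
    stochastic forward passes of a dropout-enabled model, and report the
    predictive entropy of the mean as an uncertainty estimate."""
    probs = np.stack([forward(x) for _ in range(passes)])   # (passes, classes)
    mean = probs.mean(axis=0)
    entropy = -np.sum(mean * np.log(mean + 1e-12))          # higher = less certain
    return mean, entropy

def refer_or_predict(mean, entropy, threshold=0.5):
    """Data referral: defer uncertain cases to a radiologist for a second opinion."""
    return "refer to expert" if entropy > threshold else int(mean.argmax())

# toy stochastic "model": dropout-style masking of fixed logits for one nodule
def noisy_model(x):
    logits = np.array([2.0, -1.0]) * rng.binomial(1, 0.9, size=2) / 0.9
    e = np.exp(logits - logits.max())
    return e / e.sum()                              # softmax over benign/malignant

mean, ent = mc_dropout_predict(noisy_model, None)
print(refer_or_predict(mean, ent))
```

Raising the referral threshold keeps more automatic predictions; lowering it sends more borderline nodules to the expert, which is how the abstract's accuracy gain from referral arises.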
Affiliation(s)
- Rahimi Zahari
- School of Computing, Newcastle University, Newcastle upon Tyne, UK
- Julie Cox
- County Durham and Darlington NHS Foundation Trust, County Durham, UK
- Boguslaw Obara
- School of Computing, Newcastle University, Newcastle upon Tyne, UK; Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
30
Quanyang W, Yao H, Sicong W, Linlin Q, Zewei Z, Donghui H, Hongjia L, Shijun Z. Artificial intelligence in lung cancer screening: Detection, classification, prediction, and prognosis. Cancer Med 2024; 13:e7140. [PMID: 38581113 PMCID: PMC10997848 DOI: 10.1002/cam4.7140]
Abstract
BACKGROUND The exceptional capabilities of artificial intelligence (AI) in extracting image information and processing complex models have led to its recognition across various medical fields. With the continuous evolution of AI technologies based on deep learning, particularly the advent of convolutional neural networks (CNNs), AI presents an expanded horizon of applications in lung cancer screening, including lung segmentation, nodule detection, false-positive reduction, nodule classification, and prognosis. METHODOLOGY This review initially analyzes the current status of AI technologies. It then explores the applications of AI in lung cancer screening, including lung segmentation, nodule detection, and classification, and assesses the potential of AI in enhancing the sensitivity of nodule detection and reducing false-positive rates. Finally, it addresses the challenges and future directions of AI in lung cancer screening. RESULTS AI holds substantial prospects in lung cancer screening. It demonstrates significant potential in improving nodule detection sensitivity, reducing false-positive rates, and classifying nodules, while also showing value in predicting nodule growth and pathological/genetic typing. CONCLUSIONS AI offers a promising supportive approach to lung cancer screening, presenting considerable potential in enhancing nodule detection sensitivity, reducing false-positive rates, and classifying nodules. However, the universality and interpretability of AI results need further enhancement. Future research should focus on the large-scale validation of new deep learning-based algorithms and multi-center studies to improve the efficacy of AI in lung cancer screening.
Affiliation(s)
- Wu Quanyang, Huang Yao, Qi Linlin, Hou Donghui, Zhao Shijun: Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wang Sicong: Magnetic Resonance Imaging Research, General Electric Healthcare (China), Beijing, China
- Zhang Zewei, Li Hongjia: PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
31.
He B, Sun C, Li H, Wang Y, She Y, Zhao M, Fang M, Zhu Y, Wang K, Liu Z, Wei Z, Mu W, Wang S, Tang Z, Wei J, Shao L, Tong L, Huang F, Tang M, Guo Y, Zhang H, Dong D, Chen C, Ma J, Tian J. Breaking boundaries in radiology: redefining AI diagnostics via raw data ahead of reconstruction. Phys Med Biol 2024; 69:075015. PMID: 38224617. DOI: 10.1088/1361-6560/ad1e7c.
Abstract
Objective. In the realm of utilizing artificial intelligence (AI) for medical image analysis, the paradigm of 'signal-image-knowledge' has remained unchanged. However, the 'signal to image' step inevitably introduces information distortion, ultimately leading to irrecoverable biases in the 'image to knowledge' process. Our goal is to skip reconstruction and build a diagnostic model directly from the raw data (signal). Approach. This study focuses on computed tomography (CT) and its raw data (sinogram). We simulate the real-world 'human-signal-image' process using the workflow 'CT - simulated data - reconstructed CT' and develop a novel AI predictive model that operates directly on raw data (RCTM). This model comprises orientation, spatial, and global analysis modules, fusing local-to-global information extraction from the raw data. We retrospectively selected 1994 patients with solid lung nodules and built models on the different data types. Main results. We used predefined radiomic features to assess the diagnostic-feature differences introduced by reconstruction. Approximately 14% of the features had Spearman correlation coefficients below 0.8, suggesting that despite the increasing maturity of CT reconstruction algorithms, they still perturb diagnostic features. Moreover, the proposed RCTM achieved an area under the curve (AUC) of 0.863 in the diagnosis task, comprehensively surpassing models constructed from secondary reconstructed CTs (0.840, 0.822, and 0.825) and closely matching models constructed from the original CT scans (0.868, 0.878, and 0.866). Significance. A diagnostic approach based directly on CT raw data can enhance the precision of AI models, and the concept can be extended to other imaging modalities. AI diagnostic models tailored to raw data offer the potential to disrupt the traditional 'signal-image-knowledge' paradigm, opening up new avenues for more accurate medical diagnostics.
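The ~14% finding above rests on comparing each radiomic feature across patients before and after reconstruction with a Spearman rank correlation. A minimal pure-Python sketch of that stability check (the 0.8 cutoff is from the abstract; the function names and data are illustrative, and non-constant feature vectors are assumed):

```python
import math

def rankdata(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

def unstable_features(feats_orig, feats_recon, threshold=0.8):
    """Features whose rank correlation across patients falls below threshold."""
    return [name for name in feats_orig
            if spearman(feats_orig[name], feats_recon[name]) < threshold]
```

Features whose values reorder across patients after reconstruction drop below the cutoff and would be flagged as perturbed.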
Affiliation(s)
- Bingxi He, Caixia Sun, Hailin Li, Mengjie Fang, Yongbei Zhu, Wei Mu, Shuo Wang, Zhenchao Tang, Jie Tian: Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, People's Republic of China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, People's Republic of China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China
- Kun Wang, Zhenyu Liu, Ziqi Wei, Jingwei Wei, Lizhi Shao, Di Dong: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China
- Yongbo Wang, Jianhua Ma: School of Biomedical Engineering, Southern Medical University, Guangzhou, People's Republic of China
- Yunlang She, Mengmeng Zhao, Chang Chen: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, People's Republic of China
- Lixia Tong, Feng Huang: Neusoft Medical Systems Co. Ltd, Shenyang, People's Republic of China
- Mingze Tang: School of Mechanical and Materials Engineering, North China University of Technology, Beijing, People's Republic of China
- Yu Guo, Huimao Zhang: Department of Radiology, The First Hospital of Jilin University, Changchun, Jilin, People's Republic of China
32.
Jiang W, Zhi L, Zhang S, Zhou T. A Dual-Branch Framework With Prior Knowledge for Precise Segmentation of Lung Nodules in Challenging CT Scans. IEEE J Biomed Health Inform 2024; 28:1540-1551. PMID: 38227405. DOI: 10.1109/jbhi.2024.3355008.
Abstract
Lung cancer is one of the deadliest cancers globally, and early diagnosis is crucial for patient survival. Pulmonary nodules are the main manifestation of early lung cancer and are usually assessed using CT scans. Computer-aided diagnostic systems are now widely used to assist physicians in disease diagnosis. Accurate segmentation of pulmonary nodules is affected by both internal heterogeneity and external data factors. To overcome the segmentation challenges posed by subtle, mixed, adhesion-type, benign, and uncertain categories of nodules, a new mixed manual-feature network that enhances sensitivity and accuracy is proposed. The method integrates feature information through a dual-branch network framework and a multi-dimensional fusion module. Trained and validated on multiple data sources of differing quality, it demonstrates leading performance on LUNA16, the Multi-thickness Slice Image dataset, LIDC, and UniToChest, with Dice similarity coefficients of 86.89%, 75.72%, 84.12%, and 80.74%, respectively, surpassing most current methods for pulmonary nodule segmentation and improving the accuracy, reliability, and stability of lung nodule segmentation even on challenging CT scans.
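The Dice similarity coefficients quoted above measure the overlap between a predicted nodule mask and the ground truth. As a reference, the metric for binary masks can be sketched as:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2 * |P intersect T| / (|P| + |T|)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total
```

A Dice score of 86.89% thus means the predicted and reference masks share almost 87% of their combined foreground area.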
33.
Yang K, Song J, Liu M, Xue L, Liu S, Yin X, Liu K. TBACkp: HER2 expression status classification network focusing on intrinsic subenvironmental characteristics of breast cancer liver metastases. Comput Biol Med 2024; 170:108002. PMID: 38277921. DOI: 10.1016/j.compbiomed.2024.108002.
Abstract
The HER2 expression status of breast cancer liver metastases is a crucial indicator for diagnosis, treatment, and prognosis assessment. Typical diagnosis assesses HER2 status through invasive procedures such as biopsy, which has drawbacks: tissue samples are difficult to obtain and examination periods are long. To address these limitations, we propose an AI-aided diagnostic model that enables rapid diagnosis of a patient's HER2 expression status from preprocessed images (the lesion region extracted from a CT image) rather than from an actual tissue sample. The model adopts a parallel structure comprising a Branch Block and a Trunk Block. The Branch Block extracts gradient characteristics between tumor sub-environments; the Trunk Block fuses the characteristics extracted by the Branch Block. The Branch Block contains a CNN with self-attention, combining the advantages of CNNs and self-attention to extract more meticulous and comprehensive image features. The Trunk Block is designed to fuse the extracted feature information without affecting the transmission of the original image features; its attention is computed by Conv-Attention, which uses a kernel dot product and supplies the weights for self-attention through a convolution-induced bias calculation. Combining the model structure and the methods used, we refer to this model as TBACkp. The dataset comprises enhanced abdominal CT images of 151 patients with liver metastases from breast cancer, together with each patient's HER2 expression level. The experimental results (AUC: 0.915, ACC: 0.854, specificity: 0.809, precision: 0.863, recall: 0.881, F1-score: 0.872) demonstrate that this method can accurately assess HER2 expression status compared with other advanced deep learning models.
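For reference, the threshold-based scores quoted above (accuracy, specificity, precision, recall, F1) all derive from one confusion matrix; a minimal sketch, assuming both classes occur in the labels (AUC needs the raw scores and is omitted):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, specificity, precision, recall and F1 from binary labels
    and binary predictions. Assumes both classes are present."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    return {
        "accuracy": (tp + tn) / len(y_true),
        "specificity": tn / (tn + fp),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }
```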
Affiliation(s)
- Kun Yang, Jie Song, Linyan Xue, Shuang Liu, Kun Liu: College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Meng Liu: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Xiaoping Yin: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China; Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Hebei University, Baoding, China; The Outstanding Young Scientific Research and Innovation Team of Hebei University, Baoding, China
34.
Liang S, Xu X, Yang Z, Du Q, Zhou L, Shao J, Guo J, Ying B, Li W, Wang C. Deep learning for precise diagnosis and subtype triage of drug-resistant tuberculosis on chest computed tomography. MedComm (Beijing) 2024; 5:e487. PMID: 38469547. PMCID: PMC10925488. DOI: 10.1002/mco2.487.
Abstract
Deep learning, transforming input data into target predictions through intricate network structures, has inspired novel exploration of automated diagnosis based on medical images. The distinct morphological characteristics of chest abnormalities between drug-resistant tuberculosis (DR-TB) and drug-sensitive tuberculosis (DS-TB) on chest computed tomography (CT) are of potential value in differential diagnosis, which is challenging in the clinic. Hence, based on 1176 chest CT volumes from an equal number of patients with tuberculosis (TB), we present DeepTB, a deep learning-based system for TB drug-resistance identification and subtype classification, which automatically diagnoses DR-TB and classifies crucial subtypes, including rifampicin-resistant, multidrug-resistant, and extensively drug-resistant tuberculosis. Chest lesions were manually annotated to give the model robust power to assist radiologists in image interpretation, and a Circos plot revealed the relationship between chest abnormalities and specific types of DR-TB. DeepTB achieved an area under the curve (AUC) of up to 0.930 for thoracic abnormality detection and 0.943 for DR-TB diagnosis, and demonstrated instructive value in DR-TB subtype classification with AUCs ranging from 0.880 to 0.928. Class activation maps were generated to provide a human-understandable visual concept. With this prominent performance, DeepTB could be impactful in clinical decision-making for DR-TB.
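The AUC values reported for DeepTB can be read through the rank interpretation of the metric: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch of that computation (illustrative, not the paper's evaluation code):

```python
def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney statistic: the fraction of positive/negative
    pairs where the positive outscores the negative, counting ties as half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```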
Affiliation(s)
- Shufan Liang, Jun Shao, Weimin Li, Chengdi Wang: Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Xiuyuan Xu, Zhe Yang, Qiuyu Du, Lingyu Zhou, Jixiang Guo: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Binwu Ying: Department of Laboratory Medicine, West China Hospital, Sichuan University, Chengdu, China
35.
UrRehman Z, Qiang Y, Wang L, Shi Y, Yang Q, Khattak SU, Aftab R, Zhao J. Effective lung nodule detection using deep CNN with dual attention mechanisms. Sci Rep 2024; 14:3934. PMID: 38365831. PMCID: PMC10873370. DOI: 10.1038/s41598-024-51833-x.
Abstract
Novel methods are required to enhance lung cancer detection, as lung cancer has overtaken other cancers as the leading cause of cancer-related mortality. Radiologists have long located lung nodules by reviewing computed tomography (CT) scans, but manually reviewing large numbers of CT images is time-consuming and prone to human error. To overcome these difficulties, computer-aided diagnosis (CAD) systems built on deep learning architectures have been created to assist radiologists and to improve the efficiency and accuracy of lung nodule diagnosis. In this study, a bespoke convolutional neural network (CNN) with a dual attention mechanism was created, specifically crafted to concentrate on the most important elements in lung nodule images. The CNN model extracts informative features from the images, while the attention module incorporates both channel attention and spatial attention mechanisms to selectively highlight significant features. After the attention module, global average pooling is applied to summarize the spatial information. Extensive experiments on a benchmark lung nodule dataset demonstrated that the model surpasses recent models and achieves state-of-the-art accuracy in lung nodule detection and classification tasks.
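The channel attention, spatial attention, and global average pooling steps described above can be illustrated on a toy feature map. This is a deliberately simplified, hand-weighted sketch; the paper's actual modules are learned, with convolutional weights we do not reproduce:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dual_attention_gap(fmap):
    """Toy dual attention on a C x H x W feature map (nested lists).

    Channel attention: sigmoid of each channel's mean re-weights that channel.
    Spatial attention: sigmoid of the cross-channel mean re-weights each pixel.
    Global average pooling then summarizes each channel to one number.
    """
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    ch_w = [sigmoid(sum(v for row in ch for v in row) / (H * W)) for ch in fmap]
    sp_w = [[sigmoid(sum(fmap[c][i][j] for c in range(C)) / C) for j in range(W)]
            for i in range(H)]
    attended = [[[fmap[c][i][j] * ch_w[c] * sp_w[i][j] for j in range(W)]
                 for i in range(H)] for c in range(C)]
    # global average pooling: one descriptor per channel
    return [sum(v for row in ch for v in row) / (H * W) for ch in attended]
```

Channels and locations with stronger activations receive larger multiplicative weights, so they dominate the pooled descriptor.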
Affiliation(s)
- Zia UrRehman, Rukhma Aftab: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- Yan Qiang: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China; School of Software, North University of China, Taiyuan, China
- Long Wang: Jinzhong College of Information, Jinzhong, China
- Yiwei Shi: NHC Key Laboratory of Pneumoconiosis, Shanxi Key Laboratory of Respiratory Diseases, Department of Pulmonary and Critical Care Medicine, The First Hospital of Shanxi Medical University, Taiyuan, Shanxi, China
- Saeed Ullah Khattak: Centre of Biotechnology and Microbiology, University of Peshawar, Peshawar 25120, Pakistan
- Juanjuan Zhao: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China; Jinzhong College of Information, Jinzhong, China
36.
Roy R, Mazumdar S, Chowdhury AS. ADGAN: Attribute-Driven Generative Adversarial Network for Synthesis and Multiclass Classification of Pulmonary Nodules. IEEE Trans Neural Netw Learn Syst 2024; 35:2484-2495. PMID: 35853058. DOI: 10.1109/tnnls.2022.3190331.
Abstract
Lung cancer is the leading cause of cancer-related deaths worldwide. According to the American Cancer Society, early diagnosis of pulmonary nodules in computed tomography (CT) scans can improve the five-year survival rate to up to 70% with proper treatment planning. In this article, we propose an Attribute-Driven Generative Adversarial Network (ADGAN) for the synthesis and multiclass classification of pulmonary nodules. A self-attention U-Net (SaUN) architecture is proposed to improve the generation mechanism of the network. The generator is designed with two modules: a self-attention attribute module (SaAM), which generates a nodule image based on given attributes, and a self-attention spatial module (SaSM), which specifies the nodule region of the input image to be altered. A reconstruction loss along with an attention localization loss (AL) is used to produce an attention map prioritizing the nodule regions. To keep a generated image from merely duplicating a real image, we further introduce an adversarial loss containing a regularization term based on KL divergence. The discriminator of the proposed model performs the multiclass nodule classification task. Exhaustive experimentation on two challenging publicly available datasets, LIDC-IDRI and LUNGX, indicates promising classification accuracy compared with other state-of-the-art methods.
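For reference, the KL divergence used as a regularization term in the adversarial loss above is, for discrete distributions, the following quantity (a generic sketch with a small epsilon for numerical safety, not ADGAN's exact loss):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as probability lists.
    eps guards the logarithm against zero entries."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

KL(p || q) is zero only when the two distributions match, so penalizing (or rewarding) it shapes how closely generated statistics may track the real ones.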
37.
Wu R, Liang C, Zhang J, Tan Q, Huang H. Multi-kernel driven 3D convolutional neural network for automated detection of lung nodules in chest CT scans. Biomed Opt Express 2024; 15:1195-1218. PMID: 38404310. PMCID: PMC10890889. DOI: 10.1364/boe.504875.
Abstract
The accurate position detection of lung nodules is crucial in early chest computed tomography (CT)-based lung cancer screening, which helps to improve the survival rate of patients. Deep learning methodologies have shown impressive feature extraction ability in the CT image analysis task, but it is still a challenge to develop a robust nodule detection model due to the salient morphological heterogeneity of nodules and complex surrounding environment. In this study, a multi-kernel driven 3D convolutional neural network (MK-3DCNN) is proposed for computerized nodule detection in CT scans. In the MK-3DCNN, a residual learning-based encoder-decoder architecture is introduced to employ the multi-layer features of the deep model. Considering the various nodule sizes and shapes, a multi-kernel joint learning block is developed to capture 3D multi-scale spatial information of nodule CT images, and this is conducive to improving nodule detection performance. Furthermore, a multi-mode mixed pooling strategy is designed to replace the conventional single-mode pooling manner, and it reasonably integrates the max pooling, average pooling, and center cropping pooling operations to obtain more comprehensive nodule descriptions from complicated CT images. Experimental results on the public dataset LUNA16 illustrate that the proposed MK-3DCNN method achieves more competitive nodule detection performance compared to some state-of-the-art algorithms. The results on our constructed clinical dataset CQUCH-LND indicate that the MK-3DCNN has a good prospect in clinical practice.
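The multi-mode mixed pooling strategy above blends max pooling, average pooling, and center-cropping pooling rather than using any single mode. It can be illustrated per pooling window; the equal blend weights here are hypothetical placeholders, not values from the paper:

```python
def mixed_pool(window, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Toy mixed pooling over one 2D window (list of lists): a weighted blend
    of the window maximum, the window average, and the center element."""
    flat = [v for row in window for v in row]
    w_max, w_avg, w_ctr = weights
    center = window[len(window) // 2][len(window[0]) // 2]
    return w_max * max(flat) + w_avg * (sum(flat) / len(flat)) + w_ctr * center
```

Setting the weights to (1, 0, 0) recovers plain max pooling, so the mixed form strictly generalizes the single-mode variants.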
Affiliation(s)
- Ruoyu Wu, Hong Huang: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Changyu Liang, Jiuquan Zhang, QiJuan Tan: Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
38.
Chang HH, Wu CZ, Gallogly AH. Pulmonary Nodule Classification Using a Multiview Residual Selective Kernel Network. J Imaging Inform Med 2024; 37:347-362. PMID: 38343233. DOI: 10.1007/s10278-023-00928-4.
Abstract
Lung cancer is one of the leading causes of death worldwide, and early detection is crucial to reducing mortality. A reliable computer-aided diagnosis (CAD) system can help facilitate early detection of malignant nodules. Although existing methods provide adequate classification accuracy, there is still room for improvement. This study investigates a new CAD scheme for predicting the malignancy likelihood of lung nodules in computed tomography (CT) images using a deep learning strategy. Conceived from residual learning and the selective kernel, we investigated an efficient residual selective kernel (RSK) block to handle the diversity of lung nodules with various shapes and obscure structures. Founded on this RSK block, we established a multiview RSK network (MRSKNet), to which three anatomical planes in the axial, coronal, and sagittal directions were fed. To reinforce classification efficiency, seven handcrafted texture features with a filter-like computation strategy were explored; among them, the homogeneity (HOM) feature maps were combined with the corresponding intensity CT images as concatenated input, leading to an improved network architecture. Evaluated on the public benchmark Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) challenge database with ten-fold cross-validation of binary classification, the framework exhibited great efficacy, achieving high accuracy and an AUC of 0.9711, and the suggested concatenation strategy struck a better compromise between recall and specificity than many state-of-the-art approaches. The association of handcrafted texture features with deep learning models is promising for advancing classification performance, and the developed pulmonary nodule CAD network architecture has potential to facilitate the diagnosis of lung cancer in further image-processing applications.
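The homogeneity (HOM) texture feature mentioned above is conventionally computed from the gray-level co-occurrence matrix (GLCM). A sketch for a single pixel offset, assuming integer gray levels; the paper applies the computation filter-like over sliding windows to build feature maps, which this toy version does not show:

```python
from collections import Counter

def glcm_homogeneity(patch, dx=1, dy=0):
    """GLCM homogeneity for one pixel offset (dx, dy):
    sum over co-occurring gray-level pairs of p(i, j) / (1 + |i - j|)."""
    H, W = len(patch), len(patch[0])
    pairs = Counter((patch[y][x], patch[y + dy][x + dx])
                    for y in range(H - dy) for x in range(W - dx))
    n = sum(pairs.values())
    return sum((c / n) / (1 + abs(i - j)) for (i, j), c in pairs.items())
```

Uniform patches score 1.0 and strongly textured patches score lower, which is why HOM maps highlight smooth versus heterogeneous nodule regions.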
Affiliation(s)
- Herng-Hua Chang
- Computational Biomedical Engineering Laboratory (CBEL), Department of Engineering Science and Ocean Engineering, National Taiwan University, 1 Sec. 4 Roosevelt Road, Daan, Taipei, 10617, Taiwan.
- Cheng-Zhe Wu
- Computational Biomedical Engineering Laboratory (CBEL), Department of Engineering Science and Ocean Engineering, National Taiwan University, 1 Sec. 4 Roosevelt Road, Daan, Taipei, 10617, Taiwan
- Audrey Haihong Gallogly
- Department of Radiation Oncology, Keck Medical School, University of Southern California, Los Angeles, CA, USA
39
Zheng R, Wen H, Zhu F, Lan W. Attention-guided deep neural network with a multichannel architecture for lung nodule classification. Heliyon 2024; 10:e23508. [PMID: 38169878 PMCID: PMC10758786 DOI: 10.1016/j.heliyon.2023.e23508] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 11/15/2023] [Accepted: 12/05/2023] [Indexed: 01/05/2024] Open
Abstract
Detecting and accurately identifying malignant lung nodules in chest CT scans in a timely manner is crucial for effective lung cancer treatment. This study introduces a deep learning model featuring a multi-channel attention mechanism, specifically designed for the precise diagnosis of malignant lung nodules. To start, we standardized the voxel size of CT images and generated three RGB images of varying scales for each lung nodule, viewed from three different angles. Subsequently, we applied three attention submodels to extract class-specific characteristics from these RGB images. Finally, the nodule features were consolidated in the model's final layer to make the ultimate predictions. Through the utilization of an attention mechanism, we could dynamically pinpoint the exact location of lung nodules in the images without the need for prior segmentation. This proposed approach enhances the accuracy and efficiency of lung nodule classification. We evaluated and tested our model using a dataset of 1018 CT scans sourced from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The experimental results demonstrate that our model achieved a lung nodule classification accuracy of 90.11%, with an area under the receiver operating characteristic curve (AUC) score of 95.66%. Impressively, our method achieved this high level of performance while utilizing only 29.09% of the time needed by the mainstream model.
Affiliation(s)
- Rong Zheng
- Department of Gynecology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430070, China
- Hongqiao Wen
- School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
- Feng Zhu
- Department of Cardiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Clinic Center of Human Gene Research, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Weishun Lan
- Department of Medical Imaging, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430070, China
40
Zhang X, Yang P, Tian J, Wen F, Chen X, Muhammad T. Classification of benign and malignant pulmonary nodule based on local-global hybrid network. J Xray Sci Technol 2024; 32:689-706. [PMID: 38277335 DOI: 10.3233/xst-230291] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/28/2024]
Abstract
BACKGROUND The accurate classification of pulmonary nodules has great application value in assisting doctors in diagnosing conditions and meeting clinical needs. However, the complexity and heterogeneity of pulmonary nodules make it difficult to extract valuable characteristics of pulmonary nodules, so it is still challenging to achieve high-accuracy classification of pulmonary nodules. OBJECTIVE In this paper, we propose a local-global hybrid network (LGHNet) to jointly model local and global information to improve the classification ability of benign and malignant pulmonary nodules. METHODS First, we introduce the multi-scale local (MSL) block, which splits the input tensor into multiple channel groups, utilizing dilated convolutions with different dilation rates and efficient channel attention to extract fine-grained local information at different scales. Secondly, we design the hybrid attention (HA) block to capture long-range dependencies in spatial and channel dimensions to enhance the representation of global features. RESULTS Experiments are carried out on the publicly available LIDC-IDRI and LUNGx datasets, and the accuracy, sensitivity, precision, specificity, and area under the curve (AUC) of the LIDC-IDRI dataset are 94.42%, 94.25%, 93.05%, 92.87%, and 97.26%, respectively. The AUC on the LUNGx dataset was 79.26%. CONCLUSION The above classification results are superior to the state-of-the-art methods, indicating that the network has better classification performance and generalization ability.
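The multi-scale local idea behind the MSL block can be illustrated with a simplified single-channel sketch (pure Python; the kernel and dilation rates here are hypothetical, not LGHNet's actual configuration): the same small kernel applied with a larger dilation rate samples a wider receptive field at no extra parameter cost.

```python
# Illustrative sketch, NOT the paper's implementation: 'valid' 2-D
# convolution with a dilation rate, showing how one 3x3 kernel covers
# different receptive-field scales.

def dilated_conv2d(img, kernel, rate):
    """Single-channel valid convolution; `rate` spaces out kernel taps."""
    h, w = len(img), len(img[0])
    k = len(kernel)                   # square kernel side
    span = (k - 1) * rate + 1         # effective receptive-field side
    out = []
    for y in range(h - span + 1):
        row = []
        for x in range(w - span + 1):
            row.append(sum(kernel[i][j] * img[y + i * rate][x + j * rate]
                           for i in range(k) for j in range(k)))
        out.append(row)
    return out

avg3 = [[1 / 9] * 3 for _ in range(3)]               # 3x3 mean kernel
img = [[float(x + y) for x in range(7)] for y in range(7)]
fine   = dilated_conv2d(img, avg3, rate=1)           # 3x3 receptive field
coarse = dilated_conv2d(img, avg3, rate=2)           # 5x5 field, same kernel
```

In the actual MSL design the input channels are split into groups, each group convolved at a different rate, and the outputs re-weighted by channel attention; the sketch only shows the receptive-field effect of the dilation rate.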
Affiliation(s)
- Xin Zhang
- Smart City College, Beijing Union University, Beijing, China
- Ping Yang
- Smart City College, Beijing Union University, Beijing, China
- Ji Tian
- Smart City College, Beijing Union University, Beijing, China
- Fan Wen
- Smart City College, Beijing Union University, Beijing, China
- Xi Chen
- Smart City College, Beijing Union University, Beijing, China
- Tayyab Muhammad
- School of Electrical and Electronic Engineering, North China Electric Power University, Beijing, China
41
Kondamuri SR, Thadikemalla VSG, Suryanarayana G, Karthik C, Reddy VS, Sahithi VB, Anitha Y, Yogitha V, Valli PR. Chest CT Image based Lung Disease Classification - A Review. Curr Med Imaging 2024; 20:1-14. [PMID: 38389342 DOI: 10.2174/0115734056248176230923143105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 07/22/2023] [Accepted: 08/22/2023] [Indexed: 02/24/2024]
Abstract
Computed tomography (CT) scans are widely used to diagnose lung conditions due to their ability to provide a detailed overview of the body's respiratory system. Despite its popularity, visual examination of CT scan images can lead to misinterpretations that impede a timely diagnosis. Utilizing technology to evaluate images for disease detection is also a challenge. As a result, there is a significant demand for more advanced systems that can accurately classify lung diseases from CT scan images. In this work, we provide an extensive analysis of different approaches and their performances that can help young researchers to build more advanced systems. First, we briefly introduce diagnosis and treatment procedures for various lung diseases. Then, a brief description of existing methods used for the classification of lung diseases is presented. Later, an overview of the general procedures for lung disease classification using machine learning (ML) is provided. Furthermore, an overview of recent progress in ML-based classification of lung diseases is provided. Finally, existing challenges in ML techniques are presented. It is concluded that deep learning techniques have revolutionized the early identification of lung disorders. We expect that this work will equip medical professionals with the awareness they require in order to recognize and classify certain medical disorders.
Affiliation(s)
- Shri Ramtej Kondamuri
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Gunnam Suryanarayana
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Chandran Karthik
- Department of Robotics and Automation, Jyothi Engineering College, Thrissur, Kerala 679531, India
- Vanga Siva Reddy
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- V Bhuvana Sahithi
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Y Anitha
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- V Yogitha
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- P Reshma Valli
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
42
Lu S, Liu J, Wang X, Zhou Y. Collaborative Multi-Metadata Fusion to Improve the Classification of Lumbar Disc Herniation. IEEE Trans Med Imaging 2023; 42:3590-3601. [PMID: 37432809 DOI: 10.1109/tmi.2023.3294248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/13/2023]
Abstract
Computed tomography (CT) images are the most commonly used radiographic imaging modality for detecting and diagnosing lumbar diseases. Despite many outstanding advances, computer-aided diagnosis (CAD) of lumbar disc disease remains challenging due to the complexity of pathological abnormalities and poor discrimination between different lesions. Therefore, we propose a Collaborative Multi-Metadata Fusion classification network (CMMF-Net) to address these challenges. The network consists of a feature selection model and a classification model. We propose a novel Multi-scale Feature Fusion (MFF) module that can improve the edge learning ability of the network region of interest (ROI) by fusing features of different scales and dimensions. We also propose a new loss function to improve the convergence of the network to the internal and external edges of the intervertebral disc. Subsequently, we use the ROI bounding box from the feature selection model to crop the original image and calculate the distance features matrix. We then concatenate the cropped CT images, multiscale fusion features, and distance feature matrices and input them into the classification network. Next, the model outputs the classification results and the class activation map (CAM). Finally, the CAM of the original image size is returned to the feature selection network during the upsampling process to achieve collaborative model training. Extensive experiments demonstrate the effectiveness of our method. The model achieved 91.32% accuracy in the lumbar spine disease classification task. In the labelled lumbar disc segmentation task, the Dice coefficient reaches 94.39%. The classification accuracy in the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) reaches 91.82%.
43
Ma L, Wan C, Hao K, Cai A, Liu L. A novel fusion algorithm for benign-malignant lung nodule classification on CT images. BMC Pulm Med 2023; 23:474. [PMID: 38012620 PMCID: PMC10683224 DOI: 10.1186/s12890-023-02708-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 10/12/2023] [Indexed: 11/29/2023] Open
Abstract
The accurate recognition of malignant lung nodules on CT images is critical in lung cancer screening, which can offer patients the best chance of cure and significant reductions in mortality from lung cancer. The Convolutional Neural Network (CNN) has been proven a powerful method in medical image analysis. Radiomics, grounded in expert opinion about which image characteristics are of interest, can describe high-throughput feature extraction from CT images. The Graph Convolutional Network explores the global context and makes inferences on both graph node features and relational structures. In this paper, we propose a novel fusion algorithm, RGD, for benign-malignant lung nodule classification that incorporates radiomics study and graph learning into multiple deep CNNs to form a more complete and distinctive feature representation, and ensembles the predictions for robust decision-making. The proposed method was evaluated on the publicly available LIDC-IDRI dataset in a 10-fold cross-validation experiment and obtained an average accuracy of 93.25%, a sensitivity of 89.22%, a specificity of 95.82%, a precision of 92.46%, an F1 score of 0.9114, and an AUC of 0.9629. Experimental results illustrate that the RGD model achieves superior performance compared with state-of-the-art methods. Moreover, the effectiveness of the fusion strategy has been confirmed by extensive ablation studies. In the future, the proposed model, which performs well on pulmonary nodule classification on CT images, will be applied to increase confidence in the clinical diagnosis of lung cancer.
Affiliation(s)
- Ling Ma
- College of Software, Nankai University, Tianjin, 300350, China
- Chuangye Wan
- College of Software, Nankai University, Tianjin, 300350, China
- Kexin Hao
- College of Software, Nankai University, Tianjin, 300350, China
- Annan Cai
- College of Software, Nankai University, Tianjin, 300350, China
- Lizhi Liu
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, Guangdong, China.
44
Li W, Yu S, Yang R, Tian Y, Zhu T, Liu H, Jiao D, Zhang F, Liu X, Tao L, Gao Y, Li Q, Zhang J, Guo X. Machine Learning Model of ResNet50-Ensemble Voting for Malignant-Benign Small Pulmonary Nodule Classification on Computed Tomography Images. Cancers (Basel) 2023; 15:5417. [PMID: 38001677 PMCID: PMC10670717 DOI: 10.3390/cancers15225417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Revised: 09/21/2023] [Accepted: 09/26/2023] [Indexed: 11/26/2023] Open
Abstract
BACKGROUND The early detection of benign and malignant lung tumors enables patients to be diagnosed and to implement appropriate health measures earlier, dramatically improving lung cancer patients' quality of life. Machine learning methods have performed admirably in recognizing small benign and malignant lung nodules. However, further exploration and investigation are required to fully leverage the potential of machine learning in distinguishing between benign and malignant small lung nodules. OBJECTIVE The aim of this study was to develop and evaluate the ResNet50-Ensemble Voting model for detecting the benign or malignant nature of small pulmonary nodules (<20 mm) based on CT images. METHODS In this study, 834 CT imaging data from 396 patients with small pulmonary nodules were gathered and randomly assigned to the training and validation sets in an 8:2 ratio. The ResNet50 and VGG16 algorithms were utilized to extract CT image features, followed by XGBoost, SVM, and Ensemble Voting techniques for classification, for a total of ten different classes of machine learning combinatorial classifiers. Indicators such as accuracy, sensitivity, and specificity were used to assess the models. The extracted features are also compared to investigate the contrasts between them. RESULTS The algorithm we presented, ResNet50-Ensemble Voting, performed best on the test set, with an accuracy of 0.943 (0.938, 0.948) and sensitivity and specificity of 0.964 and 0.911, respectively. VGG16-Ensemble Voting had an accuracy of 0.887 (0.880, 0.894), with a sensitivity and specificity of 0.952 and 0.784, respectively. CONCLUSION The implemented and integrated ResNet50-Ensemble Voting machine learning model performed exceptionally well in identifying benign and malignant small pulmonary nodules (<20 mm) from various sites, which may help doctors accurately diagnose the nature of early-stage lung nodules in clinical practice.
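The final ensemble-voting stage can be illustrated with a minimal hard-voting sketch (pure Python; the per-nodule predictions and classifier mix below are hypothetical, and the study's actual ten classifier combinations differ):

```python
# Illustrative sketch, NOT the study's implementation: hard majority
# voting over several base classifiers' per-nodule predictions.
from collections import Counter

def majority_vote(*prediction_lists):
    """Per-sample majority label across classifiers
    (ties resolved toward the earliest-listed classifier)."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*prediction_lists)]

# hypothetical per-nodule predictions (1 = malignant, 0 = benign)
svm_pred = [1, 0, 1, 0]
xgb_pred = [1, 1, 1, 0]
knn_pred = [0, 0, 1, 1]
final = majority_vote(svm_pred, xgb_pred, knn_pred)  # -> [1, 0, 1, 0]
```

Hard voting lets individually weaker classifiers cancel out each other's errors, which is the rationale for pairing deep feature extractors with several conventional classifiers before the vote.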
Affiliation(s)
- Weiming Li
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Siqi Yu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Runhuang Yang
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Yixing Tian
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Tianyu Zhu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Haotian Liu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Danyang Jiao
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Feng Zhang
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Xiangtong Liu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Lixin Tao
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Yan Gao
- Department of Nuclear Medicine, Xuanwu Hospital Capital Medical University, Beijing 100053, China
- Qiang Li
- Beijing Physical Examination Center, Beijing 100050, China
- Jingbo Zhang
- Beijing Physical Examination Center, Beijing 100050, China
- Xiuhua Guo
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
45
Liang H, Hu M, Ma Y, Yang L, Chen J, Lou L, Chen C, Xiao Y. Performance of Deep-Learning Solutions on Lung Nodule Malignancy Classification: A Systematic Review. Life (Basel) 2023; 13:1911. [PMID: 37763314 PMCID: PMC10532719 DOI: 10.3390/life13091911] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 09/06/2023] [Accepted: 09/12/2023] [Indexed: 09/29/2023] Open
Abstract
OBJECTIVE For several years, computer technology has been utilized to diagnose lung nodules. Compared with traditional machine learning methods for image processing, deep-learning methods can improve the accuracy of lung nodule diagnosis by avoiding laborious image pre-processing steps (hand-crafted feature extraction, etc.). Our goal is to investigate how well deep-learning approaches classify lung nodule malignancy. METHOD We evaluated the performance of deep-learning methods on lung nodule malignancy classification via a systematic literature search. We searched the PubMed and ISI Web of Science databases for appropriate articles and chose those that employed deep learning to classify or predict lung nodule malignancy for our investigation. The figures were plotted, and the data were extracted, using SAS version 9.4 and Microsoft Excel 2010, respectively. RESULTS Sixteen studies that met the criteria were included. The articles classified or predicted pulmonary nodule malignancy using convolutional neural networks (CNN), autoencoders (AE), and deep belief networks (DBN). The AUC of the deep-learning models was typically greater than 90% in these articles, demonstrating that deep learning performed well in the diagnosis and forecasting of lung nodules. CONCLUSION This is a thorough analysis of the most recent advancements in lung nodule deep-learning technologies. Image processing techniques, traditional machine learning techniques, deep-learning techniques, and other techniques have all been applied to pulmonary nodule diagnosis. Although the deep-learning model has demonstrated distinct advantages in the detection of pulmonary nodules, it also carries significant drawbacks that warrant additional research.
Affiliation(s)
- Hailun Liang
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Meili Hu
- Department of Gynecology, Baoding Maternal and Child Health Care Hospital, Baoding 071000, China
- Yuxin Ma
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Lei Yang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Beijing Office for Cancer Prevention and Control, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Jie Chen
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Liwei Lou
- School of Statistics, Renmin University of China, Beijing 100872, China
- Chen Chen
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Yuan Xiao
- Blockchain Research Institute, Renmin University of China, Beijing 100872, China
46
Huang Y, Yang J, Hou Y, Sun Q, Ma S, Feng C, Shang J. Automatic prediction of acute coronary syndrome based on pericoronary adipose tissue and atherosclerotic plaques. Comput Med Imaging Graph 2023; 108:102264. [PMID: 37418789 DOI: 10.1016/j.compmedimag.2023.102264] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 03/07/2023] [Accepted: 06/07/2023] [Indexed: 07/09/2023]
Abstract
Cardiovascular disease is the leading cause of human death worldwide, and acute coronary syndrome (ACS) is a common first manifestation of this. Studies have shown that pericoronary adipose tissue (PCAT) computed tomography (CT) attenuation and atherosclerotic plaque characteristics can be used to predict future adverse ACS events. However, radiomics-based methods have limitations in extracting features of PCAT and atherosclerotic plaques. Therefore, we propose a hybrid deep learning framework capable of extracting coronary CT angiography (CCTA) imaging features of both PCAT and atherosclerotic plaques for ACS prediction. The framework designs a two-stream CNN feature extraction (TSCFE) module to extract the features of PCAT and atherosclerotic plaques, respectively, and a channel feature fusion (CFF) to explore feature correlations between their features. Specifically, a trilinear-based fully-connected (FC) prediction module stepwise maps high-dimensional representations to low-dimensional label spaces. The framework was validated in retrospectively collected suspected coronary artery disease cases examined by CCTA. The prediction accuracy, sensitivity, specificity, and area under curve (AUC) are all higher than the classical image classification networks and state-of-the-art medical image classification methods. The experimental results show that the proposed method can effectively and accurately extract CCTA imaging features of PCAT and atherosclerotic plaques and explore the feature correlations to produce impressive performance. Thus, it has the potential value to be applied in clinical applications for accurate ACS prediction.
Affiliation(s)
- Yan Huang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China.
- Yang Hou
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Qi Sun
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Shuang Ma
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Chaolu Feng
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Jin Shang
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
47
Zhang R, Zhang F, Qin S, Fan D, Fang C, Ma J, Wan X, Li G, Lin X. Multi-Task Learning With Hierarchical Guidance for Locating and Stratifying Submucosal Tumors. IEEE J Biomed Health Inform 2023; 27:4478-4488. [PMID: 37459259 DOI: 10.1109/jbhi.2023.3291433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/07/2023]
Abstract
Locating and stratifying submucosal tumors of the digestive tract from endoscopic ultrasound (EUS) images are of vital significance to the preliminary diagnosis of tumors. However, these problems are challenging due to the poor appearance contrast between different layers of the digestive tract wall (DTW) and the narrowness of each layer. Few existing deep-learning-based diagnosis algorithms have been devised to tackle this issue. In this article, we build a multi-task framework for simultaneously locating and stratifying the submucosal tumor. Considering that awareness of the DTW is critical to the localization and stratification of the tumor, we integrate the DTW segmentation task into the proposed multi-task framework. Besides sharing a common backbone model, the three tasks are explicitly directed with a hierarchical guidance module, in which the probability map of the DTW itself is used to locally enhance the feature representation for tumor localization, and the probability maps of the DTW and tumor are jointly employed to locally enhance the feature representation for tumor stratification. Moreover, by means of the dynamic class activation map, probability maps of the DTW and tumor are reused to enforce the stratification inference process to pay more attention to DTW and tumor regions, contributing to a reliable and interpretable submucosal tumor stratification model. Additionally, considering that the relation with respect to other structures is beneficial for stratifying tumors, we devise a graph reasoning module to replenish non-local relation knowledge for the stratification branch. Experiments on a Stomach-Esophagus and an Intestinal EUS dataset prove that our method achieves very appealing performance on both tumor localization and stratification, significantly outperforming state-of-the-art object detection approaches.
48
Zhi L, Jiang W, Zhang S, Zhou T. Deep neural network pulmonary nodule segmentation methods for CT images: Literature review and experimental comparisons. Comput Biol Med 2023; 164:107321. [PMID: 37595518 DOI: 10.1016/j.compbiomed.2023.107321] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 05/08/2023] [Accepted: 08/07/2023] [Indexed: 08/20/2023]
Abstract
Automatic and accurate segmentation of pulmonary nodules in CT images can help physicians perform more accurate quantitative analysis, diagnose diseases, and improve patient survival. In recent years, with the development of deep learning technology, pulmonary nodule segmentation methods based on deep neural networks have gradually replaced traditional segmentation methods. This paper reviews recent pulmonary nodule segmentation algorithms based on deep neural networks. First, the heterogeneity of pulmonary nodules, the interpretability of segmentation results, and external environmental factors are discussed. Then, the open-source 2D and 3D models used in medical segmentation tasks in recent years are applied to the Lung Image Database Consortium and Image Database Resource Initiative (LIDC) and Lung Nodule Analysis 16 (Luna16) datasets for comparison, and the visual diagnostic features marked by radiologists are evaluated one by one. According to the analysis of the experimental data, the following conclusions are drawn: (1) In the pulmonary nodule segmentation task, the Dice similarity coefficient (DSC) of the 2D segmentation models is generally better than that of the 3D segmentation models. (2) 'Subtlety', 'Sphericity', 'Margin', 'Texture', and 'Size' have more influence on pulmonary nodule segmentation, while 'Lobulation', 'Spiculation', and 'Benign and Malignant' features have less influence. (3) Higher accuracy in pulmonary nodule segmentation can be achieved with better-quality CT images. (4) Good contextual information acquisition and attention mechanism design positively affect pulmonary nodule segmentation.
Affiliation(s)
- Lijia Zhi
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
- Wujun Jiang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China.
- Shaomin Zhang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
- Tao Zhou
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
49
Zhang S, Wu J, Shi E, Yu S, Gao Y, Li LC, Kuo LR, Pomeroy MJ, Liang ZJ. MM-GLCM-CNN: A multi-scale and multi-level based GLCM-CNN for polyp classification. Comput Med Imaging Graph 2023; 108:102257. [PMID: 37301171 DOI: 10.1016/j.compmedimag.2023.102257] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 05/04/2023] [Accepted: 05/30/2023] [Indexed: 06/12/2023]
Abstract
Distinguishing malignant from benign lesions has significant clinical impact on both early detection and the optimal management of those detections. The convolutional neural network (CNN) has shown great potential in medical imaging applications due to its powerful feature-learning capability. However, it is very challenging to obtain pathological ground truth, in addition to the collected in vivo medical images, to construct objective training labels for feature learning, which makes lesion diagnosis difficult. This runs contrary to the requirement that CNN algorithms need large datasets for training. To explore the ability to learn features from small pathologically proven datasets for differentiating malignant from benign polyps, we propose a Multi-scale and Multi-level Gray-Level Co-occurrence Matrix CNN (MM-GLCM-CNN). Specifically, instead of the lesions' medical images, the GLCM, which characterizes lesion heterogeneity in terms of image texture, is fed into the MM-GLCM-CNN model for training. This aims to improve feature extraction by introducing multi-scale and multi-level analysis into the construction of lesion texture characteristic descriptors (LTCDs). To learn and fuse multiple sets of LTCDs from small datasets for lesion diagnosis, we further propose an adaptive multi-input CNN learning framework. Furthermore, an Adaptive Weight Network is used to highlight important information and suppress redundant information after the fusion of the LTCDs. We evaluated the performance of MM-GLCM-CNN by the area under the receiver operating characteristic curve (AUC) on small private lesion datasets of colon polyps. The AUC score reaches 93.99%, a gain of 1.49% over current state-of-the-art lesion classification methods on the same dataset. This gain indicates the importance of incorporating lesion heterogeneity when predicting lesion malignancy from small pathologically proven datasets.
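The GLCM input described above can be illustrated with a minimal sketch of how a co-occurrence matrix is built from a quantized image patch (the helper `glcm`, the toy patch, and the single-offset choice are simplifications; the paper's multi-scale, multi-level LTCD construction is considerably richer):

```python
import numpy as np

def glcm(image: np.ndarray, levels: int, dx: int = 1, dy: int = 0) -> np.ndarray:
    """Gray-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    mat = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                # Count the co-occurrence of gray level at (y, x) with its neighbor
                mat[image[y, x], image[y2, x2]] += 1
    total = mat.sum()
    return mat / total if total else mat  # normalize to joint probabilities

# Toy 3x3 patch quantized to 4 gray levels
patch = np.array([[0, 0, 1],
                  [1, 2, 2],
                  [2, 2, 3]])
p = glcm(patch, levels=4)

# Classic Haralick texture descriptors derive from p, e.g. contrast:
i, j = np.indices(p.shape)
contrast = ((i - j) ** 2 * p).sum()
```

In practice, libraries such as scikit-image (`skimage.feature.graycomatrix`) provide optimized GLCM computation over multiple distances and angles, which is closer to the multi-scale setting the paper describes.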
Affiliation(s)
- Shu Zhang
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China.
- Jinru Wu
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China.
- Enze Shi
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China.
- Sigang Yu
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China.
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA.
- Lihong Connie Li
- Department of Engineering & Environmental Science, City University of New York, Staten Island, NY 10314, USA.
- Licheng Ryan Kuo
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA.
- Marc Jason Pomeroy
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA.
- Zhengrong Jerome Liang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA.
50
Baidya Kayal E, Ganguly S, Sasi A, Sharma S, DS D, Saini M, Rangarajan K, Kandasamy D, Bakhshi S, Mehndiratta A. A proposed methodology for detecting the malignant potential of pulmonary nodules in sarcoma using computed tomographic imaging and artificial intelligence-based models. Front Oncol 2023; 13:1212526. [PMID: 37671060 PMCID: PMC10476362 DOI: 10.3389/fonc.2023.1212526] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 07/31/2023] [Indexed: 09/07/2023] Open
Abstract
The presence of lung metastases in patients with primary malignancies is an important criterion for treatment management and prognostication. Computed tomography (CT) of the chest is the preferred method for detecting lung metastasis. However, CT has limited efficacy in differentiating metastatic nodules from benign nodules (e.g., granulomas due to tuberculosis), especially at early stages (<5 mm). There is also significant subjectivity in making this distinction, leading to frequent CT follow-ups and additional radiation exposure, along with a financial and emotional burden on patients and families. Even 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) is not always confirmatory for this clinical problem. While pathological biopsy is the gold standard to demonstrate malignancy, invasive sampling of small lung nodules is often not clinically feasible. Currently, there is no non-invasive imaging technique that can reliably characterize lung metastases. The lung is one of the favored sites of metastasis in sarcomas. Hence, patients with sarcomas, especially from tuberculosis-prevalent developing countries, provide an ideal platform for developing a model to differentiate lung metastases from benign nodules. To overcome the suboptimal specificity of CT in detecting pulmonary metastasis, a novel artificial intelligence (AI)-based protocol is proposed that uses a combination of radiological and clinical biomarkers to identify lung nodules and characterize them as benign or metastatic. The protocol includes a retrospective cohort of nearly 2,000-2,250 sample nodules (from at least 450 patients) for training and testing and an ambispective cohort of nearly 500 nodules (from 100 patients; 50 patients each from the retrospective and prospective cohorts) for validation. Ground-truth annotation of lung nodules will be performed using an in-house segmentation tool. Ground-truth labeling of lung nodules (metastatic/benign) will be based on histopathological results or baseline and/or follow-up radiological findings, together with the patient's clinical outcome. Optimal methods for data handling and statistical analysis are included to develop a robust protocol for early detection and classification of pulmonary metastasis at baseline and follow-up, and for identification of potential associated clinical and radiological markers.
Affiliation(s)
- Esha Baidya Kayal
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India.
- Shuvadeep Ganguly
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India.
- Archana Sasi
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India.
- Swetambri Sharma
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India.
- Dheeksha DS
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, Delhi, India.
- Manish Saini
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, Delhi, India.
- Krithika Rangarajan
- Radiodiagnosis, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India.
- Sameer Bakhshi
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India.
- Amit Mehndiratta
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India; Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, Delhi, India.