1
Jayapradha J, Haw SC, Palanichamy N, Ng KW, Thillaigovindhan SK. IM-LTS: An Integrated Model for Lung Tumor Segmentation using Neural Networks and IoMT. MethodsX 2025; 14:103201. [PMID: 40026592 PMCID: PMC11869539 DOI: 10.1016/j.mex.2025.103201]
Abstract
In recent years, Internet of Medical Things (IoMT) and Deep Learning (DL) techniques have been broadly used in medical data processing and decision-making. Lung tumours, among the most dangerous diseases, require early diagnosis with a high precision rate. To that end, this work develops an Integrated Model (IM-LTS) for Lung Tumor Segmentation using Neural Networks (NN) and the IoMT. The model integrates two architectures, MobileNetV2 and U-Net, for classifying the input lung data. The input CT lung images are pre-processed using Z-score normalization, and semantic features are extracted based on texture, intensity, and shape to inform the training network.
- Transfer learning is incorporated: a pre-trained NN serves as the encoder of the U-Net model for segmentation, and a Support Vector Machine classifies input lung data as benign or malignant.
- Results are measured on benchmark datasets using metrics such as specificity, sensitivity, precision, accuracy, and F-score. Compared with existing lung tumor segmentation and classification models, the proposed model provides better results and supports earlier disease diagnosis.
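The Z-score normalization step mentioned above is a standard pre-processing operation; a minimal sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def z_score_normalize(image: np.ndarray) -> np.ndarray:
    """Standardize voxel intensities to zero mean and unit variance."""
    mean, std = image.mean(), image.std()
    return (image - mean) / (std + 1e-8)  # epsilon avoids division by zero on flat images
```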
Affiliation(s)
- Jayapradha J
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu 603203, India
- Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Malaysia
- Su-Cheng Haw
- Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Malaysia
- Naveen Palanichamy
- Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Malaysia
- Kok-Why Ng
- Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Malaysia
- Senthil Kumar Thillaigovindhan
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu 603203, India
2
Linli Z, Liang X, Zhang Z, Hu K, Guo S. Enhancing brain age estimation under uncertainty: A spectral-normalized neural gaussian process approach utilizing 2.5D slicing. Neuroimage 2025; 311:121184. [PMID: 40180003 DOI: 10.1016/j.neuroimage.2025.121184]
Abstract
Brain age gap, the difference between estimated brain age and chronological age derived from magnetic resonance imaging, has emerged as a pivotal biomarker for detecting brain abnormalities. While deep learning is accurate in estimating brain age, the absence of uncertainty estimation may pose risks in clinical use. Moreover, current 3D brain age models are intricate, and using 2D slices hinders comprehensive integration of the volumetric data. Here, we introduce the Spectral-normalized Neural Gaussian Process (SNGP), paired with a 2.5D slice approach, for seamless uncertainty integration in a single network at low computational expense and for extra dimensional data integration without added model complexity. We then compared different deep learning methods for estimating brain age uncertainty via the Pearson correlation coefficient, a metric that helps circumvent systematic underestimation of uncertainty during training. SNGP shows excellent uncertainty estimation and generalization on a collection of 11 public datasets (N = 6327), with competitive predictive performance (MAE = 2.95). SNGP also demonstrates superior generalization performance (MAE = 3.47) on an independent validation set (N = 301). Additionally, we conducted five controlled experiments to validate our method. First, uncertainty adjustment in brain age estimation improved the detection of accelerated brain aging in adolescents with ADHD, with a 38% increase in effect size after adjustment. Second, the SNGP model exhibited out-of-distribution (OOD) detection capabilities, showing significant differences in uncertainty across Asian and non-Asian datasets. Third, DenseNet performed slightly better than ResNeXt as the SNGP backbone, attributed to DenseNet's feature reuse capability, with robust generalization on an independent validation set. Fourth, site-effect harmonization led to a decline in model performance, consistent with previous studies. Finally, the 2.5D slice approach significantly outperformed 2D methods, improving model performance without increasing network complexity. In conclusion, we present a cost-effective method for estimating brain age with uncertainty, utilizing 2.5D slicing for enhanced performance and showing promise for clinical applications.
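As a rough illustration of the 2.5D idea (not the authors' implementation), neighbouring axial slices can be stacked as input channels so that a 2D network receives limited through-plane context without full 3D convolutions:

```python
import numpy as np

def to_2p5d_stacks(volume: np.ndarray, step: int = 1) -> np.ndarray:
    """Turn a 3D volume of shape (D, H, W) into 2.5D samples: each sample
    stacks 2*step+1 neighbouring axial slices as channels, giving a 2D CNN
    local through-plane context."""
    depth = volume.shape[0]
    stacks = [volume[i - step:i + step + 1] for i in range(step, depth - step)]
    return np.stack(stacks)  # shape: (D - 2*step, 2*step + 1, H, W)
```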
Affiliation(s)
- Zeqiang Linli
- School of Mathematics and Statistics, Guangdong University of Foreign Studies, Guangzhou, 510420, PR China; Laboratory of Language Engineering and Computing, Guangdong University of Foreign Studies, Guangzhou, 510420, PR China; MOE-LCSM, School of Mathematics and Statistics, Hunan Normal University, Changsha, 410006, PR China.
- Xingcheng Liang
- School of Mathematics and Statistics, Guangdong University of Foreign Studies, Guangzhou, 510420, PR China; Laboratory of Language Engineering and Computing, Guangdong University of Foreign Studies, Guangzhou, 510420, PR China.
- Zhenhua Zhang
- School of Mathematics and Statistics, Guangdong University of Foreign Studies, Guangzhou, 510420, PR China; Laboratory of Language Engineering and Computing, Guangdong University of Foreign Studies, Guangzhou, 510420, PR China.
- Kang Hu
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, PR China.
- Shuixia Guo
- MOE-LCSM, School of Mathematics and Statistics, Hunan Normal University, Changsha, 410006, PR China; Key Laboratory of Applied Statistics and Data Science, Hunan Normal University, College of Hunan Province, Changsha, 410006, PR China.
3
Wang J, Cai J, Tang W, Dudurych I, van Tuinen M, Vliegenthart R, van Ooijen P. A comparison of an integrated and image-only deep learning model for predicting the disappearance of indeterminate pulmonary nodules. Comput Med Imaging Graph 2025; 123:102553. [PMID: 40239430 DOI: 10.1016/j.compmedimag.2025.102553]
Abstract
BACKGROUND Indeterminate pulmonary nodules (IPNs) require follow-up CT to assess potential growth; however, benign nodules may disappear. Accurately predicting whether IPNs will resolve is a challenge for radiologists. We therefore aimed to use deep-learning (DL) methods to predict the disappearance of IPNs. MATERIAL AND METHODS This retrospective study utilized data from the Dutch-Belgian Randomized Lung Cancer Screening Trial (NELSON) and the Imaging in Lifelines (ImaLife) cohort. Participants underwent follow-up CT to determine the evolution of baseline IPNs. The NELSON data were used for model training; external validation was performed in ImaLife. We developed integrated DL-based models that incorporated CT images and demographic data (age, sex, smoking status, and pack-years), compared their performance with models limited to CT images only, and calculated sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). From a clinical perspective, ensuring high specificity is critical, as it minimizes false predictions of non-resolving nodules that should be monitored for evolution on follow-up CT. Feature importance was calculated using SHapley Additive exPlanations (SHAP) values. RESULTS The training dataset included 840 IPNs (134 resolving) in 672 participants; the external validation dataset included 111 IPNs (46 resolving) in 65 participants. On the external validation set, the performance of the integrated model (sensitivity, 0.50; 95% CI, 0.35-0.65; specificity, 0.91; 95% CI, 0.80-0.96; AUC, 0.82; 95% CI, 0.74-0.90) was comparable to that of the model trained solely on CT images (sensitivity, 0.41; 95% CI, 0.27-0.57; specificity, 0.89; 95% CI, 0.78-0.95; AUC, 0.78; 95% CI, 0.69-0.86; P = 0.39). The top 10 most important features were all image-related. CONCLUSION DL-based models can predict the disappearance of IPNs with high specificity, and integrated models using CT scans and clinical data performed comparably to those using CT images only.
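The sensitivity and specificity figures reported above follow directly from the confusion matrix; a minimal sketch (a hypothetical helper, not code from the paper), with 1 marking a resolving nodule:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on
    negatives) from paired binary labels and predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```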
Affiliation(s)
- Jingxuan Wang
- Department of Radiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Jiali Cai
- Department of Epidemiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Wei Tang
- Department of Neurology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Ivan Dudurych
- Department of Radiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Marcel van Tuinen
- Department of Radiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Rozemarijn Vliegenthart
- Department of Radiology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands
- Peter van Ooijen
- Department of Radiation Oncology, University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, Groningen, the Netherlands.
4
Lee SY, Lee JW, Jung JI, Han K, Chang S. Deep Learning-Based Computer-Aided Diagnosis in Coronary Artery Calcium-Scoring CT for Pulmonary Nodule Detection: A Preliminary Study. Yonsei Med J 2025; 66:240-248. [PMID: 40134084 PMCID: PMC11955396 DOI: 10.3349/ymj.2024.0050]
Abstract
PURPOSE To evaluate the feasibility and utility of deep learning-based computer-aided diagnosis (DL-CAD) for the detection of pulmonary nodules on coronary artery calcium (CAC)-scoring computed tomography (CT). MATERIALS AND METHODS This retrospective study included 273 patients (aged 63.9±13.2 years; 129 men) who underwent CAC-scoring CT. A DL-CAD system based on thin-section images was used for pulmonary nodule detection, and two independent junior readers reviewed the standard CAC-scoring CT scans with and without reference to the DL-CAD results. A reference standard was established through the consensus of two experienced radiologists. Sensitivity, positive predictive value, and F1-score were assessed on a per-nodule and per-patient basis. The patients' medical records were monitored until November 2023. RESULTS A total of 269 nodules were identified in 129 patients. With DL-CAD assistance, the readers' sensitivity improved significantly (65% vs. 80% for reader 1; 82% vs. 86% for reader 2; all p<0.001), without a notable increase in false-positives per case (0.11 vs. 0.13, p=0.078 for reader 1; 0.11 vs. 0.11, p>0.999 for reader 2). Per-patient analysis likewise showed improved sensitivity with DL-CAD assistance (73% vs. 84%, p<0.001 for reader 1; 89% vs. 91%, p=0.250 for reader 2). During follow-up, lung cancer was diagnosed in four patients (1.5%). Among them, two had lesions detected on CAC-scoring CT, both of which were successfully identified by DL-CAD. CONCLUSION DL-CAD based on thin-section images can assist less experienced readers in detecting pulmonary nodules on CAC-scoring CT scans, improving detection sensitivity without significantly increasing false-positives.
Affiliation(s)
- Seung Yun Lee
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Ji Weon Lee
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Jung Im Jung
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
- Suyon Chang
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea.
5
Patel AN, Srinivasan K. Deep learning paradigms in lung cancer diagnosis: A methodological review, open challenges, and future directions. Phys Med 2025; 131:104914. [PMID: 39938402 DOI: 10.1016/j.ejmp.2025.104914]
Abstract
Lung cancer is the leading cause of cancer-related deaths worldwide, which underscores the critical importance of early diagnosis in improving patient outcomes. Deep learning has demonstrated significant promise in lung cancer diagnosis, excelling in nodule detection, classification, and prognosis prediction. This methodological review comprehensively explores the application of deep learning models in lung cancer diagnosis and their integration across various imaging modalities. Deep learning consistently achieves state-of-the-art performance, occasionally surpassing human expert accuracy. Notably, deep neural networks excel in detecting lung nodules, distinguishing between benign and malignant nodules, and predicting patient prognosis. They have also led to the development of computer-aided diagnosis systems that enhance diagnostic accuracy for radiologists. This review follows the article-selection criteria outlined by the PRISMA framework. Despite challenges such as data quality and interpretability limitations, this review emphasizes the potential of deep learning to significantly improve the precision and efficiency of lung cancer diagnosis, motivating continued research to overcome these obstacles and fully harness neural networks' transformative impact in this field.
Affiliation(s)
- Aryan Nikul Patel
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India.
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India.
6
Harkos C, Hadjigeorgiou AG, Voutouri C, Kumar AS, Stylianopoulos T, Jain RK. Using mathematical modelling and AI to improve delivery and efficacy of therapies in cancer. Nat Rev Cancer 2025. [PMID: 39972158 DOI: 10.1038/s41568-025-00796-w]
Abstract
Mathematical modelling has proven to be a valuable tool in predicting the delivery and efficacy of molecular, antibody-based, nano and cellular therapy in solid tumours. Mathematical models based on our understanding of the biological processes at subcellular, cellular and tissue level are known as mechanistic models that, in turn, are divided into continuous and discrete models. Continuous models are further divided into lumped parameter models - for describing the temporal distribution of medicine in tumours and normal organs - and distributed parameter models - for studying the spatiotemporal distribution of therapy in tumours. Discrete models capture interactions at the cellular and subcellular levels. Collectively, these models are useful for optimizing the delivery and efficacy of molecular, nanoscale and cellular therapy in tumours by incorporating the biological characteristics of tumours, the physicochemical properties of drugs, the interactions among drugs, cancer cells and various components of the tumour microenvironment, and for enabling patient-specific predictions when combined with medical imaging. Artificial intelligence-based methods, such as machine learning, have ushered in a new era in oncology. These data-driven approaches complement mechanistic models and have immense potential for improving cancer detection, treatment and drug discovery. Here we review these diverse approaches and suggest ways to combine mechanistic and artificial intelligence-based models to further improve patient treatment outcomes.
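As a toy illustration of the lumped-parameter class of models described above (illustrative rate constants and compartments, not a model from the review), drug exchange between plasma and a tumour compartment reduces to a pair of coupled ODEs integrated over time:

```python
import numpy as np

def lumped_two_compartment(dose, k_elim, k_12, k_21, t_end=24.0, dt=0.01):
    """Forward-Euler integration of a two-compartment lumped-parameter model:
    c1 = drug amount in plasma, c2 = drug amount in the tumour compartment,
    coupled by first-order exchange (k_12, k_21) and elimination (k_elim)."""
    c1, c2 = float(dose), 0.0
    history = []
    for _ in np.arange(0.0, t_end, dt):
        dc1 = -(k_elim + k_12) * c1 + k_21 * c2
        dc2 = k_12 * c1 - k_21 * c2
        c1 += dc1 * dt
        c2 += dc2 * dt
        history.append((c1, c2))
    return np.array(history)  # temporal distribution in both compartments
```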
Affiliation(s)
- Constantinos Harkos
- Cancer Biophysics Laboratory, Department of Mechanical and Manufacturing Engineering, University of Cyprus, Nicosia, Cyprus
- Andreas G Hadjigeorgiou
- Cancer Biophysics Laboratory, Department of Mechanical and Manufacturing Engineering, University of Cyprus, Nicosia, Cyprus
- Chrysovalantis Voutouri
- Cancer Biophysics Laboratory, Department of Mechanical and Manufacturing Engineering, University of Cyprus, Nicosia, Cyprus
- Ashwin S Kumar
- Edwin L. Steele Laboratories, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Triantafyllos Stylianopoulos
- Cancer Biophysics Laboratory, Department of Mechanical and Manufacturing Engineering, University of Cyprus, Nicosia, Cyprus.
- Rakesh K Jain
- Edwin L. Steele Laboratories, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
7
Șerbănescu MS, Streba L, Demetrian AD, Gheorghe AG, Mămuleanu M, Pirici DN, Streba CT. Transfer Learning-Based Integration of Dual Imaging Modalities for Enhanced Classification Accuracy in Confocal Laser Endomicroscopy of Lung Cancer. Cancers (Basel) 2025; 17:611. [PMID: 40002206 PMCID: PMC11852907 DOI: 10.3390/cancers17040611]
Abstract
BACKGROUND/OBJECTIVES Lung cancer remains the leading cause of cancer-related mortality, underscoring the need for improved diagnostic methods. This study seeks to enhance the classification accuracy of confocal laser endomicroscopy (pCLE) images for lung cancer by applying a dual transfer learning (TL) approach that incorporates histological imaging data. METHODS Histological samples and pCLE images, collected from 40 patients undergoing curative lung cancer surgeries, were selected to create two balanced datasets (800 benign and 800 malignant images each). Three CNN architectures (AlexNet, GoogLeNet, and ResNet) were pre-trained on ImageNet and re-trained on pCLE images (confocal TL) or using dual TL (first re-trained on histological images, then on pCLE images). Model performance was evaluated using accuracy and AUC across 50 independent runs with 10-fold cross-validation. RESULTS The dual TL approach statistically significantly outperformed confocal TL, with AlexNet achieving a mean accuracy of 94.97% and an AUC of 0.98, surpassing GoogLeNet (91.43% accuracy, 0.97 AUC) and ResNet (89.87% accuracy, 0.96 AUC). All networks demonstrated statistically significant (p < 0.001) performance improvements with dual TL. Additionally, dual TL models showed reductions in both false positives and false negatives, with class activation mappings highlighting enhanced focus on diagnostically relevant regions. CONCLUSIONS Dual TL, integrating histological and pCLE imaging, yields a statistically significant improvement in lung cancer classification. This approach offers a promising framework for enhanced tissue classification, and with further development and testing it has the potential to improve patient outcomes.
Affiliation(s)
- Mircea-Sebastian Șerbănescu
- Department of Medical Informatics and Statistics, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Liliana Streba
- Department of Oncology and Palliative Care, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Alin Dragoș Demetrian
- Department of Thoracic Surgery, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Mădălin Mămuleanu
- Department of Automatic Control and Electronics, University of Craiova, 200585 Craiova, Romania
- Daniel-Nicolae Pirici
- Department of Histology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Costin-Teodor Streba
- Department of Pulmonology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
8
Mahajan A, Agarwal R, Agarwal U, Ashtekar RM, Komaravolu B, Madiraju A, Vaish R, Pawar V, Punia V, Patil VM, Noronha V, Joshi A, Menon N, Prabhash K, Chaturvedi P, Rane S, Banwar P, Gupta S. A Novel Deep Learning-Based (3D U-Net Model) Automated Pulmonary Nodule Detection Tool for CT Imaging. Curr Oncol 2025; 32:95. [PMID: 39996895 PMCID: PMC11854842 DOI: 10.3390/curroncol32020095]
Abstract
BACKGROUND Precise detection and characterization of pulmonary nodules on computed tomography (CT) is crucial for early diagnosis and management. OBJECTIVES In this study, we propose a deep learning-based algorithm to automatically detect pulmonary nodules in CT scans, and we evaluated it against the interpretation of radiologists to analyze its effectiveness. MATERIALS AND METHODS The study was conducted in collaboration with a tertiary cancer center. We used a collection of public (LUNA) and private (tertiary cancer center) datasets to train our deep learning models. Sensitivity, the number of false positives per scan, and the FROC curve along with the CPM score were used to assess performance by comparing the deep learning algorithm's predictions with the radiology reads. RESULTS We evaluated 491 scans comprising 5669 pulmonary nodules annotated by a radiologist from our hospital; our algorithm showed a sensitivity of 90% with only 0.3 false positives per scan and a CPM score of 0.85. Beyond nodule-wise performance, we also assessed the algorithm's detection of patients containing true nodules, where it achieved a sensitivity of 0.95 and a specificity of 1.0 over the 491 scans in the test cohort. CONCLUSIONS Our multi-institutionally validated deep learning-based algorithm can aid radiologists in confirming the detection of pulmonary nodules on CT scans and in identifying further abnormalities, serving as an assistive tool. This will be helpful in national lung screening programs, guiding early diagnosis and appropriate management.
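The CPM reported above is conventionally (as in the LUNA16 challenge) the mean sensitivity at seven false-positive-per-scan operating points on the FROC curve; a sketch of that computation, assuming a hypothetical candidate format of per-candidate scores and hit flags (not the authors' code):

```python
import numpy as np

FROC_FP_RATES = (0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0)

def froc_cpm(tp_flags, scores, n_true_nodules, n_scans):
    """Competition Performance Metric: sweep the candidate-score threshold,
    read off sensitivity at each standard FPs-per-scan rate, and average.
    tp_flags[i] is 1 if candidate i hits a true nodule, else 0."""
    order = np.argsort(scores)[::-1]              # highest confidence first
    flags = np.asarray(tp_flags)[order]
    tp = np.cumsum(flags)
    fp = np.cumsum(1 - flags)
    sens = tp / n_true_nodules
    fp_per_scan = fp / n_scans
    at_rate = [sens[fp_per_scan <= r].max() if np.any(fp_per_scan <= r) else 0.0
               for r in FROC_FP_RATES]
    return float(np.mean(at_rate))
```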
Affiliation(s)
- Abhishek Mahajan
- Department of Imaging, The Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool L7 8YA, UK
- Faculty of Health and Life Sciences, University of Liverpool, Liverpool L69 3BX, UK
- Rajat Agarwal
- Department of Radiodiagnosis and Imaging, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, India
- Ujjwal Agarwal
- Department of Radiodiagnosis and Imaging, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, India
- Renuka M. Ashtekar
- Department of Radiodiagnosis and Imaging, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, India
- Bharadwaj Komaravolu
- Endimension Technology Pvt Ltd., Maharashtra 400076, India
- Apparao Madiraju
- Endimension Technology Pvt Ltd., Maharashtra 400076, India
- Richa Vaish
- Department of Surgical Oncology, Tata Memorial Hospital, Mumbai 400012, India
- Vivek Pawar
- Endimension Technology Pvt Ltd., Maharashtra 400076, India
- Vivek Punia
- Endimension Technology Pvt Ltd., Maharashtra 400076, India
- Vijay Maruti Patil
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India
- Vanita Noronha
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India
- Amit Joshi
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India
- Nandini Menon
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India
- Kumar Prabhash
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India
- Pankaj Chaturvedi
- Department of Surgical Oncology, Tata Memorial Hospital, Mumbai 400012, India
- Swapnil Rane
- Department of Pathology, Tata Memorial Hospital, Mumbai 400012, India
- Priya Banwar
- Department of Radiodiagnosis and Imaging, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, India
- Sudeep Gupta
- Department of Medical Oncology, Tata Memorial Hospital, Mumbai 400012, India
9
Zhu H, Liu W, Gao Z, Zhang H. Explainable Classification of Benign-Malignant Pulmonary Nodules With Neural Networks and Information Bottleneck. IEEE Trans Neural Netw Learn Syst 2025; 36:2028-2039. [PMID: 37843998 DOI: 10.1109/tnnls.2023.3303395]
Abstract
Computerized tomography (CT) is the primary clinical technique for differentiating benign from malignant pulmonary nodules in lung cancer diagnosis. Early classification of pulmonary nodules is essential to slow the degenerative process and reduce mortality. The interactive paradigm assisted by neural networks is considered an effective means of early lung cancer screening in large populations. However, some inherent characteristics of pulmonary nodules in high-resolution CT images, e.g., diverse shapes and sparse distribution over the lung fields, have led to inaccurate results. Moreover, most existing methods with neural networks are unsatisfactory owing to a lack of transparency. To overcome these obstacles, a unified framework is proposed, comprising classification and feature-visualization stages, to learn distinctive features and provide visual results. Specifically, a bilateral scheme synchronously extracts and aggregates global-local features in the classification stage, where the global branch perceives deep-level features and the local branch focuses on refined details. Furthermore, an encoder generates features and a decoder simulates decision behavior, with the information-bottleneck viewpoint used to optimize the objective. Extensive experiments evaluate the framework on two publicly available datasets: 1) the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and 2) the Lung and Colon Histopathological Image Dataset (LC25000). For instance, the framework achieves 92.98% accuracy and presents additional visualizations on the LIDC. The results show that the framework obtains outstanding performance and effectively facilitates explainability, demonstrating a serviceable tool with the scalability to be introduced into clinical research.
10
Huang GH, Lai WC, Chen TB, Hsu CC, Chen HY, Wu YC, Yeh LR. Deep Convolutional Neural Networks on Multiclass Classification of Three-Dimensional Brain Images for Parkinson's Disease Stage Prediction. J Imaging Inform Med 2025. [PMID: 39849204 DOI: 10.1007/s10278-025-01402-z]
Abstract
Parkinson's disease (PD), a degenerative disorder of the central nervous system, is commonly diagnosed using functional medical imaging techniques such as single-photon emission computed tomography (SPECT). In this study, we utilized two SPECT data sets (n = 634 and n = 202) from different hospitals to develop a model capable of accurately predicting PD stages, a multiclass classification task. We used the entire three-dimensional (3D) brain images as input and experimented with various model architectures. Initially, we treated the 3D images as sequences of two-dimensional (2D) slices and fed them sequentially into 2D convolutional neural network (CNN) models pretrained on ImageNet, averaging the outputs to obtain the final predicted stage. We also applied 3D CNN models pretrained on Kinetics-400. Additionally, we incorporated an attention mechanism to account for the varying importance of different slices in the prediction process. To further enhance model efficacy and robustness, we simultaneously trained the two data sets using weight sharing, a technique known as cotraining. Our results demonstrated that 2D models pretrained on ImageNet outperformed 3D models pretrained on Kinetics-400, and models utilizing the attention mechanism outperformed both 2D and 3D models. The cotraining technique proved effective in improving model performance when the cotraining data sets were sufficiently large.
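The slice-wise strategy described above (per-slice 2D CNN outputs averaged, optionally reweighted by an attention mechanism) can be sketched at the aggregation step; this is an illustrative helper with made-up logits, not the authors' code:

```python
import numpy as np

def aggregate_slice_logits(slice_logits, attn=None):
    """Fuse per-slice class logits of shape (n_slices, n_classes) into one
    volume-level stage prediction: plain mean over slices, or an
    attention-weighted mean when per-slice scores are supplied."""
    slice_logits = np.asarray(slice_logits, dtype=float)
    if attn is None:
        fused = slice_logits.mean(axis=0)
    else:
        attn = np.asarray(attn, dtype=float)
        w = np.exp(attn - attn.max())
        w /= w.sum()                        # softmax over slices
        fused = (w[:, None] * slice_logits).sum(axis=0)
    return int(np.argmax(fused))            # predicted stage index
```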
Affiliation(s)
- Guan-Hua Huang
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan.
- Wan-Chen Lai
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Tai-Been Chen
- Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, Tokyo, Japan
- Infinity Co. Ltd, Taoyuan, Taiwan
- Der Lih Fuh Co. Ltd, Taoyuan, Taiwan
- Chien-Chin Hsu
- Department of Nuclear Medicine, Kaohsiung Chang Gung Memorial Hospital, Kaohsiung, Taiwan
- Huei-Yung Chen
- Department of Nuclear Medicine, E-Da Hospital, I-Shou University, Kaohsiung, Taiwan
- Yi-Chen Wu
- Department of Nuclear Medicine, E-Da Hospital, I-Shou University, Kaohsiung, Taiwan
- Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung, Taiwan
- Li-Ren Yeh
- Department of Anesthesiology, E-Da Cancer Hospital, I-Shou University, Kaohsiung, Taiwan
11
Miao S, Dong Q, Liu L, Xuan Q, An Y, Qi H, Wang Q, Liu Z, Wang R. Dual biomarkers CT-based deep learning model incorporating intrathoracic fat for discriminating benign and malignant pulmonary nodules in multi-center cohorts. Phys Med 2025; 129:104877. [PMID: 39689571] [DOI: 10.1016/j.ejmp.2024.104877]
Abstract
BACKGROUND Recent studies in the field of lung cancer have emphasized the important role of body composition, particularly fatty tissue, as a prognostic factor. However, fatty tissue has rarely been combined with nodule imaging to discriminate benign from malignant pulmonary nodules. PURPOSE This study proposes a deep learning (DL) approach to explore the potential predictive value of dual imaging markers, including intrathoracic fat (ITF), in patients with pulmonary nodules. METHODS We enrolled 1321 patients with pulmonary nodules from three centers. DL-based image feature extraction was performed on computed tomography (CT) images of the pulmonary nodules and ITF, and the multimodal information was used to discriminate benign from malignant nodules. RESULTS The areas under the receiver operating characteristic curve (AUC) of the model combining ITF with pulmonary nodules were 0.910 (95% confidence interval [CI]: 0.870-0.950, P = 0.016), 0.922 (95% CI: 0.883-0.960, P = 0.037) and 0.899 (95% CI: 0.849-0.949, P = 0.033) in the internal test cohort, external test cohort 1 and external test cohort 2, respectively, significantly better than the nodule-only model. The intrathoracic fat index (ITFI) emerged as an independent predictor of malignancy, with each additional unit corresponding to a 9.4% decrease in the risk of malignancy. CONCLUSION This study demonstrates the potential auxiliary predictive value of ITF as a noninvasive imaging biomarker in assessing pulmonary nodules.
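For context on the reported 9.4% decrease in malignancy risk per additional ITFI unit: if that figure reflects a per-unit odds ratio from a logistic model (an assumption, since the abstract says "risk"), it maps to a regression coefficient as sketched below. The `beta` value is back-derived from the abstract's number, not taken from the fitted model:

```python
import math

def odds_ratio_from_coef(beta):
    """Per-unit odds ratio implied by a logistic-regression coefficient."""
    return math.exp(beta)

def percent_change_per_unit(odds_ratio):
    """Percent change in the odds of malignancy per +1 unit of the predictor."""
    return (odds_ratio - 1.0) * 100.0

# hypothetical reconstruction: the coefficient that would yield the
# ~9.4% per-unit decrease reported for ITFI
beta = math.log(1.0 - 0.094)
or_itfi = odds_ratio_from_coef(beta)        # about 0.906
change = percent_change_per_unit(or_itfi)   # about -9.4
```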
Affiliation(s)
- Shidi Miao
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Qi Dong
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Le Liu
- Department of Internal Medicine, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, China
- Qifan Xuan
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Yunfei An
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Hongzhuo Qi
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Qiujun Wang
- Department of General Practice, the Second Affiliated Hospital, Harbin Medical University, Harbin, China
- Zengyao Liu
- Department of Interventional Medicine, the First Affiliated Hospital, Harbin Medical University, Harbin, China
- Ruitao Wang
- Department of Internal Medicine, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, China.
12
Wang Y, Zhang W, Liu X, Tian L, Li W, He P, Huang S, He F, Pan X. Artificial intelligence in precision medicine for lung cancer: A bibliometric analysis. Digit Health 2025; 11:20552076241300229. [PMID: 39758259] [PMCID: PMC11696962] [DOI: 10.1177/20552076241300229]
Abstract
Background The growing body of evidence has been stimulating the application of artificial intelligence (AI) in precision medicine research for lung cancer. This trend necessitates a comprehensive overview of the expanding literature to help researchers understand the field. Method The bibliometric data for this analysis were extracted from the Web of Science Core Collection database; CiteSpace, VOSviewer, and an online website were used for the analysis. Results After filtering, the search yielded 4062 manuscripts, 92.27% of which were published from 2014 onwards. The main contributing countries were China, the United States, India, Japan, and Korea. These publications appeared chiefly in the following scientific disciplines: Radiology Nuclear Medicine, Medical Imaging, Oncology, and Computer Science. Notably, Li Weimin and Aerts Hugo J. W. L. stand out as leading authorities in this domain. In the keyword co-occurrence and co-citation cluster analysis, the knowledge base was divided into four readily interpretable clusters: screening, diagnosis, treatment, and prognosis. Conclusion This bibliometric study reveals that deep learning frameworks and AI-based radiomics are receiving attention. High-quality and standardized data have the potential to revolutionize lung cancer screening and diagnosis in the era of precision medicine. However, before current research can be applied effectively in clinical practice, high-quality clinical datasets, the development of new and combined AI models, and their consistent assessment remain essential for advancing research on AI applications in lung cancer.
Affiliation(s)
- Yuchai Wang
- Department of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
- Weilong Zhang
- Department of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
- Xiang Liu
- Department of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
- Li Tian
- Department of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
- Wenjiao Li
- Department of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
- Peng He
- Department of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
- Sheng Huang
- Department of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
- Jiuzhitang Co., Ltd, Changsha, Hunan Province, China
- Fuyuan He
- School of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
- Xue Pan
- School of Pharmacy, Hunan University of Chinese Medicine, Changsha, Hunan Province, China
13
Xing P, Zhang L, Wang T, Wang L, Xing W, Wang W. A deep learning algorithm that aids visualization of femoral neck fractures and improves physician training. Injury 2024; 55:111997. [PMID: 39504732] [DOI: 10.1016/j.injury.2024.111997]
Abstract
PURPOSE Missed fractures are the most common radiologic error in clinical practice, and erroneous classification can lead to inappropriate treatment and unfavorable prognosis. Here, we developed a fully automated deep learning model to detect and classify femoral neck fractures using plain radiographs, and evaluated its utility for diagnostic assistance and physician training. METHODS 1527 plain pelvic and hip radiographs obtained between April 2014 and July 2023 at our hospital were selected for model training and evaluation. Faster R-CNN was used to locate the femoral neck. DenseNet-121 was used for Garden classification of the femoral neck fracture, while an additional segmentation method was used to visualize the probable fracture area. The model was assessed by the area under the receiver operating characteristic curve (AUC). The accuracy, sensitivity, and specificity of clinicians' fracture detection in the diagnostic assistance and physician training experiments were determined. RESULTS The accuracy of the model for fracture detection was 94.1%. The model achieved AUCs of 0.99 for no femoral neck fracture, 0.94 for Garden I/II fractures, and 0.99 for Garden III/IV fractures. In the diagnostic assistance study, the emergency physicians had an average accuracy of 86.33% unaided and 92.03% aided, sensitivity of 85.94% unaided and 91.78% aided, and specificity of 87.88% unaided and 93.13% aided in detecting fractures. In the physician training study, the accuracy, sensitivity, and specificity of the trainees for fracture classification were 81.83%, 77.28%, and 84.85%, respectively, before training, compared with 90.65%, 88.31%, and 92.21%, respectively, after training. CONCLUSIONS The model represents a valuable tool for physicians to better visualize fractures and improve training outcomes, indicating that deep learning algorithms are a promising approach to improving clinical practice and medical education.
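The accuracy, sensitivity, and specificity figures quoted for the readers follow from standard confusion-matrix arithmetic; a brief sketch with invented counts (not the study's data):

```python
def reader_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fractures correctly detected
    specificity = tn / (tn + fp)   # non-fractures correctly ruled out
    return accuracy, sensitivity, specificity

# illustrative counts for one hypothetical reader, not taken from the study
acc, sen, spe = reader_metrics(tp=86, tn=88, fp=12, fn=14)
```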
Affiliation(s)
- Pengyi Xing
- Department of Radiology, The 989th Hospital of the PLA Joint Logistics Support Force, Luoyang, Henan Province, China
- Li Zhang
- Department of Gastroenterology and Endocrinology, The 989th Hospital of the PLA Joint Logistics Support Force, Luoyang, Henan Province, China
- Tiegong Wang
- Department of Orthopedics Trauma, Shanghai Changhai Hospital, Naval Military Medical University, Shanghai, China
- Lipeng Wang
- Department of Orthopedic Surgery, Zhongshan Hospital, Fudan University, Shanghai, China
- Wanting Xing
- Department of Radiology, The 989th Hospital of the PLA Joint Logistics Support Force, Luoyang, Henan Province, China
- Wei Wang
- Department of Radiology, The 989th Hospital of the PLA Joint Logistics Support Force, Luoyang, Henan Province, China; Department of Radiology, General Hospital of Central Theater Command, Wuhan, Hubei Province, China.
14
Tie Y, Wang Y, Zhang D, Zhang Z, Liu F, Qi L. Full dimensional dynamic 3D convolution and point cloud in pulmonary nodule detection. J Adv Res 2024:S2090-1232(24)00552-6. [PMID: 39617261] [DOI: 10.1016/j.jare.2024.11.033]
Abstract
Lung cancer is a leading cause of death worldwide, making early and accurate diagnosis essential for improving patient outcomes. Recently, deep learning (DL) has proven to be a powerful tool, significantly enhancing the accuracy of computer-aided pulmonary nodule detection (PND). In this study, we introduce a novel approach called the Omni-dimension Dynamic Residual 3D Net (ODR3DNet) for PND, which utilizes full-dimensional dynamic 3D convolution, along with a specialized machine learning algorithm for detecting lung nodules in 3D point clouds. The primary goal of ODR3DNet is to overcome the limitations of conventional 3D Convolutional Neural Networks (CNNs), which often struggle with adaptability and have limited feature extraction capabilities. Our ODR3DNet algorithm achieves a high CPM (Competition Performance Metric) score of 0.885, outperforming existing mainstream PND algorithms and demonstrating its effectiveness. Through detailed ablation experiments, we confirm that the OD3D module plays a crucial role in this performance boost and identify the optimal configuration for the algorithm. Moreover, we developed a dedicated machine learning detection algorithm tailored for lung 3D point cloud data. We outline the key steps for reconstructing the lungs in 3D and establish a comprehensive process for building a lung point cloud dataset, including data preprocessing, 3D point cloud conversion, and 3D volumetric box annotation. Experimental results validate the feasibility and effectiveness of our proposed approach.
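The CPM score cited above is, under the common LUNA16 definition, the mean sensitivity at seven false-positive rates per scan; a sketch assuming that definition, with a toy FROC curve rather than the paper's results:

```python
import numpy as np

# the seven false-positive rates per scan at which sensitivity is averaged
# (the LUNA16 convention commonly behind reported CPM scores)
FP_RATES = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]

def competition_performance_metric(fp_per_scan, sensitivity):
    """CPM: mean sensitivity at the seven operating points, linearly
    interpolated from an FROC curve given as ascending fp_per_scan."""
    fp = np.asarray(fp_per_scan, dtype=float)
    sens = np.asarray(sensitivity, dtype=float)
    return float(np.mean(np.interp(FP_RATES, fp, sens)))

# toy FROC curve (illustrative, not ODR3DNet's results)
froc_fp = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
froc_sens = [0.80, 0.84, 0.87, 0.89, 0.91, 0.93, 0.95]
cpm = competition_performance_metric(froc_fp, froc_sens)
```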
Affiliation(s)
- Yun Tie
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Ying Wang
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China.
- Dalong Zhang
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Zepeng Zhang
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Fenghui Liu
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou University, Zhengzhou, China
- Lin Qi
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
15
van den Berk IAH, Jacobs C, Kanglie MMNP, Mets OM, Snoeren M, Montauban van Swijndregt AD, Taal EM, van Engelen TSR, Prins JM, Bipat S, Bossuyt PMM, Stoker J. An AI deep learning algorithm for detecting pulmonary nodules on ultra-low-dose CT in an emergency setting: a reader study. Eur Radiol Exp 2024; 8:132. [PMID: 39565453] [PMCID: PMC11579269] [DOI: 10.1186/s41747-024-00518-1]
Abstract
BACKGROUND To retrospectively assess the added value of an artificial intelligence (AI) algorithm for detecting pulmonary nodules on ultra-low-dose computed tomography (ULDCT) performed at the emergency department (ED). METHODS In the OPTIMACT trial, 870 patients with suspected nontraumatic pulmonary disease underwent ULDCT. The ED radiologist prospectively read the examinations and reported incidental pulmonary nodules requiring follow-up. All ULDCTs were processed post hoc using an AI deep learning software marking pulmonary nodules ≥ 6 mm. Three chest radiologists independently reviewed the subset of ULDCTs with either prospectively detected incidental nodules (35/870 patients) or AI marks (458/870 patients); findings scored as nodules by at least two chest radiologists were used as the true positive reference standard. Proportions of true and false positives were compared. RESULTS During the OPTIMACT study, 59 incidental pulmonary nodules requiring follow-up were prospectively reported. In the current analysis, 18/59 (30.5%) nodules were scored as true positive, while 104/1,862 (5.6%) AI marks in 84/870 patients (9.7%) were scored as true positive. Overall, 5.8 times more (104 versus 18) true positive pulmonary nodules were detected with the use of AI, at the expense of 42.9 times more (1,758 versus 41) false positives. There was a median of 1 (IQR: 0-2) AI mark per ULDCT. CONCLUSION The use of AI on ULDCT in patients suspected of pulmonary disease in an emergency setting results in the detection of many more incidental pulmonary nodules requiring follow-up (5.8×), with a high trade-off in terms of false positives (42.9×). RELEVANCE STATEMENT AI aids in the detection of incidental pulmonary nodules requiring follow-up at chest CT, supporting early lung cancer detection, but it also increases false positive results, which are mainly clustered in patients with major abnormalities.
TRIAL REGISTRATION The OPTIMACT trial was registered on 6 December 2016 in the National Trial Register (number NTR6163) (onderzoekmetmensen.nl). KEY POINTS An AI deep learning algorithm was tested on 870 ULDCT examinations acquired in the ED. AI detected 5.8 times more pulmonary nodules requiring follow-up (true positives). AI also produced 42.9 times more false positive results, clustered in patients with major abnormalities. AI in the ED setting may aid early lung cancer detection, with a high trade-off in terms of false positives.
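The 5.8× and 42.9× figures above are simple ratios of the counts reported in the abstract; verifying the arithmetic:

```python
# true and false positive counts reported in the abstract
tp_ai, tp_radiologist = 104, 18
fp_ai, fp_radiologist = 1758, 41
total_ai_marks = 1862  # 104 true + 1758 false AI marks

tp_gain = tp_ai / tp_radiologist   # about 5.8x more true positives
fp_cost = fp_ai / fp_radiologist   # about 42.9x more false positives
ppv_ai = tp_ai / total_ai_marks    # about 5.6% of AI marks were true nodules
```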
Affiliation(s)
- Inge A H van den Berk
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands.
- Colin Jacobs
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Maadrika M N P Kanglie
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Department of Radiology, Spaarne Gasthuis, Haarlem, The Netherlands
- Onno M Mets
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Miranda Snoeren
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Elisabeth M Taal
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Tjitske S R van Engelen
- Division of Infectious Diseases, Department of Internal Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Jan M Prins
- Division of Infectious Diseases, Department of Internal Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Shandra Bipat
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Patrick M M Bossuyt
- Department of Epidemiology & Data Science, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Public Health, Methodology, Amsterdam, The Netherlands
- Jaap Stoker
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
16
Palani M, Rajagopal S, Chintanpalli AK. A systematic review on feature extraction methods and deep learning models for detection of cancerous lung nodules at an early stage - the recent trends and challenges. Biomed Phys Eng Express 2024; 11:012001. [PMID: 39530659] [DOI: 10.1088/2057-1976/ad9154]
Abstract
Lung cancer is one of the most common life-threatening cancers worldwide, affecting both men and women. The appearance of nodules in the scan image is an early indication of the development of cancer cells in the lung. Low-dose computed tomography screening is used for the early detection of cancerous nodules. Therefore, as more computed tomography (CT) lung profiles become available, an automated lung nodule analysis system can be built using image processing techniques and neural network algorithms. A CT image of the lung contains many elements, such as blood vessels, ribs, the sternum, bronchi, and nodules. These nodules can be either benign or malignant, where the latter leads to lung cancer. Detecting them at an earlier stage can increase life expectancy by up to 5 to 10 years. To analyse only the nodules from the profile, the relevant features are extracted using image processing techniques. Based on this review, textural features are among the most promising for medical image analysis and for solving computer vision problems. Uncovering such hidden features allows deep learning (DL) algorithms to perform better, especially in medical imaging, where accuracy has improved as a result. Earlier detection of cancerous lung nodules is possible through the combination of multi-feature extraction and classification techniques applied to image data, which can be a breakthrough in the deep learning area by providing the appropriate features. One of the greatest challenges is that incorrect identification of malignant nodules results in a higher false positive rate during prediction; suitable features make the system more precise in prognosis. In this paper, an overview of lung cancer along with the publicly available datasets is provided for research purposes.
The review mainly focuses on recent research that combines feature extraction and deep learning algorithms to reduce the false positive rate in the automated detection of lung nodules. Its primary objective is to highlight the importance of textural features when combined with different deep learning models, giving insights into their advantages, disadvantages, and limitations, as well as possible research gaps. The review compares recent studies of deep learning models with and without feature extraction and concludes that DL models that include feature extraction outperform those that do not.
Affiliation(s)
- Mathumetha Palani
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
- Sivakumar Rajagopal
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
- Anantha Krishna Chintanpalli
- Department of Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
17
Zhao B, Dercle L, Yang H, Riely GJ, Kris MG, Schwartz LH. Annotated test-retest dataset of lung cancer CT scan images reconstructed at multiple imaging parameters. Sci Data 2024; 11:1259. [PMID: 39567508] [PMCID: PMC11579286] [DOI: 10.1038/s41597-024-04085-3]
Abstract
Quantitative imaging biomarkers (QIB) are increasingly used in clinical research to advance precision medicine approaches in oncology. Computed tomography (CT) is a modality of choice for cancer diagnosis, prognosis, and response assessment due to its reliability and global accessibility. Here, we contribute to the cancer imaging community through The Cancer Imaging Archive (TCIA) by providing investigator-initiated, same-day repeat CT scan images of 32 non-small cell lung cancer (NSCLC) patients, along with radiologist-annotated lesion contours as a reference standard. Each scan was reconstructed into 6 image settings using various combinations of three slice thicknesses (1.25 mm, 2.5 mm, 5 mm) and two reconstruction kernels (lung, standard; GE CT equipment), which spans a wide range of CT imaging reconstruction parameters commonly used in lung cancer clinical practice and clinical trials. This holds considerable value for advancing the development of robust Radiomics, Artificial Intelligence (AI) and machine learning (ML) methods.
Affiliation(s)
- Binsheng Zhao
- Memorial Sloan-Kettering Cancer Center, New York, NY, 10021, USA.
- Laurent Dercle
- Memorial Sloan-Kettering Cancer Center, New York, NY, 10021, USA
- Department of Radiology, Columbia University New York, New York, NY, 10032, USA
- Hao Yang
- Memorial Sloan-Kettering Cancer Center, New York, NY, 10021, USA
- Gregory J Riely
- Memorial Sloan-Kettering Cancer Center, New York, NY, 10021, USA
- Mark G Kris
- Memorial Sloan-Kettering Cancer Center, New York, NY, 10021, USA
18
Esha JF, Islam T, Pranto MAM, Borno AS, Faruqui N, Yousuf MA, Azad AKM, Al-Moisheer AS, Alotaibi N, Alyami SA, Moni MA. Multi-View Soft Attention-Based Model for the Classification of Lung Cancer-Associated Disabilities. Diagnostics (Basel) 2024; 14:2282. [PMID: 39451604] [PMCID: PMC11506595] [DOI: 10.3390/diagnostics14202282]
Abstract
Background: The detection of lung nodules at an early stage may significantly enhance the survival rate and prevent progression to severe disability caused by advanced lung cancer, but it often requires manual, laborious effort from radiologists, with limited success. To alleviate this, we propose a Multi-View Soft Attention-Based Convolutional Neural Network (MVSA-CNN) model for multi-class lung nodule classification into three stages (benign, primary, and metastatic). Methods: Initially, patches from each nodule are extracted into three different views, each fed to our model to classify the malignancy. The Lung Image Database Consortium Image Database Resource Initiative (LIDC-IDRI) dataset is used for training and testing, with 10-fold cross-validation used to assess the model's performance. Results: The experimental results suggest that MVSA-CNN outperforms other competing methods with 97.10% accuracy, 96.31% sensitivity, and 97.45% specificity. Conclusions: We hope the highly predictive performance of MVSA-CNN in lung nodule classification from lung Computed Tomography (CT) scans may facilitate more reliable diagnosis, thereby improving outcomes for individuals with disabilities who may experience disparities in healthcare access and quality.
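The 10-fold cross-validation used for evaluation can be sketched as a generic fold generator (not the authors' code; sample counts below are illustrative):

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k near-equal folds;
    each fold serves once as the test set, the rest as training."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# e.g. 100 nodules -> 10 disjoint train/test splits of 90/10
splits = list(k_fold_indices(100, k=10))
```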
Affiliation(s)
- Jannatul Ferdous Esha
- Department of Information and Communication Technology, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka 1216, Bangladesh
- Tahmidul Islam
- Department of Information and Communication Technology, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka 1216, Bangladesh
- Md. Appel Mahmud Pranto
- Department of Information and Communication Technology, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka 1216, Bangladesh
- Abrar Siam Borno
- Department of Information and Communication Technology, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka 1216, Bangladesh
- Nuruzzaman Faruqui
- Department of Software Engineering, Daffodil International University, Daffodil Smart City, Birulia 1216, Bangladesh
- Mohammad Abu Yousuf
- Institute of Information Technology, Jahangirnagar University, Dhaka 1342, Bangladesh
- AKM Azad
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Asmaa Soliman Al-Moisheer
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Naif Alotaibi
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Salem A. Alyami
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Mohammad Ali Moni
- AI & Digital Health Technology, AI and Cyber Futures Institute, Charles Sturt University, Bathurst, NSW 2795, Australia
- AI & Digital Health Technology, Rural Health Research Institute, Charles Sturt University, Orange, NSW 2800, Australia
19
Masci GM, Chassagnon G, Alifano M, Tlemsani C, Boudou-Rouquette P, La Torre G, Calinghen A, Canniff E, Fournel L, Revel MP. Performance of AI for preoperative CT assessment of lung metastases: Retrospective analysis of 167 patients. Eur J Radiol 2024; 179:111667. [PMID: 39121746] [DOI: 10.1016/j.ejrad.2024.111667]
Abstract
OBJECTIVES To evaluate the performance of artificial intelligence (AI) in the preoperative detection of lung metastases on CT. MATERIALS AND METHODS Patients who underwent lung metastasectomy in our institution between 2016 and 2020 were enrolled, their preoperative CT reports having been produced before an AI solution (Veye Lung Nodules, version 3.9.2, Aidence) became available as a second reader in our department. All CT scans were retrospectively processed by the AI. The sensitivities of unassisted radiologists (original CT radiology reports), AI reports alone, and both combined were compared. Ground truth was established by a consensus reading of two radiologists, who analyzed whether the nodules mentioned in the pathology report were retrospectively visible on CT. Multivariate analysis was performed to identify nodule characteristics associated with detectability. RESULTS A total of 167 patients (men: 62.9%; median age, 59 years [47-68]) with 475 resected nodules were included. AI detected an average of 4 nodules (range 0-17) per CT, of which 97% were true nodules. The combination of radiologist plus AI (92.4%) had significantly higher sensitivity than unassisted radiologists (80.4%) (p < 0.001). In 27/57 (47.4%) patients who had multiple preoperative CT scans, AI detected lung nodules earlier than the radiologist. Vascular contact was associated with non-detection by radiologists (OR: 0.32 [0.19-0.54], p < 0.001), whilst the presence of cavitation (OR: 0.26 [0.13-0.54], p < 0.001) or pleural contact (OR: 0.10 [0.04-0.22], p < 0.001) was associated with non-detection by AI. CONCLUSION AI significantly increases the sensitivity of preoperative detection of lung metastases and enables earlier detection, with a significant potential benefit for patient management.
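The "radiologist plus AI" sensitivity above counts a nodule as detected if either reader flags it; a toy sketch of that union logic (the nodule ids and hit sets are invented, not the study's data):

```python
def combined_sensitivity(radiologist_hits, ai_hits, n_nodules):
    """Sensitivity of the radiologist alone and of radiologist + AI,
    where the combined reading counts a nodule found by either."""
    combined = radiologist_hits | ai_hits
    return len(radiologist_hits) / n_nodules, len(combined) / n_nodules

# toy ground truth of 10 resected nodules (ids 0-9); hypothetical reads
rad = {0, 1, 2, 3, 4, 5, 6, 7}   # radiologist finds 8/10
ai = {2, 3, 4, 5, 6, 7, 8, 9}    # AI finds 8/10, incl. the two missed
sens_rad, sens_combined = combined_sensitivity(rad, ai, 10)
```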
Affiliation(s)
- Giorgio Maria Masci
- Radiology Department, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France; Department of Radiological, Oncological and Pathological Sciences, Policlinico Umberto I, Università degli Studi di Roma La Sapienza, Viale del Policlinico 155, 00161 Rome, Italy
| | - Guillaume Chassagnon
- Radiology Department, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France; Université de Paris Cité, 85 boulevard Saint-Germain, 75006 Paris, France
| | - Marco Alifano
- Université de Paris Cité, 85 boulevard Saint-Germain, 75006 Paris, France; Department of Thoracic Surgery, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France
| | - Camille Tlemsani
- Université de Paris Cité, 85 boulevard Saint-Germain, 75006 Paris, France; Department of Medical Oncology, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France
| | - Pascaline Boudou-Rouquette
- Department of Medical Oncology, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France
| | - Giuseppe La Torre
- Department of Public Health and Infectious Diseases, Università degli Studi di Roma La Sapienza, Piazzale Aldo Moro 5, 00185 Rome, Italy
| | - Arvin Calinghen
- Radiology Department, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France
| | - Emma Canniff
- Radiology Department, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France
| | - Ludovic Fournel
- Université de Paris Cité, 85 boulevard Saint-Germain, 75006 Paris, France; Department of Thoracic Surgery, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France
| | - Marie-Pierre Revel
- Radiology Department, Hôpital Cochin, AP-HP, 27 rue du Faubourg Saint-Jacques, 75014 Paris, France; Université de Paris Cité, 85 boulevard Saint-Germain, 75006 Paris, France.
20
Yu T, Zhao X, Leader JK, Wang J, Meng X, Herman J, Wilson D, Pu J. Vascular Biomarkers for Pulmonary Nodule Malignancy: Arteries vs. Veins. Cancers (Basel) 2024; 16:3274. [PMID: 39409894 PMCID: PMC11476001 DOI: 10.3390/cancers16193274] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2024] [Revised: 09/22/2024] [Accepted: 09/24/2024] [Indexed: 10/20/2024] Open
Abstract
OBJECTIVE This study aims to investigate the association between the arteries and veins surrounding a pulmonary nodule and its malignancy. METHODS A dataset of 146 subjects from an LDCT lung cancer screening program was used in this study. AI algorithms were used to automatically segment and quantify nodules and their surrounding macro-vasculature. The macro-vasculature was differentiated into arteries and veins. Vessel branch count, volume, and tortuosity were quantified for arteries and veins at different distances from the nodule surface. Univariate and multivariate logistic regression (LR) analyses were performed, with a special emphasis on nodules with diameters ranging from 8 to 20 mm. ROC-AUC was used to assess performance based on k-fold cross-validation. Average feature importance was evaluated in several machine learning models. RESULTS The LR models using macro-vasculature features achieved an AUC of 0.78 (95% CI: 0.71-0.86) for all nodules and an AUC of 0.67 (95% CI: 0.54-0.80) for nodules between 8 and 20 mm. Models including macro-vasculature features, demographics, and CT-derived nodule features yielded an AUC of 0.91 (95% CI: 0.87-0.96) for all nodules and an AUC of 0.82 (95% CI: 0.71-0.92) for nodules between 8 and 20 mm. In terms of feature importance, arteries within 5.0 mm of the nodule surface were the highest-ranked among macro-vasculature features and retained their significance even with the inclusion of demographics and CT-derived nodule features. CONCLUSIONS Arteries within 5.0 mm of the nodule surface emerged as a potential biomarker for effectively discriminating between malignant and benign nodules.
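ROC-AUC, the discrimination metric used above, equals the normalized Mann-Whitney U statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal self-contained sketch (the scores and labels are made up, not the study's data):

```python
def roc_auc(scores, labels):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen negative
    (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical malignancy scores from a vasculature-feature model
# (1 = malignant nodule, 0 = benign).
scores = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]
labels = [1,   1,   1,    0,   0,   0]
print(roc_auc(scores, labels))  # 8/9 ≈ 0.889
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used, but the rank-based definition is what k-fold AUC estimates average over.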
Affiliation(s)
- Tong Yu
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA;
| | - Xiaoyan Zhao
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA; (X.Z.); (J.K.L.); (J.W.); (X.M.)
| | - Joseph K. Leader
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA; (X.Z.); (J.K.L.); (J.W.); (X.M.)
| | - Jing Wang
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA; (X.Z.); (J.K.L.); (J.W.); (X.M.)
| | - Xin Meng
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA; (X.Z.); (J.K.L.); (J.W.); (X.M.)
| | - James Herman
- Department of Medicine, University of Pittsburgh, Pittsburgh, PA 15213, USA; (J.H.); (D.W.)
| | - David Wilson
- Department of Medicine, University of Pittsburgh, Pittsburgh, PA 15213, USA; (J.H.); (D.W.)
| | - Jiantao Pu
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA;
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA; (X.Z.); (J.K.L.); (J.W.); (X.M.)
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA
21
Wang W, Yin S, Ye F, Chen Y, Zhu L, Yu H. GC-WIR : 3D global coordinate attention wide inverted ResNet network for pulmonary nodules classification. BMC Pulm Med 2024; 24:465. [PMID: 39304884 DOI: 10.1186/s12890-024-03272-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2024] [Accepted: 09/04/2024] [Indexed: 09/22/2024] Open
Abstract
PURPOSE Currently, deep learning methods for the classification of benign and malignant lung nodules encounter challenges encompassing intricate and unstable algorithmic models, limited data adaptability, and an abundance of model parameters. To tackle these concerns, this investigation introduces a novel approach: the 3D Global Coordinate Attention Wide Inverted ResNet Network (GC-WIR). This network aims to achieve precise classification of benign and malignant pulmonary nodules, leveraging its merits of heightened efficiency, parsimonious parameterization, and robust stability. METHODS Within this framework, a 3D Global Coordinate Attention Mechanism (3D GCA) is designed to compute the features of the input images by converting 3D channel information and multi-dimensional positional cues. By encompassing both global channel details and spatial positional cues, this approach maintains a judicious balance between flexibility and computational efficiency. Furthermore, the GC-WIR architecture incorporates a 3D Wide Inverted Residual Network (3D WIRN), which augments feature computation by expanding input channels. This augmentation mitigates information loss during feature extraction, expedites model convergence, and concurrently enhances performance. The utilization of the inverted residual structure imbues the model with heightened stability. RESULTS Empirical validation of the GC-WIR method is performed on the LUNA16 dataset, yielding predictions that surpass those generated by previous models. This novel approach achieves an impressive accuracy rate of 94.32%, coupled with a specificity of 93.69%. Notably, the model's parameter count remains modest at 5.76M while maintaining optimal classification accuracy. CONCLUSION Furthermore, experimental results unequivocally demonstrate that, even under stringent computational constraints, GC-WIR outperforms alternative deep learning methodologies, establishing a new benchmark in performance.
Affiliation(s)
- Wenju Wang
- University of Shanghai for Science and Technology, Jungong 516 Rd, Shanghai, 200093, China
| | - Shuya Yin
- University of Shanghai for Science and Technology, Jungong 516 Rd, Shanghai, 200093, China.
| | - Fang Ye
- University of Shanghai for Science and Technology, Jungong 516 Rd, Shanghai, 200093, China
| | - Yinan Chen
- Department of Radiology, Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Huaihai West Road NO.241, Shanghai, 200030, China
| | - Lin Zhu
- Department of Radiology, Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Huaihai West Road NO.241, Shanghai, 200030, China
| | - Hong Yu
- Department of Radiology, Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Huaihai West Road NO.241, Shanghai, 200030, China
22
Ma J, Yoon JH, Lu L, Yang H, Guo P, Yang D, Li J, Shen J, Schwartz LH, Zhao B. A quantitative analysis of the improvement provided by comprehensive annotation on CT lesion detection using deep learning. J Appl Clin Med Phys 2024; 25:e14434. [PMID: 39078867 PMCID: PMC11492393 DOI: 10.1002/acm2.14434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Revised: 04/25/2024] [Accepted: 05/20/2024] [Indexed: 10/22/2024] Open
Abstract
BACKGROUND Data collected from hospitals are usually partially annotated by radiologists due to time constraints. Developing and evaluating deep learning models on these data may result in over- or underestimation of performance. PURPOSE We aimed to quantitatively investigate how the percentage of annotated lesions in CT images influences the performance of universal lesion detection (ULD) algorithms. METHODS We trained a multi-view feature pyramid network with position-aware attention (MVP-Net) to perform ULD. Three versions of the DeepLesion dataset were created for training MVP-Net. The Original DeepLesion Dataset (OriginalDL) is the publicly available, widely studied DeepLesion dataset that includes 32 735 lesions in 4427 patients, which were partially labeled during routine clinical practice. The Enriched DeepLesion Dataset (EnrichedDL) is an enhanced dataset in which 4145 patients with 34 317 lesions were fully labeled at one or more time points. UnionDL is the union of OriginalDL and EnrichedDL, with 54 510 labeled lesions in 4427 patients. Each dataset was used separately to train MVP-Net, resulting in the following models: OriginalCNN (replicating the original result), EnrichedCNN (testing the effect of increased annotation), and UnionCNN (featuring the greatest number of annotations). RESULTS Although the reported mean sensitivity of OriginalCNN was 84.3% using the OriginalDL testing set, the performance fell sharply when tested on the EnrichedDL testing set, yielding mean sensitivities of 56.1%, 66.0%, and 67.8% for OriginalCNN, EnrichedCNN, and UnionCNN, respectively. We also found that increasing the percentage of annotated lesions in the training set increased sensitivity, but the margin of improvement gradually diminished according to a power law.
CONCLUSIONS We expanded and improved the existing DeepLesion dataset by annotating an additional 21 775 lesions, and we demonstrated that using fully labeled CT images avoided overestimation of MVP-Net's performance while increasing the algorithm's sensitivity, which may substantially influence future CT lesion detection research. The annotated lesions are available at https://github.com/ComputationalImageAnalysisLab/DeepLesionData.
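The power-law relationship between annotation percentage and sensitivity noted above can be sketched as a least-squares fit on log-log axes, since y = a·x^b becomes the line log y = log a + b·log x (the numbers below are synthetic, not the paper's measurements):

```python
import math

# Hypothetical (annotation fraction, sensitivity) pairs roughly following
# a power law y = a * x**b; not the paper's measurements.
x = [0.25, 0.50, 0.75, 1.00]
y = [0.50, 0.58, 0.63, 0.67]

# Least-squares line in log-log space: log y = log a + b * log x.
lx = [math.log(v) for v in x]
ly = [math.log(v) for v in y]
n = len(x)
mx, my = sum(lx) / n, sum(ly) / n
b_num = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
b_den = sum((u - mx) ** 2 for u in lx)
b = b_num / b_den           # exponent of the power law
a = math.exp(my - b * mx)   # scale factor

print(round(b, 3))  # 0 < b < 1 ⇒ diminishing returns from more annotation
```

An exponent between 0 and 1 is exactly the diminishing-returns behavior described: sensitivity keeps rising with more annotation, but each additional increment buys less.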
Affiliation(s)
- Jingchen Ma
- Department of RadiologyMemorial Sloan Kettering Cancer CenterNew YorkNew YorkUSA
| | - Jin H. Yoon
- Department of RadiologyColumbia University Irving Medical CenterNew YorkNew YorkUSA
| | - Lin Lu
- Department of RadiologyMemorial Sloan Kettering Cancer CenterNew YorkNew YorkUSA
| | - Hao Yang
- Department of RadiologyMemorial Sloan Kettering Cancer CenterNew YorkNew YorkUSA
| | - Pingzhen Guo
- Department of RadiologyMemorial Sloan Kettering Cancer CenterNew YorkNew YorkUSA
| | - Dawei Yang
- Department of RadiologyBeijing Friendship HospitalCapital Medical UniversityBeijingChina
| | - Jing Li
- Department of RadiologyBeijing Friendship HospitalCapital Medical UniversityBeijingChina
| | - Jingxian Shen
- Medical Imaging DepartmentSun Yat‐Sen University Cancer CenterState Key Laboratory of Oncology in South ChinaGuangzhouChina
| | - Lawrence H. Schwartz
- Department of RadiologyMemorial Sloan Kettering Cancer CenterNew YorkNew YorkUSA
| | - Binsheng Zhao
- Department of RadiologyMemorial Sloan Kettering Cancer CenterNew YorkNew YorkUSA
23
Zhang G, Gao Q, Zhan Q, Wang L, Song B, Chen Y, Bian Y, Ma C, Lu J, Shao C. Label-free differentiation of pancreatic pathologies from normal pancreas utilizing end-to-end three-dimensional multimodal networks on CT. Clin Radiol 2024; 79:e1159-e1166. [PMID: 38969545 DOI: 10.1016/j.crad.2024.06.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2023] [Revised: 05/10/2024] [Accepted: 06/05/2024] [Indexed: 07/07/2024]
Abstract
AIMS To investigate the utilization of an end-to-end multimodal convolutional model in the rapid and accurate diagnosis of pancreatic diseases using abdominal CT images. MATERIALS AND METHODS In this study, a novel lightweight label-free end-to-end multimodal network (eeMulNet) model was proposed for the rapid and precise diagnosis of an abnormal pancreas. The eeMulNet consists of two steps: pancreatic region localization and multimodal CT diagnosis integrating textual and image data. A research dataset comprising 715 CT scans with various types of pancreatic disease and 228 CT scans from a control group was collected. The training set and independent test set for the multimodal classification network were randomly divided in an 8:2 ratio (755 for training and 188 for testing). RESULTS The eeMulNet model demonstrated outstanding performance on an independent test set of 188 CT scans (normal: 45, abnormal: 143), with an area under the curve (AUC) of 1.0, accuracy of 100%, and sensitivity of 100%. The average testing duration per patient was 41.04 seconds, while the classification network took only 0.04 seconds. CONCLUSIONS The proposed eeMulNet model offers a promising approach for the diagnosis of pancreatic diseases. It can support the identification of suspicious cases during daily radiology work and enhance the accuracy of pancreatic disease diagnosis. The codes and models of eeMulNet are publicly available at Rudeguy1/eeMulNet (github.com).
Affiliation(s)
- G Zhang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China.
| | - Q Gao
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China.
| | - Q Zhan
- Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China.
| | - L Wang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China.
| | - B Song
- Department of Pancreatic Surgery, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China.
| | - Y Chen
- College of Electronic and Information Engineering, Tongji University, Shanghai 201804, China.
| | - Y Bian
- Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China.
| | - C Ma
- Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China; College of Electronic and Information Engineering, Tongji University, Shanghai 201804, China.
| | - J Lu
- Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China.
| | - C Shao
- Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China.
24
Chen Z, Liang N, Li H, Zhang H, Li H, Yan L, Hu Z, Chen Y, Zhang Y, Wang Y, Ke D, Shi N. Exploring explainable AI features in the vocal biomarkers of lung disease. Comput Biol Med 2024; 179:108844. [PMID: 38981214 DOI: 10.1016/j.compbiomed.2024.108844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2024] [Revised: 05/15/2024] [Accepted: 06/04/2024] [Indexed: 07/11/2024]
Abstract
This review delves into the burgeoning field of explainable artificial intelligence (XAI) in the detection and analysis of lung diseases through vocal biomarkers. Lung diseases, often elusive in their early stages, pose a significant public health challenge. Recent advancements in AI have ushered in innovative methods for early detection, yet the black-box nature of many AI models limits their clinical applicability. XAI emerges as a pivotal tool, enhancing transparency and interpretability in AI-driven diagnostics. This review synthesizes current research on the application of XAI in analyzing vocal biomarkers for lung diseases, highlighting how these techniques elucidate the connections between specific vocal features and lung pathology. We critically examine the methodologies employed, the types of lung diseases studied, and the performance of various XAI models. The potential for XAI to aid in early detection, monitor disease progression, and personalize treatment strategies in pulmonary medicine is emphasized. Furthermore, this review identifies current challenges, including data heterogeneity and model generalizability, and proposes future directions for research. By offering a comprehensive analysis of explainable AI features in the context of lung disease detection, this review aims to bridge the gap between advanced computational approaches and clinical practice, paving the way for more transparent, reliable, and effective diagnostic tools.
Affiliation(s)
- Zhao Chen
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Ning Liang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Haoyuan Li
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Haili Zhang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Huizhen Li
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Lijiao Yan
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Ziteng Hu
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Yaxin Chen
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Yujing Zhang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Yanping Wang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Dandan Ke
- Special Disease Clinic, Huaishuling Branch of Beijing Fengtai Hospital of Integrated Traditional Chinese and Western Medicine, Beijing, China.
| | - Nannan Shi
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China.
25
Wang J, Liu G, Zhou C, Cui X, Wang W, Wang J, Huang Y, Jiang J, Wang Z, Tang Z, Zhang A, Cui D. Application of artificial intelligence in cancer diagnosis and tumor nanomedicine. NANOSCALE 2024; 16:14213-14246. [PMID: 39021117 DOI: 10.1039/d4nr01832j] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/20/2024]
Abstract
Cancer is a major health concern due to its high incidence and mortality rates. Advances in cancer research, particularly in artificial intelligence (AI) and deep learning, have shown significant progress. The swift evolution of AI in healthcare, especially in tools like computer-aided diagnosis, has the potential to revolutionize early cancer detection. This technology offers improved speed, accuracy, and sensitivity, bringing a transformative impact on cancer diagnosis, treatment, and management. This paper provides a concise overview of the application of artificial intelligence in the realms of medicine and nanomedicine, with a specific emphasis on the significance and challenges associated with cancer diagnosis. It explores the pivotal role of AI in cancer diagnosis, leveraging structured, unstructured, and multimodal fusion data. Additionally, the article delves into the applications of AI in nanomedicine sensors and nano-oncology drugs. The fundamentals of deep learning and convolutional neural networks are clarified, underscoring their relevance to AI-driven cancer diagnosis. A comparative analysis is presented, highlighting the accuracy and efficiency of traditional methods juxtaposed with AI-based approaches. The discussion not only assesses the current state of AI in cancer diagnosis but also delves into the challenges faced by AI in this context. Furthermore, the article envisions the future development direction and potential application of artificial intelligence in cancer diagnosis, offering a hopeful prospect for enhanced cancer detection and improved patient prognosis.
Affiliation(s)
- Junhao Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Guan Liu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Cheng Zhou
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Xinyuan Cui
- Imaging Department of Rui Jin Hospital, Medical School of Shanghai Jiao Tong University, Shanghai, China
| | - Wei Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Jiulin Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Yixin Huang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Jinlei Jiang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Zhitao Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Zengyi Tang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Amin Zhang
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai, China.
| | - Daxiang Cui
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- School of Medicine, Henan University, Henan, China
26
Sogancioglu E, Ginneken BV, Behrendt F, Bengs M, Schlaefer A, Radu M, Xu D, Sheng K, Scalzo F, Marcus E, Papa S, Teuwen J, Scholten ET, Schalekamp S, Hendrix N, Jacobs C, Hendrix W, Sanchez CI, Murphy K. Nodule Detection and Generation on Chest X-Rays: NODE21 Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:2839-2853. [PMID: 38530714 DOI: 10.1109/tmi.2024.3382042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/28/2024]
Abstract
Pulmonary nodules may be an early manifestation of lung cancer, the leading cause of cancer-related deaths among both men and women. Numerous studies have established that deep learning methods can yield high-performance levels in the detection of lung nodules in chest X-rays. However, the lack of gold-standard public datasets slows down the progression of the research and prevents benchmarking of methods for this task. To address this, we organized a public research challenge, NODE21, aimed at the detection and generation of lung nodules in chest X-rays. While the detection track assesses state-of-the-art nodule detection systems, the generation track determines the utility of nodule generation algorithms to augment training data and hence improve the performance of the detection systems. This paper summarizes the results of the NODE21 challenge and performs extensive additional experiments to examine the impact of the synthetically generated nodule training images on the detection algorithm performance.
27
Jian M, Chen H, Zhang Z, Yang N, Zhang H, Ma L, Xu W, Zhi H. A Lung Nodule Dataset with Histopathology-based Cancer Type Annotation. Sci Data 2024; 11:824. [PMID: 39068171 PMCID: PMC11283520 DOI: 10.1038/s41597-024-03658-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2023] [Accepted: 07/17/2024] [Indexed: 07/30/2024] Open
Abstract
Recently, Computer-Aided Diagnosis (CAD) systems have emerged as indispensable tools in clinical diagnostic workflows, significantly alleviating the burden on radiologists. Nevertheless, despite their integration into clinical settings, CAD systems encounter limitations. Specifically, while CAD systems can achieve high performance in the detection of lung nodules, they face challenges in accurately predicting multiple cancer types. This limitation can be attributed to the scarcity of publicly available datasets annotated with expert-level cancer type information. This research aims to bridge this gap by providing publicly accessible datasets and reliable tools for medical diagnosis, facilitating a finer categorization of different types of lung diseases so as to offer precise treatment recommendations. To achieve this objective, we curated a diverse dataset of lung Computed Tomography (CT) images, comprising 330 annotated nodules (labeled with bounding boxes) from 95 distinct patients. The quality of the dataset was evaluated using a variety of classical classification and detection models, and these promising results demonstrate that the dataset is feasible for practical application and can further facilitate intelligent auxiliary diagnosis.
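Since the nodules are annotated as bounding boxes, detection models evaluated on such a dataset are typically matched to annotations by intersection-over-union (IoU). A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) corner format (the corner convention is an assumption, not stated by the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted nodule box vs. an annotated one (pixel coordinates, made up).
print(iou((10, 10, 30, 30), (20, 20, 40, 40)))  # 100 / 700 ≈ 0.143
```

A prediction is usually counted as a true positive when its IoU with an unmatched annotation exceeds a threshold (0.5 is a common choice).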
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China.
- School of Information Science and Technology, Linyi University, Linyi, China.
| | - Hongyu Chen
- School of Information Science and Technology, Linyi University, Linyi, China
| | - Zaiyong Zhang
- Thoracic Surgery Department of Linyi Central Hospital, Linyi, China
| | - Nan Yang
- School of Information Science and Technology, Linyi University, Linyi, China
| | - Haorang Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
| | - Lifu Ma
- Personnel Department of Linyi Central Hospital, Linyi, China
| | - Wenjing Xu
- School of Information Science and Technology, Linyi University, Linyi, China
| | - Huixiang Zhi
- School of Information Science and Technology, Linyi University, Linyi, China
28
Xu R, Liu Z, Luo Y, Hu H, Shen L, Du B, Kuang K, Yang J. SGDA: Towards 3-D Universal Pulmonary Nodule Detection via Slice Grouped Domain Attention. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2024; 21:1093-1105. [PMID: 37028322 DOI: 10.1109/tcbb.2023.3253713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Lung cancer is the leading cause of cancer death worldwide. The best solution for lung cancer is to diagnose the pulmonary nodules in the early stage, which is usually accomplished with the aid of thoracic computed tomography (CT). As deep learning thrives, convolutional neural networks (CNNs) have been introduced into pulmonary nodule detection to help doctors in this labor-intensive task and demonstrated to be very effective. However, the current pulmonary nodule detection methods are usually domain-specific, and cannot satisfy the requirement of working in diverse real-world scenarios. To address this issue, we propose a slice grouped domain attention (SGDA) module to enhance the generalization capability of the pulmonary nodule detection networks. This attention module works in the axial, coronal, and sagittal directions. In each direction, we divide the input feature into groups, and for each group, we utilize a universal adapter bank to capture the feature subspaces of the domains spanned by all pulmonary nodule datasets. Then the bank outputs are combined from the perspective of domain to modulate the input group. Extensive experiments demonstrate that SGDA enables substantially better multi-domain pulmonary nodule detection performance compared with the state-of-the-art multi-domain learning methods.
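The grouped modulation idea can be illustrated in a drastically simplified 1-D form: split the channels into groups, summarize each group, gate it with a combination of responses from a small adapter bank, and rescale the group. All weights below are made up, and the real SGDA module operates on 3-D feature maps along the axial, coronal, and sagittal directions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def grouped_modulation(channels, n_groups, adapter_bank, mix):
    """Split a channel vector into groups and rescale each group by a gate
    built from a bank of adapters (scalar weights here) combined with
    per-domain mixing weights. A toy 1-D sketch of grouped attention,
    not the paper's actual module."""
    size = len(channels) // n_groups
    out = []
    for g in range(n_groups):
        group = channels[g * size:(g + 1) * size]
        pooled = sum(group) / size  # per-group descriptor
        # Combine the adapter bank's responses using the mixing weights.
        gate = sigmoid(sum(w * a * pooled for w, a in zip(mix, adapter_bank)))
        out.extend(v * gate for v in group)  # modulate the group
    return out

# Four channels, two groups, a two-adapter bank (all values illustrative).
out = grouped_modulation([1.0, 2.0, 3.0, 4.0], 2, [0.5, -0.2], [0.7, 0.3])
print(len(out))  # 4
```

Because the gate is a sigmoid, each group is scaled by a factor in (0, 1); the domain-dependent mixing weights are what let one network modulate features differently per dataset.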
29
Zhu K, Shen Z, Wang M, Jiang L, Zhang Y, Yang T, Zhang H, Zhang M. Visual Knowledge Domain of Artificial Intelligence in Computed Tomography: A Review Based on Bibliometric Analysis. J Comput Assist Tomogr 2024; 48:652-662. [PMID: 38271538 DOI: 10.1097/rct.0000000000001585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2024]
Abstract
Artificial intelligence (AI)-assisted medical imaging technology is a new research area of great interest that has developed rapidly over the last decade. However, there has been no bibliometric analysis of published studies in this field. The present review focuses on AI-related studies on computed tomography imaging in the Web of Science database and uses CiteSpace and VOSviewer to generate a knowledge map and conduct basic information analysis, co-word analysis, and co-citation analysis. A total of 7265 documents were included and the number of documents published had an overall upward trend. Scholars from the United States and China have made outstanding achievements, and there is a general lack of extensive cooperation in this field. In recent years, the research areas of great interest and difficulty have been the optimization and upgrading of algorithms, and the application of theoretical models to practical clinical applications. This review will help researchers understand the developments, research areas of great interest, and research frontiers in this field and provide reference and guidance for future studies.
30
Dong M, Wang Y, Todo Y, Hua Y. A Novel Feature Selection Strategy Based on the Harris Hawks Optimization Algorithm for the Diagnosis of Cervical Cancer. ELECTRONICS 2024; 13:2554. [DOI: 10.3390/electronics13132554] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/07/2025]
Abstract
Cervical cancer is the fourth most commonly diagnosed cancer and one of the leading causes of cancer-related deaths among females worldwide. Early diagnosis can greatly increase the cure rate for cervical cancer. However, due to the need for substantial medical resources, it is difficult to implement in some areas. With the development of machine learning, utilizing machine learning to automatically diagnose cervical cancer has currently become one of the main research directions in the field. Such an approach typically involves a large number of features. However, a portion of these features is redundant or irrelevant. The task of eliminating redundant or irrelevant features from the entire feature set is known as feature selection (FS). Feature selection methods can roughly be divided into three types, including filter-based methods, wrapper-based methods, and embedded-based methods. Among them, wrapper-based methods are currently the most commonly used approach, and many researchers have demonstrated that these methods can reduce the number of features while improving the accuracy of diagnosis. However, this method still has some issues. Wrapper-based methods typically use heuristic algorithms for FS, which can result in significant computational time. On the other hand, heuristic algorithms are often sensitive to parameters, leading to instability in performance. To overcome this challenge, a novel wrapper-based method named the Binary Harris Hawks Optimization (BHHO) algorithm is proposed in this paper. Compared to other wrapper-based methods, the BHHO has fewer hyper-parameters, which contributes to better stability. Furthermore, we have introduced a rank-based selection mechanism into the algorithm, which endows BHHO with enhanced optimization capabilities and greater generalizability. To comprehensively evaluate the performance of the proposed BHHO, we conducted a series of experiments. 
The experimental results show that the proposed BHHO demonstrates better accuracy and stability compared to other common wrapper-based FS methods on the cervical cancer dataset. Additionally, even on other disease datasets, the proposed algorithm still provides competitive results, proving its generalizability.
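The rank-based selection mechanism described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' code: the sparsity-penalised fitness and all names are assumptions chosen to show how ranks, rather than raw fitness values, drive selection pressure in a binary wrapper FS loop.

```python
import numpy as np

def rank_based_select(fitness, rng):
    """Pick one population index with probability proportional to its fitness
    rank (worst gets rank 1, best gets rank N), as in rank-based selection."""
    order = np.argsort(fitness)                      # ascending: worst first
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(fitness) + 1)    # worst -> 1, best -> N
    probs = ranks / ranks.sum()
    return rng.choice(len(fitness), p=probs)

def sparsity_penalised_fitness(mask, accuracy, alpha=0.01):
    """Toy wrapper-FS fitness: classifier accuracy minus a penalty on the
    fraction of features kept, encouraging small feature subsets."""
    return accuracy - alpha * mask.mean()

rng = np.random.default_rng(0)
fitness = np.array([0.60, 0.72, 0.68, 0.90])
picks = [rank_based_select(fitness, rng) for _ in range(2000)]
# the fittest individual (index 3) is selected most often, but not always
```

Because selection depends only on ranks, it is insensitive to the scale of the fitness values, which is one reason rank-based schemes tend to be more stable than fitness-proportional ones.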
Affiliation(s)
- Minhui Dong: Division of Electrical Engineering and Computer Science, Graduate School of Natural Science & Technology, Kanazawa University, Kakuma-Machi, Kanazawa 920-1192, Japan
- Yu Wang: Division of Electrical Engineering and Computer Science, Graduate School of Natural Science & Technology, Kanazawa University, Kakuma-Machi, Kanazawa 920-1192, Japan
- Yuki Todo: Faculty of Electrical, Information and Communication Engineering, Kanazawa University, Kakuma-Machi, Kanazawa 920-1192, Japan
- Yuxiao Hua: Division of Electrical Engineering and Computer Science, Graduate School of Natural Science & Technology, Kanazawa University, Kakuma-Machi, Kanazawa 920-1192, Japan

31
Zhai P, Cong H, Zhu E, Zhao G, Yu Y, Li J. MVCNet: Multiview Contrastive Network for Unsupervised Representation Learning for 3-D CT Lesions. IEEE Trans Neural Netw Learn Syst 2024; 35:7376-7390. [PMID: 36150004] [DOI: 10.1109/tnnls.2022.3203412]
Abstract
With the renaissance of deep learning, automatic diagnostic algorithms for computed tomography (CT) have achieved many successful applications. However, they heavily rely on lesion-level annotations, which are often scarce due to the high cost of collecting pathological labels. On the other hand, the annotated CT data, especially the 3-D spatial information, may be underutilized by approaches that model a 3-D lesion with its 2-D slices, although such approaches have been proven effective and computationally efficient. This study presents a multiview contrastive network (MVCNet), which enhances the representations of 2-D views contrastively against other views of different spatial orientations. Specifically, MVCNet views each 3-D lesion from different orientations to collect multiple 2-D views; it learns to minimize a contrastive loss so that the 2-D views of the same 3-D lesion are aggregated, whereas those of different lesions are separated. To alleviate the issue of false negative examples, the uninformative negative samples are filtered out, which results in more discriminative features for downstream tasks. By linear evaluation, MVCNet achieves state-of-the-art accuracies on the lung image database consortium and image database resource initiative (LIDC-IDRI) (88.62%), lung nodule database (LNDb) (76.69%), and TianChi (84.33%) datasets for unsupervised representation learning. When fine-tuned on 10% of the labeled data, the accuracies are comparable to the supervised learning models (89.46% versus 85.03%, 73.85% versus 73.44%, 83.56% versus 83.34% on the three datasets, respectively), indicating the superiority of MVCNet in learning representations with limited annotations. Our findings suggest that contrasting multiple 2-D views is an effective approach to capturing the original 3-D information, which notably improves the utilization of the scarce and valuable annotated CT data.
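The core of the objective above is an InfoNCE-style contrastive loss that pulls together embeddings of views from the same lesion and pushes apart views from different lesions. A minimal NumPy sketch follows; it is a simplification with hypothetical names (the actual MVCNet additionally filters uninformative negatives and uses learned encoders):

```python
import numpy as np

def l2norm(x):
    """Row-wise L2 normalisation so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def multiview_contrastive_loss(views_a, views_b, temperature=0.1):
    """InfoNCE-style loss: row i of views_a (one 2-D view of lesion i) should
    match row i of views_b (another view of the same lesion); every other row
    acts as a negative. Inputs are L2-normalised (N, D) embeddings."""
    logits = views_a @ views_b.T / temperature           # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(views_a))
    return -log_probs[idx, idx].mean()                   # aggregate matching views

rng = np.random.default_rng(1)
z = l2norm(rng.normal(size=(8, 16)))   # 8 lesions, 16-D view embeddings
aligned = multiview_contrastive_loss(z, z)                       # matched views
mismatched = multiview_contrastive_loss(z, np.roll(z, 1, axis=0))
# correctly paired views incur a much lower loss than shuffled pairings
```

Minimising this loss therefore aggregates 2-D views of the same 3-D lesion in embedding space, which is exactly the behaviour the abstract describes.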
32
Zhou Z, Islam MT, Xing L. Multibranch CNN With MLP-Mixer-Based Feature Exploration for High-Performance Disease Diagnosis. IEEE Trans Neural Netw Learn Syst 2024; 35:7351-7362. [PMID: 37028335] [PMCID: PMC11779602] [DOI: 10.1109/tnnls.2023.3250490]
Abstract
Deep learning-based diagnosis is becoming an indispensable part of modern healthcare. For high-performance diagnosis, the optimal design of deep neural networks (DNNs) is a prerequisite. Despite their success in image analysis, existing supervised DNNs based on convolutional layers often suffer from rudimentary feature exploration ability, caused by the limited receptive field and biased feature extraction of conventional convolutional neural networks (CNNs), which compromises network performance. Here, we propose a novel feature exploration network named manifold embedded multilayer perceptron (MLP) mixer (ME-Mixer), which utilizes both supervised and unsupervised features for disease diagnosis. In the proposed approach, a manifold embedding network is employed to extract class-discriminative features; then, two MLP-Mixer-based feature projectors are adopted to encode the extracted features with a global receptive field. Our ME-Mixer network is quite general and can be added as a plugin to any existing CNN. Comprehensive evaluations on two medical datasets are performed. The results demonstrate that our approach greatly enhances the classification accuracy in comparison with different configurations of DNNs, with acceptable computational complexity.
33
Sun L, Zhang M, Lu Y, Zhu W, Yi Y, Yan F. Nodule-CLIP: Lung nodule classification based on multi-modal contrastive learning. Comput Biol Med 2024; 175:108505. [PMID: 38688129] [DOI: 10.1016/j.compbiomed.2024.108505]
Abstract
The latest developments in deep learning have demonstrated the importance of CT medical imaging for the classification of pulmonary nodules. However, challenges remain in fully leveraging the relevant medical annotations of pulmonary nodules and distinguishing between the benign and malignant labels of adjacent nodules. Therefore, this paper proposes the Nodule-CLIP model, which deeply mines the potential relationships between CT images, complex attributes of lung nodules, and the benign and malignant attributes of lung nodules through a contrastive learning method, and optimizes the image feature extraction network by using their similarities and differences to improve its ability to distinguish similar lung nodules. Firstly, we segment the 3D lung nodule information by U-Net to reduce the interference caused by the background of lung nodules and focus on the lung nodule images. Secondly, the image features, class features, and complex attribute features are aligned by contrastive learning and the loss function in Nodule-CLIP to achieve lung nodule image optimization and improve classification ability. A series of testing and ablation experiments were conducted on the public LIDC-IDRI dataset; the final benign and malignant classification rate was 90.6%, and the recall rate was 92.81%. The experimental results show the advantages of this method in terms of lung nodule classification as well as interpretability.
Affiliation(s)
- Lijing Sun, Mengyi Zhang, Yu Lu, Wenjun Zhu, and Yang Yi: College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing 211800, Jiangsu, China
- Fei Yan: Jiangsu Institute of Cancer Research & The Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Nanjing 210009, Jiangsu, China

34
Zeng M, Wang X, Chen W. Worldwide research landscape of artificial intelligence in lung disease: A scientometric study. Heliyon 2024; 10:e31129. [PMID: 38826704] [PMCID: PMC11141367] [DOI: 10.1016/j.heliyon.2024.e31129]
Abstract
Purpose To perform a comprehensive bibliometric analysis of the application of artificial intelligence (AI) in lung disease to understand the current status and emerging trends of this field. Materials and methods AI-based lung disease research publications were selected from the Web of Science Core Collection. CiteSpace, VOSviewer and Excel were used to analyze and visualize co-authorship, co-citation, and co-occurrence of authors, keywords, countries/regions, references and institutions in this field. Results Our study included a total of 5210 papers. The number of publications on AI in lung disease has shown explosive growth since 2017. China and the United States lead in publication numbers. The most productive authors were Li Weimin and Qian Wei, with Shanghai Jiaotong University as the most productive institution. Radiology was the most co-cited journal. Lung cancer and COVID-19 emerged as the most studied diseases. Deep learning, convolutional neural networks, lung cancer, and radiomics will be the focus of future research. Conclusions AI-based diagnosis and treatment of lung disease has become a research hotspot in recent years, yielding significant results. Future work should focus on establishing multimodal AI models that incorporate clinical, imaging and laboratory information. Enhanced visualization of deep learning, AI-driven differential diagnosis models for lung disease and the creation of international large-scale lung disease databases should also be considered.
Affiliation(s)
- Wei Chen: Department of Radiology, Southwest Hospital, Third Military Medical University, Chongqing, China

35
Li S, Wang H, Meng Y, Zhang C, Song Z. Multi-organ segmentation: a progressive exploration of learning paradigms under scarce annotation. Phys Med Biol 2024; 69:11TR01. [PMID: 38479023] [DOI: 10.1088/1361-6560/ad33b5]
Abstract
Precise delineation of multiple organs or abnormal regions in the human body from medical images plays an essential role in computer-aided diagnosis, surgical simulation, image-guided interventions, and especially in radiotherapy treatment planning. Thus, it is of great significance to explore automatic segmentation approaches, among which deep learning-based approaches have evolved rapidly and witnessed remarkable progress in multi-organ segmentation. However, obtaining an appropriately sized and fine-grained annotated dataset of multiple organs is extremely hard and expensive. Such scarce annotation limits the development of high-performance multi-organ segmentation models but promotes many annotation-efficient learning paradigms. Among these, studies on transfer learning leveraging external datasets, semi-supervised learning including unannotated datasets and partially-supervised learning integrating partially-labeled datasets have led the dominant way to break such dilemmas in multi-organ segmentation. We first review the fully supervised method, then present a comprehensive and systematic elaboration of the 3 abovementioned learning paradigms in the context of multi-organ segmentation from both technical and methodological perspectives, and finally summarize their challenges and future trends.
Affiliation(s)
- Shiman Li, Haoran Wang, Yucong Meng, Chenxi Zhang, and Zhijian Song: Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China

36
Zhang J, Zou W, Hu N, Zhang B, Wang J. S-Net: an S-shaped network for nodule detection in 3D CT images. Phys Med Biol 2024; 69:075013. [PMID: 38382097] [DOI: 10.1088/1361-6560/ad2b96]
Abstract
Objective. Accurate and automatic detection of pulmonary nodules is critical for early lung cancer diagnosis, and promising progress has been achieved in developing effective deep models for nodule detection. However, most existing nodule detection methods merely focus on integrating elaborately designed feature extraction modules into the backbone of the detection network to extract rich nodule features, while ignoring disadvantages of the structure of the detection network itself. This study aims to address these disadvantages and develop a deep learning-based algorithm for pulmonary nodule detection to improve the accuracy of early lung cancer diagnosis. Approach. In this paper, an S-shaped network called S-Net is developed with the U-shaped network as backbone, where an information fusion branch is used to propagate lower-level details and positional information critical for nodule detection to higher-level feature maps, a head-shared scale-adaptive detection strategy is utilized to capture information from different scales to better detect nodules with different shapes and sizes, and a feature-decoupled detection head is used to allow the classification and regression branches to focus on the information required for their respective tasks. A hybrid loss function is utilized to fully exploit the interplay between the classification and regression branches. Main results. The proposed S-Net network with ResSENet and three other U-shaped backbones from the SANet, OSAF-YOLOv3 and MSANet (R+SC+ECA) models achieves average CPM scores of 0.914, 0.915, 0.917 and 0.923 on the LUNA16 dataset, which are significantly higher than those achieved with other existing state-of-the-art models. Significance. The experimental results demonstrate that our proposed method effectively improves nodule detection performance, which implies potential applications of the proposed method in clinical practice.
Affiliation(s)
- JingYu Zhang, Wei Zou, Nan Hu, and Jiajun Wang: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Bin Zhang: Department of Nuclear Medicine, The First Affiliated Hospital of Soochow University, Suzhou 215006, People's Republic of China

37
Alshamrani K, Alshamrani HA. Classification of Chest CT Lung Nodules Using Collaborative Deep Learning Model. J Multidiscip Healthc 2024; 17:1459-1472. [PMID: 38596001] [PMCID: PMC11002784] [DOI: 10.2147/jmdh.s456167]
Abstract
Background Early detection of lung cancer through accurate diagnosis of malignant lung nodules using chest CT scans offers patients the highest chance of successful treatment and survival. Despite advancements in computer vision through deep learning algorithms, the detection of malignant nodules faces significant challenges due to insufficient training datasets. Methods This study introduces a model based on collaborative deep learning (CDL) to differentiate between cancerous and non-cancerous nodules in chest CT scans with limited available data. The model dissects a nodule into its constituent parts using six characteristics, allowing it to learn detailed features of lung nodules. It utilizes a CDL submodel that incorporates six types of feature patches to fine-tune a network previously trained with ResNet-50. An adaptive weighting method learned through error backpropagation enhances the process of identifying lung nodules, incorporating these CDL submodels for improved accuracy. Results The CDL model demonstrated a high level of performance in classifying lung nodules, achieving an accuracy of 93.24%. This represents a significant improvement over current state-of-the-art methods, indicating the effectiveness of the proposed approach. Conclusion The findings suggest that the CDL model, with its unique structure and adaptive weighting method, offers a promising solution to the challenge of accurately detecting malignant lung nodules with limited data. This approach not only improves diagnostic accuracy but also contributes to the early detection and treatment of lung cancer, potentially saving lives.
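The adaptive weighting over submodel outputs described above can be pictured as a softmax-weighted average of the per-patch malignancy probabilities. The sketch below is illustrative only: the six probabilities and the zero-initialised weights are assumptions, and in the actual CDL model the weights are learned by error backpropagation rather than fixed.

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax, mapping raw weights to a distribution."""
    e = np.exp(w - w.max())
    return e / e.sum()

def fuse_submodels(probs, weights):
    """Combine malignancy probabilities from several feature-patch submodels
    as an adaptively weighted average (weights are softmax-normalised)."""
    return softmax(weights) @ probs

# six hypothetical submodel outputs for one nodule, P(malignant)
probs = np.array([0.80, 0.75, 0.62, 0.90, 0.70, 0.85])
weights = np.zeros(6)   # uniform weighting before any weight learning
fused = fuse_submodels(probs, weights)
# with uniform weights the fusion reduces to the plain mean of the six scores
```

Training would then adjust `weights` so that more reliable feature patches contribute more to the fused score.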
Affiliation(s)
- Khalaf Alshamrani: Radiological Sciences Department, Najran University, Najran, Saudi Arabia; Department of Oncology and Metabolism, University of Sheffield, Sheffield, UK

38
Wu KC, Chen SW, Hsieh TC, Yen KY, Chang CJ, Kuo YC, Chang RF, Kao CH. Early prediction of distant metastasis in patients with uterine cervical cancer treated with definitive chemoradiotherapy by deep learning using pretreatment [18F]fluorodeoxyglucose positron emission tomography/computed tomography. Nucl Med Commun 2024; 45:196-202. [PMID: 38165173] [DOI: 10.1097/mnm.0000000000001799]
Abstract
OBJECTIVES A deep learning (DL) model using image data from pretreatment [18F]fluorodeoxyglucose ([18F]FDG)-PET or computed tomography (CT), augmented with a novel imaging augmentation approach, was developed for the early prediction of distant metastases in patients with locally advanced uterine cervical cancer. METHODS This study used baseline [18F]FDG-PET/CT images of newly diagnosed uterine cervical cancer patients. Data from 186 and 25 patients were analyzed for the training and validation cohorts, respectively. All patients received chemoradiotherapy (CRT) and follow-up. PET and CT images were augmented by using three-dimensional techniques. The proposed model employed DL to predict distant metastases. Receiver operating characteristic (ROC) curve analysis was performed to measure the model's predictive performance. RESULTS The areas under the ROC curves of the training and validation cohorts were 0.818 and 0.830 for predicting distant metastasis, respectively. In the training cohort, the sensitivity, specificity, and accuracy were 80.0%, 78.0%, and 78.5%, whereas the sensitivity, specificity, and accuracy for distant failure were 73.3%, 75.5%, and 75.2% in the validation cohort, respectively. CONCLUSION Through the use of baseline [18F]FDG-PET/CT images, the proposed DL model can predict the development of distant metastases for patients with locally advanced uterine cervical cancer treated by CRT. External validation must be conducted to determine the model's predictive performance.
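The sensitivity, specificity, and accuracy figures reported in such studies all derive directly from confusion-matrix counts. A minimal sketch with purely illustrative counts (not the paper's data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts:
    tp/fn are true/false outcomes on positives, tn/fp on negatives."""
    sensitivity = tp / (tp + fn)              # recall on diseased cases
    specificity = tn / (tn + fp)              # recall on healthy cases
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# illustrative counts only
sens, spec, acc = diagnostic_metrics(tp=24, fn=6, tn=39, fp=11)
# -> sensitivity 0.80, specificity 0.78, accuracy 0.7875
```

Note that accuracy is a prevalence-weighted blend of the other two, which is why it falls between them here.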
Affiliation(s)
- Kuo-Chen Wu: Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei; Artificial Intelligence Center, China Medical University Hospital; Department of Radiation Oncology, China Medical University Hospital
- Shang-Wen Chen: Artificial Intelligence Center, China Medical University Hospital; School of Medicine, College of Medicine, China Medical University, Taichung; School of Medicine, College of Medicine, Taipei Medical University, Taipei; Department of Radiation Oncology, China Medical University Hospital
- Te-Chun Hsieh: Department of Nuclear Medicine and PET Center, China Medical University Hospital; Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung
- Kuo-Yang Yen: Department of Nuclear Medicine and PET Center, China Medical University Hospital; Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung
- Chao-Jen Chang: Artificial Intelligence Center, China Medical University Hospital
- Yu-Chieh Kuo: Artificial Intelligence Center, China Medical University Hospital
- Ruey-Feng Chang: Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei; Artificial Intelligence Center, China Medical University Hospital; Department of Computer Science and Information Engineering, National Taiwan University, Taipei
- Kao Chia-Hung: Artificial Intelligence Center, China Medical University Hospital; Department of Nuclear Medicine and PET Center, China Medical University Hospital; Graduate Institute of Biomedical Sciences, School of Medicine, College of Medicine, China Medical University; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan

39
Al Muhaisen S, Safi O, Ulayan A, Aljawamis S, Fakhoury M, Baydoun H, Abuquteish D. Artificial Intelligence-Powered Mammography: Navigating the Landscape of Deep Learning for Breast Cancer Detection. Cureus 2024; 16:e56945. [PMID: 38665752] [PMCID: PMC11044525] [DOI: 10.7759/cureus.56945]
Abstract
Worldwide, breast cancer (BC) is one of the most commonly diagnosed malignancies in women. Early detection is key to improving survival rates and health outcomes. This literature review focuses on how artificial intelligence (AI), especially deep learning (DL), can enhance the ability of mammography, a key tool in BC detection, to yield more accurate results. Artificial intelligence has shown promise in reducing diagnostic errors and increasing early cancer detection chances. Nevertheless, significant challenges exist, including the requirement for large amounts of high-quality data and concerns over data privacy. Despite these hurdles, AI and DL are advancing the field of radiology, offering better ways to diagnose, detect, and treat diseases. The U.S. Food and Drug Administration (FDA) has approved several AI diagnostic tools. Yet, the full potential of these technologies, especially for more advanced screening methods like digital breast tomosynthesis (DBT), depends on further clinical studies and the development of larger databases. In summary, this review highlights the exciting potential of AI in BC screening. It calls for more research and validation to fully employ the power of AI in clinical practice, ensuring that these technologies can help save lives by improving diagnosis accuracy and efficiency.
Affiliation(s)
- Omar Safi, Ahmad Ulayan, Sara Aljawamis, and Maryam Fakhoury: Medicine, Faculty of Medicine, The Hashemite University, Zarqa, JOR
- Haneen Baydoun: Diagnostic Radiology, King Hussein Cancer Center, Amman, JOR
- Dua Abuquteish: Microbiology, Pathology and Forensic Medicine, Faculty of Medicine, The Hashemite University, Zarqa, JOR; Pathology and Laboratory Medicine, King Hussein Cancer Center, Amman, JOR

40
UrRehman Z, Qiang Y, Wang L, Shi Y, Yang Q, Khattak SU, Aftab R, Zhao J. Effective lung nodule detection using deep CNN with dual attention mechanisms. Sci Rep 2024; 14:3934. [PMID: 38365831] [PMCID: PMC10873370] [DOI: 10.1038/s41598-024-51833-x]
Abstract
Novel methods are required to enhance lung cancer detection, which has overtaken other cancer-related causes of death as the major cause of cancer-related mortality. Radiologists have long-standing methods for locating lung nodules in patients with lung cancer, such as computed tomography (CT) scans. Radiologists must manually review a significant amount of CT scan pictures, which makes the process time-consuming and prone to human error. Computer-aided diagnosis (CAD) systems have been created to help radiologists with their evaluations in order to overcome these difficulties. These systems make use of cutting-edge deep learning architectures. These CAD systems are designed to improve lung nodule diagnosis efficiency and accuracy. In this study, a bespoke convolutional neural network (CNN) with a dual attention mechanism was created, which was especially crafted to concentrate on the most important elements in images of lung nodules. The CNN model extracts informative features from the images, while the attention module incorporates both channel attention and spatial attention mechanisms to selectively highlight significant features. After the attention module, global average pooling is applied to summarize the spatial information. To evaluate the performance of the proposed model, extensive experiments were conducted using benchmark dataset of lung nodules. The results of these experiments demonstrated that our model surpasses recent models and achieves state-of-the-art accuracy in lung nodule detection and classification tasks.
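The channel-then-spatial gating described above can be stripped down to a few array operations. This is a simplified NumPy sketch under stated assumptions: the gates here are parameter-free global-average-pooling sigmoids, whereas a real dual-attention module (e.g. CBAM-style, as the abstract suggests) learns MLP and convolution weights for them.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    """Squeeze each channel of a (C, H, W) map to a scalar by global average
    pooling, then gate the channels with a sigmoid weight in (0, 1)."""
    weights = sigmoid(x.mean(axis=(1, 2)))          # (C,)
    return x * weights[:, None, None]

def spatial_attention(x):
    """Pool across channels to an (H, W) map, then gate each location."""
    weights = sigmoid(x.mean(axis=0))               # (H, W)
    return x * weights[None, :, :]

def dual_attention_block(x):
    """Sequential channel-then-spatial gating of a feature map."""
    return spatial_attention(channel_attention(x))

rng = np.random.default_rng(2)
feat = rng.normal(size=(4, 8, 8))       # a small (C, H, W) feature map
out = dual_attention_block(feat)
# the block preserves the feature-map shape while re-weighting its entries
```

Because both gates lie in (0, 1), the block can only attenuate features, selectively highlighting the channels and locations with the strongest responses.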
Affiliation(s)
- Zia UrRehman: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- Yan Qiang: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China; School of Software, North University of China, Taiyuan, China
- Long Wang: Jinzhong College of Information, Jinzhong, China
- Yiwei Shi: NHC Key Laboratory of Pneumoconiosis, Shanxi Key Laboratory of Respiratory Diseases, Department of Pulmonary and Critical Care Medicine, The First Hospital of Shanxi Medical University, Taiyuan, Shanxi, China
- Saeed Ullah Khattak: Centre of Biotechnology and Microbiology, University of Peshawar, Peshawar 25120, Pakistan
- Rukhma Aftab: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- Juanjuan Zhao: College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China; Jinzhong College of Information, Jinzhong, China

41
Wang W, Zhao X, Jia Y, Xu J. The communication of artificial intelligence and deep learning in computer tomography image recognition of epidemic pulmonary infectious diseases. PLoS One 2024; 19:e0297578. [PMID: 38319912] [PMCID: PMC10846714] [DOI: 10.1371/journal.pone.0297578]
Abstract
The objectives are to improve the diagnostic efficiency and accuracy for epidemic pulmonary infectious diseases and to study the application of artificial intelligence (AI) in pulmonary infectious disease diagnosis and public health management. Computed tomography (CT) images of 200 patients with pulmonary infectious disease are collected and input into the AI-assisted diagnosis software based on the deep learning (DL) model, "UAI, pulmonary infectious disease intelligent auxiliary analysis system", for lesion detection. After analyzing the principles of convolutional neural networks (CNNs) in DL, the study selects the AlexNet model for the recognition and classification of pulmonary infection CT images. The software automatically detects the pneumonia lesions, marks them in batches, and calculates the lesion volume. The results show that the CT manifestations mainly involve multiple lobes and varying densities, and the most common shadow is ground-glass opacity. The detection rate of the manual method is 95.30%, with a misdetection rate of 0.20% and a missed diagnosis rate of 4.50%; the detection rate of the DL-based AI-assisted method is 99.76%, with a misdetection rate of 0.08% and a missed diagnosis rate of 0.08%. Therefore, the proposed model can effectively identify pulmonary infectious disease lesions and provide relevant data to objectively diagnose pulmonary infectious disease and support public health management.
Affiliation(s)
- Weiwei Wang: Hangzhou Xinken Culture Media Co., Ltd., Hangzhou, China; College of Media and International Culture, Zhejiang University, Hangzhou, China
- Xinjie Zhao: School of Software & Microelectronics, Peking University, Beijing, China
- Yanshu Jia: Faculty of Science and Technology, Quest International University Perak, Ipoh, Perak, Malaysia
- Jiali Xu: School of Mathematics, Shanghai University of Finance and Economics, Shanghai, China

42
Ma L, Li G, Feng X, Fan Q, Liu L. TiCNet: Transformer in Convolutional Neural Network for Pulmonary Nodule Detection on CT Images. J Imaging Inform Med 2024; 37:196-208. [PMID: 38343213] [DOI: 10.1007/s10278-023-00904-y]
Abstract
Lung cancer is the leading cause of cancer death. Since lung cancer appears as nodules in the early stage, detecting pulmonary nodules at an early phase could enhance treatment efficiency and improve the survival rate of patients. The development of computer-aided analysis technology has made it possible to automatically detect lung nodules in Computed Tomography (CT) screening. In this paper, we propose a novel detection network, TiCNet, which embeds a transformer module in a 3D Convolutional Neural Network (CNN) for pulmonary nodule detection on CT images. First, we integrate the transformer and CNN in an end-to-end structure to capture both short- and long-range dependencies and provide rich information on the characteristics of nodules. Second, we design an attention block and multi-scale skip pathways to improve the detection of small nodules. Last, we develop a two-head detector to guarantee high sensitivity and specificity. Experimental results on the LUNA16 and PN9 datasets showed that the proposed TiCNet achieved superior performance compared with existing lung nodule detection methods. Moreover, the effectiveness of each module has been demonstrated. The proposed TiCNet model is an effective tool for pulmonary nodule detection; validation revealed excellent performance, suggesting its potential usefulness to support lung cancer screening.
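The long-range dependency that an embedded transformer contributes comes from scaled dot-product self-attention over feature tokens. A minimal single-head NumPy sketch follows; it is an illustration, not TiCNet's module: Q = K = V = the input for brevity, whereas real transformer blocks use learned projection matrices and multiple heads.

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention over a token sequence
    of shape (N, D); every output token is a softmax-weighted mixture of
    all input tokens, giving a global (long-range) receptive field."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                   # (N, N) token affinities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=1, keepdims=True)   # rows sum to 1
    return attn @ x                                 # convex mixtures of tokens

rng = np.random.default_rng(3)
tokens = rng.normal(size=(5, 16))   # e.g. flattened CNN feature patches
out = self_attention(tokens)
# each output token is a convex combination of the five input tokens
```

Because the attention weights in each row form a convex combination, every output coordinate stays within the range spanned by the input tokens, while still mixing information across the whole sequence.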
Affiliation(s)
- Ling Ma
- College of Software, Nankai University, Tianjin, China
- Gen Li
- College of Software, Nankai University, Tianjin, China
- Xingyu Feng
- College of Software, Nankai University, Tianjin, China
- Qiliang Fan
- College of Software, Nankai University, Tianjin, China
- Lizhi Liu
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangdong, China.
43
Jian M, Jin H, Zhang L, Wei B, Yu H. DBPNDNet: dual-branch networks using 3DCNN toward pulmonary nodule detection. Med Biol Eng Comput 2024; 62:563-573. [PMID: 37945795] [DOI: 10.1007/s11517-023-02957-1] [Received: 11/27/2022] [Accepted: 10/21/2023] [Indexed: 11/12/2023]
Abstract
With the advancement of artificial intelligence, CNNs have been successfully introduced into the discipline of medical data analysis. Clinically, automatic pulmonary nodule detection remains an intractable issue, since nodules in the lung parenchyma or on the chest wall are hard to distinguish visually from shadows, background noise, blood vessels, and bones. When making a diagnosis, clinicians therefore first attend to the intensity cues and contour characteristics of pulmonary nodules in order to locate their spatial positions. To automate this process, we propose an efficient multi-task, dual-branch 3D convolutional neural network architecture, called DBPNDNet, for automatic pulmonary nodule detection and segmentation. In the dual-branch structure, one branch is designed for candidate region extraction in nodule detection, while the other is exploited for semantic segmentation of nodule lesion regions. In addition, we develop a 3D attention-weighted feature fusion module informed by the clinician's diagnostic perspective, so that the information captured by the segmentation branch in turn promotes the performance of the detection branch. The framework was implemented and assessed on a commonly used medical image analysis dataset. On average across false-positive thresholds, our framework achieved a sensitivity of 91.33% per CT scan, reaching 97.14% sensitivity at 8 FPs per scan. The experimental results indicate that our framework outperforms other mainstream approaches.
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China.
- School of Information Science and Technology, Linyi University, Linyi, China.
- Haodong Jin
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Linsong Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- Benzheng Wei
- Medical Artificial Intelligence Research Center, Shandong University of Traditional Chinese Medicine, Qingdao, China
- Hui Yu
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- School of Creative Technologies, University of Portsmouth, Portsmouth, UK
44
Zheng R, Wen H, Zhu F, Lan W. Attention-guided deep neural network with a multichannel architecture for lung nodule classification. Heliyon 2024; 10:e23508. [PMID: 38169878] [PMCID: PMC10758786] [DOI: 10.1016/j.heliyon.2023.e23508] [Received: 06/07/2023] [Revised: 11/15/2023] [Accepted: 12/05/2023] [Indexed: 01/05/2024] Open
Abstract
Detecting and accurately identifying malignant lung nodules in chest CT scans in a timely manner is crucial for effective lung cancer treatment. This study introduces a deep learning model featuring a multi-channel attention mechanism, specifically designed for the precise diagnosis of malignant lung nodules. To start, we standardized the voxel size of the CT images and generated three RGB images of varying scales for each lung nodule, viewed from three different angles. Subsequently, we applied three attention submodels to extract class-specific characteristics from these RGB images. Finally, the nodule features were consolidated in the model's final layer to make the ultimate predictions. Through the attention mechanism, we could dynamically pinpoint the exact location of lung nodules in the images without prior segmentation, enhancing the accuracy and efficiency of lung nodule classification. We evaluated and tested our model on a dataset of 1018 CT scans from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The experimental results demonstrate that our model achieved a lung nodule classification accuracy of 90.11%, with an area under the receiver operating characteristic curve (AUC) of 95.66%. Impressively, our method achieved this level of performance while using only 29.09% of the time needed by the mainstream model.
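The multi-scale, multi-angle input construction described above can be sketched as follows: for each orthogonal view through the nodule, crop the central slice at several scales and stack the scales as RGB-like channels. This is a sketch under assumed parameter values; function names, crop sizes, and the nearest-neighbour resize are illustrative, not taken from the paper:

```python
import numpy as np

def _resize_nn(img, size):
    # Nearest-neighbour resize to (size, size); avoids external dependencies.
    r = (np.arange(size) * img.shape[0] / size).astype(int)
    c = (np.arange(size) * img.shape[1] / size).astype(int)
    return img[np.ix_(r, c)]

def nodule_rgb_views(volume, center, scales=(16, 24, 32), out=32):
    """Build one RGB-like image per orthogonal view of a nodule:
    crop the central slice at three scales and stack the scales
    as the three channels (illustrative parameter choices)."""
    views = []
    for axis in range(3):
        # Central 2D slice through the nodule along this axis.
        plane = np.take(volume, center[axis], axis=axis)
        cy, cx = [center[i] for i in range(3) if i != axis]
        chans = []
        for s in scales:
            h = s // 2
            crop = plane[max(cy - h, 0):cy + h, max(cx - h, 0):cx + h]
            chans.append(_resize_nn(crop, out))
        views.append(np.stack(chans, axis=-1))  # shape (out, out, 3)
    return views
```

Each of the three resulting images would then be fed to one of the attention submodels.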
Affiliation(s)
- Rong Zheng
- Department of Gynecology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430070, China
- Hongqiao Wen
- School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
- Feng Zhu
- Department of Cardiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Clinic Center of Human Gene Research, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Weishun Lan
- Department of Medical Imaging, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430070, China
45
Islam NU, Zhou Z, Gehlot S, Gotway MB, Liang J. Seeking an optimal approach for Computer-aided Diagnosis of Pulmonary Embolism. Med Image Anal 2024; 91:102988. [PMID: 37924750] [PMCID: PMC11039560] [DOI: 10.1016/j.media.2023.102988] [Received: 05/26/2022] [Revised: 09/28/2023] [Accepted: 09/29/2023] [Indexed: 11/06/2023]
Abstract
Pulmonary Embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower extremity vein, that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using Computed Tomography Pulmonary Angiography (CTPA). Deep learning holds great promise for the Computer-aided Diagnosis (CAD) of PE. However, numerous deep learning methods, such as Convolutional Neural Networks (CNN) and Transformer-based models, exist for a given task, causing great confusion regarding the development of CAD systems for PE. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis based on four datasets. First, we use the RSNA PE dataset, which includes (weak) slice-level and exam-level labels, for PE classification and diagnosis, respectively. At the slice level, we compare CNNs with the Vision Transformer (ViT) and the Swin Transformer. We also investigate the impact of self-supervised versus (fully) supervised ImageNet pre-training, and of transfer learning over training models from scratch. Additionally, at the exam level, we compare sequence model learning with our proposed transformer-based architecture, Embedding-based ViT (E-ViT). For the second and third datasets, we utilize the CAD-PE Challenge Dataset and Ferdowsi University of Mashhad's PE Dataset, where we convert (strong) clot-level masks into slice-level annotations to evaluate the optimal CNN model for slice-level PE classification. Finally, we use our in-house PE-CAD dataset, which contains (strong) clot-level masks. Here, we investigate the impact of our vessel-oriented image representations and self-supervised pre-training on PE false positive reduction at the clot level across image dimensions (2D, 2.5D, and 3D).
Our experiments show that (1) transfer learning boosts performance despite differences between photographic images and CTPA scans; (2) self-supervised pre-training can surpass (fully) supervised pre-training; (3) transformer-based models demonstrate comparable performance but slower convergence compared with CNNs for slice-level PE classification; (4) models trained on the RSNA PE dataset demonstrate promising performance when tested on unseen datasets for slice-level PE classification; (5) our E-ViT framework excels in handling variable numbers of slices and outperforms sequence model learning for exam-level diagnosis; and (6) vessel-oriented image representation and self-supervised pre-training both enhance performance for PE false positive reduction across image dimensions. Our optimal approach surpasses state-of-the-art results on the RSNA PE dataset, enhancing AUC by 0.62% (slice-level) and 2.22% (exam-level). On our in-house PE-CAD dataset, 3D vessel-oriented images improve performance from 80.07% to 91.35%, a remarkable 11% gain. Codes are available at GitHub.com/JLiangLab/CAD_PE.
Affiliation(s)
- Nahid Ul Islam
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Zongwei Zhou
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Shiv Gehlot
- Biomedical Informatics Program, Arizona State University, Phoenix, AZ 85054, USA
- Jianming Liang
- Biomedical Informatics Program, Arizona State University, Phoenix, AZ 85054, USA.
46
Qian L, Wen C, Li Y, Hu Z, Zhou X, Xia X, Kim SH. Multi-scale context UNet-like network with redesigned skip connections for medical image segmentation. Computer Methods and Programs in Biomedicine 2024; 243:107885. [PMID: 37897988] [DOI: 10.1016/j.cmpb.2023.107885] [Received: 11/19/2022] [Revised: 10/22/2023] [Accepted: 10/24/2023] [Indexed: 10/30/2023]
Abstract
BACKGROUND AND OBJECTIVE Medical image segmentation has garnered significant research attention in the neural network community as a fundamental requirement for developing intelligent medical assistant systems. A series of UNet-like networks with an encoder-decoder architecture have achieved remarkable success in medical image segmentation. Among these networks, UNet2+ (UNet++) and UNet3+ (UNet+++) introduced redesigned skip connections (dense skip connections and full-scale skip connections, respectively), surpassing the performance of the original UNet. However, UNet2+ lacks information drawn from the full range of scales, which hampers its ability to learn organ placement and boundaries. Similarly, due to the limited number of neurons in its structure, UNet3+ fails to effectively segment small objects when trained with a small number of samples. METHOD In this study, we propose UNet_sharp (UNet#), a novel network topology named after the "#" symbol, which combines dense skip connections and full-scale skip connections. In its decoder sub-network, UNet# can effectively integrate feature maps of different scales and capture fine-grained features and coarse-grained semantics across the full range of scales. This enhances the network's understanding of organ and lesion positions and enables accurate boundary segmentation. We employ deep supervision for model pruning to accelerate testing and enable mobile-device deployment, and we construct two classification-guided modules to reduce false positives and improve segmentation accuracy. RESULTS Compared to current UNet-like networks, our proposed method achieves the highest Intersection over Union (IoU) values ((92.67±0.96)%, (92.38±1.29)%, (95.36±1.22)%, (74.01±2.03)%) and F1 scores ((91.64±1.86)%, (95.70±2.16)%, (97.34±2.76)%, (84.77±2.65)%) on the semantic segmentation of nuclei, brain tumors, liver, and lung nodules, respectively.
CONCLUSIONS The experimental results demonstrate that the redesigned skip connections in UNet# successfully incorporate multi-scale contextual semantic information. Compared to most state-of-the-art medical image segmentation models, our proposed method locates organs and lesions more accurately and segments boundaries more precisely.
Affiliation(s)
- Ledan Qian
- College of Mathematics and Physics, Wenzhou University, Wenzhou, 325035, Zhejiang, China
- Caiyun Wen
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325035, Zhejiang, China
- Yi Li
- College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325035, Zhejiang, China
- Zhongyi Hu
- Key Laboratory of Intelligent Image Processing and Analysis, Wenzhou, 325035, Zhejiang, China
- Xiao Zhou
- Information Technology Center, Wenzhou University, Wenzhou, 325035, Zhejiang, China.
- Xiaonyu Xia
- College of Mathematics and Physics, Wenzhou University, Wenzhou, 325035, Zhejiang, China
- Soo-Hyung Kim
- College of AI Convergence, Chonnam National University, Gwangju, 61186, Korea
47
Zhi L, Duan S, Zhang S. Multiple semantic X-ray medical image retrieval using efficient feature vector extracted by FPN. Journal of X-Ray Science and Technology 2024; 32:1297-1313. [PMID: 39031428] [DOI: 10.3233/xst-240069] [Indexed: 07/22/2024]
Abstract
OBJECTIVE Content-based medical image retrieval (CBMIR) has become an important part of computer-aided diagnosis (CAD) systems. The complex medical semantic information inherent in medical images is the greatest obstacle to improving the accuracy of image retrieval, so highly expressive feature vectors play a crucial role in the search process. In this paper, we propose an effective deep convolutional neural network (CNN) model to extract concise feature vectors for multiple-semantic X-ray medical image retrieval. METHODS We build a feature pyramid-based CNN model with a ResNet50V2 backbone to extract multi-level semantic information, and we train and test the proposed model on the well-known public multiple-semantic annotated X-ray medical image dataset IRMA. RESULTS Our method achieves an IRMA error of 32.2, the best score reported on this dataset in the existing literature. CONCLUSIONS The proposed CNN model can effectively extract multi-level semantic information from X-ray medical images. The concise feature vectors improve the retrieval accuracy of multi-semantic and unevenly distributed X-ray medical images.
Affiliation(s)
- Lijia Zhi
- School of Computer Science and Engineering, North Minzu University, Yinchuan, China
- Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, China
- Shaoyong Duan
- School of Computer Science and Engineering, North Minzu University, Yinchuan, China
- Shaomin Zhang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, China
- Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, China
48
Liu B, Song H, Li Q, Lin Y, Weng X, Su Z, Yang J. 3D ARCNN: An Asymmetric Residual CNN for False Positive Reduction in Pulmonary Nodule. IEEE Trans Nanobioscience 2024; 23:18-25. [PMID: 37216265] [DOI: 10.1109/tnb.2023.3278706] [Indexed: 05/24/2023]
Abstract
Lung cancer has the highest morbidity and mortality of all cancers, and detecting cancerous lesions early is essential for reducing mortality rates. Deep learning-based lung nodule detection techniques have shown better scalability than traditional methods. However, pulmonary nodule test results often include a number of false positive outcomes. In this paper, we present a novel asymmetric residual network, 3D ARCNN, that leverages the 3D features and spatial information of lung nodules to improve classification performance. The proposed framework uses an internally cascaded multi-level residual model for fine-grained learning of lung nodule features, and multi-layer asymmetric convolutions to address the problems of large neural network parameter counts and poor reproducibility. We evaluate the framework on the LUNA16 dataset and achieve high detection sensitivities of 91.6%, 92.7%, 93.2%, and 95.8% at 1, 2, 4, and 8 false positives per scan, respectively, with an average CPM index of 0.912. Quantitative and qualitative evaluations demonstrate the superior performance of our framework compared to existing methods. The 3D ARCNN framework can effectively reduce false positive lung nodules in clinical practice.
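The CPM reported above is the LUNA16 Competition Performance Metric: the mean sensitivity at seven fixed false-positive-per-scan operating points (0.125, 0.25, 0.5, 1, 2, 4, 8) on the FROC curve. A minimal sketch, assuming the operating-point sensitivities are already available rather than interpolated from the raw FROC curve (illustrative, not the paper's evaluation code):

```python
def cpm(froc_points, thresholds=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """Competition Performance Metric: average sensitivity at the
    fixed false-positives-per-scan operating points used by LUNA16.
    `froc_points` maps FPs/scan -> sensitivity at that operating point."""
    return sum(froc_points[t] for t in thresholds) / len(thresholds)
```

A real evaluation would first interpolate the FROC curve at each threshold; this sketch only shows the averaging step that produces the single CPM number.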
49
Liu D, Zhao Y, Liu B. The effectiveness of deep learning model in differentiating benign and malignant pulmonary nodules on spiral CT. Technol Health Care 2024; 32:5129-5140. [PMID: 39520159] [PMCID: PMC11613059] [DOI: 10.3233/thc-241079] [Received: 05/13/2024] [Accepted: 07/04/2024] [Indexed: 11/16/2024]
Abstract
BACKGROUND A pulmonary nodule, one of the most common clinical findings, is an irregular, roughly circular lesion with a diameter of ⩽ 3 cm in the lungs, and can be benign or malignant. Differentiating benign from malignant pulmonary nodules is essential in clinical diagnosis. OBJECTIVE To explore the clinical value and diagnostic performance of a deep learning-based lung nodule classification and segmentation algorithm in differentiating benign and malignant pulmonary nodules. METHODS A deep learning model with fine-grained classification for the discrimination of pulmonary nodules (Dr.Wise Lung Analyzer) was used. This study retrospectively enrolled 120 patients with pulmonary nodules detected by chest spiral CT from March 2021 to September 2022 in the radiology department of Ninghai First Hospital. The accuracy, sensitivity, and specificity of the DL-based method and of the physicians were compared, using the pathological results as the gold standard. The ROC curve of the deep learning model was plotted, and the AUCs were calculated. RESULTS The 120 CT examinations contained 81 pathologically diagnosed malignant nodules and 122 benign nodules. The AUCs of the radiologists' diagnostic approach and the DL-based method for differentiating patients were 0.62 and 0.81, respectively; for differentiating benign and malignant pulmonary nodules, they achieved AUCs of 0.75 and 0.90, respectively. The accuracy, sensitivity, and specificity of the deep learning model were 73.33%, 78.75%, and 62.50%, respectively, while those of the physicians' diagnosis were 63.33%, 66.25%, and 57.50%. CONCLUSION There was no significant difference between the diagnosis results of the proposed DL-based method and the radiologists' diagnostic approach in differentiating benign and malignant lung nodules on spiral CT (P > 0.05).
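As a reminder of what the reported AUCs measure: the area under the ROC curve equals the probability that a randomly chosen malignant case receives a higher score than a randomly chosen benign one (the Mann-Whitney formulation). A minimal illustrative sketch of that computation, not the study's implementation:

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the fraction of
    positive/negative pairs where the positive scores higher, counting
    ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.90 therefore means the model ranks a malignant nodule above a benign one in 90% of such pairs.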
Affiliation(s)
- Dongquan Liu
- Radiology Department, Ninghai First Hospital Medicare and Health Group, Ningbo, China
- Yonggang Zhao
- Radiology Department, Ninghai First Hospital Medicare and Health Group, Ningbo, China
- Bangquan Liu
- College of Digital Technology and Engineering, Ningbo University of Finance and Economics, Ningbo, China
50
Zhang L, Shao Y, Chen G, Tian S, Zhang Q, Wu J, Bai C, Yang D. An artificial intelligence-assisted diagnostic system for the prediction of benignity and malignancy of pulmonary nodules and its practical value for patients with different clinical characteristics. Front Med (Lausanne) 2023; 10:1286433. [PMID: 38196835] [PMCID: PMC10774219] [DOI: 10.3389/fmed.2023.1286433] [Received: 08/31/2023] [Accepted: 12/12/2023] [Indexed: 01/11/2024] Open
Abstract
Objectives This study aimed to explore the value of an artificial intelligence (AI)-assisted diagnostic system in the prediction of pulmonary nodules. Methods The AI system made predictions of benign or malignant nodules. 260 cases of solitary pulmonary nodules (SPNs) were divided into 173 malignant cases and 87 benign cases based on the surgical pathological diagnosis. A stratified analysis was applied to compare the diagnostic effectiveness of the AI system across subgroups with different clinical characteristics. Results The accuracy of the AI system in judging the benignity and malignancy of the nodules was 75.77% (p < 0.05). We created an ROC curve by calculating the true positive rate (TPR) and false positive rate (FPR) at different threshold values; the AUC was 0.755. Results of the stratified analysis were as follows. (1) By nodule position: the AUC was 0.677, 0.758, 0.744, 0.982, and 0.725 for nodules in the left upper lobe, left lower lobe, right upper lobe, right middle lobe, and right lower lobe, respectively. (2) By nodule size: the AUC was 0.778, 0.771, and 0.686 for nodules measuring 5-10, 10-20, and 20-30 mm in diameter, respectively. (3) The predictive accuracy was higher for subsolid pulmonary nodules than for solid ones (80.54% vs. 66.67%). Conclusion The AI system can assist in the prediction of benign and malignant pulmonary nodules. It provides a valuable reference, especially for the diagnosis of subsolid nodules and small nodules measuring 5-10 mm in diameter.
Affiliation(s)
- Lichuan Zhang
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Yue Shao
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Guangmei Chen
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Simiao Tian
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Qing Zhang
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Jianlin Wu
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Chunxue Bai
- Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital Fudan University, Shanghai, China
- Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
- Shanghai Respiratory Research Institution, Shanghai, China
- Dawei Yang
- Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital Fudan University, Shanghai, China
- Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
- Shanghai Respiratory Research Institution, Shanghai, China