1. Liu Y, Wang J, Du B, Li Y, Li X. Predicting malignant risk of ground-glass nodules using convolutional neural networks based on dual-time-point 18F-FDG PET/CT. Cancer Imaging 2025;25:17. [PMID: 39966960] [PMCID: PMC11837479] [DOI: 10.1186/s40644-025-00834-8]
Abstract
BACKGROUND Accurately predicting the malignant risk of ground-glass nodules (GGOs) is crucial for precise treatment planning. This study aimed to predict the malignant risk of GGOs using convolutional neural networks based on dual-time-point 18F-FDG PET/CT. METHODS This study retrospectively analyzed 311 patients with 397 GGOs, identifying 118 low-risk and 279 high-risk GGOs through pathology and follow-up according to the new WHO classification. The dataset was randomly divided into a training set of 239 patients (318 lesions) and a testing set of 72 patients (79 lesions). A self-configuring 3D nnU-Net convolutional neural network with a majority-voting method was employed to segment GGOs and predict their malignant risk. Three independent segmentation-prediction models were developed based on thin-section lung CT, early-phase 18F-FDG PET/CT, and dual-time-point 18F-FDG PET/CT, respectively. The results of the dual-time-point 18F-FDG PET/CT model on the testing set were also compared with the diagnostic performance of nuclear medicine physicians. RESULTS The dual-time-point 18F-FDG PET/CT model achieved a Dice coefficient of 0.84 ± 0.02 for GGO segmentation and demonstrated high accuracy (84.81%), specificity (84.62%), sensitivity (84.91%), and AUC (0.85) in predicting malignant risk. The accuracies of the thin-section CT model (73.42%) and the early-phase 18F-FDG PET/CT model (78.48%) were both lower than that of the dual-time-point model. The diagnostic accuracies of resident, junior, and expert physicians were 67.09%, 74.68%, and 78.48%, respectively; the accuracy of the dual-time-point 18F-FDG PET/CT model (84.81%) was significantly higher. CONCLUSIONS Based on dual-time-point 18F-FDG PET/CT images, the 3D nnU-Net with a majority-voting method demonstrates excellent performance in predicting the malignant risk of GGOs and serves as a valuable adjunct for physicians in the risk prediction and assessment of GGOs.
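A minimal sketch of the majority-voting step in Python/NumPy; the fold count, labels, and function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def majority_vote(fold_predictions: np.ndarray) -> np.ndarray:
    """Combine per-fold binary labels of shape (n_folds, n_lesions) by majority vote."""
    votes = fold_predictions.sum(axis=0)                        # 'high-risk' votes per lesion
    return (2 * votes > fold_predictions.shape[0]).astype(int)  # strict majority wins

# Hypothetical predictions from a 5-fold nnU-Net ensemble over 4 lesions
# (1 = high-risk, 0 = low-risk):
preds = np.array([[1, 0, 1, 1],
                  [1, 0, 0, 1],
                  [0, 0, 1, 1],
                  [1, 1, 1, 0],
                  [1, 0, 1, 1]])
print(majority_vote(preds))  # -> [1 0 1 1]
```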
Affiliation(s)
- Yuhang Liu, Jian Wang, Bulin Du, Yaming Li, Xuena Li: Department of Nuclear Medicine, The First Hospital of China Medical University, No. 155 Nanjing St, Shenyang, 110001, China
2. Wang TW, Hong JS, Chiu HY, Chao HS, Chen YM, Wu YT. Standalone deep learning versus experts for diagnosis lung cancer on chest computed tomography: a systematic review. Eur Radiol 2024;34:7397-7407. [PMID: 38777902] [PMCID: PMC11519296] [DOI: 10.1007/s00330-024-10804-6]
Abstract
PURPOSE To compare the diagnostic performance of standalone deep learning (DL) algorithms and human experts in lung cancer detection on chest computed tomography (CT) scans. MATERIALS AND METHODS We searched PubMed, Embase, and Web of Science from their inception until November 2023, focusing on adult lung cancer patients and comparing the efficacy of DL algorithms and expert radiologists in disease diagnosis on CT scans. Quality assessment was performed using QUADAS-2, QUADAS-C, and CLAIM. Bivariate random-effects and subgroup analyses were performed by task (malignancy classification vs invasiveness classification), imaging modality (CT vs low-dose CT [LDCT] vs high-resolution CT), study region, software used, and publication year. RESULTS We included 20 studies on various aspects of lung cancer diagnosis on CT scans. Quantitatively, DL algorithms exhibited higher sensitivity (82%) and specificity (75%) than human experts (sensitivity 81%, specificity 69%); the difference in specificity was statistically significant, whereas the difference in sensitivity was not. DL performance varied across imaging modalities and tasks, demonstrating the need for tailored optimization: DL algorithms matched experts in sensitivity on standard CT while surpassing them in specificity, but showed higher sensitivity with lower specificity on LDCT scans. CONCLUSION DL algorithms demonstrated improved accuracy over human readers in malignancy and invasiveness classification on CT scans. However, their performance varies by imaging modality, underlining the importance of continued research to fully assess the diagnostic effectiveness of DL algorithms in lung cancer. CLINICAL RELEVANCE STATEMENT DL algorithms have the potential to refine lung cancer diagnosis on CT, matching human sensitivity and surpassing it in specificity. These findings call for further DL optimization across imaging modalities, aiming to advance clinical diagnostics and patient outcomes. KEY POINTS Lung cancer diagnosis by CT is challenging and can be improved with AI integration. DL shows higher accuracy in lung cancer detection on CT than human experts. Enhanced DL accuracy could lead to improved lung cancer diagnosis and outcomes.
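The review pooled accuracy with a bivariate random-effects model; as a simplified univariate stand-in, a DerSimonian-Laird pooling of logit-transformed sensitivities can be sketched as follows (the 2x2 study counts are hypothetical).

```python
import numpy as np

def pool_sensitivity_dl(tp, fn):
    """Univariate DerSimonian-Laird random-effects pooling of logit sensitivity."""
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    y = np.log(tp / fn)                  # logit(sensitivity) = log(tp / fn)
    v = 1.0 / tp + 1.0 / fn              # approximate variance of a logit proportion
    w = 1.0 / v                          # fixed-effect weights
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()   # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)            # random-effects weights
    pooled_logit = (w_star * y).sum() / w_star.sum()
    return 1.0 / (1.0 + np.exp(-pooled_logit))  # back-transform to a proportion

# Hypothetical true-positive / false-negative counts from three studies:
print(pool_sensitivity_dl(tp=[80, 45, 120], fn=[20, 15, 25]))
```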
Affiliation(s)
- Ting-Wei Wang: Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Jia-Sheng Hong: Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Hwa-Yen Chiu: Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan; Department of Chest Medicine, Taipei Veteran General Hospital, Taipei, Taiwan
- Heng-Sheng Chao: Department of Chest Medicine, Taipei Veteran General Hospital, Taipei, Taiwan
- Yuh-Min Chen: School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan; Department of Chest Medicine, Taipei Veteran General Hospital, Taipei, Taiwan
- Yu-Te Wu: Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
3. Pan Z, Hu G, Zhu Z, Tan W, Han W, Zhou Z, Song W, Yu Y, Song L, Jin Z. Predicting Invasiveness of Lung Adenocarcinoma at Chest CT with Deep Learning Ternary Classification Models. Radiology 2024;311:e232057. [PMID: 38591974] [DOI: 10.1148/radiol.232057]
Abstract
Background Preoperative discrimination of preinvasive, minimally invasive, and invasive adenocarcinoma at CT informs clinical management decisions but may be challenging for classifying pure ground-glass nodules (pGGNs). Deep learning (DL) may improve ternary classification. Purpose To determine whether a strategy that includes an adjudication approach can enhance the performance of DL ternary classification models in predicting the invasiveness of adenocarcinoma at chest CT and maintain performance in classifying pGGNs. Materials and Methods In this retrospective study, six ternary models for classifying preinvasive, minimally invasive, and invasive adenocarcinoma were developed using a multicenter data set of lung nodules. The DL-based models were progressively modified through framework optimization, joint learning, and an adjudication strategy (simulating a multireader approach to resolving discordant nodule classifications), integrating two binary classification models with a ternary classification model to resolve discordant classifications sequentially. The six ternary models were then tested on an external data set of pGGNs imaged between December 2019 and January 2021. Diagnostic performance including accuracy, specificity, and sensitivity was assessed. The χ2 test was used to compare model performance in different subgroups stratified by clinical confounders. Results A total of 4929 nodules from 4483 patients (mean age, 50.1 years ± 9.5 [SD]; 2806 female) were divided into training (n = 3384), validation (n = 579), and internal (n = 966) test sets. A total of 361 pGGNs from 281 patients (mean age, 55.2 years ± 11.1 [SD]; 186 female) formed the external test set. The proposed strategy improved DL model performance in external testing (P < .001). For classifying minimally invasive adenocarcinoma, the accuracy was 85% and 79%, sensitivity was 75% and 63%, and specificity was 89% and 85% for the model with adjudication (model 6) and the model without (model 3), respectively. Model 6 showed a relatively narrow range (maximum minus minimum) across diagnostic indexes (accuracy, 1.7%; sensitivity, 7.3%; specificity, 0.9%) compared with the other models (accuracy, 0.6%-10.8%; sensitivity, 14%-39.1%; specificity, 5.5%-17.9%). Conclusion Combining framework optimization, joint learning, and an adjudication approach improved DL classification of adenocarcinoma invasiveness at chest CT.
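The exact adjudication rules are described in the paper and its supplement; the sketch below is one plausible reading of letting two binary models arbitrate a ternary model's discordant calls, with all probabilities and the tie-break rule being illustrative assumptions.

```python
import numpy as np

def adjudicate(p3, p_inv, p_iac):
    """Resolve one nodule's class among {0: preinvasive, 1: MIA, 2: IAC}.

    p3    : ternary model softmax, shape (3,)
    p_inv : binary model 1, P(invasive spectrum, i.e., MIA or IAC)
    p_iac : binary model 2, P(IAC | invasive spectrum)
    """
    ternary = int(np.argmax(p3))
    # Express the two binary outputs as an equivalent 3-class distribution.
    p_binary = np.array([1 - p_inv, p_inv * (1 - p_iac), p_inv * p_iac])
    binary = int(np.argmax(p_binary))
    if ternary == binary:          # concordant: keep the ternary call
        return ternary
    # Discordant: a simple tie-break is to average the two distributions.
    return int(np.argmax((np.asarray(p3) + p_binary) / 2))

print(adjudicate([0.40, 0.35, 0.25], p_inv=0.7, p_iac=0.2))  # -> 1 (MIA)
```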
Affiliation(s)
- Zhengsong Pan, Ge Hu, Zhenchen Zhu, Weixiong Tan, Wei Han, Zhen Zhou, Wei Song, Yizhou Yu, Lan Song, Zhengyu Jin: From the Department of Radiology (Z.P., Z. Zhu, W.S., L.S., Z.J.), Medical Research Center (G.H.), State Key Laboratory of Complex Severe and Rare Disease (G.H.), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuaifuyuan, Dongcheng District, Beijing 100730, China; 4 + 4 Medical Doctor Program (Z.P., Z. Zhu), Department of Epidemiology and Health Statistics (W.H.), Institute of Basic Medicine Sciences (W.H.), Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; Deepwise AI Laboratory, Beijing Deepwise & League of PhD Technology, Beijing, China (W.T., Z. Zhou, Y.Y.); and Department of Computer Science, The University of Hong Kong, Hong Kong, China (Y.Y.)
4. Ma L, Wan C, Hao K, Cai A, Liu L. A novel fusion algorithm for benign-malignant lung nodule classification on CT images. BMC Pulm Med 2023;23:474. [PMID: 38012620] [PMCID: PMC10683224] [DOI: 10.1186/s12890-023-02708-w]
Abstract
The accurate recognition of malignant lung nodules on CT images is critical in lung cancer screening, offering patients the best chance of cure and significant reductions in lung cancer mortality. Convolutional neural networks (CNNs) have proven to be a powerful method in medical image analysis. Radiomics enables high-throughput extraction of quantitative features from CT images that reflect expert knowledge of what is diagnostically relevant. Graph convolutional networks explore the global context and perform inference on both graph node features and relational structures. In this paper, we propose a novel fusion algorithm, RGD, for benign-malignant lung nodule classification that incorporates radiomics and graph learning into multiple deep CNNs to form a more complete and distinctive feature representation, and ensembles the predictions for robust decision-making. Evaluated on the publicly available LIDC-IDRI dataset in a 10-fold cross-validation experiment, the proposed method obtained an average accuracy of 93.25%, a sensitivity of 89.22%, a specificity of 95.82%, a precision of 92.46%, an F1 score of 0.9114, and an AUC of 0.9629. Experimental results illustrate that the RGD model achieves superior performance compared with state-of-the-art methods, and the effectiveness of the fusion strategy has been confirmed by extensive ablation studies. In the future, the proposed model, which performs well on pulmonary nodule classification on CT images, will be applied to increase confidence in the clinical diagnosis of lung cancer.
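As a rough sketch of the fusion idea (concatenating radiomic, graph, and deep CNN features, then ensembling the predictions), the following Python code uses random placeholder features throughout; it illustrates the data flow only and is not the RGD implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200  # hypothetical number of nodules

# Stand-ins for the three feature families the RGD model fuses:
radiomic = rng.normal(size=(n, 100))   # handcrafted radiomics features
graph    = rng.normal(size=(n, 32))    # graph-convolution node embeddings
deep     = rng.normal(size=(n, 256))   # deep CNN features
y = rng.integers(0, 2, size=n)         # benign (0) / malignant (1)

fused = np.concatenate([radiomic, graph, deep], axis=1)
clf = LogisticRegression(max_iter=2000).fit(fused[:150], y[:150])

# Ensemble by averaging the fused classifier with (placeholder) CNN-head probabilities.
p_fused = clf.predict_proba(fused[150:])[:, 1]
p_cnn = rng.uniform(size=50)           # stands in for each CNN's own softmax output
p_final = (p_fused + p_cnn) / 2
print((p_final >= 0.5).astype(int)[:10])
```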
Affiliation(s)
- Ling Ma, Chuangye Wan, Kexin Hao, Annan Cai: College of Software, Nankai University, Tianjin, 300350, China
- Lizhi Liu: Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, Guangdong, China
5. Huang W, Deng H, Li Z, Xiong Z, Zhou T, Ge Y, Zhang J, Jing W, Geng Y, Wang X, Tu W, Dong P, Liu S, Fan L. Baseline whole-lung CT features deriving from deep learning and radiomics: prediction of benign and malignant pulmonary ground-glass nodules. Front Oncol 2023;13:1255007. [PMID: 37664069] [PMCID: PMC10470826] [DOI: 10.3389/fonc.2023.1255007]
Abstract
Objective To develop and validate a model for predicting benign and malignant ground-glass nodules (GGNs) based on whole-lung baseline CT features derived from deep learning and radiomics. Methods This retrospective study included 385 pathologically confirmed GGNs from three hospitals. We used 239 GGNs from Hospital 1 as the training and internal validation set, and 115 and 31 GGNs from Hospital 2 and Hospital 3 as external test sets 1 and 2, respectively. An additional 32 stable GGNs from Hospital 3 with more than five years of follow-up were used as external test set 3. We evaluated the clinical and morphological features of GGNs at baseline chest CT and simultaneously extracted whole-lung radiomics features; whole-lung image features were additionally extracted from the baseline CT using a convolutional neural network. We used a back-propagation neural network to construct five prediction models based on different combinations of these features and compared their prediction performance by the area under the receiver operating characteristic curve (AUC). The DeLong test was used to compare differences in AUC between models pairwise. Results The model integrating clinical-morphological features, whole-lung radiomic features, and whole-lung image features (CMRI) performed best among the five models, achieving the highest AUC in the internal validation set, external test set 1, and external test set 2: 0.886 (95% CI: 0.841-0.921), 0.830 (95% CI: 0.749-0.893), and 0.879 (95% CI: 0.712-0.968), respectively. In all three sets, the differences in AUC between the CMRI model and the other models were significant (all P < 0.05). Moreover, the accuracy of the CMRI model in external test set 3 was 96.88%. Conclusion Baseline whole-lung CT features are feasible for predicting whether GGNs are benign or malignant, which is helpful for more refined management of GGNs.
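The study compared AUCs pairwise with the DeLong test; the sketch below substitutes a simple bootstrap test of the AUC difference, which serves the same purpose under weaker assumptions. All names are illustrative, and the labels and model outputs would come from the five models above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, p_a, p_b, n_boot=2000, seed=0):
    """Two-sided bootstrap test for AUC(model A) - AUC(model B).
    (A resampling stand-in for the DeLong test used in the paper.)"""
    rng = np.random.default_rng(seed)
    y, p_a, p_b = map(np.asarray, (y, p_a, p_b))
    observed = roc_auc_score(y, p_a) - roc_auc_score(y, p_b)
    diffs, n = [], len(y)
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:      # need both classes to compute AUC
            continue
        diffs.append(roc_auc_score(y[idx], p_a[idx]) - roc_auc_score(y[idx], p_b[idx]))
    diffs = np.asarray(diffs)
    # p-value: how often the bootstrap difference crosses zero (two-sided)
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, p

# Hypothetical use: labels y plus probabilities from two of the five models.
# diff, p = bootstrap_auc_diff(y, p_cmri, p_radiomics_only)
```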
Affiliation(s)
- Wenjun Huang: Department of Radiology, Changzheng Hospital, Naval Medical University, Shanghai, China; School of Medical Imaging, Weifang Medical University, Weifang, Shandong, China; Department of Radiology, The Second People’s Hospital of Deyang, Deyang, Sichuan, China
- Heng Deng: School of Medicine, Shanghai University, Shanghai, China
- Zhaobin Li: Department of Radiation Oncology, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai, China
- Zhanda Xiong: Department of Artificial Intelligence Medical Imaging, Tron Technology, Shanghai, China
- Taohu Zhou: Department of Radiology, Changzheng Hospital, Naval Medical University, Shanghai, China; School of Medical Imaging, Weifang Medical University, Weifang, Shandong, China
- Yanming Ge: School of Medical Imaging, Weifang Medical University, Weifang, Shandong, China; Medical Imaging Center, Affiliated Hospital of Weifang Medical University, Weifang, Shandong, China
- Jing Zhang, Wenbin Jing: Department of Radiology, The Second People’s Hospital of Deyang, Deyang, Sichuan, China
- Yayuan Geng: Clinical Research Institute, Shukun (Beijing) Technology Co., Ltd., Beijing, China
- Xiang Wang, Wenting Tu, Shiyuan Liu, Li Fan: Department of Radiology, Changzheng Hospital, Naval Medical University, Shanghai, China
- Peng Dong: School of Medical Imaging, Weifang Medical University, Weifang, Shandong, China
6. Possible Bias in Supervised Deep Learning Algorithms for CT Lung Nodule Detection and Classification. Cancers (Basel) 2022;14:3867. [PMID: 36010861] [PMCID: PMC9405732] [DOI: 10.3390/cancers14163867]
Abstract
Simple Summary Artificial intelligence (AI) algorithms can assist clinicians in their daily tasks by automatically detecting and/or classifying nodules in chest CT scans. Bias in such algorithms is one of the reasons their implementation in clinical practice is still not widely adopted, yet there is no published review of the biases these algorithms may contain. This review presents the different types of bias in such algorithms and possible ways to mitigate them; only then will it be possible to ensure that these algorithms work as intended across many different clinical settings. Abstract Artificial intelligence (AI) algorithms for automatic lung nodule detection and classification can assist radiologists in their daily routine of chest CT evaluation. Even though many AI algorithms for these tasks have already been developed, their implementation in the clinical workflow is still largely lacking. Apart from the significant number of false-positive findings, one reason is the bias these algorithms may contain. In this review, the different types of bias that may exist in chest CT AI nodule detection and classification algorithms are listed and discussed, and examples from the literature in which each type of bias occurs are presented, along with ways to mitigate them. Mitigating these biases completely can be very difficult, if not impossible.
7. [Chinese Experts Consensus on Artificial Intelligence Assisted Management for Pulmonary Nodule (2022 Version)]. Zhongguo Fei Ai Za Zhi = Chinese Journal of Lung Cancer 2022;25:219-225. [PMID: 35340198] [PMCID: PMC9051301] [DOI: 10.3779/j.issn.1009-3419.2022.102.08]
Abstract
Low-dose computed tomography (CT) screening for lung cancer has been proven to reduce lung cancer deaths in the screening group compared with the control group. The increasing number of pulmonary nodules detected by CT scans significantly increases the workload of radiologists for scan interpretation. Artificial intelligence (AI) has the potential to increase the efficiency of pulmonary nodule discrimination and has been tested in preliminary studies of nodule management. As more and more AI products are commercialized, this consensus statement was organized in a collaborative effort by the Thoracic Surgery Committee, Department of Simulated Medicine, Wu Jieping Medical Foundation, to aid clinicians in the application of AI-assisted management for pulmonary nodules.
8. Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022;14:1840. [PMID: 35406614] [PMCID: PMC8997734] [DOI: 10.3390/cancers14071840]
Abstract
Simple Summary Pulmonary nodules are considered a sign of bronchogenic carcinoma, and detecting them early can slow their progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) in lung segmentation as well as pulmonary nodule segmentation and classification using computed tomography (CT) scans, published in the last two decades, in addition to the limitations and future prospects of the field. Abstract Pulmonary nodules are the precursors of bronchogenic carcinoma; their early detection facilitates early treatment, which saves lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for the implementation of artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide more applicable and easily used tools. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
Affiliation(s)
- Dalia Fahmy: Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi: Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi, Mohammed Ghazal: Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen, Ali Mahmoud, Ayman El-Baz: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
9.
Abstract
Robot localization is a fundamental capability of all mobile robots. Because of uncertainties in acting and sensing, and environmental factors such as people flocking around robots, there is always the risk that a robot loses its localization. Since robot behaviors very often rely on a reliable position estimate, it is of great interest for the dependability of robot systems that the system knows the state of its localization component. In this paper we present an approach that allows a robot to assess whether its localization is still correct. The approach assumes that the underlying localization is based on a particle filter. We use deep learning to identify temporal patterns in the particles in the case of losing or lost localization. These patterns are then combined with weak classifiers from the particle set and sensor perception for boosted learning of a localization estimator. Through the extraction of features generated by neural networks and their usage for training strong classifiers, the robot's localization accuracy can be estimated. The approach is evaluated in a simulated transport-robot environment where degraded localization is provoked by disturbances caused by dynamic obstacles. Results show that it is possible to monitor the robot's localization accuracy using convolutional as well as recurrent neural networks, and that additional boosting using AdaBoost yields an increase in training accuracy. Thus, this paper directly contributes to the verification of localization performance.
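A minimal sketch of the boosted-monitoring idea, assuming hand-rolled summary features of each particle set feed an AdaBoost classifier; the paper instead learns features with CNNs/RNNs, and all data below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def particle_features(positions: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Summary statistics of one particle set: spatial spread and weight entropy."""
    cov = np.atleast_2d(np.cov(positions.T, aweights=weights))
    spread = np.trace(cov)                     # large spread hints at delocalization
    w = weights / weights.sum()
    entropy = -(w * np.log(w + 1e-12)).sum()   # flat weights also hint at trouble
    return np.array([spread, entropy])

# In practice, each sample would stack particle_features over a short time window;
# here the training data are purely synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))        # e.g., 5-step windows of the 2 features above
y = rng.integers(0, 2, size=300)      # 1 = localization lost, 0 = localization OK
monitor = AdaBoostClassifier(n_estimators=100).fit(X, y)
print(monitor.predict(X[:5]))
```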
10. Silva F, Pereira T, Neves I, Morgado J, Freitas C, Malafaia M, Sousa J, Fonseca J, Negrão E, Flor de Lima B, Correia da Silva M, Madureira AJ, Ramos I, Costa JL, Hespanhol V, Cunha A, Oliveira HP. Towards Machine Learning-Aided Lung Cancer Clinical Routines: Approaches and Open Challenges. J Pers Med 2022;12:480. [PMID: 35330479] [PMCID: PMC8950137] [DOI: 10.3390/jpm12030480]
Abstract
Advancements in the development of computer-aided decision (CAD) systems for clinical routines provide unquestionable benefits in connecting human medical expertise with machine intelligence to achieve better-quality healthcare. Considering the high incidence and mortality associated with lung cancer, the most accurate clinical procedures are needed, and the possibility of using artificial intelligence (AI) tools for decision support is becoming a closer reality. At every stage of the lung cancer clinical pathway, specific obstacles can be identified that motivate the application of innovative AI solutions. This work provides a comprehensive review of the most recent research dedicated to the development of CAD tools using computed tomography images for lung cancer-related tasks. We discuss the major challenges and provide critical perspectives on future directions. Although we focus on lung cancer in this review, we also provide a clearer definition of the path used to integrate AI into healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
Affiliation(s)
- Francisco Silva, Hélder P. Oliveira: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
- Tania Pereira, Joana Morgado, Joana Sousa: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Inês Neves: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
- Mafalda Malafaia, João Fonseca: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Cláudia Freitas, António J. Madureira, Isabel Ramos, Venceslau Hespanhol: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Eduardo Negrão, Beatriz Flor de Lima, Miguel Correia da Silva: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- José Luis Costa: FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal; i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal; IPATIMUP—Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135 Porto, Portugal
- António Cunha: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; UTAD—University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
11. Li Y, Chen D, Wu X, Yang W, Chen Y. A narrative review of artificial intelligence-assisted histopathologic diagnosis and decision-making for non-small cell lung cancer: achievements and limitations. J Thorac Dis 2022;13:7006-7020. [PMID: 35070383] [PMCID: PMC8743410] [DOI: 10.21037/jtd-21-806]
Abstract
Objective To summarize the current evidence regarding the applications, workflow, and limitations of artificial intelligence (AI) in the management of patients pathologically diagnosed with lung cancer. Background Lung cancer is one of the most common cancers and the leading cause of cancer-related deaths worldwide. AI technologies have been applied to the daily medical workflow and have achieved excellent performance in predicting histopathologic subtypes, analyzing gene mutation profiles, and assisting in clinical decision-making for lung cancer treatment. More advanced deep learning for classifying pathologic images with minimal human interaction has been developed in addition to the conventional machine learning scheme. Methods Studies were identified by searching databases, including PubMed, EMBASE, Web of Science, and the Cochrane Library, up to February 2021, without language restrictions. Conclusions A number of studies have evaluated AI pipelines and confirmed that AI is robust and efficacious in lung cancer diagnosis and decision-making, demonstrating that AI models are a useful tool for assisting oncologists in health management. Although several limitations that hinder the widespread use of AI schemes persist, the unceasing refinement of AI techniques is poised to overcome such problems. Thus, AI technology is a promising tool for diagnosing and managing lung cancer.
Affiliation(s)
- Yongzhong Li, Xuejie Wu, Wentao Yang, Yongbing Chen: Department of Thoracic Surgery, the Second Affiliated Hospital of Soochow University, Suzhou, China
- Donglai Chen: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University, School of Medicine, Shanghai, China
12. Wang J, Yuan C, Han C, Wen Y, Lu H, Liu C, She Y, Deng J, Li B, Qian D, Chen C. IMAL-Net: Interpretable multi-task attention learning network for invasive lung adenocarcinoma screening in CT images. Med Phys 2021;48:7913-7929. [PMID: 34674280] [DOI: 10.1002/mp.15293]
Abstract
PURPOSE Feature maps created from deep convolutional neural networks (DCNNs) have been widely used for visual explanation of DCNN-based classification tasks. However, many clinical applications, such as benign-malignant classification of lung nodules, normally require quantitative and objective interpretability rather than just visualization. In this paper, we propose a novel interpretable multi-task attention learning network named IMAL-Net for early invasive adenocarcinoma screening in chest computed tomography images, which takes advantage of a segmentation prior to assist interpretable classification. METHODS Two sub-ResNets are first integrated via a prior-attention mechanism for simultaneous nodule segmentation and invasiveness classification. Numerous radiomic features from the segmentation results are then concatenated with high-level semantic features from the classification subnetwork by fully connected (FC) layers to achieve superior performance. Meanwhile, an end-to-end feature selection mechanism (named FSM) is designed to quantify the crucial radiomic features that most affect the prediction for each sample, thereby providing clinically applicable interpretability of the prediction result. RESULTS Nodule samples from a total of 1626 patients were collected from two grade-A hospitals for large-scale verification. Five-fold cross-validation demonstrated that the proposed IMAL-Net can achieve an AUC of 93.8% ± 1.1% and a recall of 93.8% ± 2.8% for the identification of invasive lung adenocarcinoma. CONCLUSIONS Fusing semantic features and radiomic features achieves clear improvements in the invasiveness classification task. Moreover, by learning more fine-grained semantic features and highlighting the most important radiomic features, the proposed attention and FSM mechanisms not only further improve performance but can also be used for both visual explanation and objective analysis of the classification results.
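A gated fusion head along these lines can loosely mirror the FSM idea: a learned per-feature sigmoid gate scales the radiomic features before concatenation, so the gate values indicate which radiomic features the model relies on. This PyTorch sketch is illustrative, with assumed feature sizes, and is not the IMAL-Net code.

```python
import torch
import torch.nn as nn

class GatedFusionHead(nn.Module):
    """Fuses deep semantic features with radiomic features through a per-feature
    sigmoid gate; gate values near 1 mark radiomic features the model relies on."""
    def __init__(self, n_deep=256, n_radiomic=100, n_classes=2):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(n_radiomic))   # learned selection scores
        self.fc = nn.Linear(n_deep + n_radiomic, n_classes)

    def forward(self, deep_feat, radiomic_feat):
        g = torch.sigmoid(self.gate)                  # (n_radiomic,)
        fused = torch.cat([deep_feat, radiomic_feat * g], dim=1)
        return self.fc(fused), g                      # logits + interpretable gates

head = GatedFusionHead()
logits, gates = head(torch.randn(4, 256), torch.randn(4, 100))
print(logits.shape, gates.topk(5).indices)  # top-5 'selected' radiomic features
```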
Affiliation(s)
- Jun Wang, Cheng Yuan, Can Han, Yaofeng Wen, Dahong Qian: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hongbing Lu: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Chen Liu: Department of Radiology, Southwest Hospital, Third Military University (Army Medical University), Chongqing, China
- Yunlang She, Jiajun Deng, Chang Chen: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Biao Li: Department of Nuclear Medicine, Ruijin Hospital, Shanghai, China
13. Pu J, Sechrist J, Meng X, Leader JK, Sciurba FC. A pilot study: Quantify lung volume and emphysema extent directly from two-dimensional scout images. Med Phys 2021;48:4316-4325. [PMID: 34077564] [DOI: 10.1002/mp.15019]
Abstract
PURPOSE To investigate the potential to compute volume metrics of emphysema from planar scout images. Successful implementation of this concept would have wide impact in different fields and, specifically, would maximize the diagnostic potential of planar medical images. METHODS We investigated our premise using a well-characterized chronic obstructive pulmonary disease (COPD) cohort in which planar scout images from computed tomography (CT) scans were used to compute lung volume and percentage of emphysema. Lung volume and percentage of emphysema were quantified on the volumetric CT images and used as the "ground truth" for developing models to compute the same variables from the corresponding scout images. We trained two classical convolutional neural networks (CNNs), VGG19 and InceptionV3, to compute lung volume and percentage of emphysema from the scout images. The scout images (n = 1446) were split at the subject level into training (n = 1235), internal validation (n = 99), and independent test (n = 112) subsets in a ratio of 8:1:1. Mean absolute difference (MAD) and R-squared (R2) were the performance metrics used to evaluate the prediction models. RESULTS The lung volumes and percentages of emphysema computed from a single planar scout image were significantly linearly correlated with the measures quantified using volumetric CT images (VGG19: R2 = 0.934 for lung volume and R2 = 0.751 for emphysema percentage; InceptionV3: R2 = 0.977 for lung volume and R2 = 0.775 for emphysema percentage). The MADs for lung volume and percentage of emphysema were 0.302 ± 0.247 L and 2.89 ± 2.58%, respectively, for VGG19, and 0.366 ± 0.287 L and 3.19 ± 2.14%, respectively, for InceptionV3. CONCLUSIONS These promising results demonstrate the feasibility of inferring volume metrics from planar images using CNNs.
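A minimal PyTorch sketch of the regression setup, assuming a VGG19 backbone repurposed for a single-channel scout image and one continuous output; the L1 loss matches the MAD metric, while the learning rate, input size, and volume values are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

# Repurpose an (untrained here) VGG19 to regress lung volume from a scout image.
backbone = models.vgg19(weights=None)
backbone.features[0] = nn.Conv2d(1, 64, kernel_size=3, padding=1)  # grayscale input
backbone.classifier[6] = nn.Linear(4096, 1)                        # single regression output

criterion = nn.L1Loss()   # mean absolute error, matching the paper's MAD metric
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

x = torch.randn(2, 1, 224, 224)          # dummy batch of scout images
target = torch.tensor([[4.8], [5.6]])    # lung volumes in liters (made-up values)
loss = criterion(backbone(x), target)
loss.backward()
optimizer.step()
print(float(loss))
```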
Affiliation(s)
- Jiantao Pu: Department of Radiology, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Jacob Sechrist, Xin Meng, Joseph K Leader: Department of Radiology, University of Pittsburgh, Pittsburgh, PA, USA
- Frank C Sciurba: Department of Medicine, Division of Pulmonary, Allergy and Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
14. Gong J, Liu J, Li H, Zhu H, Wang T, Hu T, Li M, Xia X, Hu X, Peng W, Wang S, Tong T, Gu Y. Deep Learning-Based Stage-Wise Risk Stratification for Early Lung Adenocarcinoma in CT Images: A Multi-Center Study. Cancers (Basel) 2021;13:3300. [PMID: 34209366] [PMCID: PMC8269183] [DOI: 10.3390/cancers13133300]
Abstract
Simple Summary Predicting the malignancy and invasiveness of ground glass nodules (GGNs) from computed tomography images is a crucial task for radiologists in the risk stratification of early-stage lung adenocarcinoma. To address this challenge, a two-stage deep neural network (DNN) was developed based on images collected from four centers, and a multi-reader multi-case observer study was conducted to evaluate the model's capability. The performance of the model was comparable to, or even more accurate than, that of senior radiologists, with average area under the curve (AUC) values of 0.76 and 0.95 for the two tasks, respectively. Findings suggest (1) a positive trend between diagnostic performance and radiologist experience, (2) that the DNN yielded performance equivalent to or even higher than senior radiologists, and (3) that low image resolution reduced model performance in predicting the risks of GGNs. Abstract This study aims to develop a two-stage deep neural network (DNN)-based risk stratification model for early lung adenocarcinoma in CT images and to investigate its performance compared with practicing radiologists. A total of 2393 GGNs were retrospectively collected from 2105 patients in four centers; all pathologic results were obtained from surgically resected specimens. A two-stage deep neural network was developed based on a 3D residual network and an atrous convolution module to diagnose benign versus malignant GGNs (Task 1) and to classify malignant GGNs as invasive adenocarcinoma (IA) versus non-IA (Task 2). A multi-reader multi-case observer study with six board-certified radiologists (average experience 11 years, range 2-28 years) was conducted to evaluate the model. The DNN yielded AUC values of 0.76 ± 0.03 (95% confidence interval (CI): 0.69-0.82) for Task 1 and 0.96 ± 0.02 (95% CI: 0.92-0.98) for Task 2, equivalent to or higher than the radiologists in the senior group, whose average AUC values were 0.76 and 0.95, respectively (p > 0.05). As CT slice thickness increased from 1.15 ± 0.36 mm to 1.73 ± 0.64 mm, DNN performance decreased by 0.08 and 0.22 (AUC) on the two tasks. The results demonstrated (1) a positive trend between diagnostic performance and radiologist experience, (2) that the DNN yielded performance equivalent to or even higher than senior radiologists, and (3) that low image resolution decreased model performance in predicting the risks of GGNs. Once tested prospectively in clinical practice, the DNN could have the potential to assist doctors in the precision diagnosis and treatment of early lung adenocarcinoma.
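A building block combining the two ingredients named above, 3D residual connections and atrous (dilated) convolution, can be sketched as follows in PyTorch; the channel count and dilation are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AtrousResBlock3D(nn.Module):
    """A 3D residual block whose first convolution is dilated (atrous),
    enlarging the receptive field without extra downsampling."""
    def __init__(self, channels=32, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # residual connection preserves the input signal

block = AtrousResBlock3D()
print(block(torch.randn(1, 32, 16, 64, 64)).shape)  # torch.Size([1, 32, 16, 64, 64])
```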
Affiliation(s)
- Jing Gong, Haiming Li, Hui Zhu, Tingting Wang, Tingdan Hu, Menglei Li, Weijun Peng, Shengping Wang, Tong Tong, Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Jiyu Liu: Department of Radiology, Shanghai Pulmonary Hospital, 507 Zheng Min Road, Shanghai 200433, China
- Xianwu Xia: Department of Radiology, Municipal Hospital Affiliated to Taizhou University, Taizhou 318000, China
- Xianfang Hu: Department of Radiology, Huzhou Central Hospital Affiliated Central Hospital of Huzhou University, 1558 Sanhuan North Road, Huzhou 313000, China
15. Chen W, Lei X, Chakrabortty R, Chandra Pal S, Sahana M, Janizadeh S. Evaluation of different boosting ensemble machine learning models and novel deep learning and boosting framework for head-cut gully erosion susceptibility. J Environ Manage 2021;284:112015. [PMID: 33515838] [DOI: 10.1016/j.jenvman.2021.112015]
Abstract
The objective of this study is to assess head-cut gully erosion susceptibility and identify gully-erosion-prone areas in the Meimand watershed, Iran. In recent years this area has been greatly affected by several head-cut gullies due to unusual climatic factors and human-induced activity. The present study therefore develops head-cut gully erosion prediction maps using boosting ensemble machine learning algorithms, namely Boosted Tree (BT), Boosted Generalized Linear Models (BGLM), Boosted Regression Trees (BRT), Extreme Gradient Boosting (XGB), and Deep Boost (DB). We first produced a gully erosion inventory map using a variety of resources, including published reports, Google Earth images, and Global Positioning System (GPS) field records, then randomly selected 70% (102) of the mapped gullies for training and reserved the remaining 30% (43) for validation. The methodology used morphometric and thematic determinants comprising 14 head-cut gully erosion conditioning factors. We also investigated (a) multi-collinearity among the independent variables, (b) the predictive capability of the models on the training and test datasets, and (c) the importance of the variables affecting head-cut gully erosion. The study reveals that altitude, land use, distance from roads, and soil characteristics had the greatest influence on head-cut gully erosion susceptibility. We present five head-cut gully erosion susceptibility maps and assess their predictive accuracy through the area under the curve (AUC). The AUC test reveals that the DB machine learning method demonstrated significantly higher accuracy (AUC = 0.95) than the BT (AUC = 0.93), BGLM (AUC = 0.91), BRT (AUC = 0.94), and XGB (AUC = 0.92) approaches. The predicted head-cut gully erosion susceptibility maps can be used by policy makers and local authorities for soil conservation and to prevent threats to human activities.
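A minimal sketch of one such boosting model with AUC evaluation, using scikit-learn's gradient boosting as a stand-in for the five boosting variants; the 145 samples and 14 factors match the abstract's counts, but the feature values below are random placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder conditioning factors (altitude, land use, distance to road, ...)
X = rng.normal(size=(145, 14))          # 145 mapped gullies, 14 factors
y = rng.integers(0, 2, size=145)        # presence/absence of head-cut erosion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
# Factor ranking, analogous to the paper's variable-importance analysis:
print(np.argsort(model.feature_importances_)[::-1][:5])
```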
Collapse
Affiliation(s)
- Wei Chen
- College of Geology & Environment, Xi'an University of Science and Technology, Xi'an, 710054, Shaanxi, China; Key Laboratory of Coal Resources Exploration and Comprehensive Utilization, Ministry of Natural Resources, Xi'an, 710021, China.
| | - Xinxiang Lei
- College of Geology & Environment, Xi'an University of Science and Technology, Xi'an, 710054, Shaanxi, China.
| | | | | | - Mehebub Sahana
- Research Associate, School of Environment, Education & Development, University of Manchester, UK.
| | - Saeid Janizadeh
- Department of Watershed Management Engineering and Sciences, Faculty in Natural Resources and Marine Science, Tarbiat Modares University, Tehran, 14115-111, Iran.
| |
Collapse
|
16
|
Wang D, Zhang T, Li M, Bueno R, Jayender J. 3D deep learning based classification of pulmonary ground glass opacity nodules with automatic segmentation. Comput Med Imaging Graph 2021; 88:101814. [PMID: 33486368 PMCID: PMC8111799 DOI: 10.1016/j.compmedimag.2020.101814] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Revised: 09/10/2020] [Accepted: 10/23/2020] [Indexed: 01/15/2023]
Abstract
Classifying ground-glass lung nodules (GGNs) into atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IAC) on diagnostic CT images is important for evaluating the therapy options for lung cancer patients. In this paper, we propose a joint deep learning model in which segmentation facilitates the classification of pulmonary GGNs. Based on our observation that masking the nodule during training results in better lesion classification, we propose a cascade architecture with both segmentation and classification networks. The segmentation model works as a trainable preprocessing module that applies a classification-guided 'attention' weight map to the raw CT data to achieve better diagnostic performance. We evaluate the proposed model against baseline models on four clinically significant nodule classification tasks, defined by combinations of pathology types, using four classification metrics: accuracy, average F1 score, Matthews correlation coefficient (MCC), and area under the receiver operating characteristic curve (AUC). Experimental results show that the proposed method outperforms the baseline models on all diagnostic classification tasks.
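To make the cascade concrete, the following is a deliberately minimal PyTorch sketch of the idea as we read it from the abstract: a segmentation network produces a soft mask that re-weights the raw CT volume before classification. The module names, layer sizes, and sigmoid soft mask are our own assumptions, not the paper's released implementation.

```python
# Illustrative sketch (assumptions, not the paper's code): segmentation acts
# as a trainable preprocessing module whose soft mask weights the raw CT input.
import torch
import torch.nn as nn

class SegThenClassify(nn.Module):
    def __init__(self, n_classes=4):  # AAH / AIS / MIA / IAC
        super().__init__()
        # Toy stand-in for the 3D segmentation network.
        self.seg = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 1),
        )
        # Toy stand-in for the 3D classification network.
        self.cls = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, ct):                  # ct: (B, 1, D, H, W)
        mask = torch.sigmoid(self.seg(ct))  # soft 'attention' weight map
        return self.cls(ct * mask), mask    # the masked CT drives classification

logits, mask = SegThenClassify()(torch.randn(1, 1, 32, 64, 64))
```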
Collapse
Affiliation(s)
- Duo Wang
- Department of Automation, Tsinghua University, Beijing 100084, China; Department of Radiology, Brigham and Women's Hospital, Boston 02115, USA.
| | - Tao Zhang
- Department of Automation, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.
| | - Ming Li
- Department of Radiology, Huadong Hospital affiliated to Fudan University, Shanghai 200040, China.
| | - Raphael Bueno
- Department of Thoracic Surgery, Brigham and Women's Hospital, Boston 02115, USA; Harvard Medical School, Boston 02115, USA.
| | - Jagadeesan Jayender
- Department of Radiology, Brigham and Women's Hospital, Boston 02115, USA; Harvard Medical School, Boston 02115, USA.
| |
Collapse
|
17
|
Ashraf SF, Yin K, Meng CX, Wang Q, Wang Q, Pu J, Dhupar R. Predicting benign, preinvasive, and invasive lung nodules on computed tomography scans using machine learning. J Thorac Cardiovasc Surg 2021; 163:1496-1505.e10. [PMID: 33726909 DOI: 10.1016/j.jtcvs.2021.02.010] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/31/2020] [Revised: 01/28/2021] [Accepted: 02/02/2021] [Indexed: 12/17/2022]
Abstract
OBJECTIVE The study objective was to investigate whether machine learning algorithms can predict whether a lung nodule is benign, adenocarcinoma, or a preinvasive adenocarcinoma subtype from computed tomography images alone. METHODS A dataset of chest computed tomography scans containing lung nodules, together with their pathologic diagnoses, was collected from several sources. The dataset was split randomly into training (70%), internal validation (15%), and independent test (15%) sets at the patient level. Two machine learning algorithms were developed, trained, and validated: the first used a support vector machine model, and the second used deep learning technology, namely a convolutional neural network. Receiver operating characteristic analysis was used to evaluate classification performance on the test dataset. RESULTS The support vector machine/convolutional neural network-based models classified nodules into 6 categories, yielding areas under the curve of 0.59/0.65 for atypical adenomatous hyperplasia versus adenocarcinoma in situ, 0.87/0.86 for minimally invasive adenocarcinoma versus invasive adenocarcinoma, 0.76/0.72 for atypical adenomatous hyperplasia + adenocarcinoma in situ versus minimally invasive adenocarcinoma, 0.89/0.87 for atypical adenomatous hyperplasia + adenocarcinoma in situ versus minimally invasive adenocarcinoma + invasive adenocarcinoma, and 0.93/0.92 for atypical adenomatous hyperplasia + adenocarcinoma in situ + minimally invasive adenocarcinoma versus invasive adenocarcinoma. Classifying benign versus atypical adenomatous hyperplasia + adenocarcinoma in situ + minimally invasive adenocarcinoma versus invasive adenocarcinoma resulted in micro-average areas under the curve of 0.93/0.94 for the support vector machine/convolutional neural network models, respectively. The convolutional neural network-based methods had higher sensitivities than the support vector machine-based methods but lower specificities and accuracies. CONCLUSIONS The machine learning algorithms demonstrated reasonable performance in differentiating benign versus preinvasive versus invasive adenocarcinoma from computed tomography images alone, although prediction accuracy varied across subtypes. This holds potential for improved diagnostic capabilities through less-invasive means.
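For readers unfamiliar with the micro-averaged AUC reported for the three-way benign versus preinvasive versus invasive task, the sketch below shows one standard way to compute it with scikit-learn; the SVM configuration and synthetic features are placeholders, not the study's pipeline.

```python
# Sketch: micro-averaged AUC for a 3-class task (e.g., benign vs.
# preinvasive vs. invasive) from an SVM with probability outputs.
# Data, features, and hyperparameters are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)

svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
scores = svm.predict_proba(X_te)
# Binarize the labels so each class contributes to one pooled ROC curve.
micro_auc = roc_auc_score(label_binarize(y_te, classes=[0, 1, 2]),
                          scores, average="micro")
print(f"micro-average AUC: {micro_auc:.2f}")
```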
Collapse
Affiliation(s)
- Syed Faaz Ashraf
- Department of Cardiothoracic Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pa
| | - Ke Yin
- Department of Radiology, The Affiliated Zhongshan Hospital of Dalian University, Dalian, China
| | | | - Qi Wang
- Department of Radiology, The Fourth Hospital of Hebei Medical University, Hebei, China
| | - Qiong Wang
- Department of Radiology, The Affiliated Zhongshan Hospital of Dalian University, Dalian, China
| | - Jiantao Pu
- Department of Radiology, University of Pittsburgh, Pittsburgh, Pa; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pa
| | - Rajeev Dhupar
- Department of Cardiothoracic Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pa; VA Pittsburgh Healthcare System, Pittsburgh, Pa.
| |
Collapse
|
18
|
Wang J, Bao Y, Wen Y, Lu H, Luo H, Xiang Y, Li X, Liu C, Qian D. Prior-Attention Residual Learning for More Discriminative COVID-19 Screening in CT Images. IEEE Trans Med Imaging 2020; 39:2572-2583. [PMID: 32730210 DOI: 10.1109/tmi.2020.2994908] [Citation(s) in RCA: 116] [Impact Index Per Article: 23.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
We propose a conceptually simple framework for fast COVID-19 screening in 3D chest CT images. The framework can efficiently predict whether a CT scan contains pneumonia while simultaneously distinguishing COVID-19 from interstitial lung disease (ILD) caused by other viruses. In the proposed method, two 3D-ResNets are coupled into a single model for these two tasks via a novel prior-attention strategy. We extend residual learning with the proposed prior-attention mechanism and design a new prior-attention residual learning (PARL) block. The model can be easily built by stacking PARL blocks and trained end-to-end using multi-task losses. More specifically, one 3D-ResNet branch is trained as a binary classifier using lung images with and without pneumonia so that it can highlight lesion areas within the lungs. Simultaneously, inside the PARL blocks, prior-attention maps are generated from this branch and used to guide the other branch in learning more discriminative representations for pneumonia-type classification. Experimental results demonstrate that the proposed framework significantly improves COVID-19 screening performance, achieving state-of-the-art results compared with other methods. Moreover, the proposed method can be easily extended to other similar clinical applications, such as computer-aided detection and diagnosis of pulmonary nodules in CT images or glaucoma lesions in retinal fundus images.
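A hedged PyTorch sketch of how such a prior-attention coupling might look follows; the modulation F * (1 + A) and all layer choices are our reading of the abstract, not the authors' released code.

```python
# Rough sketch of a PARL-style block (our interpretation): the detection
# branch yields an attention map that guides the classification branch.
import torch
import torch.nn as nn

class PriorAttentionBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        def res_path():  # residual-path template shared by both branches
            return nn.Sequential(
                nn.Conv3d(ch, ch, 3, padding=1), nn.BatchNorm3d(ch), nn.ReLU(),
                nn.Conv3d(ch, ch, 3, padding=1), nn.BatchNorm3d(ch),
            )
        self.det = res_path()            # pneumonia-detection branch
        self.cls = res_path()            # pneumonia-type classification branch
        self.attn = nn.Conv3d(ch, 1, 1)  # produces the prior-attention map

    def forward(self, f_det, f_cls):
        f_det = torch.relu(f_det + self.det(f_det))  # standard residual learning
        a = torch.sigmoid(self.attn(f_det))          # prior-attention map A
        # Classification features modulated by (1 + A) inside the residual path.
        f_cls = torch.relu(f_cls + self.cls(f_cls) * (1 + a))
        return f_det, f_cls

f = torch.randn(1, 8, 16, 32, 32)
f_det, f_cls = PriorAttentionBlock(8)(f, f.clone())
```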
Collapse
|