1. Fang C, Li X, Yang Y. Unsupervised non-small cell lung cancer tumor segmentation using cycled generative adversarial network with similarity-based discriminator. J Appl Clin Med Phys 2025:e70107. [PMID: 40266997; DOI: 10.1002/acm2.70107]
Abstract
BACKGROUND Tumor segmentation is crucial for lung disease diagnosis and treatment. Most existing deep learning-based automatic segmentation methods rely on manually annotated data for network training. PURPOSE This study aims to develop an unsupervised tumor segmentation network, smic-GAN, using a similarity-driven generative adversarial network trained with a cycle strategy. The proposed method does not rely on any manual annotations and thus reduces the training data preparation workload. METHODS A total of 609 CT scans of lung cancer patients were collected, of which 504 were used for training, 35 for validation, and 70 for testing. Smic-GAN was developed and trained to transform lung CT slices with tumors into synthetic images without tumors. Residual images were obtained by subtracting the synthetic images from the original CT slices. Thresholding, 3D median filtering, and morphological erosion and dilation operations were applied to generate binary tumor masks from the residual images. Dice similarity, positive predictive value (PPV), sensitivity (SEN), 95% Hausdorff distance (HD95), and average surface distance (ASD) were used to evaluate the accuracy of tumor contouring. RESULTS The smic-GAN method achieved performance comparable to two supervised methods, UNet and Incre-MRRN, and outperformed the unsupervised cycle-GAN. The Dice value for smic-GAN was significantly better than that of cycle-GAN (74.5% ± 11.2% vs. 69.1% ± 16.0%, p < 0.05). The PPV for smic-GAN, UNet, and Incre-MRRN was 83.8% ± 21.5%, 75.1% ± 19.7%, and 78.2% ± 16.6%, respectively. The HD95 was 10.3 ± 7.7, 14.5 ± 14.6, and 6.2 ± 4.0 mm, respectively. The ASD was 3.7 ± 2.7, 4.8 ± 3.8, and 2.4 ± 1.8 mm, respectively. CONCLUSION The proposed smic-GAN performs comparably to the existing supervised methods UNet and Incre-MRRN. It does not rely on any manual annotations and can reduce the workload of training data preparation. It can also provide a good starting point for manual annotation in the training of supervised networks.
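The post-processing chain described here (residual thresholding, 3D median filtering, and morphological cleanup) is simple enough to sketch. Below is a minimal Python version; the intensity threshold and the 3x3x3 structuring element are illustrative assumptions, not settings reported by the paper.

```python
import numpy as np
from scipy import ndimage

def residual_to_mask(ct_volume, synthetic_volume, threshold=100.0):
    """Binary tumor mask from a tumor-removed synthetic volume.

    Steps follow the abstract: thresholding, 3D median filtering, then
    morphological erosion and dilation. The threshold and the 3x3x3
    structuring element are illustrative, not the paper's values.
    """
    residual = ct_volume - synthetic_volume                # tumor signal remains
    mask = residual > threshold                            # thresholding
    mask = ndimage.median_filter(mask.astype(np.uint8), size=3) > 0  # 3D median filter
    structure = np.ones((3, 3, 3), dtype=bool)
    mask = ndimage.binary_erosion(mask, structure=structure)   # drop speckle
    mask = ndimage.binary_dilation(mask, structure=structure)  # restore extent
    return mask

ct = np.random.normal(0, 20, size=(32, 64, 64))
ct[10:20, 20:40, 20:40] += 300                             # toy "tumor"
print(residual_to_mask(ct, np.zeros_like(ct)).sum())
```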
Affiliation(s)
- Chengyijue Fang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Xiaoyang Li
- Department of Radiation Oncology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Yidong Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Department of Radiation Oncology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China

2. Ahmad I, Anwar SJ, Hussain B, Ur Rehman A, Bermak A. Anatomy guided modality fusion for cancer segmentation in PET CT volumes and images. Sci Rep 2025; 15:12153. [PMID: 40204866; PMCID: PMC11982402; DOI: 10.1038/s41598-025-95757-6]
Abstract
Segmentation in computed tomography (CT) provides detailed anatomical information, while positron emission tomography (PET) provides the metabolic activity of cancer. Existing segmentation models for CT and PET rely either on early fusion, which struggles to effectively capture independent features from each modality, or on late fusion, which is computationally expensive and fails to leverage the complementary nature of the two modalities. This research addresses this gap by proposing an intermediate fusion approach that optimally balances the strengths of both modalities. Our method leverages anatomical features to guide the fusion process while preserving spatial representation quality. We achieve this through separate encoding of anatomical and metabolic features followed by an attentive fusion decoder. Unlike traditional fixed normalization techniques, we introduce novel "zero layers" with learnable normalization. The proposed intermediate fusion reduces the number of filters, resulting in a lightweight model. Our approach demonstrates superior performance, achieving a Dice score of 0.8184 and an [Formula: see text] score of 2.31. The implications of this study include more precise tumor delineation, leading to enhanced cancer diagnosis and more effective treatment planning.
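The separate-encoders-plus-attentive-fusion idea can be illustrated with a toy PyTorch module. The channel sizes and the sigmoid gating below are assumptions standing in for the paper's architecture, not a reproduction of it.

```python
import torch
import torch.nn as nn

class IntermediateFusionSeg(nn.Module):
    """Minimal sketch of intermediate PET/CT fusion: separate encoders for
    anatomical (CT) and metabolic (PET) features, fused by a small attention
    gate before a shared decoder. All sizes are illustrative assumptions."""

    def __init__(self, ch=16):
        super().__init__()
        self.ct_enc = nn.Sequential(nn.Conv3d(1, ch, 3, padding=1), nn.ReLU())
        self.pet_enc = nn.Sequential(nn.Conv3d(1, ch, 3, padding=1), nn.ReLU())
        # Anatomy-guided attention: CT features gate the PET features.
        self.gate = nn.Sequential(nn.Conv3d(ch, ch, 1), nn.Sigmoid())
        self.decoder = nn.Conv3d(2 * ch, 1, 1)  # fused features -> tumor logits

    def forward(self, ct, pet):
        a = self.ct_enc(ct)                      # anatomical features
        m = self.pet_enc(pet)                    # metabolic features
        fused = torch.cat([a, self.gate(a) * m], dim=1)
        return self.decoder(fused)

ct, pet = torch.randn(1, 1, 16, 32, 32), torch.randn(1, 1, 16, 32, 32)
print(IntermediateFusionSeg()(ct, pet).shape)  # torch.Size([1, 1, 16, 32, 32])
```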
Affiliation(s)
- Ibtihaj Ahmad
- Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, People's Republic of China
- School of Public Health, Shandong University, Jinan, Shandong, People's Republic of China
- Sadia Jabbar Anwar
- Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, People's Republic of China
- Bagh Hussain
- Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, People's Republic of China
- Atiq Ur Rehman
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Amine Bermak
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar

3. Hossain MS, Basak N, Mollah MA, Nahiduzzaman M, Ahsan M, Haider J. Ensemble-based multiclass lung cancer classification using hybrid CNN-SVD feature extraction and selection method. PLoS One 2025; 20:e0318219. [PMID: 40106514; PMCID: PMC11922248; DOI: 10.1371/journal.pone.0318219]
Abstract
Lung cancer (LC) is a leading cause of cancer-related fatalities worldwide, underscoring the urgency of early detection for improved patient outcomes. The main objective of this research is to harness novel artificial intelligence strategies to identify and classify lung cancers more precisely from CT scan images at an early stage. This study introduces a new lung cancer detection method built on Convolutional Neural Networks (CNNs) and customized for binary and multiclass classification using a publicly available dataset of chest CT scan images of lung cancer. The main contribution of this research lies in its hybrid CNN-SVD (Singular Value Decomposition) method and its robust voting ensemble approach, which together yield superior accuracy and mitigate potential errors. By employing contrast-limited adaptive histogram equalization (CLAHE), contrast-enhanced images were generated with minimal noise and prominent distinctive features. Subsequently, a CNN-SVD-Ensemble model was implemented to extract important features and reduce dimensionality. The extracted features were then processed by a set of ML algorithms along with a voting ensemble approach. Additionally, Gradient-weighted Class Activation Mapping (Grad-CAM) was integrated as an explainable AI (XAI) technique to enhance model transparency by highlighting the key influencing regions in the CT scans, which improved interpretability and supported reliable, trustworthy results for clinical applications. This research achieved state-of-the-art results, with an accuracy, AUC, precision, recall, F1 score, Cohen's Kappa, and Matthews Correlation Coefficient (MCC) of 99.49%, 99.73%, 100%, 99%, 99%, 99.15%, and 99.16%, respectively, addressing prior research gaps and setting a new benchmark in the field. Furthermore, in binary classification, all performance indicators attained a perfect score of 100%. The robustness of the suggested approach offers more reliable and impactful insights for the medical field, improving on existing knowledge and setting the stage for future innovations.
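A sketch of the feature-reduction and voting stage using scikit-learn; the random features stand in for activations exported from a trained CNN, and the component count and classifier mix are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(300, 512))   # 300 scans x 512 CNN features (toy)
labels = rng.integers(0, 4, size=300)        # 4 lung-cancer classes (toy)

svd = TruncatedSVD(n_components=64, random_state=0)   # SVD dimensionality reduction
reduced = svd.fit_transform(cnn_features)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier())],
    voting="soft",                            # average class probabilities
)
ensemble.fit(reduced, labels)
print(ensemble.predict(reduced[:5]))
```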
Affiliation(s)
- Md Sabbir Hossain
- Department of Electronics & Telecommunication Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Niloy Basak
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Md Aslam Mollah
- Department of Electronics & Telecommunication Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, York, United Kingdom
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Manchester, United Kingdom

4. Lin S, Ma Z, Yao Y, Huang H, Chen W, Tang D, Gao W. Automatic machine learning accurately predicts the efficacy of immunotherapy for patients with inoperable advanced non-small cell lung cancer using a computed tomography-based radiomics model. Diagn Interv Radiol 2025; 31:130-140. [PMID: 39817633; PMCID: PMC11880869; DOI: 10.4274/dir.2024.242972]
Abstract
PURPOSE Patients with advanced non-small cell lung cancer (NSCLC) have varying responses to immunotherapy, but there are no reliable, accepted biomarkers to accurately predict its therapeutic efficacy. The present study aimed to construct individualized models through automatic machine learning (autoML) to predict the efficacy of immunotherapy in patients with inoperable advanced NSCLC. METHODS A total of 63 eligible participants were included and randomized into training and validation groups. Radiomics features were extracted from the volumes of interest of the tumor delineated on the preprocessed computed tomography (CT) images. Golden feature, clinical, radiomics, and fusion models were generated using a combination of various algorithms through autoML. The models were evaluated using a multi-class receiver operating characteristic curve. RESULTS In total, 1,219 radiomics features were extracted from the regions of interest. The ensemble algorithm demonstrated superior performance in model construction. In the training cohort, the fusion model exhibited the highest accuracy at 0.84, with an area under the curve (AUC) of 0.89-0.98. In the validation cohort, the radiomics model had the highest accuracy at 0.89, with an AUC of 0.98-1.00; its prediction performance in the partial response subgroup outperformed that of the clinical and radiomics models. Patients with low rad scores achieved improved progression-free survival (PFS) (median PFS 16.2 vs. 13.4, p = 0.009). CONCLUSION autoML accurately and robustly predicted the short-term outcomes of patients with inoperable NSCLC treated with immune checkpoint inhibitor immunotherapy by constructing CT-based radiomics models, confirming it as a powerful tool to assist in the individualized management of patients with advanced NSCLC. CLINICAL SIGNIFICANCE This article highlights that autoML improves the accuracy and efficiency of feature selection and model construction. The radiomics model generated by autoML effectively predicted the efficacy of immunotherapy in patients with advanced NSCLC. This may provide a rapid and non-invasive method for making personalized clinical decisions.
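The multi-class ROC evaluation used here can be sketched with scikit-learn; the three response categories and random probabilities below are placeholders, not the study's data, and would in practice come from the autoML models.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, size=60)          # 3 toy response categories
y_prob = rng.dirichlet(np.ones(3), size=60)   # model probabilities; rows sum to 1

# One-vs-rest AUC per class, matching the per-class AUC ranges reported.
for cls in range(3):
    auc = roc_auc_score((y_true == cls).astype(int), y_prob[:, cls])
    print(f"class {cls}: AUC = {auc:.2f}")
```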
Affiliation(s)
- Siyun Lin
- Huadong Hospital, Fudan University, Department of Thoracic Surgery, Shanghai, China
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Shanghai, China
- Zhuangxuan Ma
- Huadong Hospital, Fudan University, Department of Radiology, Shanghai, China
- Yuanshan Yao
- Shanghai Chest Hospital, Shanghai JiaoTong University School of Medicine, Department of Thoracic Surgery, Shanghai, China
- Hou Huang
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Shanghai, China
- Wufei Chen
- Huadong Hospital, Fudan University, Department of Radiology, Shanghai, China
- Dongfang Tang
- Huadong Hospital, Fudan University, Department of Thoracic Surgery, Shanghai, China
- Wen Gao
- Huadong Hospital, Fudan University, Department of Thoracic Surgery, Shanghai, China

5. Jiang J, Rangnekar A, Veeraraghavan H. Self-supervised learning improves robustness of deep learning lung tumor segmentation models to CT imaging differences. Med Phys 2025; 52:1573-1588. [PMID: 39636237; DOI: 10.1002/mp.17541]
Abstract
BACKGROUND Self-supervised learning (SSL) is an approach for extracting useful feature representations from unlabeled data, enabling fine-tuning on downstream tasks with limited labeled examples. Self-pretraining is an SSL approach that uses the curated downstream-task dataset for both pretraining and fine-tuning. The availability of large, diverse, and uncurated public medical image sets presents the opportunity to create foundation models that are robust to imaging variations by applying SSL "in the wild." However, the benefit of wild- versus self-pretraining has not been studied for medical image analysis. PURPOSE To compare the robustness of wild- versus self-pretrained models created using convolutional neural network (CNN) and transformer (vision transformer [ViT] and hierarchical shifted window [Swin]) models for non-small cell lung cancer (NSCLC) segmentation from 3D computed tomography (CT) scans. METHODS CNN, ViT, and Swin models were wild-pretrained using 10,412 unlabeled 3D CTs sourced from The Cancer Imaging Archive and internal datasets. Self-pretraining was applied to the same networks using a curated public downstream-task dataset (n = 377) of patients with NSCLC. Pretext tasks introduced in the self-distilled masked image transformer were used for both pretraining approaches. All models were fine-tuned to segment NSCLC (n = 377 training dataset) and tested on two separate datasets containing early-stage (public, n = 156) and advanced-stage (internal, n = 196) NSCLC. Models were evaluated in terms of: (a) accuracy, (b) robustness to image differences from contrast, slice thickness, and reconstruction kernels, and (c) impact of pretext tasks for pretraining. Feature reuse was evaluated using centered kernel alignment. RESULTS Wild-pretrained Swin models showed higher feature reuse at earlier layers and increased feature differentiation close to the output. Wild-pretrained Swin outperformed self-pretrained models for the analyzed imaging acquisitions. Neither ViT nor CNN showed a clear benefit of wild-pretraining compared to self-pretraining. The masked image prediction pretext task, which forces networks to learn local structure, resulted in higher accuracy than the contrastive task, which models global image information. CONCLUSION Wild-pretrained Swin networks were more robust to the analyzed CT imaging differences for lung tumor segmentation than self-pretrained methods. ViT and CNN models did not show a clear benefit from wild-pretraining over self-pretraining.
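A toy sketch of the masked-image-prediction pretext task that the results favor: hide a fraction of patches and train the network to reconstruct them. The patch size, mask ratio, and tiny encoder/decoder are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def masked_image_pretext_loss(encoder, decoder, volume, patch=8, mask_ratio=0.6):
    """Mask random patches of a 3D volume and penalize reconstruction error
    only on the hidden voxels, forcing the network to learn local structure."""
    b, c, d, h, w = volume.shape
    # Patch-level random mask, upsampled to voxel resolution.
    grid = torch.rand(b, 1, d // patch, h // patch, w // patch) < mask_ratio
    mask = (grid.repeat_interleave(patch, 2)
                .repeat_interleave(patch, 3)
                .repeat_interleave(patch, 4)).float()
    corrupted = volume * (1 - mask)          # zero out masked voxels
    recon = decoder(encoder(corrupted))      # predict the full volume
    return F.l1_loss(recon * mask, volume * mask)

enc = torch.nn.Conv3d(1, 8, 3, padding=1)    # stand-ins for the real networks
dec = torch.nn.Conv3d(8, 1, 3, padding=1)
print(masked_image_pretext_loss(enc, dec, torch.randn(2, 1, 32, 32, 32)))
```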
Affiliation(s)
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Aneesh Rangnekar
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA

6. Kenneth Portal N, Rochman S, Szeskin A, Lederman R, Sosna J, Joskowicz L. Metastatic Lung Lesion Changes in Follow-up Chest CT: The Advantage of Deep Learning Simultaneous Analysis of Prior and Current Scans With SimU-Net. J Thorac Imaging 2025; 40:e0808. [PMID: 39808543; DOI: 10.1097/rti.0000000000000808]
Abstract
PURPOSE Radiological follow-up of oncology patients requires the detection of metastatic lung lesions and quantitative analysis of their changes in longitudinal imaging studies. Our aim was to evaluate SimU-Net, a novel deep learning method for the automatic analysis of metastatic lung lesions and their temporal changes in pairs of chest CT scans. MATERIALS AND METHODS SimU-Net is a simultaneous multichannel 3D U-Net model trained on pairs of registered prior and current scans of a patient. It is part of a fully automatic pipeline for the detection, segmentation, matching, and classification of metastatic lung lesions in longitudinal chest CT scans. A dataset of 5040 metastatic lung lesions in 344 pairs of 208 prior and current chest CT scans from 79 patients was used for training/validation (173 scans, 65 patients) and testing (35 scans, 14 patients) of a standalone 3D U-Net model and 3 simultaneous SimU-Net models. Outcome measures were lesion detection and segmentation precision, recall, Dice score, average symmetric surface distance (ASSD), lesion matching, and classification of lesion changes, comparing computed results with manual ground-truth annotations by an expert radiologist. RESULTS SimU-Net achieved a mean lesion detection recall and precision of 0.93 ± 0.13 and 0.79 ± 0.24 and a mean lesion segmentation Dice and ASSD of 0.84 ± 0.09 and 0.33 ± 0.22 mm. These results outperformed the standalone 3D U-Net model by 9.4% in recall, 2.4% in Dice, and 15.4% in ASSD, with a minor 3.6% decrease in precision. The SimU-Net pipeline achieved perfect precision and recall (1.0 ± 0.0) for lesion matching and classification of lesion changes. CONCLUSIONS Simultaneous deep learning analysis of metastatic lung lesions in prior and current chest CT scans with SimU-Net yields superior accuracy compared with individual analysis of each scan. Implementation of SimU-Net in the radiological workflow may enhance efficiency by automatically computing key metrics used to evaluate metastatic lung lesions and their temporal changes.
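The "simultaneous" idea, feeding registered prior and current scans to one 3D network as separate input channels, can be sketched in PyTorch. The toy layers below are placeholders for the actual multichannel 3D U-Net, not its architecture.

```python
import torch
import torch.nn as nn

class TwoTimepointNet(nn.Module):
    """Minimal two-channel stand-in for SimU-Net: the network sees both time
    points at once, so temporal context informs the lesion prediction."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),  # 2 channels: prior + current
            nn.Conv3d(16, 1, 1),                        # lesion logits on current scan
        )

    def forward(self, prior, current):
        x = torch.cat([prior, current], dim=1)  # assumes scans are pre-registered
        return self.body(x)

net = TwoTimepointNet()
prior = torch.randn(1, 1, 32, 64, 64)
current = torch.randn(1, 1, 32, 64, 64)
print(net(prior, current).shape)  # torch.Size([1, 1, 32, 64, 64])
```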
Affiliation(s)
- Neta Kenneth Portal
- School of Computer Science and Engineering, The Hebrew University of Jerusalem
- Shalom Rochman
- School of Computer Science and Engineering, The Hebrew University of Jerusalem
- Adi Szeskin
- School of Computer Science and Engineering, The Hebrew University of Jerusalem
- Richard Lederman
- Department of Radiology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Jacob Sosna
- Department of Radiology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem

7. Apte AP, LoCastro E, Iyer A, Elguindi S, Jiang J, Oh JH, Veeraraghavan H, Shukla-Dave A, Deasy JO. Artificial Intelligence Apps for Medical Image Analysis using pyCERR and Cancer Genomics Cloud. bioRxiv [Preprint] 2025:2025.01.19.633756. [PMID: 39896472; PMCID: PMC11785098; DOI: 10.1101/2025.01.19.633756]
Abstract
This work introduces a user-friendly, cloud-based software framework for conducting Artificial Intelligence (AI) analyses of medical images. The framework allows users to deploy AI-based workflows by customizing software and hardware dependencies. The components of the framework include the Python-native Computational Environment for Radiological Research (pyCERR) platform for radiological image processing, the Cancer Genomics Cloud (CGC) for accessing hardware resources, and user management utilities for accessing images from data repositories and installing AI models and their dependencies. pyCERR, distributed under the GNU GPL, was ported to Python from the MATLAB-based CERR to enable researchers to organize, access, and transform metadata from high-dimensional, multi-modal datasets and to build cloud-compatible workflows for AI modeling in radiation therapy and medical image analysis. pyCERR provides an extensible data structure to accommodate metadata from commonly used medical imaging file formats and a viewer for multi-modal visualization. Analysis modules facilitate cloud-compatible AI-based workflows for image segmentation, radiomics, DCE MRI analysis, radiotherapy dose-volume-histogram-based features, and normal tissue complication and tumor control models for radiotherapy. Image processing utilities help train and infer convolutional neural network-based models for image segmentation, registration, and transformation. The framework allows for round-trip analysis of imaging data, enabling users to apply AI models to their images on CGC and to retrieve and review results on their local machine without requiring local installation of specialized software or GPU hardware. The deployed AI models can be accessed using APIs provided by CGC, enabling their use in a variety of programming languages. In summary, the presented framework facilitates end-to-end radiological image analysis and reproducible research, including pulling data from sources, training or inferring from an AI model, utilities for data management and visualization, and simplified access to image metadata.

8. Kashyap M, Wang X, Panjwani N, Hasan M, Zhang Q, Huang C, Bush K, Chin A, Vitzthum LK, Dong P, Zaky S, Loo BW, Diehn M, Xing L, Li R, Gensheimer MF, Wolfe S. Automated Deep Learning-Based Detection and Segmentation of Lung Tumors at CT. Radiology 2025; 314:e233029. [PMID: 39835976; PMCID: PMC11783160; DOI: 10.1148/radiol.233029]
Abstract
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Background Detection and segmentation of lung tumors on CT scans are critical for monitoring cancer progression, evaluating treatment responses, and planning radiation therapy; however, manual delineation is labor-intensive and subject to physician variability. Purpose To develop and evaluate an ensemble deep learning model for automating identification and segmentation of lung tumors on CT scans. Materials and Methods A retrospective study was conducted between July 2019 and November 2024 using a large dataset of CT simulation scans and clinical lung tumor segmentations from radiotherapy plans. This dataset was used to train a 3D U-Net-based, image-multiresolution ensemble model to detect and segment lung tumors on CT scans. Model performance was evaluated on internal and external test sets composed of CT simulation scans and lung tumor segmentations from two affiliated medical centers, including single primary and metastatic lung tumors. Performance metrics included sensitivity, specificity, false positive rate, and Dice similarity coefficient (DSC). Model-predicted tumor volumes were compared with physician-delineated volumes. Group comparisons were made with Wilcoxon signed-rank test or one-way ANOVA. P < 0.05 indicated statistical significance. Results The model, trained on 1,504 CT scans with clinical lung tumor segmentations, achieved 92% sensitivity (92/100) and 82% specificity (41/50) in detecting lung tumors on the combined 150-CT scan test set. For a subset of 100 CT scans with a single lung tumor each, the model achieved a median model-physician DSC of 0.77 (IQR: 0.65-0.83) and an interphysician DSC of 0.80 (IQR: 0.72-0.86). Segmentation time was shorter for the model than for physicians (mean 76.6 vs. 166.1-187.7 seconds; p<0.001). Conclusion Routinely collected radiotherapy data were useful for model training. The key strengths of the model include a 3D U-Net ensemble approach for balancing volumetric context with resolution, robust tumor detection and segmentation performance, and the ability to generalize to an external site.
Affiliation(s)
- Mehr Kashyap
- Department of Medicine, Stanford University School of Medicine, Stanford, Calif
- Xi Wang
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Zhejiang Laboratory, Hangzhou, China
- Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong, China
- Neil Panjwani
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Department of Radiation Oncology, University of Washington, Seattle, Wash
- Mohammad Hasan
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Qin Zhang
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Charles Huang
- Department of Bioengineering, Stanford University, Stanford, Calif
- Karl Bush
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Alexander Chin
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Lucas K. Vitzthum
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Peng Dong
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Sandra Zaky
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Billy W. Loo
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Maximilian Diehn
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Ruijiang Li
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Michael F. Gensheimer
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Palo Alto, CA 94304
- Shannyn Wolfe
- Department of Medicine, Stanford University School of Medicine, Stanford, Calif

9. Gao C, Wu L, Wu W, Huang Y, Wang X, Sun Z, Xu M, Gao C. Deep learning in pulmonary nodule detection and segmentation: a systematic review. Eur Radiol 2025; 35:255-266. [PMID: 38985185; PMCID: PMC11632000; DOI: 10.1007/s00330-024-10907-0]
Abstract
OBJECTIVES The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques, to fill methodological gaps and address biases in the existing literature. METHODS This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching PubMed, Embase, Web of Science Core Collection, and the Cochrane Library databases up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria were used to assess the risk of bias, adjusted with the Checklist for Artificial Intelligence in Medical Imaging. The study analyzed and extracted model performance, data sources, and task-focus information. RESULTS After screening, nine studies met our inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, with the Lung Image Database Consortium Image Collection and Image Database Resource Initiative (LIDC-IDRI) and Lung Nodule Analysis 2016 (LUNA16) being the most common. The studies focused on detection, segmentation, and other tasks, primarily utilizing convolutional neural networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient. CONCLUSIONS This study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research. CLINICAL RELEVANCE STATEMENT Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility. KEY POINTS Deep learning shows potential in the detection and segmentation of pulmonary nodules. There are methodological gaps and biases in the existing literature. Factors such as external validation and transparency affect clinical application.
Affiliation(s)
- Chuan Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Linyu Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Wei Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Yichao Huang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Xinyue Wang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Zhichao Sun
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Maosheng Xu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Chen Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China

10. Carles M, Kuhn D, Fechter T, Baltas D, Mix M, Nestle U, Grosu AL, Martí-Bonmatí L, Radicioni G, Gkika E. Development and evaluation of two open-source nnU-Net models for automatic segmentation of lung tumors on PET and CT images with and without respiratory motion compensation. Eur Radiol 2024; 34:6701-6711. [PMID: 38662100; PMCID: PMC11399280; DOI: 10.1007/s00330-024-10751-2]
Abstract
OBJECTIVES In lung cancer, one of the main limitations to the optimal integration of the biological and anatomical information derived from Positron Emission Tomography (PET) and Computed Tomography (CT) is the time and expertise required to evaluate the different respiratory phases. In this study, we present two open-source models able to automatically segment lung tumors on PET and CT, with and without motion compensation. MATERIALS AND METHODS This study involved time-bin gated (4D) and non-gated (3D) PET/CT images from two prospective lung cancer cohorts (Trials 108237 and 108472) and one retrospective cohort. For model construction, the ground truth (GT) was defined by consensus of two experts, and the nnU-Net with 5-fold cross-validation was applied to 560 4D-images for PET and 100 3D-images for CT. The test sets included 270 4D-images and 19 3D-images for PET and 80 4D-images and 27 3D-images for CT, recruited at 10 different centres. RESULTS In the performance evaluation with the multicentre test sets, the Dice Similarity Coefficients (DSC) obtained for our PET model were DSC(4D-PET) = 0.74 ± 0.06, a 19% relative improvement over the inter-expert DSC, and DSC(3D-PET) = 0.82 ± 0.11. The performance for CT was DSC(4D-CT) = 0.61 ± 0.28 and DSC(3D-CT) = 0.63 ± 0.34, relative improvements of 4% and 15% over the inter-expert DSC. CONCLUSIONS The performance evaluation demonstrated that the automatic segmentation models have the potential to achieve accuracy comparable to manual segmentation and thus hold promise for clinical application. The resulting models can be freely downloaded and employed to support the integration of 3D- or 4D-PET/CT and to facilitate the evaluation of its impact on lung cancer clinical practice. CLINICAL RELEVANCE STATEMENT We provide two open-source nnU-Net models for the automatic segmentation of lung tumors on PET/CT to facilitate the optimal integration of biological and anatomical information in clinical practice. The models perform better than the variability observed across manual segmentations by different experts, for images with and without motion compensation, allowing clinical practice to take advantage of more accurate and robust 4D quantification. KEY POINTS Lung tumor segmentation on PET/CT imaging is limited by respiratory motion, and manual delineation is time consuming and suffers from inter- and intra-observer variability. Our segmentation models performed better than the manual segmentations by different experts. Automating PET image segmentation allows for easier clinical implementation of biological information.
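The evaluation protocol (scoring the model against an expert-consensus ground truth and comparing with the inter-expert DSC) can be sketched as follows. Consensus-by-intersection and the random masks are illustrative assumptions; the abstract says only that the ground truth was defined by expert consensus.

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * (a & b).sum() / s

rng = np.random.default_rng(2)
expert1 = rng.random((16, 32, 32)) > 0.5
expert2 = expert1 ^ (rng.random((16, 32, 32)) > 0.9)   # mostly agrees with expert1
model = expert1 ^ (rng.random((16, 32, 32)) > 0.85)    # toy model output

consensus = expert1 & expert2                           # illustrative consensus GT
print(f"inter-expert DSC:       {dice(expert1, expert2):.2f}")
print(f"model vs consensus DSC: {dice(model, consensus):.2f}")
```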
Affiliation(s)
- Montserrat Carles
- La Fe Health Research Institute, Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB) Unique Scientific and Technical Infra-structures (ICTS), Valencia, Spain
- Dejan Kuhn
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Fechter
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dimos Baltas
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Michael Mix
- Department of Nuclear Medicine, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Ursula Nestle
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Department of Radiation Oncology, Kliniken Maria Hilf GmbH Moenchengladbach, Moechengladbach, Germany
- Anca L Grosu
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Luis Martí-Bonmatí
- La Fe Health Research Institute, Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB) Unique Scientific and Technical Infra-structures (ICTS), Valencia, Spain
- Gianluca Radicioni
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Eleni Gkika
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany

11. Wang TW, Hong JS, Huang JW, Liao CY, Lu CF, Wu YT. Systematic review and meta-analysis of deep learning applications in computed tomography lung cancer segmentation. Radiother Oncol 2024; 197:110344. [PMID: 38806113; DOI: 10.1016/j.radonc.2024.110344]
Abstract
BACKGROUND Accurate segmentation of lung tumors on chest computed tomography (CT) scans is crucial for effective diagnosis and treatment planning. Deep learning (DL) has emerged as a promising tool in medical imaging, particularly for lung cancer segmentation. However, its efficacy across different clinical settings and tumor stages remains variable. METHODS We conducted a comprehensive search of PubMed, Embase, and Web of Science up to November 7, 2023. We assessed the quality of the included studies using the Checklist for Artificial Intelligence in Medical Imaging and the Quality Assessment of Diagnostic Accuracy Studies-2 tools. This analysis included data from various clinical settings and stages of lung cancer. Key performance metrics, such as the Dice similarity coefficient, were pooled, and factors affecting algorithm performance, such as clinical setting, algorithm type, and image processing techniques, were examined. RESULTS Our analysis of 37 studies revealed a pooled Dice score of 79% (95% CI: 76%-83%), indicating moderate accuracy. Radiotherapy studies had a slightly lower score of 78% (95% CI: 74%-82%). A temporal improvement was noted, with recent (post-2022) studies rising from 75% (95% CI: 70%-81%) to 82% (95% CI: 81%-84%). Key factors affecting performance included algorithm type, resolution adjustment, and image cropping. QUADAS-2 assessments identified ambiguous risk of bias in 78% of studies due to data interval omissions, and concerns about generalizability in 8% due to nodule size exclusions; CLAIM criteria highlighted areas for improvement, with an average score of 27.24 out of 42. CONCLUSION This meta-analysis demonstrates the promising but variable efficacy of DL algorithms in lung cancer segmentation, with higher efficacy noted in early stages. The results highlight the critical need for continued development of tailored DL models to improve segmentation accuracy across diverse clinical settings, especially in advanced cancer stages that pose greater challenges. As recent studies demonstrate, ongoing advancements in algorithmic approaches are crucial for future applications.
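Pooling study-level Dice scores can be sketched with a standard DerSimonian-Laird random-effects estimator. The abstract does not state which estimator was used, so this is a generic illustration with toy data.

```python
import numpy as np

def pooled_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study estimates.
    Returns the pooled estimate and a 95% confidence interval."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)               # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Toy data: three hypothetical study-level Dice scores and variances.
print(pooled_random_effects([0.75, 0.80, 0.82], [0.001, 0.0005, 0.002]))
```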
Affiliation(s)
- Ting-Wei Wang
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Jia-Sheng Hong
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Jing-Wen Huang
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung 407, Taiwan
- Chien-Yi Liao
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao Tung University, Taipei, Taiwan; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Chia-Feng Lu
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan; National Yang Ming Chiao Tung University, Brain Research Center, Taiwan

12. Huang X, Gong H, Zhang J. HST-MRF: Heterogeneous Swin Transformer With Multi-Receptive Field for Medical Image Segmentation. IEEE J Biomed Health Inform 2024; 28:4048-4061. [PMID: 38709610; DOI: 10.1109/jbhi.2024.3397047]
Abstract
The Transformer has been successfully used in medical image segmentation owing to its excellent long-range modeling capabilities. However, partitioning the image into patches is necessary when building a Transformer-style model. This process ignores the tissue structure features within each patch, resulting in the loss of shallow representation information. In this study, we propose a Heterogeneous Swin Transformer with Multi-Receptive Field (HST-MRF) model that fuses patch information from different receptive fields to address the loss of feature information caused by patch partitioning. The heterogeneous Swin Transformer (HST) is the core module; it achieves the interaction of multi-receptive-field patch information through heterogeneous attention and passes it to the next stage for progressive learning, thus complementing the patch structure information. We also designed a two-stage fusion module, multimodal bilinear pooling (MBP), to assist HST in further fusing multi-receptive-field information and combining low-level and high-level semantic information for accurate localization of lesion regions. In addition, we developed adaptive patch embedding (APE) and soft channel attention (SCA) modules to retain more valuable information when acquiring patch embeddings and filtering channel features, respectively, thereby improving model segmentation quality. We evaluated HST-MRF on multiple datasets for polyp, skin lesion, and breast ultrasound segmentation tasks. Experimental results show that our proposed method outperforms state-of-the-art models. Furthermore, we verified the effectiveness of each module and the benefits of multi-receptive-field partitioning in reducing the loss of structural information through ablation experiments and qualitative analysis.
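The multi-receptive-field idea, embedding the same image at several patch sizes so fine structure inside small patches is not lost, can be sketched as below. The additive fusion after upsampling is a simple stand-in for HST's heterogeneous attention, not the paper's mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiReceptiveFieldEmbed(nn.Module):
    """Embed an image with two patch sizes: small patches keep fine tissue
    structure, large patches capture wider context; features are then fused."""

    def __init__(self, dim=96):
        super().__init__()
        self.small = nn.Conv2d(3, dim, kernel_size=4, stride=4)   # 4x4 patches
        self.large = nn.Conv2d(3, dim, kernel_size=8, stride=8)   # 8x8 patches

    def forward(self, x):
        s = self.small(x)                                  # (B, dim, H/4, W/4)
        l = self.large(x)                                  # (B, dim, H/8, W/8)
        l_up = F.interpolate(l, size=s.shape[2:], mode="nearest")
        return s + l_up                                    # fused multi-scale tokens

tokens = MultiReceptiveFieldEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 96, 56, 56])
```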

13. Shafi SM, Chinnappan SK. Segmenting and classifying lung diseases with M-Segnet and Hybrid Squeezenet-CNN architecture on CT images. PLoS One 2024; 19:e0302507. [PMID: 38753712; PMCID: PMC11098347; DOI: 10.1371/journal.pone.0302507]
Abstract
Diagnosing lung diseases accurately and promptly is essential for effectively managing this significant public health challenge on a global scale. This paper introduces a new framework called Modified Segnet-based Lung Disease Segmentation and Severity Classification (MSLDSSC). The MSLDSSC model comprises four phases: preprocessing, segmentation, feature extraction, and classification. Initially, the input image undergoes preprocessing using an improved Wiener filter technique. This technique estimates the power spectral density of the noisy and original images and computes the SNR, assisted by the PSNR, to evaluate image quality. Next, the preprocessed image undergoes segmentation to identify and separate the region of interest (RoI) from the background objects in the lung image. We employ a Modified Segnet mechanism that utilizes a proposed hard tanh-Softplus activation function for effective segmentation. Following segmentation, features such as MLDN, entropy with MRELBP, shape features, and deep features are extracted. The retrieved feature set is then input into a hybrid severity classification model comprising two classifiers, SDPA-Squeezenet and DCNN, which are trained on the retrieved feature set and effectively classify the severity level of lung diseases.
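The abstract names a "hard tanh-Softplus" activation without giving its form. One plausible composition, purely as an assumption about how the two pieces might be joined, is a softplus bounded by a hard tanh:

```python
import torch
import torch.nn.functional as F

def hardtanh_softplus(x):
    """Assumed composition of the named activation: softplus for smooth
    positive response, clipped to [0, 1] by a hard tanh. The paper's actual
    formula is not given in the abstract."""
    return F.hardtanh(F.softplus(x), min_val=0.0, max_val=1.0)

x = torch.linspace(-4, 4, 9)
print(hardtanh_softplus(x))
```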
Affiliation(s)
- Syed Mohammed Shafi
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India

14. Zhang P, Gao C, Huang Y, Chen X, Pan Z, Wang L, Dong D, Li S, Qi X. Artificial intelligence in liver imaging: methods and applications. Hepatol Int 2024; 18:422-434. [PMID: 38376649; DOI: 10.1007/s12072-023-10630-w]
Abstract
Liver disease is regarded as one of the major health threats to humans. Radiographic assessments hold promise for addressing the current demands for precisely diagnosing and treating liver diseases, and artificial intelligence (AI), which excels at automatically making quantitative assessments of complex medical image characteristics, has made great strides in supporting clinicians' qualitative interpretation of medical imaging. Here, we review the current state of medical-imaging-based AI methodologies and their applications in the management of liver diseases. We summarize the representative AI methodologies in liver imaging, focusing on deep learning, and illustrate their promising clinical applications across the spectrum of precise liver disease detection, diagnosis, and treatment. We also address the current challenges and future perspectives of AI in liver imaging, with an emphasis on feature interpretability, multimodal data integration, and multicenter studies. Taken together, AI methodologies, combined with the large volume of available medical image data, may shape the future of liver disease care.
Affiliation(s)
- Peng Zhang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Chaofei Gao
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Yifei Huang
- Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xiangyi Chen
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Zhuoshi Pan
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Lan Wang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shao Li
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Xiaolong Qi
- Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, Southeast University, Nanjing, China

15. Shyamala Bharathi P, Shalini C. Advanced hybrid attention-based deep learning network with heuristic algorithm for adaptive CT and PET image fusion in lung cancer detection. Med Eng Phys 2024; 126:104138. [PMID: 38621836; DOI: 10.1016/j.medengphy.2024.104138]
Abstract
Lung cancer is one of the deadliest diseases in the world, and early detection can save patients' lives. Although Computed Tomography (CT) is a leading imaging tool in the medical sector, clinicians find it challenging to interpret CT scan data and detect cancer from it. Positron Emission Tomography (PET) imaging is one of the most effective modalities for diagnosing malignancies such as lung tumors. Many diagnostic models have been developed for various diseases, and early lung cancer identification is very important for predicting disease severity in cancer patients. To this end, an image-fusion-based detection model is proposed for lung cancer detection, combining a deep learning model with an improved heuristic optimization algorithm. First, the PET and CT images are collected from online sources. The two images are then fused using the Adaptive Dilated Convolution Neural Network (AD-CNN), whose hyperparameters are tuned by the Modified Initial Velocity-based Capuchin Search Algorithm (MIV-CapSA). Subsequently, the abnormal regions are segmented using TransUnet3+. Finally, the segmented images are fed into the Hybrid Attention-based Deep Networks (HADN) model, built on MobileNet and ShuffleNet. The effectiveness of the novel detection model is analyzed using various metrics and compared with traditional approaches. The outcomes show that the model supports early detection and can help treat patients effectively.
Affiliation(s)
- P Shyamala Bharathi
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- C Shalini
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India

16. Liu C, Liu H, Zhang X, Guo J, Lv P. Multi-scale and multi-view network for lung tumor segmentation. Comput Biol Med 2024; 172:108250. [PMID: 38493603; DOI: 10.1016/j.compbiomed.2024.108250]
Abstract
Lung tumor segmentation in medical imaging is a critical step in the diagnosis and treatment planning for lung cancer. Accurate segmentation, however, is challenging due to the variability in tumor size, shape, and contrast against surrounding tissues. In this work, we present MSMV-Net, a novel deep learning architecture that integrates multi-scale multi-view (MSMV) learning modules and multi-scale uncertainty-based deep supervision (MUDS) for enhanced segmentation of lung tumors in computed tomography images. MSMV-Net capitalizes on the strengths of multi-view analysis and multi-scale feature extraction to address the limitations posed by small 3D lung tumors. The results indicate that MSMV-Net achieves state-of-the-art performance in lung tumor segmentation, recording a global Dice score of 55.60% on the LUNA dataset and 59.94% on the MSD dataset. Ablation studies conducted on the MSD dataset further validate that our method enhances segmentation accuracy.
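Multi-scale deep supervision of the kind MUDS builds on can be sketched as auxiliary losses attached to decoder outputs at several resolutions. The fixed weights below stand in for MSMV-Net's uncertainty-based weighting, whose exact form the abstract does not give.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(multi_scale_logits, target, weights=(1.0, 0.5, 0.25)):
    """Sum weighted BCE losses over decoder outputs at several resolutions,
    downsampling the ground truth to match each output."""
    total = 0.0
    for logits, w in zip(multi_scale_logits, weights):
        t = F.interpolate(target, size=logits.shape[2:], mode="nearest")
        total = total + w * F.binary_cross_entropy_with_logits(logits, t)
    return total

target = (torch.rand(1, 1, 32, 64, 64) > 0.5).float()
outs = [torch.randn(1, 1, 32, 64, 64),   # full-resolution decoder output
        torch.randn(1, 1, 16, 32, 32),   # half resolution
        torch.randn(1, 1, 8, 16, 16)]    # quarter resolution
print(deep_supervision_loss(outs, target))
```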
Affiliation(s)
- Caiqi Liu
- Department of Gastrointestinal Medical Oncology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China; Key Laboratory of Molecular Oncology of Heilongjiang Province, Harbin, Heilongjiang, China
- Han Liu
- The Institute for Global Health, University College London, London, England, United Kingdom
- Xuehui Zhang
- Beidahuang Industry Group General Hospital, Harbin, Heilongjiang, China
- Jierui Guo
- Center for Bioinformatics, Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Pengju Lv
- School of Medical Informatics, Daqing Campus, Harbin Medical University, Daqing, Heilongjiang, China

17. Yang Y, Wang P, Yang Z, Zeng Y, Chen F, Wang Z, Rizzo S. Segmentation method of magnetic resonance imaging brain tumor images based on improved UNet network. Transl Cancer Res 2024; 13:1567-1583. [PMID: 38617525; PMCID: PMC11009801; DOI: 10.21037/tcr-23-1858]
Abstract
Background Glioma is a primary malignant craniocerebral tumor commonly found in the central nervous system. According to research, preoperative diagnosis of glioma and a full understanding of its imaging features are very significant. However, traditional image-processing and machine-learning segmentation methods perform poorly on glioma. This study explores magnetic resonance imaging (MRI) brain tumor images as the basis for an effective glioma segmentation method. Methods This study used 200 MRI images from the affiliated hospital and applied a 2-dimensional residual block UNet (2DResUNet). Features were extracted from input images using a 2D convolution (Conv) layer with 64 kernels of size 2×2 and stride 1. The 2DDenseUNet model implemented in this study incorporates a ResBlock mechanism within the UNet architecture, as well as a Gaussian noise layer for data augmentation at the input stage and a pooling layer replacing the conventional 2D convolutional layers. Finally, the performance of the proposed protocol and its effectiveness in glioma segmentation were verified. Results The outcomes of the 5-fold cross-validation evaluation show that the proposed 2DResUNet and 2DDenseUNet structures have high sensitivity despite a slightly lower Dice score. At the same time, compared with other models used in the experiment, the DM-DA-UNet model proposed in this paper was significantly improved on various indicators, increasing the reliability of the model and providing a reference and basis for the accurate formulation of clinical treatment strategies. The method used in this study showed stronger feature extraction ability than the UNet model. In addition, our findings demonstrated that using generalized Dice loss and weighted cross-entropy as loss functions in the training process effectively alleviated the class imbalance of the glioma data and yielded effective glioma segmentation. Conclusions The method based on the improved UNet network has clear advantages for MRI brain tumor image segmentation. In summary, we developed a 2D residual block UNet that can improve the incorporation of glioma segmentation into the clinical workflow.
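The loss combination credited here with alleviating class imbalance, generalized Dice loss plus weighted cross-entropy, can be written compactly. The 0.5/0.5 mix and the class weights below are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """Generalized Dice loss: each class is weighted by the inverse square of
    its voxel count, which counteracts class imbalance. Shapes: (B, C, H, W)."""
    dims = (0, 2, 3)
    w = 1.0 / (onehot.sum(dims) ** 2 + eps)
    inter = (w * (probs * onehot).sum(dims)).sum()
    denom = (w * (probs + onehot).sum(dims)).sum()
    return 1.0 - 2.0 * inter / (denom + eps)

def combined_loss(logits, target, class_weights):
    """Generalized Dice plus weighted cross-entropy; the mix is illustrative."""
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    ce = F.cross_entropy(logits, target, weight=class_weights)
    return 0.5 * generalized_dice_loss(probs, onehot) + 0.5 * ce

logits = torch.randn(2, 3, 64, 64)                 # toy 3-class logits
target = torch.randint(0, 3, (2, 64, 64))
print(combined_loss(logits, target, torch.tensor([0.2, 1.0, 1.0])))
```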
Affiliation(s)
- Yang Yang: Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Peng Wang: Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Zhenyu Yang: Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Yuecheng Zeng: Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Feng Chen: Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Zhiyong Wang: Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Stefania Rizzo: Imaging della Svizzera Italiana (IIMSI), Ente Ospedaliero Cantonale (EOC), Lugano, Switzerland; Faculty of Biomedical Sciences, Università della Svizzera italiana, Lugano, Switzerland
18
Shi J, Wang Z, Ruan S, Zhao M, Zhu Z, Kan H, An H, Xue X, Yan B. Rethinking automatic segmentation of gross target volume from a decoupling perspective. Comput Med Imaging Graph 2024; 112:102323. [PMID: 38171254 DOI: 10.1016/j.compmedimag.2023.102323] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 10/19/2023] [Accepted: 12/12/2023] [Indexed: 01/05/2024]
Abstract
Accurate and reliable segmentation of Gross Target Volume (GTV) is critical in cancer Radiation Therapy (RT) planning, but manual delineation is time-consuming and subject to inter-observer variations. Recently, deep learning methods have achieved remarkable success in medical image segmentation. However, due to the low image contrast and extreme pixel imbalance between GTV and adjacent tissues, most existing methods usually obtained limited performance on automatic GTV segmentation. In this paper, we propose a Heterogeneous Cascade Framework (HCF) from a decoupling perspective, which decomposes the GTV segmentation into independent recognition and segmentation subtasks. The former aims to screen out the abnormal slices containing GTV, while the latter performs pixel-wise segmentation of these slices. With the decoupled two-stage framework, we can efficiently filter normal slices to reduce false positives. To further improve the segmentation performance, we design a multi-level Spatial Alignment Network (SANet) based on the feature pyramid structure, which introduces a spatial alignment module into the decoder to compensate for the information loss caused by downsampling. Moreover, we propose a Combined Regularization (CR) loss and Balance-Sampling Strategy (BSS) to alleviate the pixel imbalance problem and improve network convergence. Extensive experiments on two public datasets of StructSeg2019 challenge demonstrate that our method outperforms state-of-the-art methods, especially with significant advantages in reducing false positives and accurately segmenting small objects. The code is available at https://github.com/shijun18/GTV_AutoSeg.
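The decoupled cascade reduces, in essence, to screening slices before segmenting them. A schematic sketch under that assumption (the callables `recognizer` and `segmenter` are placeholders, not the paper's networks):

```python
import numpy as np

def cascade_predict(slices, recognizer, segmenter, threshold=0.5):
    """Decoupled two-stage inference: a recognizer screens each slice for GTV,
    and only flagged slices are passed to the pixel-wise segmenter."""
    masks = []
    for s in slices:
        if recognizer(s) >= threshold:      # stage 1: slice-level recognition
            masks.append(segmenter(s))      # stage 2: pixel-wise segmentation
        else:
            masks.append(np.zeros_like(s))  # normal slice: all background
    return np.stack(masks)
```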
Affiliation(s)
- Jun Shi: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Zhaohui Wang: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Shulan Ruan: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Minfan Zhao: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Ziqi Zhu: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Hongyu Kan: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Hong An: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Laoshan Laboratory, Qingdao, 266221, China
- Xudong Xue: Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430074, China
- Bing Yan: Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, China
19
Dan Y, Jin W, Yue X, Wang Z. Enhancing medical image segmentation with a multi-transformer U-Net. PeerJ 2024; 12:e17005. [PMID: 38435997 PMCID: PMC10909362 DOI: 10.7717/peerj.17005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2023] [Accepted: 02/05/2024] [Indexed: 03/05/2024] Open
Abstract
Various segmentation networks based on Swin Transformer have shown promise in medical segmentation tasks. Nonetheless, challenges such as lower accuracy and slower training convergence have persisted. To tackle these issues, we introduce a novel approach that combines the Swin Transformer and Deformable Transformer to enhance overall model performance. We leverage the Swin Transformer's window attention mechanism to capture local feature information and employ the Deformable Transformer to adjust sampling positions dynamically, accelerating model convergence and aligning it more closely with object shapes and sizes. By amalgamating both Transformer modules and incorporating additional skip connections to minimize information loss, our proposed model excels at rapidly and accurately segmenting CT or X-ray lung images. Experimental results are remarkable, showcasing the strong performance of our model. It surpasses the performance of the standalone Swin Transformer's Swin Unet and converges more rapidly under identical conditions, yielding accuracy improvements of 0.7% (resulting in 88.18%) and 2.7% (resulting in 98.01%) on the COVID-19 CT scan lesion segmentation dataset and Chest X-ray Masks and Labels dataset, respectively. This advancement has the potential to aid medical practitioners in early diagnosis and treatment decision-making.
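A toy version of the Swin-style window attention mentioned here, assuming PyTorch, feature maps in (B, H, W, C) layout, and H and W divisible by the window size; the deformable sampling component is not shown:

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Toy window attention: split a feature map into non-overlapping windows
    and apply self-attention within each (Swin-style locality)."""
    def __init__(self, dim: int, window: int = 7, heads: int = 4):
        super().__init__()  # dim must be divisible by heads
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, H, W, C)
        B, H, W, C = x.shape
        w = self.window                         # H and W assumed divisible by w
        x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(-1, w * w, C)             # (B * num_windows, w*w, C)
        out, _ = self.attn(x, x, x)             # attention within each window
        out = out.view(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(B, H, W, C)
```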
Affiliation(s)
- Yongping Dan: School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
- Weishou Jin: School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
- Xuebin Yue: Research Organization of Science and Technology, Ritsumeikan University, Kusatsu, Japan
- Zhida Wang: School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
20
Zhao Z, Du S, Xu Z, Yin Z, Huang X, Huang X, Wong C, Liang Y, Shen J, Wu J, Qu J, Zhang L, Cui Y, Wang Y, Wee L, Dekker A, Han C, Liu Z, Shi Z, Liang C. SwinHR: Hemodynamic-powered hierarchical vision transformer for breast tumor segmentation. Comput Biol Med 2024; 169:107939. [PMID: 38194781 DOI: 10.1016/j.compbiomed.2024.107939] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Revised: 12/12/2023] [Accepted: 01/01/2024] [Indexed: 01/11/2024]
Abstract
Accurate and automated segmentation of breast tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a critical role in computer-aided diagnosis and treatment of breast cancer. However, this task is challenging, due to random variation in tumor sizes, shapes, appearances, and blurred boundaries of tumors caused by inherent heterogeneity of breast cancer. Moreover, the presence of ill-posed artifacts in DCE-MRI further complicates the process of tumor region annotation. To address the challenges above, we propose a scheme (named SwinHR) integrating prior DCE-MRI knowledge and temporal-spatial information of breast tumors. The prior DCE-MRI knowledge refers to hemodynamic information extracted from multiple DCE-MRI phases, which can provide pharmacokinetics information to describe metabolic changes of the tumor cells over the scanning time. The Swin Transformer with hierarchical re-parameterization large kernel architecture (H-RLK) can capture long-range dependencies within DCE-MRI while maintaining computational efficiency by a shifted window-based self-attention mechanism. The use of H-RLK can extract high-level features with a wider receptive field, which can make the model capture contextual information at different levels of abstraction. Extensive experiments are conducted on large-scale datasets to validate the effectiveness of our proposed SwinHR scheme, demonstrating its superiority over recent state-of-the-art segmentation methods. Also, a subgroup analysis split by MRI scanners, field strength, and tumor size is conducted to verify its generalization. The source code is released at https://github.com/GDPHMediaLab/SwinHR.
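The hemodynamic maps described here can be approximated, very crudely, from a handful of DCE phases; a NumPy sketch under that simplifying assumption (the paper's actual pharmacokinetic features are richer):

```python
import numpy as np

def hemodynamic_maps(phases: np.ndarray) -> dict:
    """Crude hemodynamic features from DCE-MRI phases stacked as (T, H, W):
    a wash-in (early enhancement) and a wash-out (late decay) map.
    Assumes at least three phases: pre-contrast, early, and late."""
    pre, early, late = phases[0], phases[1], phases[-1]
    wash_in = (early - pre) / (pre + 1e-7)
    wash_out = (late - early) / (early + 1e-7)
    return {"wash_in": wash_in, "wash_out": wash_out}
```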
Affiliation(s)
- Zhihe Zhao: School of Medicine, South China University of Technology, Guangzhou, 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Siyao Du: Department of Radiology, The First Hospital of China Medical University, Shenyang, Liaoning Province, 110001, China
- Zeyan Xu: Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming, 650118, China
- Zhi Yin: Department of Radiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, 030013, China
- Xiaomei Huang: Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xin Huang: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Shantou University Medical College, Shantou, 515041, China
- Chinting Wong: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Yanting Liang: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Jing Shen: Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China
- Jianlin Wu: Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China
- Jinrong Qu: Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, 450008, China
- Lina Zhang: Department of Radiology, The First Hospital of China Medical University, Shenyang, Liaoning Province, 110001, China
- Yanfen Cui: Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Department of Radiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, 030013, China
- Ying Wang: Department of Medical Ultrasonics, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Leonard Wee: Clinical Data Science, Faculty of Health Medicine Life Sciences, Maastricht University, Maastricht, 6229 ET, The Netherlands; Department of Radiation Oncology (Maastro), GROW School of Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker: Department of Radiation Oncology (Maastro), GROW School of Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Chu Han: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Zaiyi Liu: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Zhenwei Shi: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Changhong Liang: School of Medicine, South China University of Technology, Guangzhou, 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
21
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells uncontrollably split and damage the body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNN) and medical imaging techniques. It also includes a brief discussion about deep learning based on state-of-the-art cancer detection methods, their outcomes, and the possible medical imaging data used. Eventually, the description of the dataset used for cancer detection, the limitations of the existing solutions, future trends, and challenges in this domain are discussed. The utmost goal of this paper is to provide a piece of comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma: School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak: Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray: Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer: Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak: School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
22
Khorshidi A. Tumor segmentation via enhanced area growth algorithm for lung CT images. BMC Med Imaging 2023; 23:189. [PMID: 37986046 PMCID: PMC10662793 DOI: 10.1186/s12880-023-01126-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Accepted: 10/16/2023] [Indexed: 11/22/2023] Open
Abstract
BACKGROUND Since lung tumors are in dynamic conditions, the study of tumor growth and its changes is of great importance in primary diagnosis. METHODS An enhanced area growth (EAG) algorithm is introduced to segment the lung tumor in 2D and 3D modes on CT images of 60 patients from four different databases, using MATLAB software. Contrast augmentation, color intensity and maximum primary tumor radius determination, thresholding, designation of start and neighbor points in an array, and subsequent averaging-based modification of the points in the braid are the early steps of the proposed algorithm. To determine the new tumor boundaries, the maximum distance from the color-intensity center point of the primary tumor to the modified points is appointed by considering a larger target region and a new threshold. The tumor center is divided into different subsections, and all previous stages are then repeated from newly designated points to define diverse boundaries for the tumor. An interpolation between these boundaries creates a new tumor boundary. After drawing diverse lines from the tumor center at relevant angles, the intersections with the tumor boundaries are fixed for the edge-correction phase. Each of the new regions is annexed to the core region to achieve a segmented tumor surface once certain conditions are met. RESULTS Starting the growth from multiple grouped points produced the desired precision in tumor delineation. The proposed algorithm enhanced tumor identification by more than 16% with a reasonable accuracy acceptance rate. At the same time, it largely ensures that the final outcome is independent of the starting point. With a significance threshold of p < 0.05, the Dice coefficients were 0.80 ± 0.02 and 0.92 ± 0.03 for the primary and enhanced algorithms, respectively. Lung area determination alongside automatic thresholding, together with starting from several points and edge improvement, may reduce human errors in radiologists' interpretation of tumor areas and in the selection of the algorithm's starting point. CONCLUSIONS The proposed algorithm enhanced tumor detection by more than 18% with a sufficient acceptance ratio of accuracy. Since the enhanced algorithm is independent of matrix size and image thickness, it is very likely that it can be easily applied to other contiguous tumor images. TRIAL REGISTRATION PAZHOUHAN, PAZHOUHAN98000032. Registered 4 January 2021, http://pazhouhan.gerums.ac.ir/webreclist/view.action?webreclist_code=19300.
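For orientation, a baseline area/region-growing loop of the kind the EAG algorithm builds on; a plain-Python sketch, not the enhanced multi-point version described above:

```python
import numpy as np
from collections import deque

def region_grow(img: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Basic 2D region growing from a seed point: accept 4-connected neighbors
    whose intensity stays within `tol` of the seed intensity."""
    mask = np.zeros(img.shape, dtype=bool)
    ref = float(img[seed])
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if mask[y, x]:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                q.append((ny, nx))
    return mask
```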
Affiliation(s)
- Abdollah Khorshidi: School of Paramedical, Gerash University of Medical Sciences, P.O. Box: 7441758666, Gerash, Iran
23
Amorrortu R, Garcia M, Zhao Y, El Naqa I, Balagurunathan Y, Chen DT, Thieu T, Schabath MB, Rollison DE. Overview of approaches to estimate real-world disease progression in lung cancer. JNCI Cancer Spectr 2023; 7:pkad074. [PMID: 37738580 PMCID: PMC10637832 DOI: 10.1093/jncics/pkad074] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 08/28/2023] [Accepted: 09/18/2023] [Indexed: 09/24/2023] Open
Abstract
BACKGROUND Randomized clinical trials of novel treatments for solid tumors normally measure disease progression using the Response Evaluation Criteria in Solid Tumors. However, novel, scalable approaches to estimate disease progression using real-world data are needed to advance cancer outcomes research. The purpose of this narrative review is to summarize examples from the existing literature on approaches to estimate real-world disease progression and their relative strengths and limitations, using lung cancer as a case study. METHODS A narrative literature review was conducted in PubMed to identify articles that used approaches to estimate real-world disease progression in lung cancer patients. Data abstracted included data source, approach used to estimate real-world progression, and comparison to a selected gold standard (if applicable). RESULTS A total of 40 articles were identified from 2008 to 2022. Five approaches to estimate real-world disease progression were identified including manual abstraction of medical records, natural language processing of clinical notes and/or radiology reports, treatment-based algorithms, changes in tumor volume, and delta radiomics-based approaches. The accuracy of these progression approaches were assessed using different methods, including correlations between real-world endpoints and overall survival for manual abstraction (Spearman rank ρ = 0.61-0.84) and area under the curve for natural language processing approaches (area under the curve = 0.86-0.96). CONCLUSIONS Real-world disease progression has been measured in several observational studies of lung cancer. However, comparing the accuracy of methods across studies is challenging, in part, because of the lack of a gold standard and the different methods used to evaluate accuracy. Concerted efforts are needed to define a gold standard and quality metrics for real-world data.
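The validation statistics cited here (Spearman correlation against survival, AUC against chart review) are straightforward to compute; a sketch with stand-in data, assuming SciPy and scikit-learn:

```python
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

# Toy stand-in data: real-world progression estimates vs. overall survival
# (months), and NLP-derived progression probabilities vs. chart-review labels.
rw_pfs = [4.1, 7.3, 2.0, 12.5, 9.8, 3.2]
os_months = [10.2, 18.0, 6.5, 30.1, 22.4, 8.0]
rho, p = spearmanr(rw_pfs, os_months)

labels = [1, 0, 1, 1, 0, 1]
nlp_probs = [0.91, 0.22, 0.78, 0.85, 0.35, 0.66]
auc = roc_auc_score(labels, nlp_probs)
print(f"Spearman rho={rho:.2f} (p={p:.3f}), AUC={auc:.2f}")
```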
Affiliation(s)
- Melany Garcia: Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, USA
- Yayi Zhao: Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, USA
- Issam El Naqa: Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, USA
- Dung-Tsa Chen: Department of Biostatistics and Bioinformatics, Moffitt Cancer Center, Tampa, FL, USA
- Thanh Thieu: Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, USA
- Matthew B Schabath: Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, USA
- Dana E Rollison: Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, USA
24
Wang L, Wang J, Zhu L, Fu H, Li P, Cheng G, Feng Z, Li S, Heng PA. Dual Multiscale Mean Teacher Network for Semi-Supervised Infection Segmentation in Chest CT Volume for COVID-19. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:6363-6375. [PMID: 37015538 DOI: 10.1109/tcyb.2022.3223528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating coronavirus disease 2019 (COVID-19). However, several challenges remain for developing such AI systems: 1) most current COVID-19 infection segmentation methods rely mainly on 2-D CT images, which lack 3-D sequential constraints; 2) existing 3-D CT segmentation methods focus on single-scale representations and do not achieve multiple receptive field sizes on 3-D volumes; and 3) the emergent outbreak of COVID-19 makes it hard to annotate sufficient CT volumes for training deep models. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multiscale information along different dimensions of the input feature maps and impose supervision on multiple predictions from different convolutional neural network (CNN) layers. Second, we use this MDA-CNN as the basic network in a novel dual multiscale mean teacher network (DM [Formula: see text]-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploring multiscale information. Our DM [Formula: see text]-Net encourages multiple predictions at different CNN layers from the student and teacher networks to be consistent, yielding a multiscale consistency loss on unlabeled data, which is then added to the supervised loss computed on the labeled data from multiple predictions of MDA-CNN. Third, we collect two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
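A schematic of the mean-teacher machinery underlying this approach, assuming PyTorch; the multiscale and dimensional-attention details are omitted:

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha: float = 0.99):
    """Mean-teacher weight update: teacher weights track an exponential
    moving average of the student weights."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def consistency_loss(student_logits, teacher_logits):
    """Consistency term on unlabeled data: student and teacher predictions
    should agree (one such term per scale in the multiscale variant)."""
    return F.mse_loss(torch.softmax(student_logits, dim=1),
                      torch.softmax(teacher_logits, dim=1))
```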
25
M GJ, S B. DeepNet model empowered cuckoo search algorithm for the effective identification of lung cancer nodules. FRONTIERS IN MEDICAL TECHNOLOGY 2023; 5:1157919. [PMID: 37752910 PMCID: PMC10518616 DOI: 10.3389/fmedt.2023.1157919] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Accepted: 08/22/2023] [Indexed: 09/28/2023] Open
Abstract
Introduction Globally, lung cancer is a highly harmful type of cancer. An efficient diagnosis system can enable pathologists to recognize the type and nature of lung nodules and the mode of therapy to increase the patient's chance of survival. Hence, implementing an automatic and reliable system to segment lung nodules from a computed tomography (CT) image is useful in the medical industry. Methods This study develops a novel fully convolutional deep neural network (hereafter called DeepNet) model for segmenting lung nodules from CT scans. This model includes an encoder/decoder network that achieves pixel-wise image segmentation. The encoder network exploits a Visual Geometry Group (VGG-19) model as a base architecture, while the decoder network exploits 16 upsampling and deconvolution modules. The encoder used in this model has a very flexible structural design that can be modified and trained for any resolution based on the size of input scans. The decoder network upsamples and maps the low-resolution attributes of the encoder. Thus, there is a considerable drop in the number of variables used for the learning process, as the network recycles the pooling indices of the encoder for segmentation. The thresholding method and the cuckoo search algorithm determine the most useful features when categorizing cancer nodules. Results and discussion The effectiveness of the intended DeepNet model is carefully assessed on the real-world database known as The Cancer Imaging Archive (TCIA) dataset, and its effectiveness is demonstrated by comparing it with other modern segmentation models in terms of selected performance measures. The empirical analysis reveals that DeepNet significantly outperforms other prevalent segmentation algorithms with 0.962 ± 0.023% volume error, 0.968 ± 0.011 Dice similarity coefficient, 0.856 ± 0.011 Jaccard similarity index, and 0.045 ± 0.005 s average processing time.
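A compact sketch of a VGG-19 encoder with a small upsampling decoder, in the spirit of the encoder/decoder design described above (assuming torchvision; the layer counts are illustrative, not the paper's):

```python
import torch.nn as nn
from torchvision.models import vgg19

def make_segmenter(num_classes: int = 1) -> nn.Module:
    """VGG-19 feature stack as encoder (stride 32) plus a plain upsampling
    decoder that maps the low-resolution features back to full resolution."""
    encoder = vgg19(weights=None).features
    channels = [512, 256, 128, 64, 32]
    decoder_layers = []
    for c_in, c_out in zip(channels, channels[1:] + [channels[-1]]):
        decoder_layers += [
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        ]
    decoder_layers.append(nn.Conv2d(channels[-1], num_classes, kernel_size=1))
    return nn.Sequential(encoder, *decoder_layers)
```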
Affiliation(s)
- Grace John M: Department of Electronics and Communication, Karpagam Academy of Higher Education, Coimbatore, India
26
Zhi L, Jiang W, Zhang S, Zhou T. Deep neural network pulmonary nodule segmentation methods for CT images: Literature review and experimental comparisons. Comput Biol Med 2023; 164:107321. [PMID: 37595518 DOI: 10.1016/j.compbiomed.2023.107321] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 05/08/2023] [Accepted: 08/07/2023] [Indexed: 08/20/2023]
Abstract
Automatic and accurate segmentation of pulmonary nodules in CT images can help physicians perform more accurate quantitative analysis, diagnose diseases, and improve patient survival. In recent years, with the development of deep learning technology, pulmonary nodule segmentation methods based on deep neural networks have gradually replaced traditional segmentation methods. This paper reviews the recent pulmonary nodule segmentation algorithms based on deep neural networks. First, the heterogeneity of pulmonary nodules, the interpretability of segmentation results, and external environmental factors are discussed, and then the open-source 2D and 3D models in medical segmentation tasks in recent years are applied to the Lung Image Database Consortium and Image Database Resource Initiative (LIDC) and Lung Nodule Analysis 16 (Luna16) datasets for comparison, and the visual diagnostic features marked by radiologists are evaluated one by one. According to the analysis of the experimental data, the following conclusions are drawn: (1) In the pulmonary nodule segmentation task, the performance of the 2D segmentation models DSC is generally better than that of the 3D segmentation models. (2) 'Subtlety', 'Sphericity', 'Margin', 'Texture', and 'Size' have more influence on pulmonary nodule segmentation, while 'Lobulation', 'Spiculation', and 'Benign and Malignant' features have less influence on pulmonary nodule segmentation. (3) Higher accuracy in pulmonary nodule segmentation can be achieved based on better-quality CT images. (4) Good contextual information acquisition and attention mechanism design positively affect pulmonary nodule segmentation.
Affiliation(s)
- Lijia Zhi: School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China
- Wujun Jiang: School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China
- Shaomin Zhang: School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China
- Tao Zhou: School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China
27
Simeth J, Jiang J, Nosov A, Wibmer A, Zelefsky M, Tyagi N, Veeraraghavan H. Deep learning-based dominant index lesion segmentation for MR-guided radiation therapy of prostate cancer. Med Phys 2023; 50:4854-4870. [PMID: 36856092 PMCID: PMC11098147 DOI: 10.1002/mp.16320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 01/11/2023] [Accepted: 01/29/2023] [Indexed: 03/02/2023] Open
Abstract
BACKGROUND Dose escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL. PURPOSE To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL defined by Gleason score (GS) ≥3+4 from MR images applied to MR-guided radiation therapy, and to validate the generalizability of constructed models across scanner and acquisition differences. METHODS Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3Tesla Philips MR). The five networks include: multiple resolution residually connected network (MRRN) and MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet), as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and by accuracy with respect to two raters (on Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository. RESULTS In general, MRRN-DS more accurately segmented tumors than other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset3 (DSC of 0.45; p = 0.04). FPSnet-SL was similarly accurate as MRRN-DS in Dataset2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than that between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). CONCLUSIONS MRRN-DS was generalizable to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
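Deep supervision of the kind added to MRRN-DS can be written as a weighted sum of losses over multi-resolution outputs; a PyTorch sketch under that reading (weights and loss choice are illustrative):

```python
import torch.nn.functional as F

def deep_supervision_loss(preds, target, weights=(1.0, 0.5, 0.25)):
    """Deep supervision: auxiliary losses on intermediate decoder outputs,
    with the (B, 1, H, W) mask resized to each prediction's resolution."""
    loss = 0.0
    for pred, w in zip(preds, weights):
        t = F.interpolate(target, size=pred.shape[2:], mode="nearest")
        loss = loss + w * F.binary_cross_entropy_with_logits(pred, t)
    return loss
```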
Affiliation(s)
- Josiah Simeth: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Jue Jiang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Anton Nosov: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Andreas Wibmer: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Michael Zelefsky: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Neelam Tyagi: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
28
Dunn B, Pierobon M, Wei Q. Automated Classification of Lung Cancer Subtypes Using Deep Learning and CT-Scan Based Radiomic Analysis. Bioengineering (Basel) 2023; 10:690. [PMID: 37370621 DOI: 10.3390/bioengineering10060690] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Revised: 06/02/2023] [Accepted: 06/03/2023] [Indexed: 06/29/2023] Open
Abstract
Artificial intelligence and emerging data science techniques are being leveraged to interpret medical image scans. Traditional image analysis relies on visual interpretation by a trained radiologist, which is time-consuming and can, to some degree, be subjective. The development of reliable, automated diagnostic tools is a key goal of radiomics, a fast-growing research field which combines medical imaging with personalized medicine. Radiomic studies have demonstrated potential for accurate lung cancer diagnoses and prognostications. The practice of delineating the tumor region of interest, known as segmentation, is a key bottleneck in the development of generalized classification models. In this study, the incremental multiple resolution residual network (iMRRN), a publicly available and trained deep learning segmentation model, was applied to automatically segment CT images collected from 355 lung cancer patients included in the dataset "Lung-PET-CT-Dx", obtained from The Cancer Imaging Archive (TCIA), an open-access source for radiological images. We report a failure rate of 4.35% when using the iMRRN to segment tumor lesions within plain CT images in the lung cancer CT dataset. Seven classification algorithms were trained on the extracted radiomic features and tested for their ability to classify different lung cancer subtypes. Over-sampling was used to handle unbalanced data. Chi-square tests revealed the higher order texture features to be the most predictive when classifying lung cancers by subtype. The support vector machine showed the highest accuracy, 92.7% (0.97 AUC), when classifying three histological subtypes of lung cancer: adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. The results demonstrate the potential of AI-based computer-aided diagnostic tools to automatically diagnose subtypes of lung cancer by coupling deep learning image segmentation with supervised classification. Our study demonstrated the integrated application of existing AI techniques in the non-invasive and effective diagnosis of lung cancer subtypes, and also shed light on several practical issues concerning the application of AI in biomedicine.
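The classification stage described here (over-sampling, chi-square feature selection, SVM) maps onto a standard scikit-learn pipeline; a sketch with synthetic stand-in features, assuming imbalanced-learn is available:

```python
from imblearn.over_sampling import RandomOverSampler
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Stand-in for a radiomic feature table (rows: lesions, cols: features)
# with three histological-subtype labels.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X, y)

clf = make_pipeline(
    MinMaxScaler(),            # chi-square selection needs non-negative inputs
    SelectKBest(chi2, k=20),   # keep the most subtype-predictive features
    SVC(probability=True),
)
clf.fit(X_res, y_res)
```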
Affiliation(s)
- Bryce Dunn: Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
- Mariaelena Pierobon: School of Systems Biology, Center for Applied Proteomics and Molecular Medicine, George Mason University, Fairfax, VA 22030, USA
- Qi Wei: Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
29
VJ MJ, S K. Multi-classification approach for lung nodule detection and classification with proposed texture feature in X-ray images. MULTIMEDIA TOOLS AND APPLICATIONS 2023:1-28. [PMID: 37362672 PMCID: PMC10188326 DOI: 10.1007/s11042-023-15281-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Revised: 10/22/2022] [Accepted: 04/06/2023] [Indexed: 06/28/2023]
Abstract
Lung cancer is a widespread and lethal type of cancer worldwide. Nevertheless, analysis indicates that earlier recognition of lung cancer considerably improves the chances of survival. By deploying X-rays and Computed Tomography (CT) scans, radiologists can identify hazardous nodules at an early stage. However, as more people undergo these examinations, the workload rises for radiologists. Computer Assisted Diagnosis (CAD)-based detection systems can identify these nodules automatically and could assist radiologists in reducing their workloads. However, such systems can suffer from lower sensitivity and a higher count of false positives. The proposed work introduces a new approach for Lung Nodule (LN) detection. At first, Histogram Equalization (HE) is performed during pre-processing. As the next step, improved Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH)-based segmentation is done. Then, the characteristics, including the "Gray Level Run-Length Matrix (GLRM), Gray Level Co-Occurrence Matrix (GLCM), and the proposed Local Vector Pattern (LVP)," are retrieved. These features are then categorized utilizing an optimized Convolutional Neural Network (CNN), which classifies images as nodule or non-nodule. Subsequently, Long Short-Term Memory (LSTM) is deployed to categorize nodule types (benign, malignant, or normal). The CNN weights are fine-tuned by the Chaotic Population-based Beetle Swarm Algorithm (CP-BSA). Finally, the superiority of the proposed approach is confirmed across various measures. The developed approach exhibited a high precision value of 0.9575 in the best-case scenario and a high sensitivity value of 0.9646 in the mean-case scenario.
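One of the texture-descriptor families used above, GLCM statistics, can be computed with scikit-image; a small sketch (parameters are illustrative):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch: np.ndarray) -> dict:
    """Summary statistics from a gray-level co-occurrence matrix (GLCM).
    `patch` is expected to be a uint8 grayscale image."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```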
Affiliation(s)
- Mary Jaya VJ: Department of Computer Science, Assumption Autonomous College, Changanassery, Kerala, India
- Krishnakumar S: Department of Electronics, School of Technology and Applied Sciences, Mahatma Gandhi University Research Centre, Kochi, Kerala, India
30
Qiao P, Li H, Song G, Han H, Gao Z, Tian Y, Liang Y, Li X, Zhou SK, Chen J. Semi-Supervised CT Lesion Segmentation Using Uncertainty-Based Data Pairing and SwapMix. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1546-1562. [PMID: 37015649 DOI: 10.1109/tmi.2022.3232572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Semi-supervised learning (SSL) methods show their powerful performance to deal with the issue of data shortage in the field of medical image segmentation. However, existing SSL methods still suffer from the problem of unreliable predictions on unannotated data due to the lack of manual annotations for them. In this paper, we propose an unreliability-diluted consistency training (UDiCT) mechanism to dilute the unreliability in SSL by assembling reliable annotated data into unreliable unannotated data. Specifically, we first propose an uncertainty-based data pairing module to pair annotated data with unannotated data based on a complementary uncertainty pairing rule, which avoids two hard samples being paired off. Secondly, we develop SwapMix, a mixed sample data augmentation method, to integrate annotated data into unannotated data for training our model in a low-unreliability manner. Finally, UDiCT is trained by minimizing a supervised loss and an unreliability-diluted consistency loss, which makes our model robust to diverse backgrounds. Extensive experiments on three chest CT datasets show the effectiveness of our method for semi-supervised CT lesion segmentation.
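A toy rendition of the mixed-sample idea behind SwapMix, assuming PyTorch tensors of matching shape; the uncertainty-based pairing rule is not shown:

```python
import torch

def swap_mix(labeled: torch.Tensor, unlabeled: torch.Tensor, frac: float = 0.5):
    """Toy mixed-sample augmentation: paste a random sub-volume of an annotated
    CT volume (D, H, W) into an unannotated one of the same shape."""
    mixed = unlabeled.clone()
    d, h, w = labeled.shape
    sd, sh, sw = int(d * frac), int(h * frac), int(w * frac)
    z = torch.randint(0, d - sd + 1, (1,)).item()
    y = torch.randint(0, h - sh + 1, (1,)).item()
    x = torch.randint(0, w - sw + 1, (1,)).item()
    mixed[z:z+sd, y:y+sh, x:x+sw] = labeled[z:z+sd, y:y+sh, x:x+sw]
    return mixed
```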
31
Ebadi N, Li R, Das A, Roy A, Nikos P, Najafirad P. CBCT-guided adaptive radiotherapy using self-supervised sequential domain adaptation with uncertainty estimation. Med Image Anal 2023; 86:102800. [PMID: 37003101 DOI: 10.1016/j.media.2023.102800] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 01/29/2023] [Accepted: 03/14/2023] [Indexed: 03/17/2023]
Abstract
Adaptive radiotherapy (ART) is an advanced technology in modern cancer treatment that incorporates progressive changes in patient anatomy into active plan/dose adaption during the fractionated treatment. However, the clinical application relies on the accurate segmentation of cancer tumors on low-quality on-board images, which has posed challenges for both manual delineation and deep learning-based models. In this paper, we propose a novel sequence transduction deep neural network with an attention mechanism to learn the shrinkage of the cancer tumor based on patients' weekly cone-beam computed tomography (CBCT). We design a self-supervised domain adaption (SDA) method to learn and adapt the rich textural and spatial features from pre-treatment high-quality computed tomography (CT) to CBCT modality in order to address the poor image quality and lack of labels. We also provide uncertainty estimation for sequential segmentation, which aids not only in the risk management of treatment planning but also in the calibration and reliability of the model. Our experimental results based on a clinical non-small cell lung cancer (NSCLC) dataset with sixteen patients and ninety-six longitudinal CBCTs show that our model correctly learns weekly deformation of the tumor over time with an average dice score of 0.92 on the immediate next step, and is able to predict multiple steps (up to 5 weeks) for future patient treatments with an average dice score reduction of 0.05. By incorporating the tumor shrinkage predictions into a weekly re-planning strategy, our proposed method demonstrates a significant decrease in the risk of radiation-induced pneumonitis up to 35% while maintaining the high tumor control probability.
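Monte-Carlo dropout is one common way to produce the per-voxel uncertainty maps this abstract refers to; a PyTorch sketch, noting that the paper's estimator may differ:

```python
import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples: int = 20):
    """Monte-Carlo dropout at inference: keep dropout active and use the
    per-voxel variance of repeated predictions as an uncertainty map.
    (Note: model.train() also puts any BatchNorm layers in training mode.)"""
    model.train()
    preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)   # mean segmentation, uncertainty map
```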
Affiliation(s)
- Nima Ebadi: Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America
- Ruiqi Li: Department of Radiation Oncology, UT Health San Antonio, San Antonio, TX 78229, United States of America
- Arun Das: Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America; Department of Medicine, The University of Pittsburgh, Pittsburgh, PA 15260, United States of America
- Arkajyoti Roy: Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America
- Papanikolaou Nikos: Department of Radiation Oncology, UT Health San Antonio, San Antonio, TX 78229, United States of America
- Peyman Najafirad: Department of Computer Science, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America
32
Paudyal R, Shah AD, Akin O, Do RKG, Konar AS, Hatzoglou V, Mahmood U, Lee N, Wong RJ, Banerjee S, Shin J, Veeraraghavan H, Shukla-Dave A. Artificial Intelligence in CT and MR Imaging for Oncological Applications. Cancers (Basel) 2023; 15:cancers15092573. [PMID: 37174039 PMCID: PMC10177423 DOI: 10.3390/cancers15092573] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 04/13/2023] [Accepted: 04/17/2023] [Indexed: 05/15/2023] Open
Abstract
Cancer care increasingly relies on imaging for patient management. The two most common cross-sectional imaging modalities in oncology are computed tomography (CT) and magnetic resonance imaging (MRI), which provide high-resolution anatomic and physiological imaging. Herewith is a summary of recent applications of rapidly advancing artificial intelligence (AI) in CT and MRI oncological imaging that addresses the benefits and challenges of the resultant opportunities with examples. Major challenges remain, such as how best to integrate AI developments into clinical radiology practice, the vigorous assessment of quantitative CT and MR imaging data accuracy, and reliability for clinical utility and research integrity in oncology. Such challenges necessitate an evaluation of the robustness of imaging biomarkers to be included in AI developments, a culture of data sharing, and the cooperation of knowledgeable academics with vendor scientists and companies operating in radiology and oncology fields. Herein, we will illustrate a few challenges and solutions of these efforts using novel methods for synthesizing different contrast modality images, auto-segmentation, and image reconstruction with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. The imaging community must embrace the need for quantitative CT and MRI metrics beyond lesion size measurement. AI methods for the extraction and longitudinal tracking of imaging metrics from registered lesions and understanding the tumor environment will be invaluable for interpreting disease status and treatment efficacy. This is an exciting time to work together to move the imaging field forward with narrow AI-specific tasks. New AI developments using CT and MRI datasets will be used to improve the personalized management of cancer patients.
Affiliation(s)
- Ramesh Paudyal: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Akash D Shah: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Oguz Akin: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard K G Do: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amaresha Shridhar Konar: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Vaios Hatzoglou: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Usman Mahmood: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Nancy Lee: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard J Wong: Head and Neck Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amita Shukla-Dave: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
33
Sebastian AE, Dua D. Lung Nodule Detection via Optimized Convolutional Neural Network: Impact of Improved Moth Flame Algorithm. SENSING AND IMAGING 2023; 24:11. [PMID: 36936054 PMCID: PMC10009866 DOI: 10.1007/s11220-022-00406-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 09/30/2022] [Accepted: 11/02/2022] [Indexed: 06/18/2023]
Abstract
Lung cancer is a high-risk disease that affects people all over the world, and lung nodules are the most common sign of early lung cancer. Since early identification of lung cancer can considerably improve a patient's chances of survival, an accurate and efficient nodule detection system is essential. Automatic lung nodule recognition decreases radiologists' effort, as well as the risk of misdiagnosis and missed diagnoses. Hence, this article develops a new lung nodule detection model with four stages: image pre-processing, segmentation, feature extraction, and classification. Pre-processing is the first step, in which the input image is subjected to a series of operations. Then, the "Otsu Thresholding model" is used to segment the pre-processed pictures. In the third stage, LBP features are retrieved, which are then classified via an optimized Convolutional Neural Network (CNN). Here, the activation function and convolutional layer count of the CNN are optimally tuned via a proposed algorithm known as Improved Moth Flame Optimization (IMFO). At the end, the merit of the scheme is validated by carrying out analysis in terms of certain measures. In particular, the accuracy of the proposed work is 6.85%, 2.91%, 1.75%, 0.73%, 1.83%, and 4.05% superior to the existing SVM, KNN, CNN, MFO, WTEEB, and GWO + FRVM methods, respectively.
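The Otsu segmentation and LBP feature stages have direct scikit-image counterparts; a brief sketch (parameters are illustrative):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.feature import local_binary_pattern

def preprocess_and_describe(img: np.ndarray):
    """Otsu thresholding plus an LBP texture histogram, mirroring the
    segmentation and feature-extraction stages described above."""
    mask = img > threshold_otsu(img)
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp[mask], bins=10, range=(0, 10), density=True)
    return mask, hist
```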
Affiliation(s)
- Disha Dua: Indira Gandhi Delhi Technical University for Women, Delhi, Delhi, India
34
Thompson HM, Kim JK, Jimenez-Rodriguez RM, Garcia-Aguilar J, Veeraraghavan H. Deep Learning-Based Model for Identifying Tumors in Endoscopic Images From Patients With Locally Advanced Rectal Cancer Treated With Total Neoadjuvant Therapy. Dis Colon Rectum 2023; 66:383-391. [PMID: 35358109 PMCID: PMC10185333 DOI: 10.1097/dcr.0000000000002295] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
BACKGROUND A barrier to the widespread adoption of watch-and-wait management for locally advanced rectal cancer is the inaccuracy and variability of identifying tumor response endoscopically in patients who have completed total neoadjuvant therapy (chemoradiotherapy and systemic chemotherapy). OBJECTIVE This study aimed to develop a novel method of identifying the presence or absence of a tumor in endoscopic images using deep convolutional neural network-based automatic classification and to assess the accuracy of the method. DESIGN In this prospective pilot study, endoscopic images obtained before, during, and after total neoadjuvant therapy were grouped on the basis of tumor presence. A convolutional neural network was modified for probabilistic classification of tumor versus no tumor and trained with an endoscopic image set. After training, a testing endoscopic imaging set was applied to the network. SETTINGS The study was conducted at a comprehensive cancer center. PATIENTS Images were analyzed from 109 patients who were diagnosed with locally advanced rectal cancer between December 2012 and July 2017 and who underwent total neoadjuvant therapy. MAIN OUTCOME MEASURES The main outcomes were accuracy of identifying tumor presence or absence in endoscopic images measured as area under the receiver operating characteristic for the training and testing image sets. RESULTS A total of 1392 images were included; 1099 images (468 of no tumor and 631 of tumor) were for training and 293 images (151 of no tumor and 142 of tumor) for testing. The area under the receiver operating characteristic for training and testing was 0.83. LIMITATIONS The study had a limited number of images in each set and was conducted at a single institution. CONCLUSIONS The convolutional neural network method is moderately accurate in distinguishing tumor from no tumor. Further research should focus on validating the convolutional neural network on a large image set. See Video Abstract at http://links.lww.com/DCR/B959.
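A tumor/no-tumor classifier of the general kind described here can be set up by re-heading a pretrained backbone; a hypothetical torchvision sketch (the paper's exact network is not specified in this abstract):

```python
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# Hypothetical tumor / no-tumor classifier: an ImageNet-pretrained backbone
# with a single-logit head, so sigmoid(logit) gives P(tumor).
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 1)
```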
Affiliation(s)
- Hannah M Thompson: Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, New York
- Jin K Kim: Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, New York
- Julio Garcia-Aguilar: Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, New York
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York
35
Lee J, Lee MJ, Kim BS, Hong H. Automated lung tumor segmentation robust to various tumor sizes using a consistency learning-based multi-scale dual-attention network. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2023; 31:879-892. [PMID: 37424487 DOI: 10.3233/xst-230003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/11/2023]
Abstract
BACKGROUND It is often difficult to automatically segment lung tumors due to the large variation in tumor size, ranging from less than 1 cm to greater than 7 cm depending on the T-stage. OBJECTIVE This study aims to accurately segment lung tumors of various sizes using a consistency learning-based multi-scale dual-attention network (CL-MSDA-Net). METHODS To avoid under- and over-segmentation caused by differing ratios of lung tumor to surrounding structures in the input patch, a size-invariant patch is generated by normalizing this ratio to the average size of the lung tumors used for training. Two input patches, a size-invariant patch and a size-variant patch, are trained on a consistency learning-based network consisting of dual branches that share weights, with a consistency loss encouraging the branches to generate similar outputs. The network of each branch has a multi-scale dual-attention module that learns image features at different scales and uses channel and spatial attention to enhance the scale-attention ability needed to segment lung tumors of different sizes. RESULTS In experiments with hospital datasets, CL-MSDA-Net showed an F1-score of 80.49%, recall of 79.06%, and precision of 86.78%. These F1-scores are 3.91%, 3.38%, and 2.95% higher than those of U-Net, U-Net with a multi-scale module, and U-Net with a multi-scale dual-attention module, respectively. In experiments with the NSCLC-Radiomics datasets, CL-MSDA-Net showed an F1-score of 71.7%, recall of 68.24%, and precision of 79.33%. These F1-scores are 3.66%, 3.38%, and 3.13% higher than those of U-Net, U-Net with a multi-scale module, and U-Net with a multi-scale dual-attention module, respectively. CONCLUSIONS CL-MSDA-Net improves segmentation performance on average for tumors of all sizes, with especially significant improvements for small tumors.
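The weight-sharing consistency scheme can be illustrated with a short training-step sketch: one network processes both the size-variant and the size-invariant patch, and a consistency term penalizes disagreement between the two predictions. The MSE form of the consistency term, the loss weighting, and the assumption that both patches and the mask share a common shape are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def consistency_step(seg_net, variant_patch, invariant_patch, mask, alpha=0.5):
    # The two branches share weights: one module, two forward passes.
    pred_var = seg_net(variant_patch)    # logits from the size-variant input
    pred_inv = seg_net(invariant_patch)  # logits from the size-invariant input

    # Supervised term on each branch against the same tumor mask.
    sup = (F.binary_cross_entropy_with_logits(pred_var, mask)
           + F.binary_cross_entropy_with_logits(pred_inv, mask))

    # Consistency term: the branches should produce similar probability maps.
    cons = F.mse_loss(torch.sigmoid(pred_var), torch.sigmoid(pred_inv))
    return sup + alpha * cons
```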
Affiliation(s)
- Jumin Lee: Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
- Min-Jin Lee: Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
- Helen Hong: Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
36
Zhao G, Liang K, Pan C, Zhang F, Wu X, Hu X, Yu Y. Graph Convolution Based Cross-Network Multiscale Feature Fusion for Deep Vessel Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:183-195. [PMID: 36112564 DOI: 10.1109/tmi.2022.3207093] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Vessel segmentation is widely used to help with vascular disease diagnosis. Vessels reconstructed using existing methods are often not sufficiently accurate to meet clinical use standards. This is because 3D vessel structures are highly complicated and exhibit unique characteristics, including sparsity and anisotropy. In this paper, we propose a novel hybrid deep neural network for vessel segmentation. Our network consists of two cascaded subnetworks performing initial and refined segmentation respectively. The second subnetwork further has two tightly coupled components, a traditional CNN-based U-Net and a graph U-Net. Cross-network multi-scale feature fusion is performed between these two U-shaped networks to effectively support high-quality vessel segmentation. The entire cascaded network can be trained from end to end. The graph in the second subnetwork is constructed according to a vessel probability map as well as appearance and semantic similarities in the original CT volume. To tackle the challenges caused by the sparsity and anisotropy of vessels, a higher percentage of graph nodes are distributed in areas that potentially contain vessels while a higher percentage of edges follow the orientation of potential nearby vessels. Extensive experiments demonstrate our deep network achieves state-of-the-art 3D vessel segmentation performance on multiple public and in-house datasets.
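The probability-guided graph construction is the distinctive step here: node density follows the vessel probability map, so regions likely to contain vessels receive more graph nodes. A minimal sketch of such biased node sampling follows; the node count, mixing weight, and sampling scheme are assumptions for illustration, and the paper's full construction additionally orients edges along likely vessel directions.

```python
import numpy as np

def sample_graph_nodes(vessel_prob, n_nodes=2048, bias=0.9, rng=None):
    """vessel_prob: 3-D array of per-voxel vessel probabilities in [0, 1]."""
    rng = rng or np.random.default_rng(0)
    flat = vessel_prob.ravel().astype(np.float64)
    # Mix a probability-weighted distribution with a uniform one so that
    # some nodes still cover background regions.
    weighted = flat / flat.sum()
    uniform = np.full_like(flat, 1.0 / flat.size)
    p = bias * weighted + (1.0 - bias) * uniform
    idx = rng.choice(flat.size, size=n_nodes, replace=False, p=p)
    # Return (n_nodes, 3) voxel coordinates to use as graph node positions.
    return np.stack(np.unravel_index(idx, vessel_prob.shape), axis=1)
```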
37
Rehman A, Butt MA, Zaman M. Attention Res-UNet. INTERNATIONAL JOURNAL OF DECISION SUPPORT SYSTEM TECHNOLOGY 2023. [DOI: 10.4018/ijdsst.315756] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
During a dermoscopy examination, accurate and automatic skin lesion detection and segmentation can assist medical experts in resecting problematic areas and decrease the risk of death due to skin cancer. To develop a fully automated deep learning model for skin lesion segmentation, the authors design Attention Res-UNet by incorporating residual connections, squeeze-and-excite units, atrous spatial pyramid pooling, and attention gates into the basic UNet architecture. The model uses the focal Tversky loss function to achieve a better trade-off between recall and precision when training on smaller lesions, while improving the overall outcome of the proposed model. Experiments demonstrate that this design, when evaluated on the publicly available ISIC 2018 skin lesion segmentation dataset, outperforms existing standard methods with a Dice score of 89.14% and IoU of 81.16%, and achieves a better trade-off between precision and recall. The authors have also performed statistical tests comparing this model with other standard methods and found the improvements to be statistically significant.
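The focal Tversky loss named above has a standard formulation: a Tversky index TI = TP / (TP + alpha*FN + beta*FP), with the loss (1 - TI) raised to a focusing power so that harder, smaller lesions contribute more to the gradient. A common implementation is sketched below; the hyperparameter values are typical literature defaults, not necessarily the paper's settings.

```python
import torch

def focal_tversky_loss(logits, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """Binary focal Tversky loss; alpha > beta penalizes false negatives more."""
    prob = torch.sigmoid(logits).flatten()
    target = target.flatten()
    tp = (prob * target).sum()            # soft true positives
    fn = ((1 - prob) * target).sum()      # soft false negatives
    fp = (prob * (1 - target)).sum()      # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma         # gamma < 1 focuses on hard examples
```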
38
Zhang F, Wang Q, Fan E, Lu N, Chen D, Jiang H, Wang Y. Automatic segmentation of the tumor in nonsmall-cell lung cancer by combining coarse and fine segmentation. Med Phys 2022. [PMID: 36514264 DOI: 10.1002/mp.16158] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 08/05/2022] [Accepted: 11/26/2022] [Indexed: 12/15/2022] Open
Abstract
OBJECTIVES Radiotherapy plays an important role in the treatment of nonsmall-cell lung cancer (NSCLC). Accurate delineation of the tumor is the key to successful radiotherapy. Compared with the commonly used manual delineation, which is time-consuming and laborious, automatic segmentation methods based on deep learning can greatly improve treatment efficiency. METHODS In this paper, we introduce an automatic segmentation method for NSCLC that combines coarse and fine segmentation. The coarse segmentation network is the first level, identifying the rough region of the tumor. In this network, according to the tissue structure distribution of the thoracic cavity where the tumor is located, we designed a competition method between tumors and organs at risk (OARs), which can increase the proportion of the identified tumor covering the ground truth and reduce false identification. The fine segmentation network is the second level, carrying out precise segmentation on the results of the coarse level. These two networks are independent of each other during training. At inference, morphological processing (small-scale erosion followed by large-scale dilation) is applied to the coarse segmentation results, and the outcome is sent to the fine segmentation network as input, so as to achieve the complementary advantages of the two networks. RESULTS In the experiment, CT images of 200 patients with NSCLC are used to train the network, and CT images of 60 patients are used for testing. Our method produced a Dice similarity coefficient of 0.78 ± 0.10. CONCLUSIONS The experimental results show that the proposed method can accurately segment NSCLC tumors and can provide support for clinical diagnosis and treatment.
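The erosion-then-dilation hand-off between the two networks can be written in a few lines; the iteration counts standing in for the "small scale" and "large scale" structuring elements below are illustrative assumptions, not the paper's reported values.

```python
import numpy as np
from scipy import ndimage

def coarse_to_fine_region(coarse_mask, erode_iter=1, dilate_iter=5):
    """coarse_mask: binary array produced by the coarse network."""
    m = ndimage.binary_erosion(coarse_mask, iterations=erode_iter)   # remove thin false hits
    m = ndimage.binary_dilation(m, iterations=dilate_iter)           # regrow with a safety margin
    return m  # region of interest handed to the fine segmentation network
```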
Affiliation(s)
- Fuli Zhang: Radiation Oncology Department, The Seventh Medical Center of Chinese PLA General Hospital, Beijing, China
- Qiusheng Wang: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Enyu Fan: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Na Lu: Senior Department of Oncology, The Fifth Medical Center of PLA General Hospital, Beijing, China
- Diandian Chen: Senior Department of Oncology, The Fifth Medical Center of PLA General Hospital, Beijing, China
- Huayong Jiang: Senior Department of Oncology, The Fifth Medical Center of PLA General Hospital, Beijing, China
- Yadi Wang: Senior Department of Oncology, The Fifth Medical Center of PLA General Hospital, Beijing, China
39
Artificial intelligence for prediction of response to cancer immunotherapy. Semin Cancer Biol 2022; 87:137-147. [PMID: 36372326 DOI: 10.1016/j.semcancer.2022.11.008] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 11/02/2022] [Accepted: 11/08/2022] [Indexed: 11/13/2022]
Abstract
Artificial intelligence (AI) refers to the application of machines that imitate intelligent behaviors to solve complex tasks with minimal human intervention, and it includes machine learning and deep learning. The use of AI in medicine improves health-care systems in multiple areas, such as diagnostic confirmation, risk stratification, analysis, prognosis prediction, treatment surveillance, and virtual health support, and it has considerable potential to revolutionize and reshape medicine. In immunotherapy, AI has been applied both to uncover underlying immune signatures associated with responses to immunotherapy and to predict responses to immunotherapy directly. AI-based analysis of high-throughput sequences and medical images can provide useful information for the management of cancer immunotherapy, given its strengths in selecting appropriate subjects, improving therapeutic regimens, and predicting individualized prognosis. In the present review, we aim to evaluate a broad framework of AI-based computational approaches for predicting response to cancer immunotherapy in both indirect and direct manners. Furthermore, we summarize our perspectives on the challenges and opportunities of further AI applications in cancer immunotherapy relating to clinical practicability.
40
Liu S, Tang X, Cai T, Zhang Y, Wang C. COVID-19 CT image segmentation based on improved Res2Net. Med Phys 2022; 49:7583-7595. [PMID: 35916116 PMCID: PMC9538682 DOI: 10.1002/mp.15882] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Revised: 06/27/2022] [Accepted: 07/18/2022] [Indexed: 01/08/2023] Open
Abstract
PURPOSE Coronavirus disease 2019 (COVID-19) is threatening the health of people globally and bringing great losses to the economy and society. Computed tomography (CT) image segmentation can help clinicians quickly identify COVID-19-infected regions, and accurate segmentation of the infection area can contribute to screening confirmed cases. METHODS We designed a segmentation network for COVID-19-infected regions in CT images. To begin with, multilayered features were extracted by the backbone network of Res2Net. Subsequently, edge features of the infected regions in the low-level feature f2 were extracted by the edge attention module. Second, we carefully designed the structure of the attention position module (APM) to extract the high-level feature f5 and detect infected regions. Finally, we proposed a context exploration module consisting of two parallel explore blocks, which can remove some false positives and false negatives to reach more accurate segmentation results. RESULTS Experimental results show that, on the public COVID-19 dataset, the Dice, sensitivity, specificity, $S_\alpha$, $E_\phi^{mean}$, and mean absolute error (MAE) of our method are 0.755, 0.751, 0.959, 0.795, 0.919, and 0.060, respectively. Compared with the latest COVID-19 segmentation model Inf-Net, the Dice similarity coefficient of our model has increased by 7.3% and the sensitivity (Sen) by 5.9%, while the MAE has dropped by 2.2%. CONCLUSIONS Our method performs well on COVID-19 CT image segmentation. We also find that our method is portable and can be adapted to various popular networks. In a word, our method can help screen people infected with COVID-19 effectively and save the labor of clinicians and radiologists.
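The first four scalar metrics quoted above can be computed directly from binary masks; a plain reference implementation is sketched below. This is a generic formulation, not the authors' evaluation code; the structure measure $S_\alpha$ and enhanced-alignment measure $E_\phi^{mean}$ require the dedicated formulations from the saliency literature and are omitted here.

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice, sensitivity, specificity, and MAE from binary prediction/ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dice = 2 * tp / max(2 * tp + fp + fn, 1)
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    mae = np.abs(pred.astype(float) - gt.astype(float)).mean()
    return dice, sensitivity, specificity, mae
```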
Affiliation(s)
- Shangwang Liu: School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China; Engineering Lab of Intelligence Business & Internet of Things, Xinxiang, Henan, China
- Xiufang Tang: School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Tongbo Cai: School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Yangyang Zhang: School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Changgeng Wang: School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
41
Tang T, Li F, Jiang M, Xia X, Zhang R, Lin K. Improved Complementary Pulmonary Nodule Segmentation Model Based on Multi-Feature Fusion. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1755. [PMID: 36554161 PMCID: PMC9778431 DOI: 10.3390/e24121755] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 11/23/2022] [Accepted: 11/28/2022] [Indexed: 06/17/2023]
Abstract
Accurate segmentation of lung nodules from pulmonary computed tomography (CT) slices plays a vital role in the analysis and diagnosis of lung cancer. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in the automatic segmentation of lung nodules. However, they are still challenged by the large diversity of segmentation targets and the small inter-class variance between the nodule and its surrounding tissues. To tackle this issue, we propose a features complementary network modeled on the process of clinical diagnosis, which makes full use of the complementarity among lung nodule location information, the global coarse area, and edge information. Specifically, we first consider the importance of the global features of nodules in segmentation and propose a cross-scale weighted high-level feature decoder module. Then, we develop a low-level feature decoder module for edge feature refinement. Finally, we construct a complementary module so that the two kinds of information complement and promote each other. Furthermore, we weight pixels located at the nodule edge in the loss function and add edge supervision to the deep supervision, both of which emphasize the importance of edges in segmentation. The experimental results demonstrate that our model achieves robust pulmonary nodule segmentation and more accurate edge segmentation.
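The edge weighting in the loss can be sketched compactly: boundary pixels of the ground-truth mask, found here with a morphological gradient built from max-pooling, receive a larger per-pixel weight. The weight value and the 3x3 kernel are assumptions, not the paper's exact settings.

```python
import torch.nn.functional as F

def edge_weighted_bce(logits, target, edge_weight=5.0):
    """logits, target: (N, 1, H, W) tensors; target is a binary float mask."""
    # Morphological gradient of the mask = dilation - erosion (3x3 kernel).
    dil = F.max_pool2d(target, kernel_size=3, stride=1, padding=1)
    ero = -F.max_pool2d(-target, kernel_size=3, stride=1, padding=1)
    edge = (dil - ero).clamp(0, 1)                     # 1 on boundary pixels
    weights = 1.0 + (edge_weight - 1.0) * edge         # up-weight edge pixels
    return F.binary_cross_entropy_with_logits(logits, target, weight=weights)
```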
Affiliation(s)
- Tiequn Tang: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China; School of Physics and Electronic Engineering, Fuyang Normal University, Fuyang 236037, China
- Feng Li: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Minshan Jiang: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China; Department of Biomedical Engineering, Florida International University, Miami, FL 33174, USA
- Xunpeng Xia: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Rongfu Zhang: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Kailin Lin: Fudan University Shanghai Cancer Center, Shanghai 200032, China
42
Zhang X, Zhang B, Deng S, Meng Q, Chen X, Xiang D. Cross modality fusion for modality-specific lung tumor segmentation in PET-CT images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac994e] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 10/11/2022] [Indexed: 11/09/2022]
Abstract
Although positron emission tomography-computed tomography (PET-CT) images have been widely used, it is still challenging to accurately segment the lung tumor. Respiration, movement, and the imaging modality itself lead to a large discrepancy in tumor appearance between PET images and CT images. To overcome these difficulties, a novel network is designed to simultaneously obtain the corresponding lung tumors in PET images and CT images. The proposed network can fuse the complementary information of PET images and CT images while preserving modality-specific features. Because of the complementarity between PET images and CT images, the two modalities should be fused for automatic lung tumor segmentation. Therefore, cross-modality decoding blocks are designed to extract modality-specific features of PET images and CT images under the constraints of the other modality. An edge consistency loss is also designed to address the problem of blurred boundaries in PET images and CT images. The proposed method is tested on 126 PET-CT images with non-small cell lung cancer, and Dice similarity coefficient scores of lung tumor segmentation reach 75.66 ± 19.42 in CT images and 79.85 ± 16.76 in PET images. Extensive comparisons with state-of-the-art lung tumor segmentation methods have also been performed to demonstrate the superiority of the proposed network.
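A hedged sketch of an "edge consistency" term: penalize disagreement between the spatial gradients of the PET-branch and CT-branch probability maps, so the two modality-specific tumor boundaries stay aligned. The Sobel formulation and L1 comparison below are assumptions; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
_SOBEL_Y = _SOBEL_X.transpose(2, 3)  # transposed Sobel kernel for the y-gradient

def _edges(prob):
    """Gradient-magnitude edge map of a (N, 1, H, W) probability map."""
    gx = F.conv2d(prob, _SOBEL_X.to(prob.device), padding=1)
    gy = F.conv2d(prob, _SOBEL_Y.to(prob.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_consistency_loss(pet_logits, ct_logits):
    """Encourage the PET and CT branches to predict aligned tumor boundaries."""
    return F.l1_loss(_edges(torch.sigmoid(pet_logits)),
                     _edges(torch.sigmoid(ct_logits)))
```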
43
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. [PMID: 36428662 PMCID: PMC9688236 DOI: 10.3390/cancers14225569] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 11/11/2022] [Accepted: 11/11/2022] [Indexed: 11/15/2022] Open
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and in the monitoring of lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, including the inability to classify cancer images automatically, which makes them less suitable for patients with other pathologies. It is urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textural data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang: Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
44
Xie H, Chen Z, Deng J, Zhang J, Duan H, Li Q. Automatic segmentation of the gross target volume in radiotherapy for lung cancer using transresSEUnet 2.5D Network. J Transl Med 2022; 20:524. [PMID: 36371220 PMCID: PMC9652981 DOI: 10.1186/s12967-022-03732-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 10/28/2022] [Indexed: 11/15/2022] Open
Abstract
Objective This paper proposes a method using the TransResSEUnet2.5D network for accurate automatic segmentation of the Gross Target Volume (GTV) in radiotherapy for lung cancer. Methods A total of 11,370 computed tomography (CT) images from 137 cases of lung cancer patients undergoing radiotherapy, with target volumes delineated by radiotherapists, were used as the training set; 1642 CT images from 20 cases were used as the validation set, and 1685 CT images from 20 cases were used as the test set. The proposed network was tuned and trained to obtain the best segmentation model, and its performance was measured by the Dice Similarity Coefficient (DSC) and the 95% Hausdorff distance (HD95). Lastly, to demonstrate the accuracy of the automatic segmentation of the proposed network, all possible mirrors of the input images were put into Unet2D, Unet2.5D, Unet3D, ResSEUnet3D, ResSEUnet2.5D, and TransResSEUnet2.5D, and their respective segmentation performances were compared and assessed. Results The segmentation results of the test set showed that TransResSEUnet2.5D performed best in the DSC (84.08 ± 0.04)%, HD95 (8.11 ± 3.43) mm, and time (6.50 ± 1.31) s metrics compared with the other networks. Conclusions The TransResSEUnet2.5D proposed in this study can automatically segment the GTV of radiotherapy for lung cancer patients with greater accuracy.
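The HD95 metric reported above is the 95th percentile of symmetric surface distances between the predicted and reference masks. A common distance-transform implementation is sketched below; it assumes binary masks on an isotropic 1 mm grid (anisotropic voxel spacing can be passed via the `sampling` argument of the distance transform).

```python
import numpy as np
from scipy import ndimage

def _surface(mask):
    """Surface voxels: foreground voxels whose erosion removes them."""
    return mask & ~ndimage.binary_erosion(mask)

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance between binary masks."""
    pred_s, gt_s = _surface(pred.astype(bool)), _surface(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of each mask.
    dist_to_gt = ndimage.distance_transform_edt(~gt_s)
    dist_to_pred = ndimage.distance_transform_edt(~pred_s)
    d = np.concatenate([dist_to_gt[pred_s], dist_to_pred[gt_s]])
    return np.percentile(d, 95)
```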
45
Zhang X, Jiang R, Huang P, Wang T, Hu M, Scarsbrook AF, Frangi AF. Dynamic feature learning for COVID-19 segmentation and classification. Comput Biol Med 2022; 150:106136. [PMID: 36240599 PMCID: PMC9523910 DOI: 10.1016/j.compbiomed.2022.106136] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 08/25/2022] [Accepted: 09/18/2022] [Indexed: 11/28/2022]
Abstract
Since December 2019, the coronavirus SARS-CoV-2 has rapidly developed into a global epidemic of COVID-19, with millions of patients affected worldwide. As part of the diagnostic pathway, computed tomography (CT) scans are used to help with patient management. However, parenchymal imaging findings in COVID-19 are non-specific and can be seen in other diseases. In this work, we propose to first segment lesions from CT images and then distinguish COVID-19 patients from healthy persons and common pneumonia patients. In detail, a novel Dynamic Fusion Segmentation Network (DFSN) that automatically segments infection-related pixels is first proposed. Within this network, low-level features are aggregated into high-level ones to effectively capture the context characteristics of infection regions, and high-level features are dynamically fused to model multi-scale semantic information of lesions. Based on DFSN, a Dynamic Transfer-learning Classification Network (DTCN) is proposed to distinguish COVID-19 patients. Within DTCN, a pre-trained DFSN is transferred and used as the backbone to extract pixel-level information. The pixel-level information is then dynamically selected and used to make a diagnosis. In this way, the pre-trained DFSN is utilized through transfer learning, and the clinical significance of segmentation results is comprehensively considered, making DTCN more sensitive to typical signs of COVID-19. Extensive experiments are conducted to demonstrate the effectiveness of the proposed DFSN and DTCN frameworks. The corresponding results indicate that these two models achieve state-of-the-art performance in segmentation and classification.
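A minimal sketch of the transfer idea described above: reuse a pre-trained segmentation network as a frozen feature extractor and attach a small classification head that pools its pixel-level output. The module layout and class count are illustrative stand-ins, not the DTCN's exact architecture.

```python
import torch.nn as nn

class SegToClassifier(nn.Module):
    """Wrap a pre-trained segmentation net (DFSN-style) for classification."""
    def __init__(self, pretrained_seg, feat_channels, n_classes=3):
        super().__init__()
        self.backbone = pretrained_seg
        for p in self.backbone.parameters():   # freeze segmentation knowledge
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # pool pixel-level features
            nn.Flatten(),
            nn.Linear(feat_channels, n_classes) # COVID / pneumonia / healthy
        )

    def forward(self, x):
        feats = self.backbone(x)                # assumed (N, feat_channels, H, W)
        return self.head(feats)
```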
Affiliation(s)
- Xiaoqin Zhang: College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Runhua Jiang: College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Pengcheng Huang: College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Tao Wang: College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Mingjun Hu: College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Andrew F Scarsbrook: Radiology Department, Leeds Teaching Hospitals NHS Trust, UK; Leeds Institute of Medical Research, University of Leeds, UK
- Alejandro F Frangi: Centre for Computational Imaging and Simulation Technologies in Biomedicine, Leeds Institute for Cardiovascular and Metabolic Medicine, University of Leeds, Leeds, UK; Department of Electrical Engineering, Department of Cardiovascular Sciences, KU Leuven, Belgium
46
Zhu D, Sun D, Wang D. Dual attention mechanism network for lung cancer images super-resolution. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107101. [PMID: 36367483 DOI: 10.1016/j.cmpb.2022.107101] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 08/29/2022] [Accepted: 08/29/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Currently, the morbidity and mortality of lung cancer rank first among malignant tumors worldwide. Improving the resolution of thin-slice CT of the lung is particularly important for early diagnosis in lung cancer screening. METHODS To address the training difficulty and the low utilization of feature information caused by deepening network layers in super-resolution (SR) reconstruction, we propose a dual attention mechanism network (DAMN) for single image super-resolution (SISR). Firstly, features of the low-resolution image are extracted directly to retain feature information. Secondly, several independent dual attention mechanism modules are constructed to extract high-frequency details. The introduction of residual connections effectively mitigates the vanishing gradients caused by network deepening, and long and short skip connections effectively enhance the data features. Furthermore, a hybrid loss function speeds up the network's convergence and improves image SR restoration ability. Finally, through an upsampling operation, the reconstructed high-resolution image is obtained. RESULTS The results on the Set5 dataset for 4× enlargement show that, compared with traditional SR methods such as Bicubic, VDSR, and DRRN, the average PSNR/SSIM is increased by 3.33 dB / 0.079, 0.41 dB / 0.007, and 0.22 dB / 0.006, respectively. The experimental data show that DAMN can better restore image contour features and obtain higher PSNR and SSIM with better visual effect. CONCLUSION Through the DAMN reconstruction method, image quality can be improved without increasing radiation exposure or scanning time. This can strengthen radiologists' confidence in diagnosing early lung cancer, provide a basis for clinical experts to choose treatment plans and formulate follow-up strategies, and benefit patients at an early stage.
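A sketch of a "dual attention" unit in the spirit described above: channel attention (squeeze-and-excitation style) followed by spatial attention, wrapped in a residual connection. The exact layout in the paper may differ; reduction ratio and kernel size here are common defaults, not the authors' settings.

```python
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_att = nn.Sequential(       # squeeze-and-excitation branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_att = nn.Sequential(       # spatial-attention branch
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = x * self.channel_att(x)                          # reweight channels
        pooled = torch.cat([y.mean(1, keepdim=True),
                            y.max(1, keepdim=True).values], dim=1)
        y = y * self.spatial_att(pooled)                     # reweight positions
        return x + y                                         # residual connection
```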
Affiliation(s)
- Dongmei Zhu: College of Information Management, Nanjing Agricultural University, Nanjing, 210095, China; School of Information Engineering, Shandong Huayu University of Technology, Dezhou, 253034, China
- Degang Sun: School of Information Engineering, Shandong Huayu University of Technology, Dezhou, 253034, China
- Dongbo Wang: College of Information Management, Nanjing Agricultural University, Nanjing, 210095, China
47
Chi J, Zhang S, Han X, Wang H, Wu C, Yu X. MID-UNet: Multi-input directional UNet for COVID-19 lung infection segmentation from CT images. SIGNAL PROCESSING. IMAGE COMMUNICATION 2022; 108:116835. [PMID: 35935468 PMCID: PMC9344813 DOI: 10.1016/j.image.2022.116835] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 05/30/2022] [Accepted: 07/23/2022] [Indexed: 05/05/2023]
Abstract
Coronavirus Disease 2019 (COVID-19) has spread globally since the first case was reported in December 2019, becoming a worldwide existential health crisis with over 90 million total confirmed cases. Segmentation of lung infection from computed tomography (CT) scans via deep learning has great potential for assisting the diagnosis and healthcare of COVID-19. However, current deep learning methods for segmenting infection regions from lung CT images suffer from three problems: (1) low differentiation of semantic features between the COVID-19 infection regions, other pneumonia regions, and normal lung tissues; (2) high variation of visual characteristics between different COVID-19 cases or stages; and (3) high difficulty in constraining the irregular boundaries of the COVID-19 infection regions. To solve these problems, a multi-input directional UNet (MID-UNet) is proposed to segment COVID-19 infections in lung CT images. For the input part of the network, we first propose an image blurry descriptor to reflect the texture characteristics of the infections. The original CT image, the image enhanced by adaptive histogram equalization, the image filtered by the non-local means filter, and the blurry feature map are then adopted together as the input of the proposed network. For the structure of the network, we propose the directional convolution block (DCB), which consists of 4 directional convolution kernels. DCBs are applied on the short-cut connections to refine the extracted features before they are transferred to the de-convolution parts. Furthermore, we propose a contour loss based on the local curvature histogram and combine it with the binary cross entropy (BCE) loss and the intersection over union (IOU) loss for better segmentation boundary constraint. Experimental results on the COVID-19-CT-Seg dataset demonstrate that our proposed MID-UNet provides superior performance over the state-of-the-art methods on segmenting COVID-19 infections from CT images.
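Of the three loss terms above, the BCE and soft-IoU components have standard forms and can be combined as sketched below; the curvature-histogram contour loss is specific to the paper and is left out, and the weighting is an assumption.

```python
import torch
import torch.nn.functional as F

def bce_iou_loss(logits, target, iou_weight=1.0):
    """logits, target: (N, 1, H, W); target is a binary float mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(2, 3))                    # soft intersection
    union = (prob + target - prob * target).sum(dim=(2, 3))    # soft union
    iou = (inter + 1e-6) / (union + 1e-6)
    return bce + iou_weight * (1 - iou).mean()
```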
Affiliation(s)
- Jianning Chi: Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Shuang Zhang: Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Xiaoying Han: Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Huan Wang: Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Chengdong Wu: Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Xiaosheng Yu: Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
48
Savjani RR, Lauria M, Bose S, Deng J, Yuan Y, Andrearczyk V. Automated Tumor Segmentation in Radiotherapy. Semin Radiat Oncol 2022; 32:319-329. [DOI: 10.1016/j.semradonc.2022.06.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
49
Li Y, Wu X, Yang P, Jiang G, Luo Y. Machine Learning for Lung Cancer Diagnosis, Treatment, and Prognosis. GENOMICS, PROTEOMICS & BIOINFORMATICS 2022; 20:850-866. [PMID: 36462630 PMCID: PMC10025752 DOI: 10.1016/j.gpb.2022.11.003] [Citation(s) in RCA: 60] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 10/03/2022] [Accepted: 11/17/2022] [Indexed: 12/03/2022]
Abstract
The recent development of imaging and sequencing technologies enables systematic advances in the clinical study of lung cancer. Meanwhile, the human mind is limited in its ability to effectively handle and fully utilize such an enormous accumulation of data. Machine learning-based approaches play a critical role in integrating and analyzing these large and complex datasets, which have extensively characterized lung cancer from different perspectives. In this review, we provide an overview of machine learning-based approaches that strengthen various aspects of lung cancer diagnosis and therapy, including early detection, auxiliary diagnosis, prognosis prediction, and immunotherapy practice. Moreover, we highlight the challenges and opportunities for future applications of machine learning in lung cancer.
Affiliation(s)
- Yawei Li: Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Xin Wu: Department of Medicine, University of Illinois at Chicago, Chicago, IL 60612, USA
- Ping Yang: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, MN 55905 / Scottsdale, AZ 85259, USA
- Guoqian Jiang: Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN 55905, USA
- Yuan Luo: Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
50
Automatic lung tumor segmentation from CT images using improved 3D densely connected UNet. Med Biol Eng Comput 2022; 60:3311-3323. [DOI: 10.1007/s11517-022-02667-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 09/12/2022] [Indexed: 11/25/2022]