1. Zhu H, Liu W, Gao Z, Zhang H. Explainable Classification of Benign-Malignant Pulmonary Nodules With Neural Networks and Information Bottleneck. IEEE Trans Neural Netw Learn Syst 2025;36:2028-2039. [PMID: 37843998] [DOI: 10.1109/tnnls.2023.3303395]
Abstract
Computed tomography (CT) is the primary clinical technique for differentiating benign from malignant pulmonary nodules in lung cancer diagnosis. Early classification of pulmonary nodules is essential to slow disease progression and reduce mortality. The interactive paradigm assisted by neural networks is considered an effective means of early lung cancer screening in large populations. However, some inherent characteristics of pulmonary nodules in high-resolution CT images, e.g., diverse shapes and sparse distribution over the lung fields, have led to inaccurate results. On the other hand, most existing neural-network methods are unsatisfactory because they lack transparency. To overcome these obstacles, a unified framework comprising classification and feature visualization stages is proposed to learn distinctive features and provide visual results. Specifically, a bilateral scheme is employed to synchronously extract and aggregate global-local features in the classification stage, where the global branch perceives deep-level features and the local branch focuses on refined details. Furthermore, an encoder is built to generate features and a decoder is constructed to simulate decision behavior, with the objective optimized from an information bottleneck viewpoint. Extensive experiments evaluate our framework on two publicly available datasets: 1) the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and 2) the Lung and Colon Histopathological Image Dataset (LC25000). For instance, our framework achieves 92.98% accuracy and presents additional visualizations on the LIDC. The experimental results show that our framework obtains outstanding performance and effectively facilitates explainability. They also demonstrate that this unified framework is a serviceable tool with the scalability to be introduced into clinical research.
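The abstract describes the information bottleneck objective only at a high level. As a rough illustration of how such an objective is commonly formulated, the sketch below combines a cross-entropy term with a KL compression penalty on a stochastic bottleneck code; the `VIBClassifier` module, the `ib_loss` helper, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBClassifier(nn.Module):
    """Minimal variational information-bottleneck classifier (illustrative only)."""
    def __init__(self, feat_dim=256, bottleneck_dim=64, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * bottleneck_dim))  # outputs mean and log-variance
        self.decoder = nn.Linear(bottleneck_dim, num_classes)             # simulates the decision step

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)           # reparameterisation trick
        return self.decoder(z), mu, logvar

def ib_loss(logits, labels, mu, logvar, beta=1e-3):
    # Cross-entropy keeps the bottleneck code predictive; the KL term compresses it toward N(0, I).
    ce = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    return ce + beta * kl
```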
2. Alshamrani K, Alshamrani HA. Classification of Chest CT Lung Nodules Using Collaborative Deep Learning Model. J Multidiscip Healthc 2024;17:1459-1472. [PMID: 38596001] [PMCID: PMC11002784] [DOI: 10.2147/jmdh.s456167]
Abstract
Background Early detection of lung cancer through accurate diagnosis of malignant lung nodules using chest CT scans offers patients the highest chance of successful treatment and survival. Despite advancements in computer vision through deep learning algorithms, the detection of malignant nodules faces significant challenges due to insufficient training datasets. Methods This study introduces a model based on collaborative deep learning (CDL) to differentiate between cancerous and non-cancerous nodules in chest CT scans with limited available data. The model dissects a nodule into its constituent parts using six characteristics, allowing it to learn detailed features of lung nodules. It utilizes a CDL submodel that incorporates six types of feature patches to fine-tune a network previously trained with ResNet-50. An adaptive weighting method learned through error backpropagation enhances the process of identifying lung nodules, incorporating these CDL submodels for improved accuracy. Results The CDL model demonstrated a high level of performance in classifying lung nodules, achieving an accuracy of 93.24%. This represents a significant improvement over current state-of-the-art methods, indicating the effectiveness of the proposed approach. Conclusion The findings suggest that the CDL model, with its unique structure and adaptive weighting method, offers a promising solution to the challenge of accurately detecting malignant lung nodules with limited data. This approach not only improves diagnostic accuracy but also contributes to the early detection and treatment of lung cancer, potentially saving lives.
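The adaptive weighting over the six feature-patch submodels is described only qualitatively. One common realisation of this idea is a set of learnable fusion weights, softmax-normalised and trained by error backpropagation together with the rest of the network; the sketch below is such a generic version, with `WeightedEnsemble` and its inputs being assumptions rather than the paper's code (the submodels are assumed to be ResNet-50 backbones already fine-tuned on individual feature patches).

```python
import torch
import torch.nn as nn

class WeightedEnsemble(nn.Module):
    """Fuse logits from several submodels with learnable, softmax-normalised weights."""
    def __init__(self, submodels):
        super().__init__()
        self.submodels = nn.ModuleList(submodels)
        self.weights = nn.Parameter(torch.zeros(len(submodels)))  # learned by error backpropagation

    def forward(self, patches):
        # `patches` is a list with one input tensor per submodel (one per nodule characteristic).
        logits = torch.stack([m(p) for m, p in zip(self.submodels, patches)], dim=0)
        w = torch.softmax(self.weights, dim=0).view(-1, 1, 1)
        return (w * logits).sum(dim=0)  # (batch, num_classes)
```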
Affiliation(s)
- Khalaf Alshamrani
- Radiological Sciences Department, Najran University, Najran, Saudi Arabia
- Department of Oncology and Metabolism, University of Sheffield, Sheffield, UK
3. Zhi L, Jiang W, Zhang S, Zhou T. Deep neural network pulmonary nodule segmentation methods for CT images: Literature review and experimental comparisons. Comput Biol Med 2023;164:107321. [PMID: 37595518] [DOI: 10.1016/j.compbiomed.2023.107321]
Abstract
Automatic and accurate segmentation of pulmonary nodules in CT images can help physicians perform more accurate quantitative analysis, diagnose diseases, and improve patient survival. In recent years, with the development of deep learning technology, pulmonary nodule segmentation methods based on deep neural networks have gradually replaced traditional segmentation methods. This paper reviews recent pulmonary nodule segmentation algorithms based on deep neural networks. First, the heterogeneity of pulmonary nodules, the interpretability of segmentation results, and external environmental factors are discussed; then, recent open-source 2D and 3D models for medical segmentation tasks are applied to the Lung Image Database Consortium and Image Database Resource Initiative (LIDC) and Lung Nodule Analysis 16 (Luna16) datasets for comparison, and the visual diagnostic features marked by radiologists are evaluated one by one. According to the analysis of the experimental data, the following conclusions are drawn: (1) in the pulmonary nodule segmentation task, the DSC performance of 2D segmentation models is generally better than that of 3D segmentation models; (2) 'Subtlety', 'Sphericity', 'Margin', 'Texture', and 'Size' have more influence on pulmonary nodule segmentation, while 'Lobulation', 'Spiculation', and 'Benign and Malignant' features have less influence; (3) higher accuracy in pulmonary nodule segmentation can be achieved with better-quality CT images; and (4) good contextual information acquisition and attention mechanism design positively affect pulmonary nodule segmentation.
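The experimental comparison above is reported in terms of the Dice similarity coefficient (DSC). For reference, the standard DSC computation on a pair of binary masks is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```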
Affiliation(s)
- Lijia Zhi
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
- Wujun Jiang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China.
- Shaomin Zhang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
- Tao Zhou
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
4. Wang Z, Zhu J, Fu S, Mao S, Ye Y. RFPNet: Reorganizing feature pyramid networks for medical image segmentation. Comput Biol Med 2023;163:107108. [PMID: 37321104] [DOI: 10.1016/j.compbiomed.2023.107108]
Abstract
Medical image segmentation is a crucial step in clinical treatment planning. However, automatic and accurate medical image segmentation remains a challenging task, owing to the difficulty in data acquisition, the heterogeneity and large variation of the lesion tissue. In order to explore image segmentation tasks in different scenarios, we propose a novel network, called Reorganization Feature Pyramid Network (RFPNet), which uses alternately cascaded Thinned Encoder-Decoder Modules (TEDMs) to construct semantic features in various scales at different levels. The proposed RFPNet is composed of base feature construction module, feature pyramid reorganization module and multi-branch feature decoder module. The first module constructs the multi-scale input features. The second module first reorganizes the multi-level features and then recalibrates the responses between integrated feature channels. The third module weights the results obtained from different decoder branches. Extensive experiments conducted on ISIC2018, LUNA2016, RIM-ONE-r1 and CHAOS datasets show that RFPNet achieves Dice scores of 90.47%, 98.31%, 96.88%, 92.05% (Average between classes) and Jaccard scores of 83.95%, 97.05%, 94.04%, 88.78% (Average between classes). In quantitative analysis, RFPNet outperforms some classical methods as well as state-of-the-art methods. Meanwhile, the visual segmentation results demonstrate that RFPNet can excellently segment target areas from clinical datasets.
Affiliation(s)
- Jiehua Zhu
- Department of Mathematical Sciences, Georgia Southern University, United States.
- Shujun Fu
- School of Mathematics, Shandong University, China.
- Shuwei Mao
- Shandong Public Health Clinical Center, Jinan, China; Cheeloo College of Medicine, Shandong University, China.
- Yangbo Ye
- Department of Mathematics, The University of Iowa, United States.
5. Chen Y, Wang T, Tang H, Zhao L, Zhang X, Tan T, Gao Q, Du M, Tong T. CoTrFuse: a novel framework by fusing CNN and transformer for medical image segmentation. Phys Med Biol 2023;68:175027. [PMID: 37605997] [DOI: 10.1088/1361-6560/acede8]
Abstract
Medical image segmentation is a crucial and intricate process in medical image processing and analysis. With advances in artificial intelligence, deep learning techniques have been widely used in recent years for medical image segmentation. One such technique is the U-Net framework, based on U-shaped convolutional neural networks (CNNs), and its variants. However, these methods have limitations in simultaneously capturing global and long-range semantic information because of the restricted receptive field inherent to the convolution operation. Transformers are attention-based models with excellent global modeling capabilities, but their ability to acquire local information is limited. To address this, we propose a network that combines the strengths of both CNNs and Transformers, called CoTrFuse. The proposed CoTrFuse network uses EfficientNet and the Swin Transformer as dual encoders, and a Swin Transformer and CNN fusion module fuses the features of both branches before the skip-connection structure. We evaluated the proposed network on two datasets: the ISIC-2017 challenge dataset and the COVID-QU-Ex dataset. Our experimental results demonstrate that the proposed CoTrFuse outperforms several state-of-the-art segmentation methods, indicating its superiority in medical image segmentation. The code is available at https://github.com/BinYCn/CoTrFuse.
Affiliation(s)
- Yuanbin Chen
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Tao Wang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Hui Tang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Longxuan Zhao
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Xinlin Zhang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Tao Tan
- Faculty of Applied Science, Macao Polytechnic University, Macao 999078, People's Republic of China
- Qinquan Gao
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Min Du
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
6. Wang L, Zhou H, Xu N, Liu Y, Jiang X, Li S, Feng C, Xu H, Deng K, Song J. A general approach for automatic segmentation of pneumonia, pulmonary nodule, and tuberculosis in CT images. iScience 2023;26:107005. [PMID: 37534183] [PMCID: PMC10391673] [DOI: 10.1016/j.isci.2023.107005]
Abstract
Proposing a general segmentation approach for lung lesions, including pulmonary nodules, pneumonia, and tuberculosis, in CT images will improve efficiency in radiology. However, the performance of generative adversarial networks is hampered by the limited availability of annotated samples and the catastrophic forgetting of the discriminator, whereas the universality of traditional morphology-based methods is insufficient for segmenting diverse lung lesions. A cascaded dual-attention network with a context-aware pyramid feature extraction module was designed to address these challenges. A self-supervised rotation loss was designed to mitigate discriminator forgetting. The proposed model achieved Dice coefficients of 70.92, 73.55, and 68.52% on multi-center pneumonia, lung nodule, and tuberculosis test datasets, respectively. No significant decrease in accuracy was observed (p > 0.10) when a small training sample size was used. The cyclic training of the discriminator was reduced with self-supervised rotation loss (p < 0.01). The proposed approach is promising for segmenting multiple lung lesion types in CT images.
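The self-supervised rotation loss used to mitigate discriminator forgetting is not spelled out in the abstract. A common formulation of such an auxiliary task asks the discriminator to predict which multiple of 90° an input was rotated by; the sketch below shows that generic loss, with `disc_features` and `rotation_head` as assumed components, not the authors' exact design.

```python
import torch
import torch.nn.functional as F

def rotation_ss_loss(disc_features, rotation_head, images):
    """Auxiliary self-supervised loss: predict the applied 90-degree rotation (0, 90, 180, 270).

    `images` is assumed to be an NCHW batch; `disc_features` is the discriminator backbone and
    `rotation_head` a small classifier over its features (both hypothetical names).
    """
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long, device=images.device))
    rotated = torch.cat(rotated)
    labels = torch.cat(labels)
    logits = rotation_head(disc_features(rotated))
    return F.cross_entropy(logits, labels)
```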
Affiliation(s)
- Lu Wang
- Department of Library, Shengjing Hospital of China Medical University, Shenyang, Liaoning 110004, China
- School of Health Management, China Medical University, Shenyang, Liaoning 110122, China
- He Zhou
- School of Health Management, China Medical University, Shenyang, Liaoning 110122, China
- Nan Xu
- School of Health Management, China Medical University, Shenyang, Liaoning 110122, China
- Yuchan Liu
- Department of Radiology, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, USTC Hefei, Anhui 230036, China
- Xiran Jiang
- School of Intelligent Medicine, China Medical University, Shenyang, Liaoning 110122, China
- Shu Li
- School of Health Management, China Medical University, Shenyang, Liaoning 110122, China
- Chaolu Feng
- Key Laboratory of Intelligent Computing in Medical Image (MIIC), Ministry of Education, Shenyang, Liaoning 110169, China
- Hainan Xu
- Department of Obstetrics and Gynecology, Pelvic Floor Disease Diagnosis and Treatment Center, Shengjing Hospital of China Medical University, Shenyang, Liaoning 110004, China
- Kexue Deng
- Department of Radiology, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, USTC Hefei, Anhui 230036, China
- Jiangdian Song
- School of Health Management, China Medical University, Shenyang, Liaoning 110122, China
7. Wang J, Li S, Yu L, Qu A, Wang Q, Liu J, Wu Q. SDPN: A Slight Dual-Path Network With Local-Global Attention Guided for Medical Image Segmentation. IEEE J Biomed Health Inform 2023;27:2956-2967. [PMID: 37030687] [DOI: 10.1109/jbhi.2023.3260026]
Abstract
Accurate identification of lesions is a key step in surgical planning. However, this task presents two main challenges: 1) owing to the complex anatomical shapes of different lesions, most segmentation methods achieve outstanding performance only for a specific structure rather than for lesions at other locations; and 2) the huge number of parameters limits existing transformer-based segmentation models. To overcome these problems, we propose a novel slight dual-path network (SDPN) to accurately segment lesions or organs with significant variation in location. First, we design a dual-path module to integrate local and global features without obvious memory consumption. Second, a novel multi-spectrum attention module is proposed to attend to detailed information and automatically adapt to the variable segmentation target. Then, a compression module based on tensor ring decomposition is designed to compress the convolutional and transformer structures. In the experiments, four datasets, including three benchmark datasets and a clinical dataset, are used to evaluate SDPN. The results show that SDPN performs better than other state-of-the-art methods for brain tumor, liver tumor, endometrial tumor and cardiac segmentation. To assess generalizability, we train the network on Kvasir-SEG and test on CVC-ClinicDB, which was collected from a different institution. The quantitative analysis shows that the clinical evaluation results are consistent with the experts' assessments. Therefore, this model may be a potential candidate for segmenting lesions and organs with variable locations in clinical applications.
8. Wang J, Qu A, Wang Q, Zhao Q, Liu J, Wu Q. TT-Net: Tensorized Transformer Network for 3D medical image segmentation. Comput Med Imaging Graph 2023;107:102234. [PMID: 37075619] [DOI: 10.1016/j.compmedimag.2023.102234]
Abstract
Accurate segmentation of organs, tissues and lesions is essential for computer-assisted diagnosis. Previous works have achieved success in automatic segmentation, but two limitations remain: (1) they are still challenged by complex conditions, such as segmentation targets that vary in location, size and shape, especially across imaging modalities; and (2) existing transformer-based networks suffer from high parametric complexity. To address these limitations, we propose a new Tensorized Transformer Network (TT-Net). In this paper, (1) a multi-scale transformer with layer fusion is proposed to faithfully capture contextual interaction information; (2) a Cross Shared Attention (CSA) module based on pHash similarity fusion (pSF) is designed to extract global multi-variate dependency features; and (3) a Tensorized Self-Attention (TSA) module is proposed to deal with the large number of parameters and can easily be embedded into other models. In addition, TT-Net gains good explainability through visualization of the transformer layers. The proposed method is evaluated on three widely accepted public datasets and one clinical dataset covering different imaging modalities. Comprehensive results show that TT-Net outperforms other state-of-the-art methods on the four segmentation tasks. Moreover, the compression module, which can easily be embedded into other transformer-based methods, achieves lower computation with comparable segmentation performance.
Affiliation(s)
- Jing Wang
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China
- Aixi Qu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China
- Qing Wang
- QiLu Hospital of Shandong University, Radiology Department, Jinan 250012, China
- Qibin Zhao
- RIKEN Center for Advanced Intelligence Project, Japan
- Ju Liu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China; Shandong University, Institute of Brain and Brain-Inspired Science, Jinan 250012, China.
- Qiang Wu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China; Shandong University, Institute of Brain and Brain-Inspired Science, Jinan 250012, China.
9. Fan L, Yang W, Tu W, Zhou X, Zou Q, Zhang H, Feng Y, Liu S. Thoracic Imaging in China: Yesterday, Today, and Tomorrow. J Thorac Imaging 2022;37:366-373. [PMID: 35980382] [PMCID: PMC9592175] [DOI: 10.1097/rti.0000000000000670]
Abstract
Thoracic imaging has been revolutionized by advances in technology and research around the world, and China is no exception. Thoracic imaging in China has progressed from anatomic observation to quantitative and functional evaluation, and from traditional approaches to artificial intelligence. This article reviews the past, present, and future of thoracic imaging in China, in an attempt to establish newly accepted strategies moving forward.
Affiliation(s)
- Li Fan
- Second Affiliated Hospital, Naval Medical University
- Wenjie Yang
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wenting Tu
- Second Affiliated Hospital, Naval Medical University
- Xiuxiu Zhou
- Second Affiliated Hospital, Naval Medical University
- Qin Zou
- Second Affiliated Hospital, Naval Medical University
- Hanxiao Zhang
- Second Affiliated Hospital, Naval Medical University
- Yan Feng
- Second Affiliated Hospital, Naval Medical University
- Shiyuan Liu
- Second Affiliated Hospital, Naval Medical University
10. Chen Q, Xie W, Zhou P, Zheng C, Wu D. Multi-Crop Convolutional Neural Networks for Fast Lung Nodule Segmentation. IEEE Trans Emerg Top Comput Intell 2022. [DOI: 10.1109/tetci.2021.3051910]
Affiliation(s)
- Quan Chen
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wei Xie
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Pan Zhou
- Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology, China
- Chuansheng Zheng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Dapeng Wu
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA
11. Deng K, Wang L, Liu Y, Li X, Hou Q, Cao M, Ng NN, Wang H, Chen H, Yeom KW, Zhao M, Wu N, Gao P, Shi J, Liu Z, Li W, Tian J, Song J. A deep learning-based system for survival benefit prediction of tyrosine kinase inhibitors and immune checkpoint inhibitors in stage IV non-small cell lung cancer patients: A multicenter, prognostic study. EClinicalMedicine 2022;51:101541. [PMID: 35813093] [PMCID: PMC9256845] [DOI: 10.1016/j.eclinm.2022.101541]
Abstract
BACKGROUND For clinical decision making, it is crucial to identify patients with stage IV non-small cell lung cancer (NSCLC) who may benefit from tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs). In this study, a deep learning-based system was designed and validated using pre-therapy computed tomography (CT) images to predict the survival benefits of EGFR-TKIs and ICIs in stage IV NSCLC patients. METHODS This retrospective study collected data from 570 patients with stage IV EGFR-mutant NSCLC treated with EGFR-TKIs at five institutions between 2010 and 2021 (data of 314 patients were from a previously registered study), and 129 patients with stage IV NSCLC treated with ICIs at three institutions between 2017 and 2021 to build the ICI test dataset. Five-fold cross-validation was applied to divide the EGFR-TKI-treated patients from four institutions into training and internal validation datasets randomly in a ratio of 80%:20%, and the data from another institution was used as an external test dataset. An EfficientNetV2-based survival benefit prognosis (ESBP) system was developed with pre-therapy CT images as the input and the probability score as the output to identify which patients would receive additional survival benefit longer than the median PFS. Its prognostic performance was validated on the ICI test dataset. For diagnosing which patient would receive additional survival benefit, the accuracy of ESBP was compared with the estimations of three radiologists and three oncologists with varying degrees of expertise (two, five, and ten years). Improvements in the clinicians' diagnostic accuracy with ESBP assistance were then quantified. FINDINGS ESBP achieved positive predictive values of 80·40%, 75·40%, and 77·43% for additional EGFR-TKI survival benefit prediction using the probability score of 0·2 as the threshold on the training, internal validation, and external test datasets, respectively. The higher ESBP score (>0·2) indicated a better prognosis for progression-free survival (hazard ratio: 0·36, 95% CI: 0·19-0·68, p<0·0001) in patients on the external test dataset. Patients with scores >0·2 in the ICI test dataset also showed better survival benefit (hazard ratio: 0·33, 95% CI: 0·18-0·55, p<0·0001). This suggests the potential of ESBP to identify the two subgroups of benefiting patients by decoding the commonalities from pre-therapy CT images (stage IV EGFR-mutant NSCLC patients receiving additional survival benefit from EGFR-TKIs and stage IV NSCLC patients receiving additional survival benefit from ICIs). ESBP assistance improved the diagnostic accuracy of the clinicians with two years of experience from 47·91% to 66·32%, and the clinicians with five years of experience from 53·12% to 61·41%. INTERPRETATION This study developed and externally validated a preoperative CT image-based deep learning model to predict the survival benefits of EGFR-TKI and ICI therapies in stage IV NSCLC patients, which will facilitate optimized and individualized treatment strategies. FUNDING This study received funding from the National Natural Science Foundation of China (82001904, 81930053, and 62027901), and Key-Area Research and Development Program of Guangdong Province (2021B0101420005).
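The reported hazard ratios compare patients above and below the 0.2 probability-score threshold. A minimal way to run the same kind of score-stratified survival analysis on one's own data is sketched below with the `lifelines` package; the column names and values are hypothetical placeholders, not the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table: model probability score, PFS in months, progression indicator.
df = pd.DataFrame({
    "score":      [0.05, 0.31, 0.27, 0.12, 0.44, 0.08],
    "pfs_months": [6.0, 18.2, 14.5, 7.3, 22.1, 5.9],
    "progressed": [1, 0, 1, 1, 0, 1],
})
df["high_score"] = (df["score"] > 0.2).astype(int)   # threshold used in the study

cph = CoxPHFitter()
cph.fit(df[["high_score", "pfs_months", "progressed"]],
        duration_col="pfs_months", event_col="progressed")
print(cph.hazard_ratios_)   # HR < 1 would indicate longer PFS in the high-score group
```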
Affiliation(s)
- Kexue Deng
- Department of radiology, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, USTC, Hefei, Anhui, China
- Lu Wang
- Library of Shengjing Hospital of China Medical University, Shenyang, China
- School of Health Management, China Medical University, Shenyang, Liaoning, China
- Yuchan Liu
- Department of radiology, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, USTC, Hefei, Anhui, China
- Xin Li
- Department of Medical Oncology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- Qiuyang Hou
- Department of radiology, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, USTC, Hefei, Anhui, China
- Mulan Cao
- Department of Medical Oncology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- Nathan Norton Ng
- Department of Radiology, School of Medicine Stanford University, Stanford CA 94305, United States
- Huan Wang
- Radiation oncology department of thoracic cancer, Liaoning Cancer Hospital and Institute, Liaoning, China
- Huanhuan Chen
- Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, China
- Kristen W. Yeom
- Department of Radiology, School of Medicine Stanford University, Stanford CA 94305, United States
- Mingfang Zhao
- Department of Medical Oncology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- Ning Wu
- PET-CT center, National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Gao
- Department of Surgical Oncology and General Surgery, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning, China
- Jingyun Shi
- Department of Radiology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Weimin Li
- Department of Respiratory and Critical Care Medicine, West China Hospital, Chengdu, Sichuan, China
- Jie Tian
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China
- Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Beijing, China
- Jiangdian Song
- School of Health Management, China Medical University, Shenyang, Liaoning, China
- Corresponding author at: School of Health Management, China Medical University, Shenyang, Liaoning 110122, China.
12. Yang H, Zhu J, Xiao R, Liu Y, Yu F, Cai L, Qiu M, He F. EGFR mutation status in non-small cell lung cancer receiving PD-1/PD-L1 inhibitors and its correlation with PD-L1 expression: a meta-analysis. Cancer Immunol Immunother 2022;71:1001-1016. [PMID: 34542660] [DOI: 10.1007/s00262-021-03030-2]
Abstract
Meta-analysis was performed on the Web of Science, PubMed, Embase, and Cochrane databases to evaluate the effect of epidermal growth factor receptor (EGFR) mutation status on programmed cell death protein 1/programmed death ligand 1 (PD-1/PD-L1) immune checkpoint inhibitors, and the association between EGFR mutation status and PD-L1 expression in non-small cell lung cancer (NSCLC) patients. Pooled effect (hazard ratio/odds ratio, HR/OR) with 95% confidence interval (CI) was calculated, and the source of heterogeneity was explored by subgroup analysis and meta-regression using Stata/SE 15.0. Meta-analysis of the association between EGFR mutation status and overall survival (OS) in NSCLC with immunotherapy was calculated from four randomized controlled trials. We found that immune checkpoint inhibitors significantly prolonged OS over docetaxel overall (HR 0.71, 95% CI 0.64-0.79) and in the EGFR wild type (HR = 0.67, 95% CI = 0.60-0.75), but not in the EGFR mutant subgroup (HR = 1.11, 95% CI = 0.80-1.52). Meta-analysis of the association between EGFR mutation status and PD-L1 expression in NSCLC included 32 studies. The pooled OR and 95% CI were 0.60 (0.46-0.80), calculated by random effects model. No source of heterogeneity was found in subgroup analysis. Sensitivity analysis was carried out with a fixed model, and the influence of a single study on the pooled results showed no significant change with robust meta-analysis methods. Harbord's weighted linear regression test (P = 0.956) and Peters regression test (P = 0.489) indicated no significant publication bias. The limited benefit of single-agent PD-1/PD-L1 inhibitors in the second-line or later setting for EGFR-mutated NSCLC may be partly due to the lower expression of PD-L1.
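The pooled estimates above are inverse-variance weighted combinations of study-level effects. The arithmetic for a fixed-effect pool of log odds ratios is sketched below with made-up numbers; the actual analysis was run in Stata/SE 15.0 and used a random-effects model where indicated, so this is only an illustration of the underlying calculation.

```python
import numpy as np

# Hypothetical per-study odds ratios with 95% confidence intervals.
or_values = np.array([0.55, 0.72, 0.48])
ci_low    = np.array([0.35, 0.50, 0.30])
ci_high   = np.array([0.86, 1.04, 0.77])

log_or = np.log(or_values)
se     = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE recovered from the CI width
w      = 1.0 / se**2                                       # inverse-variance weights

pooled_log = np.sum(w * log_or) / np.sum(w)
pooled_se  = 1.0 / np.sqrt(np.sum(w))
ci = np.exp([pooled_log - 1.96 * pooled_se, pooled_log + 1.96 * pooled_se])
print(f"Pooled OR = {np.exp(pooled_log):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```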
Affiliation(s)
- Huimin Yang
- Department of Epidemiology and Health Statistics, School of Public Health, Fujian Medical University, Fuzhou, 350108, China
- Fujian Provincial Key Laboratory of Environment Factors and Cancer, Key Laboratory of Ministry of Education for Gastrointestinal Cancer, Fujian Medical University, Fuzhou, 350108, China
- Jinxiu Zhu
- Department of Oncology, Fuzhou Pulmonary Hospital of Fujian, Fuzhou, 350001, China
- Rendong Xiao
- Department of Thoracic Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350001, China
- Yuhang Liu
- Department of Epidemiology and Health Statistics, School of Public Health, Fujian Medical University, Fuzhou, 350108, China
- Fujian Provincial Key Laboratory of Environment Factors and Cancer, Key Laboratory of Ministry of Education for Gastrointestinal Cancer, Fujian Medical University, Fuzhou, 350108, China
- Fanglin Yu
- Experiment Center, School of Public Health, Fujian Medical University, Fuzhou, 350122, China
- Lin Cai
- Department of Epidemiology and Health Statistics, School of Public Health, Fujian Medical University, Fuzhou, 350108, China
- Fujian Provincial Key Laboratory of Environment Factors and Cancer, Key Laboratory of Ministry of Education for Gastrointestinal Cancer, Fujian Medical University, Fuzhou, 350108, China
- Minglian Qiu
- Department of Thoracic Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350001, China.
- Fei He
- Department of Epidemiology and Health Statistics, School of Public Health, Fujian Medical University, Fuzhou, 350108, China.
- Fujian Provincial Key Laboratory of Environment Factors and Cancer, Key Laboratory of Ministry of Education for Gastrointestinal Cancer, Fujian Medical University, Fuzhou, 350108, China.
- Fujian Digital Institute of Tumor Big Data, Fujian Medical University, Fuzhou, 350122, China.
13. Radiogenomics: A Valuable Tool for the Clinical Assessment and Research of Ovarian Cancer. J Comput Assist Tomogr 2022;46:371-378. [DOI: 10.1097/rct.0000000000001279]
14. HT-Net: hierarchical context-attention transformer network for medical CT image segmentation. Appl Intell 2022. [DOI: 10.1007/s10489-021-03010-0]
16. Song J, Huang SC, Kelly B, Liao G, Shi J, Wu N, Li W, Liu Z, Cui L, Lungre M, Moseley ME, Gao P, Tian J, Yeom KW. Automatic lung nodule segmentation and intra-nodular heterogeneity image generation. IEEE J Biomed Health Inform 2021;26:2570-2581. [PMID: 34910645] [DOI: 10.1109/jbhi.2021.3135647]
Abstract
Automatic segmentation of lung nodules on computed tomography (CT) images is challenging owing to the variability of morphology, location, and intensity. In addition, few segmentation methods can capture intra-nodular heterogeneity to assist lung nodule diagnosis. In this study, we propose an end-to-end architecture to perform fully automated segmentation of multiple types of lung nodules and generate intra-nodular heterogeneity images for clinical use. To this end, a hybrid loss is formed by introducing a Faster R-CNN model based on the generalized intersection over union loss into a generative adversarial network. The Lung Image Database Consortium image collection dataset, comprising 2,635 lung nodules, was combined with 3,200 lung nodules from five hospitals for this study. Compared with manual segmentation by radiologists, the proposed model obtained an average Dice coefficient (DC) of 82.05% on the test dataset. Compared with U-net, NoduleNet, nnU-net, and three other models, the proposed method achieved comparable performance on lung nodule segmentation and generated more vivid and valid intra-nodular heterogeneity images, which are beneficial in radiological diagnosis. In an external test of 91 patients from another hospital, the proposed model achieved an average DC of 81.61%. The proposed method effectively addresses the challenges of inevitable human interaction and additional pre-processing procedures in existing solutions for lung nodule segmentation. In addition, the results show that the intra-nodular heterogeneity images generated by the proposed model are suitable for facilitating lung nodule diagnosis in radiology.
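The hybrid loss builds on the generalized intersection-over-union (GIoU) criterion used by the Faster R-CNN branch. A standalone GIoU loss for axis-aligned boxes in (x1, y1, x2, y2) format is sketched below; it illustrates the criterion itself, not the paper's hybrid formulation.

```python
import torch

def giou_loss(pred, target, eps=1e-7):
    """1 - GIoU for boxes given as (x1, y1, x2, y2); both tensors have shape (N, 4)."""
    # Intersection area.
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)

    # Smallest box enclosing both prediction and target.
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    enclose = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (enclose - union) / (enclose + eps)
    return (1.0 - giou).mean()
```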
17. Kavithaa G, Balakrishnan P, Yuvaraj SA. Lung Cancer Detection and Improving Accuracy Using Linear Subspace Image Classification Algorithm. Interdiscip Sci 2021;13:779-786. [PMID: 34351570] [DOI: 10.1007/s12539-021-00468-x]
Abstract
The ability to identify lung cancer at an early stage is critical because it can help patients live longer. However, predicting the affected area while diagnosing cancer is a major challenge. An intelligent computer-aided diagnostic system can be used to detect and diagnose lung cancer by locating the damaged region. The proposed Linear Subspace Image Classification Algorithm (LSICA) classifies images in a linear subspace. The methodology accurately identifies the damaged region and involves three steps: image enhancement, segmentation, and classification. A spatial image clustering technique is used to quickly segment and identify the affected area in the image, and LSICA determines the accuracy of the affected region for classification purposes. In this work, a lung cancer detection system with classification-dependent image processing is applied to lung CT imaging, and a new LSICA-based detection method is proposed to overcome the deficiencies of existing approaches. All programs were implemented in MATLAB. The proposed system is designed to identify the affected region easily with the help of the classification technique and to obtain more accurate results.
Affiliation(s)
- G Kavithaa
- Department of Electronics and Communication Engineering, Government College of Engineering, Salem, Tamilnadu, India.
- P Balakrishnan
- Malla Reddy Engineering College for Women (Autonomous), Hyderabad, 500100, India
- S A Yuvaraj
- Department of ECE, GRT Institute of Engineering and Technology, Tiruttani, Tamilnadu, India
18. Chen S, Zou Y, Liu PX. IBA-U-Net: Attentive BConvLSTM U-Net with Redesigned Inception for medical image segmentation. Comput Biol Med 2021;135:104551. [PMID: 34157471] [DOI: 10.1016/j.compbiomed.2021.104551]
Abstract
Accurate segmentation of medical images plays an essential role in their analysis and has a wide range of research and application value in fields such as medical research, disease diagnosis, disease analysis, and auxiliary surgery. In recent years, deep convolutional neural networks have been developed that show strong performance in medical image segmentation. However, because of the inherent challenges of medical images, such as dataset irregularities and the existence of outliers, segmentation approaches have not demonstrated sufficiently accurate and reliable results for clinical deployment. Our method is based on three key ideas: (1) integrating the BConvLSTM block and the Attention block to reduce the semantic gap between the encoder and decoder feature maps and make the two feature maps more homogeneous, (2) factorizing convolutions with a large filter size by Redesigned Inception, which uses a multiscale feature fusion method to significantly increase the effective receptive field, and (3) devising a deep convolutional neural network with multiscale feature fusion and an Attentive BConvLSTM mechanism, which integrates the Attentive BConvLSTM block and the Redesigned Inception block into an encoder-decoder model called Attentive BConvLSTM U-Net with Redesigned Inception (IBA-U-Net). Our proposed architecture, IBA-U-Net, has been compared with U-Net and state-of-the-art segmentation methods on three publicly available datasets, the lung image segmentation dataset, the skin lesion image dataset, and the retinal blood vessel image segmentation dataset, each with its own challenges, and it improves prediction performance with slightly lower computational expense and fewer network parameters. By devising a deep convolutional neural network with multiscale feature fusion and an Attentive BConvLSTM mechanism, medical image segmentation for different tasks can be completed effectively and accurately with only 45% of the U-Net's parameters.
Affiliation(s)
- Siyuan Chen
- The School of Information Engineering, Nanchang University, Jiangxi, Nanchang, 330031, China
- Yanni Zou
- The School of Information Engineering, Nanchang University, Jiangxi, Nanchang, 330031, China.
- Peter X Liu
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, K1S 5B6, Canada
19. Xue P, Fu Y, Ji H, Cui W, Dong E. Lung Respiratory Motion Estimation Based on Fast Kalman Filtering and 4D CT Image Registration. IEEE J Biomed Health Inform 2021;25:2007-2017. [PMID: 33044936] [DOI: 10.1109/jbhi.2020.3030071]
Abstract
Respiratory motion estimation is an important part of image-guided radiation therapy and clinical diagnosis. However, most respiratory motion estimation methods rely on indirect measurements of external breathing indicators, which not only introduce large estimation errors but can also cause invasive injury to patients. In this paper, we propose a method of lung respiratory motion estimation based on fast Kalman filtering and 4D CT image registration (LRME-4DCT). To perform dynamic motion estimation for continuous phases, a motion estimation model is constructed by combining two kinds of GPU-accelerated 4D CT image registration methods with a fast Kalman filtering method. To address the high computational requirements of 4D CT image sequences, a multi-level processing strategy is adopted in the 4D CT image registration methods, and respiratory motion states are predicted from three independent directions. On the DIR-lab and POPI 4D CT datasets, the average target registration error (TRE) of the LRME-4DCT method reaches 0.91 mm and 0.85 mm, respectively. Compared with traditional estimation methods based on pair-wise image registration, the proposed LRME-4DCT method estimates physiological respiratory motion more accurately and quickly, and fully meets the practical clinical requirements for rapid dynamic estimation of lung respiratory motion.
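The LRME-4DCT pipeline couples registration-derived displacements with fast Kalman filtering. The predict/update recursion at the core of any such filter is the textbook step below, shown for a 1-D constant-velocity model in NumPy; it is purely illustrative of the filtering component, not the paper's multi-level 4D implementation, and the noise parameters are assumptions.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.5, q=1e-3, r=1e-2):
    """One predict/update cycle for a 1-D constant-velocity state [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only position is observed (e.g., a registration displacement)
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise

    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new displacement measurement z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example: start from a zero state and feed in one measured displacement per phase.
x, P = np.zeros(2), np.eye(2)
for z in [0.2, 0.5, 0.9, 1.1]:
    x, P = kalman_step(x, P, np.array([z]))
```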
20. Wang J, Chen X, Lu H, Zhang L, Pan J, Bao Y, Su J, Qian D. Feature-shared adaptive-boost deep learning for invasiveness classification of pulmonary subsolid nodules in CT images. Med Phys 2020;47:1738-1749. [PMID: 32020649] [DOI: 10.1002/mp.14068]
Abstract
PURPOSE In clinical practice, invasiveness is an important reference indicator for differentiating the malignant degree of subsolid pulmonary nodules. These nodules can be classified as atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IAC). The automatic determination of a nodule's invasiveness based on chest CT scans can guide treatment planning. However, it is challenging, owing to the insufficiency of training data and their interclass similarity and intraclass variation. To address these challenges, we propose a two-stage deep learning strategy for this task: prior-feature learning followed by adaptive-boost deep learning. METHODS The adaptive-boost deep learning is proposed to train a strong classifier for invasiveness classification of subsolid nodules in chest CT images, using multiple 3D convolutional neural network (CNN)-based weak classifiers. Because ensembles of multiple deep 3D CNN models have a huge number of parameters and require large computing resources along with more training and testing time, the prior-feature learning is proposed to reduce the computations by sharing the CNN layers between all weak classifiers. Using this strategy, all weak classifiers can be integrated into a single network. RESULTS Tenfold cross validation of binary classification was conducted on a total of 1357 nodules, including 765 noninvasive (AAH and AIS) and 592 invasive nodules (MIA and IAC). Ablation experiments indicated that the proposed binary classifier achieved an accuracy of 73.4% ± 1.4 with an AUC of 81.3% ± 2.2. These results are superior to those of three experienced chest imaging specialists, who achieved accuracies of 69.1%, 69.3%, and 67.9%, respectively. About 200 additional nodules were also collected, covering 50 cases for each category (AAH, AIS, MIA, and IAC). Both binary and multiple classifications were performed on these data, and the results demonstrated that the proposed method achieves better performance than nonensemble deep learning methods. CONCLUSIONS The proposed adaptive-boost deep learning can significantly improve the performance of invasiveness classification of pulmonary subsolid nodules in CT images, while the prior-feature learning can significantly reduce the total size of the deep models. The promising results on clinical data show that the trained models can be used as an effective lung cancer screening tool in hospitals. Moreover, the proposed strategy can be easily extended to other similar classification tasks in 3D medical images.
Affiliation(s)
- Jun Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Xiaorong Chen
- Medical Imaging Department, Jinhua Municipal Central Hospital, Jinhua, 321001, China
- Hongbing Lu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Lichi Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jianfeng Pan
- Medical Imaging Department, Jinhua Municipal Central Hospital, Jinhua, 321001, China
- Yong Bao
- Changzhou Industrial Technology Research Institute of Zhejiang University, Changzhou, 213022, China
- Jiner Su
- Medical Imaging Department, Jinhua Municipal Central Hospital, Jinhua, 321001, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
21. Wei W, Rong Y, Liu Z, Zhou B, Tang Z, Wang S, Dong D, Zang Y, Guo Y, Tian J. Radiomics: a Novel CT-Based Method of Predicting Postoperative Recurrence in Ovarian Cancer. Annu Int Conf IEEE Eng Med Biol Soc 2018:4130-4133. [PMID: 30441264] [DOI: 10.1109/embc.2018.8513351]
Abstract
To predict the 3-year recurrence of advanced ovarian cancer before surgery, we retrospectively collected 94 patients and analyzed them with a novel radiomics method. A total of 575 3D imaging features were extracted for radiomics analysis, and the 7 computed tomography (CT) features most strongly associated with 3-year clinical recurrence-free survival (CRFS) probability were selected to build a radiomics signature. With a logistic regression classification model, areas under the receiver operating characteristic (ROC) curve (AUC) of 0.8567 (95% CI: 0.7251-0.9498) and 0.8533 (95% CI: 0.7231-0.9671) were obtained in the training and validation cohorts, respectively. The experimental results show that CT-based radiomics features were closely associated with the recurrence of advanced ovarian cancer, suggesting that recurrence can be predicted before surgery.
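The signature was built by selecting 7 of 575 radiomic features and fitting a logistic regression evaluated by ROC AUC. A generic scikit-learn version of that workflow, with univariate selection standing in for whatever selection procedure the authors used and random placeholder data in place of the real features, looks like this:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(94, 575))      # placeholder for the 575 radiomic features
y = rng.integers(0, 2, size=94)     # placeholder 3-year recurrence labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=7),        # keep the 7 most discriminative features
                      LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Validation AUC: {auc:.3f}")
```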
22. Gu Z, Cheng J, Fu H, Zhou K, Hao H, Zhao Y, Zhang T, Gao S, Liu J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE Trans Med Imaging 2019;38:2281-2292. [PMID: 30843824] [DOI: 10.1109/tmi.2019.2903562]
Abstract
Medical image segmentation is an important step in medical image analysis. With the rapid development of a convolutional neural network in image processing, deep learning has been used for medical image segmentation, such as optic disc segmentation, blood vessel detection, lung segmentation, cell segmentation, and so on. Previously, U-net based approaches have been proposed. However, the consecutive pooling and strided convolutional operations led to the loss of some spatial information. In this paper, we propose a context encoder network (CE-Net) to capture more high-level information and preserve spatial information for 2D medical image segmentation. CE-Net mainly contains three major components: a feature encoder module, a context extractor, and a feature decoder module. We use the pretrained ResNet block as the fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution block and a residual multi-kernel pooling block. We applied the proposed CE-Net to different 2D medical image segmentation tasks. Comprehensive results show that the proposed method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation, and retinal optical coherence tomography layer segmentation.
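The context extractor combines a dense atrous convolution block with residual multi-kernel pooling. A bare-bones parallel dilated-convolution block in the same spirit (not the published CE-Net code; the dilation rates and channel handling are assumptions) might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    """Parallel dilated convolutions whose outputs are concatenated to widen the receptive field."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.project = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(out) + x   # residual connection around the multi-dilation context
```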
23. Xie Y, Xia Y, Zhang J, Song Y, Feng D, Fulham M, Cai W. Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT. IEEE Trans Med Imaging 2019;38:991-1004. [PMID: 30334786] [DOI: 10.1109/tmi.2018.2876510]
Abstract
The accurate identification of malignant lung nodules on chest CT is critical for the early detection of lung cancer, which also offers patients the best chance of cure. Deep learning methods have recently been successfully introduced to computer vision problems, although substantial challenges remain in the detection of malignant nodules due to the lack of large training data sets. In this paper, we propose a multi-view knowledge-based collaborative (MV-KBC) deep model to separate malignant from benign nodules using limited chest CT data. Our model learns 3-D lung nodule characteristics by decomposing a 3-D nodule into nine fixed views. For each view, we construct a knowledge-based collaborative (KBC) submodel, where three types of image patches are designed to fine-tune three pre-trained ResNet-50 networks that characterize the nodules' overall appearance, voxel, and shape heterogeneity, respectively. We jointly use the nine KBC submodels to classify lung nodules with an adaptive weighting scheme learned during the error back propagation, which enables the MV-KBC model to be trained in an end-to-end manner. The penalty loss function is used for better reduction of the false negative rate with a minimal effect on the overall performance of the MV-KBC model. We tested our method on the benchmark LIDC-IDRI data set and compared it to the five state-of-the-art classification approaches. Our results show that the MV-KBC model achieved an accuracy of 91.60% for lung nodule classification with an AUC of 95.70%. These results are markedly superior to the state-of-the-art approaches.
24. Liu Z, Wang S, Dong D, Wei J, Fang C, Zhou X, Sun K, Li L, Li B, Wang M, Tian J. The Applications of Radiomics in Precision Diagnosis and Treatment of Oncology: Opportunities and Challenges. Theranostics 2019;9:1303-1322. [PMID: 30867832] [PMCID: PMC6401507] [DOI: 10.7150/thno.30309]
Abstract
Medical imaging can assess the tumor and its environment in their entirety, which makes it suitable for monitoring the temporal and spatial characteristics of the tumor. Progress in computational methods, especially in artificial intelligence for medical image processing and analysis, has converted these images into quantitative and minable data associated with clinical events in oncology management. This concept was first described as radiomics in 2012. Since then, computer scientists, radiologists, and oncologists have gravitated towards this new tool and exploited advanced methodologies to mine the information behind medical images. On the basis of large quantities of radiographic images and novel computational technologies, researchers have developed and validated radiomic models that may improve the accuracy of diagnoses and therapy response assessments. Here, we review the recent methodological developments in radiomics, including data acquisition, tumor segmentation, feature extraction, and modelling, as well as the rapidly developing deep learning technology. Moreover, we outline the main applications of radiomics in diagnosis, treatment planning, and evaluation in the field of oncology, with the aim of developing quantitative and personalized medicine. Finally, we discuss the challenges in the field of radiomics and the scope and clinical applicability of these methods.
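To make the "segmentation, then feature extraction, then modelling" workflow concrete, here is a toy NumPy sketch of a few first-order features computed from a masked tumour region. It is illustrative only; production radiomics pipelines of the kind reviewed here use standardized, much larger feature sets.

import numpy as np

def first_order_features(image, mask):
    """image: CT volume in HU; mask: boolean array marking the tumour region."""
    voxels = image[mask.astype(bool)].astype(np.float64)
    hist, _ = np.histogram(voxels, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "energy": float(np.sum(voxels ** 2)),
        "entropy": float(-np.sum(p * np.log2(p))),
    }

# Synthetic stand-in data: a noisy volume with a cubic "lesion" mask.
image = np.random.normal(-300, 100, size=(32, 32, 32))
mask = np.zeros_like(image, dtype=bool)
mask[8:24, 8:24, 8:24] = True
print(first_order_features(image, mask))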
Affiliation(s)
- Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Shuo Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Di Dong
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Jingwei Wei
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Cheng Fang
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
| | - Xuezhi Zhou
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
| | - Kai Sun
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
| | - Longfei Li
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou, Henan, 450052, China
| | - Bo Li
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
| | - Meiyun Wang
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, 450003, China
| | - Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China
|
25
|
Fan L, Fang M, Li Z, Tu W, Wang S, Chen W, Tian J, Dong D, Liu S. Radiomics signature: a biomarker for the preoperative discrimination of lung invasive adenocarcinoma manifesting as a ground-glass nodule. Eur Radiol 2018; 29:889-897. [PMID: 29967956 DOI: 10.1007/s00330-018-5530-z] [Citation(s) in RCA: 97] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2017] [Revised: 04/03/2018] [Accepted: 05/07/2018] [Indexed: 12/20/2022]
Abstract
OBJECTIVES: To identify a radiomics signature allowing preoperative discrimination of lung invasive adenocarcinomas from non-invasive lesions manifesting as ground-glass nodules. METHODS: This retrospective primary cohort study included 160 pathologically confirmed lung adenocarcinomas. Radiomics features were extracted from preoperative non-contrast CT images to build a radiomics signature. The predictive performance and calibration of the radiomics signature were evaluated using intra-cross (n=76), external non-contrast-enhanced CT (n=75), and contrast-enhanced CT (n=84) validation cohorts. The performance of the radiomics signature was compared with CT morphological and quantitative indices. RESULTS: 355 three-dimensional radiomics features were extracted, and two features were identified as the best discriminators to build the radiomics signature. The radiomics signature discriminated well between invasive adenocarcinomas and non-invasive lesions, with accuracies of 86.3%, 90.8%, 84.0%, and 88.1% in the primary and validation cohorts, respectively. It remained an independent predictor after adjusting for traditional preoperative factors (odds ratio 1.87, p < 0.001) and demonstrated good calibration in all cohorts. It was a better independent predictor than CT morphology or mean CT value. CONCLUSIONS: The radiomics signature showed good predictive performance in discriminating between invasive adenocarcinomas and non-invasive lesions. As a non-invasive biomarker, it could assist in determining therapeutic strategies for lung adenocarcinoma. KEY POINTS: • The radiomics signature was a non-invasive biomarker of lung invasive adenocarcinoma. • The radiomics signature outperformed CT morphological and quantitative indices. • A three-centre study showed that the radiomics signature had good predictive performance.
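The general recipe, reduce many candidate features to a small signature and then check it on held-out cohorts, can be sketched with scikit-learn as below. The data are synthetic and the pipeline is an assumption for illustration, not the study's actual feature-selection and modelling code.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(160, 355))      # 355 candidate 3-D radiomics features per nodule (synthetic)
y = rng.integers(0, 2, size=160)     # 1 = invasive adenocarcinoma (synthetic labels)

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3, random_state=0)

# Keep the two most discriminative features and fit a simple classifier on them.
signature = make_pipeline(SelectKBest(f_classif, k=2), LogisticRegression())
signature.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_valid, signature.predict(X_valid)))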
Affiliation(s)
- Li Fan
- Department of Radiology, Changzheng Hospital, Second Military Medical University, No. 415 Fengyang Road, Shanghai, 200003, China
| | - MengJie Fang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, No.95 Zhongguancun East Road, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100190, China
| | - ZhaoBin Li
- Department of Radiation Oncology, The Sixth People's Hospital, Shanghai Jiaotong University, Shanghai, 200233, China
| | - WenTing Tu
- Department of Radiology, Changzheng Hospital, Second Military Medical University, No. 415 Fengyang Road, Shanghai, 200003, China
| | - ShengPing Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
| | - WuFei Chen
- Department of Radiology, Huadong Hospital Affiliated with Fudan University, Shanghai, 200040, China
| | - Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, No.95 Zhongguancun East Road, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100190, China
| | - Di Dong
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, No.95 Zhongguancun East Road, Beijing, 100190, China.
- University of Chinese Academy of Sciences, Beijing, 100190, China.
| | - ShiYuan Liu
- Department of Radiology, Changzheng Hospital, Second Military Medical University, No. 415 Fengyang Road, Shanghai, 200003, China.
|
26
|
Peng T, Wang Y, Xu TC, Shi L, Jiang J, Zhu S. Detection of Lung Contour with Closed Principal Curve and Machine Learning. J Digit Imaging 2018; 31:520-533. [PMID: 29450843 DOI: 10.1007/s10278-018-0058-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
Radiation therapy plays an essential role in the treatment of cancer. In radiation therapy, the ideal radiation dose is delivered to the observed tumor without affecting neighboring normal tissues. In three-dimensional computed tomography (3D-CT) scans, the contours of tumors and organs-at-risk (OARs) are often manually delineated by radiologists. The task is complicated and time-consuming, and the manually delineated results vary between radiologists. We propose a semi-supervised contour detection algorithm, which first uses a few points of the region of interest (ROI) as an approximate initialization. Data sequences are then generated by the closed polygonal line (CPL) algorithm, where each sequence consists of the ordered projection indexes and the corresponding initial points. Finally, a smooth lung contour is obtained by training a backpropagation neural network model (BNNM) on these data sequences. We measure the accuracy of the presented method on a private clinical dataset and on the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. On the private dataset, using initial points as few as 15% of the manually delineated points, the Dice coefficient reaches 0.95 and the global error is as low as 1.47 × 10⁻². The performance of the proposed algorithm is also better than that of the cubic spline interpolation (CSI) algorithm. On the public LIDC-IDRI dataset, our method achieves superior segmentation performance with an average Dice of 0.83.
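The underlying idea, learn a smooth mapping from an ordering parameter along the closed curve to contour coordinates with a backpropagation network, can be sketched on synthetic data as follows. Scikit-learn's MLPRegressor stands in for the paper's BNNM, and the ellipse points are invented for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.linspace(0, 2 * np.pi, 40)                       # ordering parameter from the CPL-style step
xy = np.c_[np.cos(t), 0.6 * np.sin(t)]                  # sparse boundary points (toy ellipse)
xy += np.random.default_rng(0).normal(scale=0.02, size=xy.shape)

# Encode the cyclic parameter with sin/cos so the fitted contour closes smoothly.
features = np.c_[np.sin(t), np.cos(t)]
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(features, xy)

t_dense = np.linspace(0, 2 * np.pi, 400)
smooth_contour = net.predict(np.c_[np.sin(t_dense), np.cos(t_dense)])  # dense, smooth boundary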
Affiliation(s)
- Tao Peng
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China.
| | - Yihuai Wang
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China.
| | - Thomas Canhao Xu
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
| | - Lianmin Shi
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
| | - Jianwu Jiang
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
| | - Shilang Zhu
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
|
27
|
Lavanya M, Kannan PM. Lung Lesion Detection in CT Scan Images Using the Fuzzy Local Information Cluster Means (FLICM) Automatic Segmentation Algorithm and Back Propagation Network Classification. Asian Pac J Cancer Prev 2017; 18:3395-3399. [PMID: 29286609 PMCID: PMC5980900 DOI: 10.22034/apjcp.2017.18.12.3395] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
Lung cancer is a frequently lethal disease, often causing death at an early age because of uncontrolled cell growth in the lung tissues. Available diagnostic methods are less than effective for cancer detection, so an automatic lesion segmentation method for computed tomography (CT) scans has been developed. However, automatic identification and segmentation of lung tumours with good accuracy is difficult because of the variation among lesions. This paper describes the application of a robust lesion detection and segmentation technique that segments individual cells from pathological images to extract the essential features. The proposed technique uses the FLICM (Fuzzy Local Information Cluster Means) algorithm for segmentation, reducing false positives in detecting lung cancers, and a backpropagation network within a computer-aided diagnosis (CAD) framework to classify cancer cells.
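For orientation, standard fuzzy c-means clustering on intensities looks like the NumPy sketch below; FLICM extends this core update with a local spatial fuzzy factor that makes memberships neighbourhood-aware. This is a simplified illustration on synthetic intensities, not the paper's implementation.

import numpy as np

def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iter=50):
    """Plain fuzzy c-means on a 1-D array of intensities."""
    rng = np.random.default_rng(0)
    centers = rng.choice(values, size=n_clusters, replace=False).astype(float)
    for _ in range(n_iter):
        d = np.abs(values[:, None] - centers[None, :]) + 1e-9           # (N, C) distances
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        # Center update: fuzzily weighted centroid of all values.
        centers = (u ** m).T @ values / np.sum(u ** m, axis=0)
    return u, centers

pixels = np.concatenate([np.random.normal(-800, 60, 500),   # lung-parenchyma-like HU values
                         np.random.normal(20, 40, 100)])    # lesion-like intensities
memberships, centers = fuzzy_c_means(pixels)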
Affiliation(s)
- M Lavanya
- Department of Electrical and Electronics Engineering, Saveetha School of Engineering, Saveetha University, Thandalam, Chennai-602 105, India.
|
28
|
Shen C, Liu Z, Guan M, Song J, Lian Y, Wang S, Tang Z, Dong D, Kong L, Wang M, Shi D, Tian J. 2D and 3D CT Radiomics Features Prognostic Performance Comparison in Non-Small Cell Lung Cancer. Transl Oncol 2017; 10:886-894. [PMID: 28930698 PMCID: PMC5605492 DOI: 10.1016/j.tranon.2017.08.007] [Citation(s) in RCA: 110] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Revised: 08/20/2017] [Accepted: 08/22/2017] [Indexed: 11/26/2022] Open
Abstract
OBJECTIVE: To compare the prognostic performance of 2D and 3D radiomics features in CT images of non-small cell lung cancer (NSCLC). METHOD: We enrolled 588 NSCLC patients from three independent cohorts. Two datasets totalling 463 patients from two different institutes were used as the training cohort; the remaining cohort of 125 patients was set as the validation cohort. A total of 1014 radiomics features (507 2D features and the 507 corresponding 3D features) were assessed. Based on the dichotomized survival data, 2D and 3D radiomics indicators were calculated for each patient by trained classifiers. We used the area under the receiver operating characteristic curve (AUC) to assess the prediction performance of the trained classifiers (support vector machine and logistic regression). Kaplan-Meier and Cox proportional hazards survival analyses were also employed, and Harrell's concordance index (C-index) and Akaike's information criterion (AIC) were applied to assess the trained models. RESULTS: Radiomics indicators were built and compared by AUC. In the training cohort, 2D_AUC = 0.653 and 3D_AUC = 0.671; in the validation cohort, 2D_AUC = 0.755 and 3D_AUC = 0.663. Both the 2D and 3D indicators achieved significant results (P < .05) in the Kaplan-Meier analysis and Cox regression. In the validation cohort, the 2D Cox model had a C-index of 0.683 and an AIC of 789.047, while the 3D Cox model had a C-index of 0.632 and an AIC of 799.409. CONCLUSION: Both 2D and 3D CT radiomics features have some prognostic ability in NSCLC, but 2D features performed better in our tests. Considering the computational cost of radiomics feature calculation, 2D features are recommended on the basis of the current study.
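Harrell's C-index, used above to compare the 2D and 3D Cox models, simply measures how often the model ranks pairs of patients in the same order as their observed survival. A small hand-rolled sketch on synthetic data (not the study's evaluation code; the two "indicators" are invented stand-ins):

import numpy as np

def harrell_c_index(times, events, scores):
    """scores: higher value = predicted longer survival."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable when subject i had an observed event before time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] < scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable

rng = np.random.default_rng(0)
times = rng.exponential(20.0, size=125)          # follow-up in months (synthetic)
events = rng.integers(0, 2, size=125)            # 1 = event observed
indicator_2d = times + rng.normal(0, 8, 125)     # stand-in for the trained 2D indicator
indicator_3d = times + rng.normal(0, 16, 125)    # stand-in for the trained 3D indicator

print("2D C-index:", harrell_c_index(times, events, indicator_2d))
print("3D C-index:", harrell_c_index(times, events, indicator_3d))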
Affiliation(s)
- Chen Shen
- School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
| | - Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
| | - Min Guan
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, 450003, China
| | - Jiangdian Song
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China; Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, 110819, China
| | - Yucheng Lian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
| | - Shuo Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
| | - Zhenchao Tang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China; School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, Shandong Province, 264209, China
| | - Di Dong
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, Shandong Province, 264209, China
| | - Lingfei Kong
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, 450003, China
| | - Meiyun Wang
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, 450003, China.
| | - Dapeng Shi
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, 450003, China.
| | - Jie Tian
- School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, 100080, China.
|
29
|
Association between tumor heterogeneity and progression-free survival in non-small cell lung cancer patients with EGFR mutations undergoing tyrosine kinase inhibitors therapy. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2016:1268-1271. [PMID: 28268556 DOI: 10.1109/embc.2016.7590937] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
For non-small cell lung cancer (NSCLC) patients with epidermal growth factor receptor (EGFR) mutations, current staging methods do not accurately predict the risk of disease recurrence after tyrosine kinase inhibitor (TKI) therapy. Developing a noninvasive method to predict whether an individual could benefit from TKI therapy has great clinical significance. In this research, a radiomics approach was proposed to determine whether the tumor heterogeneity of NSCLC, measured by texture on computed tomography (CT), could independently predict progression-free survival (PFS). A primary dataset of 80 patients with EGFR mutations (median PFS, 9.5 months) and a validation dataset of 72 NSCLC patients (median PFS, 10.2 months) were used for the prognostic analysis. The results indicated that the features "Cluster Prominence of Gray Level Co-occurrence" (hazard ratio [HR]: 2.13, 95% confidence interval [CI]: 1.33-3.40, P = 0.010) and "Short Run High Gray Level Emphasis of Run Length" (HR: 2.43, 95% CI: 1.46-4.05, P = 0.005) were significantly associated with PFS in the primary dataset, and these two texture features performed consistently on the validation cohort. Our study further supports that quantitative measurement of tumor heterogeneity is associated with the prognosis of NSCLC patients with EGFR mutations.
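Hazard ratios like those quoted above come from a Cox proportional hazards fit of a feature against PFS. A hedged sketch using the lifelines package (assumed to be installed) on synthetic data; with random inputs the fitted hazard ratio will be near 1, so this only shows the mechanics, not the reported result.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "cluster_prominence": rng.normal(size=80),       # standardized texture feature (synthetic)
    "pfs_months": rng.exponential(9.5, size=80),     # progression-free survival time
    "progressed": rng.integers(0, 2, size=80),       # 1 = progression observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
print(cph.summary)   # the exp(coef) column is the hazard ratio for the feature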
|
30
|
Wang S, Zhou M, Liu Z, Liu Z, Gu D, Zang Y, Dong D, Gevaert O, Tian J. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation. Med Image Anal 2017; 40:172-183. [PMID: 28688283 PMCID: PMC5661888 DOI: 10.1016/j.media.2017.06.014] [Citation(s) in RCA: 229] [Impact Index Per Article: 28.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Revised: 06/29/2017] [Accepted: 06/29/2017] [Indexed: 11/23/2022]
Abstract
Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed Central Focused Convolutional Neural Networks (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighboring voxels vary according to their spatial locations. We describe this phenomenon with a novel central pooling layer that retains much of the information at the center of a voxel patch, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling scheme to facilitate model training, in which training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset, including 893 nodules, and on an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). CF-CNN achieved superior segmentation performance with average Dice scores of 82.15% and 80.02% for the two datasets, respectively. Moreover, compared with the inter-radiologist consistency on the LIDC dataset, the difference in average Dice score was only 1.98%.
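The Dice score reported here is the standard overlap metric between a predicted mask and a reference mask. A small illustrative helper (not tied to the CF-CNN code):

import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True   # toy prediction
b = np.zeros((64, 64), dtype=bool); b[15:45, 12:42] = True   # toy reference
print(f"Dice = {dice_score(a, b):.3f}")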
Affiliation(s)
- Shuo Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Mu Zhou
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, CA 94305, USA
| | - Zaiyi Liu
- Guangdong General Hospital, Guangzhou, Guangdong 510080, China
| | - Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
| | - Dongsheng Gu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yali Zang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Di Dong
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China.
| | - Olivier Gevaert
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, CA 94305, USA.
| | - Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China.
|
31
|
Semi-automated enhanced breast tumor segmentation for CT image. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:648-651. [PMID: 29059956 DOI: 10.1109/embc.2017.8036908] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Accurate detection of the breast cancer region is essential for treatment. X-ray computed tomography (CT) is an effective diagnostic method for breast cancer, besides MRI and ultrasound. In this paper, a semi-automated breast cancer segmentation method is proposed for CT images. First, maximum region searching is used to find the rough boundary of the lesion. Then, a modified Histogram Equalization with Iterative Filling is adopted to enhance the lesion and avoid unbalanced intensity in the target region. Finally, a four-seed Random Walk is used for accurate segmentation. The method was validated on a clinical dataset of 50 cases containing 630 slices in total. The experiments showed that the Dice coefficient of our method was 88.6%, higher than that of Random Walk (76.9%) and Graph-Cut (79.8%).
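Plain histogram equalization, the building block that the paper's modified iterative-filling variant extends, can be written in a few lines of NumPy. This sketch is only the standard operation on a synthetic patch, not the authors' enhancement step.

import numpy as np

def equalize(patch, n_bins=256):
    """Map intensities through their normalized cumulative histogram (range 0..1)."""
    flat = patch.ravel()
    hist, bin_edges = np.histogram(flat, bins=n_bins)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return np.interp(flat, bin_edges[:-1], cdf).reshape(patch.shape)

patch = np.random.gamma(2.0, 30.0, size=(64, 64))   # skewed toy intensities
enhanced = equalize(patch)                          # flatter intensity distribution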
|
32
|
Jose D, Chithara AN, Nirmal Kumar P, Kareemulla H. Automatic Detection of Lung Cancer Nodules in Computerized Tomography Images. NATIONAL ACADEMY SCIENCE LETTERS 2017. [DOI: 10.1007/s40009-017-0549-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
33
|
Non-small cell lung cancer: quantitative phenotypic analysis of CT images as a potential marker of prognosis. Sci Rep 2016; 6:38282. [PMID: 27922113 PMCID: PMC5138817 DOI: 10.1038/srep38282] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2016] [Accepted: 10/24/2016] [Indexed: 12/31/2022] Open
Abstract
This was a retrospective study to investigate the predictive and prognostic ability of quantitative computed tomography phenotypic features in patients with non-small cell lung cancer (NSCLC). 661 patients with pathologically confirmed NSCLC were enrolled between 2007 and 2014, and 592 phenotypic descriptors were automatically extracted from the pre-therapy CT images. First, a support vector machine (SVM) was used to evaluate the predictive value of each feature for pathology and TNM clinical stage. Second, a Cox proportional hazards model was used to evaluate the prognostic value of the imaging signatures selected by the SVM, applied to a primary cohort of 138 patients and an external independent validation cohort of 61 patients. The predictive accuracy for histopathology, N staging, and overall clinical stage was 75.16%, 79.40%, and 80.33%, respectively. In addition, the Cox models indicated that the SVM-selected signature "correlation of co-occurrence after wavelet transform" was significantly associated with overall survival in the two datasets (hazard ratio [HR]: 1.65, 95% confidence interval [CI]: 1.41-2.75, p = 0.010; and HR: 2.74, 95% CI: 1.10-6.85, p = 0.027, respectively). Our study indicates that these phenotypic features might provide insight into the metastatic potential or aggressiveness of NSCLC, potentially offering clinical value in directing personalized therapeutic regimen selection.
|
34
|
Armato SG, Drukker K, Li F, Hadjiiski L, Tourassi GD, Engelmann RM, Giger ML, Redmond G, Farahani K, Kirby JS, Clarke LP. LUNGx Challenge for computerized lung nodule classification. J Med Imaging (Bellingham) 2016; 3:044506. [PMID: 28018939 PMCID: PMC5166709 DOI: 10.1117/1.jmi.3.4.044506] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2016] [Accepted: 11/17/2016] [Indexed: 11/14/2022] Open
Abstract
The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants' computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists' AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.
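The AUC values used to rank the LUNGx participants and the radiologists summarize how well each score separates benign from malignant nodules. A toy scikit-learn illustration of the metric (synthetic scores for a 37-benign / 36-malignant split; this is not the Challenge's evaluation code, which also tested for statistical differences):

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([0] * 37 + [1] * 36)                        # 0 = benign, 1 = malignant
computer_scores = labels + rng.normal(0, 1.6, labels.size)    # weakly separating scores
reader_scores = labels + rng.normal(0, 0.7, labels.size)      # more strongly separating scores

print("computer-like AUC:   ", roc_auc_score(labels, computer_scores))
print("radiologist-like AUC:", roc_auc_score(labels, reader_scores))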
Affiliation(s)
- Samuel G. Armato
- The University of Chicago, Department of Radiology, 5841 South Maryland Avenue, MC 2026, Chicago, Illinois 60637, United States
| | - Karen Drukker
- The University of Chicago, Department of Radiology, 5841 South Maryland Avenue, MC 2026, Chicago, Illinois 60637, United States
| | - Feng Li
- The University of Chicago, Department of Radiology, 5841 South Maryland Avenue, MC 2026, Chicago, Illinois 60637, United States
| | - Lubomir Hadjiiski
- University of Michigan, Department of Radiology, 1500 East Medical Center Drive, Ann Arbor, Michigan 48109, United States
| | - Georgia D. Tourassi
- Health Data Sciences Institute, Biomedical Science and Engineering Center, Oak Ridge National Laboratory, P.O. Box 2008 MS6085 Oak Ridge, Tennessee 37831-6085, United States
| | - Roger M. Engelmann
- The University of Chicago, Department of Radiology, 5841 South Maryland Avenue, MC 2026, Chicago, Illinois 60637, United States
| | - Maryellen L. Giger
- The University of Chicago, Department of Radiology, 5841 South Maryland Avenue, MC 2026, Chicago, Illinois 60637, United States
| | - George Redmond
- National Cancer Institute, Cancer Imaging Program, Division of Cancer Treatment and Diagnosis, 9609 Medical Center Drive, Bethesda, Maryland 20892, United States
| | - Keyvan Farahani
- National Cancer Institute, Cancer Imaging Program, Division of Cancer Treatment and Diagnosis, 9609 Medical Center Drive, Bethesda, Maryland 20892, United States
| | - Justin S. Kirby
- Leidos Biomedical Research, Inc., Frederick National Laboratory for Cancer Research, Cancer Imaging Program, 8560 Progress Drive, Frederick, Maryland 21702, United States
| | - Laurence P. Clarke
- National Cancer Institute, Cancer Imaging Program, Division of Cancer Treatment and Diagnosis, 9609 Medical Center Drive, Bethesda, Maryland 20892, United States
|
35
|
Wang J, Liu X, Dong D, Song J, Xu M, Zang Y, Tian J. Prediction of malignant and benign of lung tumor using a quantitative radiomic method. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2016:1272-1275. [PMID: 28268557 DOI: 10.1109/embc.2016.7590938] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Lung cancer is the leading cause of cancer mortality around the world, and the early diagnosis of lung cancer plays a very important role in therapeutic regimen selection. However, lung cancers are spatially and temporally heterogeneous, which limits the use of invasive biopsy. Radiomics, which refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features, can capture intra-tumoural heterogeneity in a non-invasive way. Here we carry out a radiomic analysis of 150 features quantifying lung tumour image intensity, shape, and texture. These features are extracted from the computed tomography (CT) data of 593 patients in the Lung Image Database Consortium Image Database Resource Initiative (LIDC-IDRI) dataset. Using a support vector machine, we find that a large number of quantitative radiomic features have diagnostic power. The accuracy of predicting lung tumour malignancy is 86% on the training set and 76.1% on the testing set. As CT imaging of lung tumours is widely used in routine clinical practice, our radiomic classifier will be a valuable tool to help clinicians diagnose lung cancer.
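The train/test evaluation of an SVM on a feature matrix of this shape can be sketched with scikit-learn as follows. The features and labels below are synthetic placeholders; the sketch only mirrors the general "extract features, fit SVM, report training and testing accuracy" workflow, not the study's code.

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(593, 150))        # intensity, shape and texture features (synthetic)
y = rng.integers(0, 2, size=593)       # 1 = malignant (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("training accuracy:", accuracy_score(y_train, clf.predict(X_train)))
print("testing accuracy: ", accuracy_score(y_test, clf.predict(X_test)))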
|