1
Yuan L, An L, Zhu Y, Duan C, Kong W, Jiang P, Yu QQ. Machine Learning in Diagnosis and Prognosis of Lung Cancer by PET-CT. Cancer Manag Res 2024; 16:361-375. [PMID: 38699652] [PMCID: PMC11063459] [DOI: 10.2147/cmar.s451871]
Abstract
As a disease with high morbidity and mortality, lung cancer seriously harms people's health, making early diagnosis and treatment all the more important. PET/CT is widely used for the early diagnosis, staging, and treatment-response evaluation of tumors, especially lung cancer; however, owing to tumor heterogeneity and variability in human image interpretation, among other factors, it cannot fully reflect the true state of a tumor. Artificial intelligence (AI) has been applied to all aspects of life, and machine learning (ML) is one of the principal ways of realizing AI. Many studies have applied ML methods to PET/CT imaging for the diagnosis and treatment of lung cancer. This article summarizes the progress of ML applications based on PET/CT in lung cancer in order to better serve clinical practice. We searched PubMed using machine learning, lung cancer, and PET/CT as keywords for relevant articles from the past five years or more. We found that PET/CT-based ML approaches have achieved significant results in the detection, delineation, pathological classification, molecular subtyping, staging, and response assessment, as well as the survival and prognosis, of lung cancer, providing clinicians with a powerful tool to support critical daily clinical decisions. However, ML still has shortcomings, such as somewhat poor repeatability and reliability.
Affiliation(s)
- Lili Yuan: Jining No. 1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Lin An: Jining No. 1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Yandong Zhu: Jining No. 1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Chongling Duan: Jining No. 1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Weixiang Kong: Jining No. 1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Pei Jiang: Translational Pharmaceutical Laboratory, Jining No. 1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Qing-Qing Yu: Jining No. 1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
2
Shiri I, Amini M, Yousefirizi F, Vafaei Sadr A, Hajianfar G, Salimi Y, Mansouri Z, Jenabi E, Maghsudi M, Mainta I, Becker M, Rahmim A, Zaidi H. Information fusion for fully automated segmentation of head and neck tumors from PET and CT images. Med Phys 2024; 51:319-333. [PMID: 37475591] [DOI: 10.1002/mp.16615]
Abstract
BACKGROUND PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms exploiting the available multi-modal information are still lacking. PURPOSE Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS The study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline. Three different input-, layer-, and decision-level information fusions were used. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusion). Different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% in each center), was used for final result reporting. Different standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS Among single modalities, PET performed reasonably well, with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range 0.76-0.81, with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. Both conventional image-level and DL fusions achieve competitive results, while output-level fusion using majority voting over several algorithms yields statistically significant improvements in HNC segmentation.
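The Dice score and output-level majority voting discussed in this abstract can be illustrated with a minimal sketch (toy 1D masks and a simple greater-than-half vote; the actual pipeline operates on 3D PET/CT volumes and also uses STAPLE):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def majority_vote(masks):
    """Output-level fusion: a voxel is foreground when more than half
    of the candidate masks mark it as foreground."""
    stack = np.stack(masks).astype(int)
    return (stack.sum(axis=0) * 2 > len(masks)).astype(int)

# toy 1D "masks" from three hypothetical segmenters, plus ground truth
m1 = np.array([1, 1, 0, 0, 1])
m2 = np.array([1, 0, 0, 1, 1])
m3 = np.array([1, 1, 1, 0, 1])
truth = np.array([1, 1, 0, 0, 1])
fused = majority_vote([m1, m2, m3])
print(fused, dice_score(fused, truth))  # [1 1 0 0 1] 1.0
```

Here the vote suppresses the two disagreements (positions 3 and 4), so the fused mask matches the truth exactly.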
Affiliation(s)
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mehdi Amini: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Fereshteh Yousefirizi: Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Alireza Vafaei Sadr: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, USA
- Ghasem Hajianfar: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elnaz Jenabi: Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mehdi Maghsudi: Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Ismini Mainta: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Minerva Becker: Service of Radiology, Geneva University Hospital, Geneva, Switzerland
- Arman Rahmim: Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada; Department of Radiology and Physics, University of British Columbia, Vancouver, Canada
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
3
Usuzaki T, Takahashi K, Takagi H, Ishikuro M, Obara T, Yamaura T, Kamimoto M, Majima K. Efficacy of exponentiation method with a convolutional neural network for classifying lung nodules on CT images by malignancy level. Eur Radiol 2023; 33:9309-9319. [PMID: 37477673] [DOI: 10.1007/s00330-023-09946-w]
Abstract
OBJECTIVES The aim of this study was to examine the performance of a convolutional neural network (CNN) combined with exponentiation of each pixel value in classifying benign and malignant lung nodules on computed tomography (CT) images. MATERIALS AND METHODS Images in the Lung Image Database Consortium-Image Database Resource Initiative (LIDC-IDRI) were analyzed. Four CNN models were constructed to classify lung nodules by malignancy level (level 1 vs. 2, 1 vs. 3, 1 vs. 4, and 1 vs. 5). The exponentiation method was applied for exponent values of 1.0 to 10.0 in increments of 0.5. Accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC) were calculated. These statistics were compared between an exponent value of 1.0 and all other exponent values in each model by the Mann-Whitney U-test. RESULTS In malignancy 1 vs. 4, maximum test accuracy (MTA; exponent values 2.0, 3.0, 3.5, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, and 10.0) and specificity (6.5, 7.0, and 9.0) were improved by up to 0.012 and 0.037, respectively. In malignancy 1 vs. 5, MTA (6.5 and 7.0) and sensitivity (1.5) were improved by up to 0.030 and 0.0040, respectively. CONCLUSIONS The exponentiation method improved the performance of the CNN in classifying lung nodules on CT images as benign or malignant. It demonstrated two advantages: improved accuracy, and the ability to adjust sensitivity and specificity by selecting an appropriate exponent value. CLINICAL RELEVANCE STATEMENT Adjusting sensitivity and specificity by selecting an exponent value enables the construction of CNN models suited to screening, diagnosis, and treatment processes for patients with lung nodules. KEY POINTS
• The exponentiation method improved the performance of the convolutional neural network.
• Contrast accentuation by the exponentiation method may bring out features of lung nodules.
• Sensitivity and specificity can be adjusted by selecting an exponent value.
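The exponentiation preprocessing itself is simple to sketch. The version below rescales an image patch to [0, 1] before raising each pixel to the chosen exponent; the rescaling step is an assumption made here to keep values bounded, not a detail taken from the paper:

```python
import numpy as np

def exponentiate(image, exponent):
    """Rescale pixel values to [0, 1], then raise each to `exponent`.
    Exponents > 1 suppress mid-range intensities relative to bright
    ones, accentuating contrast."""
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    scaled = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return scaled ** exponent

patch = np.array([[0.0, 100.0], [200.0, 400.0]])  # toy CT patch
out = exponentiate(patch, 2.0)  # exponent 2.0 maps the 0.25 pixel to 0.0625
print(out)
```

Larger exponents push mid-range pixels toward zero while leaving the brightest pixel at 1.0, which is the contrast-accentuation effect the key points describe.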
Affiliation(s)
- Takuma Usuzaki: Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi, 980-8574, Japan
- Kengo Takahashi: Tohoku University Graduate School of Medicine, Sendai, Japan
- Hidenobu Takagi: Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi, 980-8574, Japan; Department of Advanced MRI Collaborative Research, Graduate School of Medicine, Tohoku University, Sendai, Japan
- Mami Ishikuro: Division of Molecular Epidemiology, Graduate School of Medicine, Tohoku University, Sendai, Miyagi, Japan
- Taku Obara: Division of Molecular Epidemiology, Graduate School of Medicine, Tohoku University, Sendai, Miyagi, Japan; Division of Molecular Epidemiology, Department of Preventive Medicine and Epidemiology, Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan; Department of Pharmaceutical Sciences, Tohoku University Hospital, Sendai, Japan
4
Jia L, Wu W, Hou G, Zhao J, Qiang Y, Zhang Y, Cai M. Residual neural network with mixed loss based on batch training technique for identification of EGFR mutation status in lung cancer. Multimed Tools Appl 2023; 82:1-21. [PMID: 37362735] [PMCID: PMC10020767] [DOI: 10.1007/s11042-023-14876-2]
Abstract
Epidermal growth factor receptor (EGFR) status is key to targeted therapy with tyrosine kinase inhibitors in lung cancer. Traditional identification of EGFR mutation status requires biopsy and sequencing, which may not be suitable for patients who cannot undergo biopsy. In this paper, using easily accessible and non-invasive CT images, a residual neural network (ResNet) with a mixed loss based on a batch training technique is proposed for identifying EGFR mutation status in lung cancer. In this model, the ResNet serves as the baseline for feature extraction and avoids vanishing gradients. In addition, a new mixed loss combining batch similarity and cross entropy is proposed to guide the network toward better model parameters. The mixed loss uses the similarity among batch samples to evaluate the distribution of the training data, which can reduce the similarity between different classes and the variation within the same class. In the experiments, VGG16Net, DenseNet, ResNet18, ResNet34, and ResNet50 models with the mixed loss are trained on a public CT dataset from TCIA comprising 155 patients with known EGFR mutation status. The trained networks are then applied to a preoperative CT dataset of 56 patients collected from the cooperating hospital to validate the effectiveness of the proposed models. Experimental results show that the proposed models are more appropriate and effective for identifying EGFR mutation status on the lung cancer dataset. Among these models, ResNet34 with the mixed loss is optimal (accuracy = 81.58%, AUC = 0.8861, sensitivity = 80.02%, specificity = 82.90%).
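A mixed loss of this general shape (cross entropy plus a batch-similarity penalty) can be sketched as follows. The paper's exact batch-similarity term is not reproduced here; this version, which rewards high cosine similarity within a class and low similarity across classes, is one plausible formulation:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true class."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def batch_similarity(features, labels):
    """Penalty that grows when same-class samples have dissimilar
    features or different-class samples have similar features.
    (One plausible form; the paper's exact definition may differ.)"""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                                 # pairwise cosine similarity
    same = (labels[:, None] == labels[None, :]).astype(float)
    mask = 1.0 - np.eye(len(labels))              # ignore self-pairs
    penalty = same * (1.0 - sim) + (1.0 - same) * np.maximum(sim, 0.0)
    return (penalty * mask).sum() / mask.sum()

def mixed_loss(probs, features, labels, weight=0.5):
    """Cross entropy plus a weighted batch-similarity penalty."""
    return cross_entropy(probs, labels) + weight * batch_similarity(features, labels)

probs = np.array([[0.9, 0.1], [0.2, 0.8]])     # softmax outputs for a batch of 2
features = np.array([[1.0, 0.0], [0.0, 1.0]])  # penultimate-layer features
labels = np.array([0, 1])
print(round(mixed_loss(probs, features, labels), 4))  # ≈ 0.1643
```

In this toy batch the two different-class features are orthogonal, so the similarity penalty is zero and the mixed loss reduces to the cross-entropy term alone.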
Affiliation(s)
- Liye Jia: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600, China
- Wei Wu: Department of Physiology, Shanxi Medical University, Taiyuan, 030051, China
- Guojie Hou: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600, China
- Juanjuan Zhao: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600, China
- Yan Qiang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600, China
- Yanan Zhang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600, China
- Meiling Cai: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600, China
5
Afriyie Y, Weyori BA, Opoku AA. A scaling up approach: a research agenda for medical imaging analysis with applications in deep learning. J Exp Theor Artif Intell 2023. [DOI: 10.1080/0952813x.2023.2165721]
Affiliation(s)
- Yaw Afriyie: Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana; Department of Computer Science, Faculty of Information and Communication Technology, SD Dombo University of Business and Integrated Development Studies, Wa, Ghana
- Benjamin A. Weyori: Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Alex A. Opoku: Department of Mathematics & Statistics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
6
Wang H, Xiao N, Luo S, Li R, Zhao J, Ma Y, Zhao J, Qiang Y, Wang L, Lian J. Multi-scale dense selective network based on border modeling for lung nodule segmentation. Int J Comput Assist Radiol Surg 2023; 18:845-853. [PMID: 36637749] [DOI: 10.1007/s11548-022-02817-7]
Abstract
PURPOSE Accurate quantification of pulmonary nodules helps physicians accurately diagnose and treat lung cancer. We aim to improve segmentation efficiency for irregular nodules while maintaining segmentation accuracy for simpler nodule types. METHODS In this paper, we extract the unique edge part of pulmonary nodules and process it as a separate branch stream, i.e., the border stream, to explicitly model nodule edge information. We propose a multi-scale dense selective network based on border modeling (BorDenNet). Its overall framework consists of a dual-branch encoder-decoder, which processes the classical image stream and the border stream in parallel. We design a dense attention module that strongly couples feature maps to focus on key regions of pulmonary nodules. During decoding, a multi-scale selective attention module establishes long-range correlations between features at different scales, which further achieves finer feature discrimination and spatial recovery. We introduce a border context enhancement module to mutually fuse and enhance the edge-related voxel features contained in the image stream and border stream, finally achieving accurate segmentation of pulmonary nodules. RESULTS We evaluate BorDenNet rigorously on the public lung dataset LIDC-IDRI. For the target nodules, the average Dice score is 92.78%, the average sensitivity is 91.37%, and the average Hausdorff distance is 3.06 mm. We further test on a private dataset from Shanxi Provincial People's Hospital, which verifies the strong generalization of BorDenNet. BorDenNet also improves segmentation efficiency for multiple nodule types, such as adherent and ground-glass pulmonary nodules. CONCLUSION Accurate segmentation of irregular pulmonary nodules yields important clinical parameters that can guide clinicians and improve clinical efficiency.
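The Hausdorff distance used above to score boundary agreement (reported in mm) can be computed directly from two boundary point sets; a minimal sketch using brute-force pairwise distances, which is fine for small contours:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets
    (one row per point): the largest distance from any point
    in one set to its nearest point in the other."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# two toy 2D boundaries: identical except one displaced point
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 3.0]])
print(hausdorff(a, b))  # 3.0
```

Unlike Dice, this metric is dominated by the single worst boundary error, which is why it complements overlap scores in segmentation evaluation.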
Affiliation(s)
- Hexi Wang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030000, Shanxi, China
- Ning Xiao: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030000, Shanxi, China
- Shichao Luo: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030000, Shanxi, China
- Runrui Li: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030000, Shanxi, China
- Jun Zhao: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030000, Shanxi, China
- Yulan Ma: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030000, Shanxi, China
- Juanjuan Zhao: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030000, Shanxi, China; College of Information, Jinzhong College of Information, Jinzhong, 030600, Shanxi, China
- Yan Qiang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030000, Shanxi, China
- Long Wang: College of Information, Jinzhong College of Information, Jinzhong, 030600, Shanxi, China
- Jianhong Lian: Shanxi Cancer Hospital, Taiyuan, 030000, Shanxi, China
7
Weikert T, Jaeger PF, Yang S, Baumgartner M, Breit HC, Winkel DJ, Sommer G, Stieltjes B, Thaiss W, Bremerich J, Maier-Hein KH, Sauter AW. Automated lung cancer assessment on 18F-PET/CT using Retina U-Net and anatomical region segmentation. Eur Radiol 2023; 33:4270-4279. [PMID: 36625882] [PMCID: PMC10182147] [DOI: 10.1007/s00330-022-09332-y]
Abstract
OBJECTIVES To develop and test a Retina U-Net algorithm for the detection of primary lung tumors and associated metastases of all stages on FDG-PET/CT. METHODS A data set of 364 FDG-PET/CTs of patients with histologically confirmed lung cancer was used for algorithm development and internal testing. The data set comprised tumors of all stages. All lung tumors (T), lymphatic metastases (N), and distant metastases (M) were manually segmented as 3D volumes using whole-body PET/CT series. The data set was split into training (n = 216), validation (n = 74), and internal test (n = 74) sets. Detection performance for all lesion types at multiple classifier thresholds was evaluated and false-positive findings per case (FP/c) were calculated. Next, detected lesions were assigned to categories T, N, or M using an automated anatomical region segmentation. Furthermore, reasons for FPs were visually assessed and analyzed. Finally, performance was tested on 20 PET/CTs from another institution. RESULTS Sensitivity for T lesions was 86.2% (95% CI: 77.2-92.7) at an FP/c of 2.0 on the internal test set. The anatomical correlate of most FPs was physiological bone marrow activity (16.8%). TNM categorization based on the anatomical region approach was correct for 94.3% of lesions. Performance on the external test set confirmed the good performance of the algorithm (overall detection rate = 88.8% (95% CI: 82.5-93.5%), FP/c = 2.7). CONCLUSIONS Retina U-Nets are a valuable tool for tumor detection tasks on PET/CT and can form the backbone of reading-assistance tools in this field. FPs have anatomical correlates that can point the way to further algorithm improvements. The code is publicly available. KEY POINTS
• Detection of malignant lesions in PET/CT with Retina U-Net is feasible.
• All false-positive findings had anatomical correlates, physiological bone marrow activity being the most prevalent.
• Retina U-Nets can form the backbone of tools assisting imaging professionals in lung tumor staging.
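Lesion-level sensitivity and FP/c at a classifier threshold, as reported in this abstract, can be sketched as follows. The candidate scores and ground-truth matches below are hypothetical, and a real evaluation also needs a lesion-matching criterion (e.g., overlap with a ground-truth volume) to label each candidate:

```python
import numpy as np

def detection_stats(scores, is_true_lesion, n_cases, threshold):
    """Lesion-level sensitivity and false positives per case (FP/c)
    at a classifier threshold. `scores` are candidate confidences
    pooled over all cases; `is_true_lesion` flags candidates that
    match a ground-truth lesion."""
    scores = np.asarray(scores)
    truth = np.asarray(is_true_lesion, dtype=bool)
    kept = scores >= threshold                    # candidates surviving the threshold
    tp = np.logical_and(kept, truth).sum()
    fp = np.logical_and(kept, ~truth).sum()
    return tp / truth.sum(), fp / n_cases

# hypothetical candidates pooled over 2 cases
scores = [0.9, 0.8, 0.6, 0.4, 0.3]
truth = [1, 0, 1, 1, 0]
sens, fpc = detection_stats(scores, truth, n_cases=2, threshold=0.5)
print(sens, fpc)  # 2 of 3 true lesions kept; 1 FP over 2 cases
```

Sweeping the threshold trades sensitivity against FP/c, which is how an operating point such as "86.2% at 2.0 FP/c" is chosen.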
Affiliation(s)
- T Weikert: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- P F Jaeger: Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- S Yang: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- M Baumgartner: Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- H C Breit: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- D J Winkel: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- G Sommer: Institute of Radiology and Nuclear Medicine, Hirslanden Klinik St. Anna, St. Anna-Strasse 32, 6006, Lucerne, Switzerland
- B Stieltjes: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- W Thaiss: Department of Nuclear Medicine, University Hospital Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Germany
- J Bremerich: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- K H Maier-Hein: Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany; Department of Radiation Oncology, Pattern Analysis and Learning Group, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- A W Sauter: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
8
Hu Q, Li K, Yang C, Wang Y, Huang R, Gu M, Xiao Y, Huang Y, Chen L. The role of artificial intelligence based on PET/CT radiomics in NSCLC: Disease management, opportunities, and challenges. Front Oncol 2023; 13:1133164. [PMID: 36959810] [PMCID: PMC10028142] [DOI: 10.3389/fonc.2023.1133164]
Abstract
Objectives Lung cancer has been widely characterized through radiomics and artificial intelligence (AI). This review summarizes published studies of AI based on positron emission tomography/computed tomography (PET/CT) radiomics in non-small-cell lung cancer (NSCLC). Materials and methods A comprehensive search of literature published between 2012 and 2022 was conducted on the PubMed database, with no language or publication-status restrictions. About 127 articles in the search results were screened and progressively excluded according to the exclusion criteria; 39 articles were ultimately included for analysis. Results Studies were classified by purpose, with several identified at each stage of disease: 1) cancer detection (n=8), 2) histology and stage of cancer (n=11), 3) metastases (n=6), 4) genotype (n=6), and 5) treatment outcome and survival (n=8). There is wide heterogeneity among studies due to differences in patient sources, evaluation criteria, and radiomics workflows. On the whole, most models show diagnostic performance comparable to or even better than that of experts, and the common problems are repeatability and clinical translatability. Conclusion AI based on PET/CT radiomics has a potential role in NSCLC clinical management, but there is still a long way to go before translation into clinical application. Large-scale, multi-center, prospective research is the direction of future efforts, while the repeatability of radiomics features and limited access to large databases remain challenges to be faced.
Affiliation(s)
- Qiuyuan Hu: Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Ke Li: Department of Cancer Biotherapy Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Conghui Yang: Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Yue Wang: Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Rong Huang: Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Mingqiu Gu: Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Yuqiang Xiao: Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Yunchao Huang: Department of Thoracic Surgery I, Key Laboratory of Lung Cancer of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Long Chen: Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Correspondence: Long Chen; Yunchao Huang
9
Wang H, Xiao N, Zhang J, Yang W, Ma Y, Suo Y, Zhao J, Qiang Y, Lian J, Yang Q. Static-dynamic coordinated Transformer for tumor longitudinal growth prediction. Comput Biol Med 2022. [DOI: 10.1016/j.compbiomed.2022.105922]
10
Han J, Xiao N, Yang W, Luo S, Zhao J, Qiang Y, Chaudhary S, Zhao J. MS-ResNet: disease-specific survival prediction using longitudinal CT images and clinical data. Int J Comput Assist Radiol Surg 2022; 17:1049-1057. [PMID: 35445285] [PMCID: PMC9020752] [DOI: 10.1007/s11548-022-02625-z]
Abstract
PURPOSE Medical imaging of lung cancer at different stages contains a large amount of temporal information related to its evolution (emergence, development, or extinction). We explore the evolution of lung images in the time dimension to improve lung cancer survival prediction using longitudinal CT images and clinical data jointly. METHODS In this paper, we propose an innovative multi-branch spatiotemporal residual network (MS-ResNet) for disease-specific survival (DSS) prediction that integrates longitudinal computed tomography (CT) images acquired at different times with clinical data. Specifically, we first extract deep features from the multi-period CT images with an improved residual network. Then, a feature selection algorithm selects the most relevant subset of the clinical data. Finally, we integrate the deep features and the selected clinical features, taking full advantage of the complementarity between the two data types to generate the final prediction. RESULTS The experimental results demonstrate that our MS-ResNet model is superior to other methods, achieving a promising 86.78% accuracy in classifying short-, medium-, and long-survivors. CONCLUSION In computer-aided cancer prognosis, temporal features of the disease course and the integration of patient clinical data with CT data can effectively improve prediction accuracy.
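The final integration step, concatenating deep image features with a selected subset of clinical variables, can be sketched as follows (the selected indices and feature dimensions are illustrative, not taken from the paper):

```python
import numpy as np

def fuse_features(deep_feats, clinical, selected_idx):
    """Concatenate per-patient deep image features with the clinical
    variables kept by a feature-selection step."""
    return np.concatenate([deep_feats, clinical[:, selected_idx]], axis=1)

deep = np.arange(32.0).reshape(4, 8)    # 4 patients x 8 deep features
clin = np.arange(40.0).reshape(4, 10)   # 4 patients x 10 clinical variables
fused = fuse_features(deep, clin, [0, 3, 7])  # selector kept 3 variables
print(fused.shape)  # (4, 11)
```

The fused matrix then feeds a downstream classifier for the three survival classes; the complementarity the abstract describes comes from the classifier seeing both data types at once.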
Affiliation(s)
- Jiahao Han, Ning Xiao, Wanting Yang, Shichao Luo, Jun Zhao, Yan Qiang, Suman Chaudhary, Juanjuan Zhao: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
11
Protonotarios NE, Katsamenis I, Sykiotis S, Dikaios N, Kastis GA, Chatziioannou SN, Metaxas M, Doulamis N, Doulamis A. A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET/CT imaging. Biomed Phys Eng Express 2022; 8. [PMID: 35144242] [DOI: 10.1088/2057-1976/ac53bd]
Abstract
Over the past few years, positron emission tomography/computed tomography (PET/CT) imaging for computer-aided diagnosis has received increasing attention. Supervised deep learning architectures are usually employed for the detection of abnormalities, with anatomical localization, especially in the case of CT scans. However, the main limitations of the supervised learning paradigm include (i) the large amounts of data required for model training, and (ii) the assumption of fixed network weights upon training completion, implying that the performance of the model cannot be further improved after training. In order to overcome these limitations, we apply a few-shot learning (FSL) scheme. Contrary to traditional deep learning practices, in FSL the model is provided with less data during training. The model then utilizes end-user feedback after training to constantly improve its performance. We integrate FSL in a U-Net architecture for lung cancer lesion segmentation on PET/CT scans, allowing for dynamic model weight fine-tuning and resulting in an online supervised learning scheme. Constant online readjustments of the model weights according to the user's feedback increase the detection and classification accuracy, especially in cases where low detection performance is encountered. Our proposed method is validated on the Lung-PET-CT-Dx TCIA database. PET/CT scans from 87 patients were included in the dataset and were acquired 60 minutes after intravenous 18F-FDG injection. Experimental results indicate the superiority of our approach compared to other state-of-the-art methods.
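The online fine-tuning loop, in which user feedback after deployment drives continued weight updates, can be miniaturized to a per-voxel logistic "segmenter". Everything here (the two features, the labelling rule, the learning rate) is a toy stand-in for the paper's U-Net, not its actual training scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights for two toy per-voxel features (e.g., PET uptake, CT density).
w = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_update(w, x, y_user, lr=0.5):
    """One gradient step on the logistic cross-entropy loss for a single
    user-corrected voxel: the same idea as fine-tuning network weights."""
    p = sigmoid(x @ w)
    return w - lr * (p - y_user) * x

# Simulated feedback loop: the "user" labels voxels with high combined
# feature values as lesion (1) and the rest as background (0).
for _ in range(200):
    x = rng.normal(size=2)
    y = 1.0 if x.sum() > 0 else 0.0
    w = online_update(w, x, y)

# After enough feedback the model agrees with the user on obvious cases.
print(sigmoid(np.array([2.0, 2.0]) @ w) > 0.5)    # lesion-like voxel
print(sigmoid(np.array([-2.0, -2.0]) @ w) < 0.5)  # background-like voxel
```

The point of the sketch is the update rule itself: model weights keep moving after "training" whenever the end user corrects a prediction.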
Affiliation(s)
- Nicholas E Protonotarios: Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge, Cambridge, CB3 0WA, United Kingdom
- Iason Katsamenis, Stavros Sykiotis, Nikolaos Doulamis, Anastasios Doulamis: School of Rural and Surveying Engineering, National Technical University of Athens, 9, Heroon Polytechniou, Zografou, Attica, 157 73, Greece
- Nikolaos Dikaios, George Anthony Kastis: Mathematics Research Center, Academy of Athens, 4, Soranou Efesiou, Athens, 115 27, Greece
- Sofia N Chatziioannou, Marinos Metaxas: PET/CT, Biomedical Research Foundation of the Academy of Athens, 4, Soranou Efesiou, Athens, Attica, 115 27, Greece
12
Xue Z, Li P, Zhang L, Lu X, Zhu G, Shen P, Ali Shah SA, Bennamoun M. Multi-Modal Co-Learning for Liver Lesion Segmentation on PET-CT Images. IEEE Trans Med Imaging 2021; 40:3531-3542. [PMID: 34133275] [DOI: 10.1109/tmi.2021.3089702]
Abstract
Liver lesion segmentation is an essential process to assist doctors in hepatocellular carcinoma diagnosis and treatment planning. Multi-modal positron emission tomography and computed tomography (PET-CT) scans are widely utilized for this purpose due to their complementary feature information. However, current methods ignore the interaction of information across the two modalities during feature extraction, omit the co-learning of feature maps of different resolutions, and do not ensure that shallow and deep features complement each other sufficiently. In this paper, our proposed model achieves feature interaction across multi-modal channels by sharing the down-sampling blocks between two encoding branches to eliminate misleading features. Furthermore, we combine feature maps of different resolutions to derive spatially varying fusion maps and enhance the lesion information. In addition, we introduce a similarity loss function as a consistency constraint for cases where the predictions of the separate refactoring branches for the same regions diverge. We evaluate our model for liver tumor segmentation on a PET-CT scan dataset, compare our method with baseline multi-modal techniques (multi-branch, multi-channel and cascaded networks), and demonstrate that our method achieves significantly higher accuracy than the baseline models.
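The similarity loss used as a consistency constraint between the two branches can be written down directly. The sketch below assumes a simple mean-squared-disagreement form, which may differ from the paper's exact formulation:

```python
import numpy as np

def similarity_loss(pred_a, pred_b):
    """Mean squared disagreement between the predictions of two branches
    for the same region: zero when they agree, positive when they diverge."""
    return float(np.mean((pred_a - pred_b) ** 2))

# Toy 2x2 prediction maps from two refactoring branches (values invented).
pred_pet_branch = np.array([[0.9, 0.1], [0.8, 0.2]])
pred_ct_branch  = np.array([[0.7, 0.1], [0.8, 0.4]])
identical       = pred_pet_branch.copy()

print(similarity_loss(pred_pet_branch, identical))        # 0.0: perfect agreement
print(similarity_loss(pred_pet_branch, pred_ct_branch))   # 0.02: disagreement penalised
```

Adding this term to the training objective pushes the two branches toward consistent predictions for the same anatomy.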
13
Bi L, Fulham M, Li N, Liu Q, Song S, Dagan Feng D, Kim J. Recurrent feature fusion learning for multi-modality PET-CT tumor segmentation. Comput Methods Programs Biomed 2021; 203:106043. [PMID: 33744750] [DOI: 10.1016/j.cmpb.2021.106043]
Abstract
BACKGROUND AND OBJECTIVE [18F]-fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) is now the preferred imaging modality for staging many cancers. PET images characterize tumoral glucose metabolism while CT depicts the complementary anatomical localization of the tumor. Automatic tumor segmentation is an important step in image analysis in computer-aided diagnosis systems. Recently, fully convolutional networks (FCNs), with their ability to leverage annotated datasets and extract image feature representations, have become the state of the art in tumor segmentation. There are limited FCN-based methods that support multi-modality images, and current methods have primarily focused on the fusion of multi-modality image features at various stages, i.e., early fusion, where the multi-modality image features are fused prior to the FCN; late fusion, where the resultant features are fused; and hyper-fusion, where multi-modality image features are fused across multiple image feature scales. Early- and late-fusion methods, however, have inherent, limited freedom to fuse complementary multi-modality image features. Hyper-fusion methods learn different image features across different image feature scales, which can result in inaccurate segmentations, in particular in situations where the tumors have heterogeneous textures. METHODS We propose a recurrent fusion network (RFN), which consists of multiple recurrent fusion phases to progressively fuse the complementary multi-modality image features with intermediary segmentation results derived at the individual recurrent fusion phases: (1) the recurrent fusion phases iteratively learn the image features and then refine the subsequent segmentation results; and (2) the intermediary segmentation results allow our method to focus on learning the multi-modality image features around these intermediary segmentation results, which minimizes the risk of inconsistent feature learning.
RESULTS We evaluated our method on two pathologically proven non-small cell lung cancer PET-CT datasets. We compared our method to the commonly used fusion methods (early fusion, late fusion and hyper-fusion) and to state-of-the-art PET-CT tumor segmentation methods on various network backbones (ResNet, DenseNet and 3D-UNet). Our results show that the RFN provides more accurate segmentation compared to the existing methods and is generalizable to different datasets. CONCLUSIONS We show that learning through multiple recurrent fusion phases allows the iterative re-use of multi-modality image features, which refines tumor segmentation results. We also identify that our RFN produces consistent segmentation results across different network architectures.
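The recurrent fusion idea, where each phase fuses the modality features, produces an intermediary segmentation, and the next phase concentrates on the regions flagged so far, can be illustrated with a deliberately tiny numpy sketch. The 1-D "features", the focus weighting, and the 0.3 threshold are all invented for illustration:

```python
import numpy as np

# Toy 4-voxel "scan": PET shows high uptake and CT shows contrast on the
# two middle (tumor) voxels; values are synthetic.
pet = np.array([0.2, 0.9, 0.8, 0.1])
ct  = np.array([0.3, 0.6, 0.7, 0.2])

mask = np.zeros(4)
for phase in range(3):
    focus = 0.5 + 0.5 * mask            # emphasise regions flagged so far
    fused = focus * (pet + ct) / 2.0    # fuse the two modality features
    mask = (fused > 0.3).astype(float)  # intermediary segmentation result

print(mask)   # the tumor voxels (indices 1 and 2) end up segmented
```

Each pass re-uses the fused features around the previous intermediary result, which is the mechanism the RFN scales up with learned convolutional phases.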
Affiliation(s)
- Lei Bi, Jinman Kim: School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia
- Michael Fulham: School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, NSW, Australia
- Nan Li, Qiufang Liu, Shaoli Song: Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
- David Dagan Feng: School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia; Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
14
Abstract
Aim: To provide a historical and global picture of research concerning lung nodules, compare the contributions of major countries and explore research trends over the past 10 years. Methods: A bibliometric analysis of publications from Scopus (1970-2020) and Web of Science (2011-2020). Results: Publications about pulmonary nodules showed an enormous growth trend from 1970 to 2020. There is a high level of collaboration among the 20 most productive countries and regions, with the USA located at the center of the collaboration network. The keywords 'deep learning', 'artificial intelligence' and 'machine learning' are current hotspots. Conclusions: Abundant research has focused on pulmonary nodules. Deep learning is emerging as a promising tool for lung cancer diagnosis and management.
Affiliation(s)
- Ning Li, Lei Wang, Yaoda Hu, Wei Han, Jingmei Jiang: Department of Epidemiology & Biostatistics, Institute of Basic Medicine Sciences, Chinese Academy of Medical Sciences/School of Basic Medicine, Peking Union Medical College, Beijing, 100005, China
- Fuling Zheng, Wei Song: Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
15
Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. Multimed Tools Appl 2021; 80:24365-24398. [PMID: 33841033] [PMCID: PMC8023554] [DOI: 10.1007/s11042-021-10707-4]
Abstract
Medical imaging plays a significant role in different clinical applications, such as medical procedures used for early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. Basics of the principles and implementations of artificial neural networks and deep learning are essential for understanding medical image analysis in computer vision. The Deep Learning Approach (DLA) in medical image analysis is emerging as a fast-growing research field. DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA, which delivers promising medical imaging applications. Most DLA implementations concentrate on X-ray images, computerized tomography, mammography images, and digital histopathology images. It provides a systematic review of articles on the classification, detection, and segmentation of medical images based on DLA. This review guides researchers to think of appropriate changes in medical image analysis based on DLA.
Affiliation(s)
- Muralikrishna Puttagunta, S. Ravi: Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
16
Quijano‐Cuervo LG, Méndez‐Castro FE, Rao D, Escobar Sarria F, Negrete‐Yankelevich S. Spatial relationships between spiders and their host vascular epiphytes within shade trees in a Mexican coffee plantation. Biotropica 2021. [DOI: 10.1111/btp.12941]
Affiliation(s)
- Dinesh Rao: Instituto de Biotecnología y Ecología Aplicada (INBIOTECA), Universidad Veracruzana, Xalapa, México
17
Piñeiro-Fiel M, Moscoso A, Pubul V, Ruibal Á, Silva-Rodríguez J, Aguiar P. A Systematic Review of PET Textural Analysis and Radiomics in Cancer. Diagnostics (Basel) 2021; 11:380. [PMID: 33672285] [DOI: 10.3390/diagnostics11020380]
Abstract
Background: Although many works have supported the utility of PET radiomics, several authors have raised concerns over the robustness and replicability of the results. This study aimed to perform a systematic review on the topic of PET radiomics and the used methodologies. Methods: PubMed was searched up to 15 October 2020. Original research articles based on human data specifying at least one tumor type and PET image were included, excluding those that apply only first-order statistics and those including fewer than 20 patients. Each publication, cancer type, objective and several methodological parameters (number of patients and features, validation approach, among other things) were extracted. Results: A total of 290 studies were included. Lung (28%) and head and neck (24%) were the most studied cancers. The most common objective was prognosis/treatment response (46%), followed by diagnosis/staging (21%), tumor characterization (18%) and technical evaluations (15%). The average number of patients included was 114 (median = 71; range 20–1419), and the average number of high-order features calculated per study was 31 (median = 26, range 1–286). Conclusions: PET radiomics is a promising field, but the number of patients in most publications is insufficient, and very few papers perform in-depth validations. The role of standardization initiatives will be crucial in the upcoming years.
18
Zhang G, Yang Z, Gong L, Jiang S, Wang L, Zhang H. Classification of lung nodules based on CT images using squeeze-and-excitation network and aggregated residual transformations. Radiol Med 2020; 125:374-383. [PMID: 31916105] [DOI: 10.1007/s11547-019-01130-9]
Abstract
Lung cancer is a leading cause of cancer death worldwide. Early lung nodule diagnosis has great significance for treating lung cancer and increasing patient survival. In this paper, we present a novel method to classify malignant versus benign lung nodules on CT images using a squeeze-and-excitation network with aggregated residual transformations (SE-ResNeXt). The state-of-the-art SE-ResNeXt module, which integrates the advantages of SENet for feature recalibration and ResNeXt for feature reuse, has great ability in boosting feature discriminability in imaging pattern recognition. The method is evaluated on the publicly available LUng Nodule Analysis 2016 (LUNA16) database with 1004 (450 malignant and 554 benign) nodules, achieving an area under the receiver operating characteristic curve (AUC) of 0.9563 and an accuracy of 91.67%. The promising results demonstrate that our method has strong robustness in the classification of nodules. The method has the potential to help radiologists better interpret diagnostic data and differentiate benign from malignant lung nodules on CT images in clinical practice. To the best of our knowledge, the effectiveness of SE-ResNeXt on lung nodule classification has not been extensively explored.
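The squeeze-and-excitation recalibration at the heart of SE-ResNeXt can be sketched in a few lines of numpy; the toy weights, channel count, and reduction ratio below are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

def se_recalibrate(feat, w1, w2):
    """Squeeze-and-excitation: squeeze (global average pool per channel),
    excite (two small fully connected layers + sigmoid gate), then rescale
    every channel of the feature map by its learned importance."""
    squeeze = feat.mean(axis=(1, 2))                 # (C,) channel descriptors
    hidden  = np.maximum(0, w1 @ squeeze)            # ReLU bottleneck
    scale   = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # per-channel gate in (0, 1)
    return feat * scale[:, None, None]

C, H, W = 4, 8, 8
feat = rng.normal(size=(C, H, W))   # a toy 4-channel feature map
w1 = rng.normal(size=(2, C))        # reduction ratio 2 for the toy example
w2 = rng.normal(size=(C, 2))
out = se_recalibrate(feat, w1, w2)
print(out.shape)   # (4, 8, 8): same shape, channels rescaled
```

The block changes no spatial structure; it only reweights channels, which is why it drops into residual/ResNeXt blocks so easily.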
Affiliation(s)
- Guobin Zhang, Zhiyong Yang, Li Gong, Lu Wang, Hongyun Zhang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Shan Jiang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China; Centre for Advanced Mechanisms and Robotics, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin, 300350, China
19
Li D, Mikela Vilmun B, Frederik Carlsen J, Albrecht-Beste E, Ammitzbøl Lauridsen C, Bachmann Nielsen M, Lindskov Hansen K. The Performance of Deep Learning Algorithms on Automatic Pulmonary Nodule Detection and Classification Tested on Different Datasets That Are Not Derived from LIDC-IDRI: A Systematic Review. Diagnostics (Basel) 2019; 9:E207. [PMID: 31795409] [DOI: 10.3390/diagnostics9040207]
Abstract
The aim of this study was to systematically review the performance of deep learning technology in detecting and classifying pulmonary nodules on computed tomography (CT) scans that were not from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. Furthermore, we explored the difference in performance when the deep learning technology was applied to test datasets different from the training datasets. Only peer-reviewed, original research articles utilizing deep learning technology were included in this study, and only results from testing on datasets other than the LIDC-IDRI were included. We searched a total of six databases: EMBASE, PubMed, Cochrane Library, the Institute of Electrical and Electronics Engineers, Inc. (IEEE), Scopus, and Web of Science. This resulted in 1782 studies after duplicates were removed, and a total of 26 studies were included in this systematic review. Three studies explored the performance of pulmonary nodule detection only, 16 studies explored the performance of pulmonary nodule classification only, and 7 studies reported both pulmonary nodule detection and classification. Three different deep learning architectures were mentioned amongst the included studies: convolutional neural network (CNN), massive training artificial neural network (MTANN), and deep stacked denoising autoencoder extreme learning machine (SDAE-ELM). The studies reached classification accuracies of 68-99.6% and detection accuracies of 80.6-94%. The performance of deep learning technology in studies using different test and training datasets was comparable to that of studies using the same type of test and training datasets. In conclusion, deep learning was able to achieve high levels of accuracy, sensitivity, and/or specificity in detecting and/or classifying nodules when applied to pulmonary CT scans not from the LIDC-IDRI database.
20
Domingues I, Pereira G, Martins P, Duarte H, Santos J, Abreu PH. Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2020; 53:4093-4160. [DOI: 10.1007/s10462-019-09788-3]
21
Nasrullah N, Sang J, Alam MS, Mateen M, Cai B, Hu H. Automated Lung Nodule Detection and Classification Using Deep Learning Combined with Multiple Strategies. Sensors (Basel) 2019; 19:3722. [PMID: 31466261] [PMCID: PMC6749467] [DOI: 10.3390/s19173722]
Abstract
Lung cancer is one of the major causes of cancer-related deaths due to its aggressive nature and delayed detection at advanced stages. Early detection of lung cancer is very important for the survival of an individual and is a significantly challenging problem. Generally, chest radiographs (X-ray) and computed tomography (CT) scans are used initially for the diagnosis of malignant nodules; however, the possible existence of benign nodules leads to erroneous decisions. At early stages, benign and malignant nodules show very close resemblance to each other. In this paper, a novel deep learning-based model with multiple strategies is proposed for the precise diagnosis of malignant nodules. Due to the recent achievements of deep convolutional neural networks (CNN) in image analysis, we have used two deep three-dimensional (3D) customized mixed link network (CMixNet) architectures for lung nodule detection and classification, respectively. Nodule detection was performed through Faster R-CNN on efficiently learned features from CMixNet and a U-Net-like encoder-decoder architecture. Classification of the nodules was performed through a gradient boosting machine (GBM) on the learned features from the designed 3D CMixNet structure. To reduce false positives and misdiagnosis results due to different types of errors, the final decision was made in connection with physiological symptoms and clinical biomarkers. With the advent of the internet of things (IoT) and electro-medical technology, wireless body area networks (WBANs) provide continuous monitoring of patients, which helps in the diagnosis of chronic diseases, especially metastatic cancers. The deep learning model for nodule detection and classification, combined with clinical factors, helps in the reduction of misdiagnosis and false positive (FP) results in early-stage lung cancer diagnosis. The proposed system was evaluated on LIDC-IDRI datasets in terms of sensitivity (94%) and specificity (91%), and better results were obtained compared to the existing methods.
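The paper's final decision step combines the imaging model's output with physiological symptoms and clinical biomarkers. A minimal sketch of such a combination rule follows; the weighting scheme, factor names, and threshold are all assumptions for illustration, not the authors' model:

```python
# Illustrative final decision: blend an imaging malignancy score with
# clinical evidence so borderline imaging findings need corroboration.
def final_decision(cnn_score, smoking_pack_years, biomarker_elevated,
                   threshold=0.6):
    """Return True (flag as malignant) when the weighted combination of the
    imaging score and a simple clinical score clears the threshold."""
    clinical = 0.5 * min(smoking_pack_years / 40.0, 1.0) \
             + 0.5 * (1.0 if biomarker_elevated else 0.0)
    combined = 0.7 * cnn_score + 0.3 * clinical
    return combined >= threshold

# A borderline imaging score is rejected without clinical support...
print(final_decision(0.62, smoking_pack_years=0, biomarker_elevated=False))
# ...but accepted when clinical factors corroborate it.
print(final_decision(0.62, smoking_pack_years=40, biomarker_elevated=True))
```

The design intent this mirrors is FP reduction: clinical context vetoes weak imaging evidence rather than overriding strong evidence.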
Affiliation(s)
- Nasrullah Nasrullah: Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China; School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China; Department of Software Engineering, Foundation University Islamabad, Islamabad 44000, Pakistan
- Jun Sang, Muhammad Mateen, Bin Cai, Haibo Hu: Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China; School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
- Mohammad S Alam: Frank H. Dotterweich College of Engineering, Texas A&M University-Kingsville, Kingsville, TX 78363-8202, USA
22
Huang W, Xue Y, Wu Y. A CAD system for pulmonary nodule prediction based on deep three-dimensional convolutional neural networks and ensemble learning. PLoS One 2019; 14:e0219369. [PMID: 31299053] [PMCID: PMC6625700] [DOI: 10.1371/journal.pone.0219369]
Abstract
Background Detection of pulmonary nodules is an important aspect of an automatic detection system. In computer-aided diagnosis (CAD) systems, the ability to detect pulmonary nodules is highly important and plays an important role in the diagnosis and early treatment of lung cancer. Currently, the detection of pulmonary nodules depends mainly on doctor experience, which varies. This paper aims to address the challenge of pulmonary nodule detection more effectively. Methods A method for detecting pulmonary nodules based on an improved neural network is presented in this paper. Nodules are clusters of tissue with a diameter of 3 mm to 30 mm in the pulmonary parenchyma. Because pulmonary nodules are similar to other lung structures and have a low density, false positive nodules often occur. Thus, our team proposed an improved convolutional neural network (CNN) framework to detect nodules. First, a non-sharpening mask is used to enhance the nodules in computed tomography (CT) images; then, CT images of 512×512 pixels are segmented into smaller images of 96×96 pixels. Second, in the 96×96 pixel images, which contain or exclude pulmonary nodules, the plaques corresponding to positive and negative samples are segmented. Third, the CT images segmented into 96×96 pixels are down-sampled to 64×64 and 32×32 pixels, respectively. Fourth, an improved fusion neural network structure is constructed that consists of three three-dimensional convolutional neural networks, designated CNN-1, CNN-2, and CNN-3, to detect false positive pulmonary nodules. The networks' input sizes are 32×32×32, 64×64×64, and 96×96×96 and they comprise 5, 7, and 9 layers, respectively. Finally, we use an AdaBoost classifier to fuse the results of CNN-1, CNN-2, and CNN-3. We call this new neural network framework the Amalgamated-Convolutional Neural Network (A-CNN) and use it to detect pulmonary nodules.
Findings Our team trained A-CNN using the LUNA16 and Ali Tianchi datasets and evaluated its performance using the LUNA16 dataset. We discarded nodules less than 5 mm in diameter. When the average number of false positives per scan was 0.125 and 0.25, the sensitivity of A-CNN reached as high as 81.7% and 85.1%, respectively.
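The final AdaBoost fusion of the three sub-networks can be illustrated with the classic AdaBoost voting rule; the error rates and votes below are invented for the sketch, not values from the paper:

```python
import numpy as np

def adaboost_weight(error):
    """Classic AdaBoost voting weight: 0.5 * ln((1 - err) / err).
    More accurate sub-networks get a louder voice in the fused decision."""
    return 0.5 * np.log((1.0 - error) / error)

# Hypothetical validation error rates for CNN-1, CNN-2, CNN-3.
errors = [0.20, 0.30, 0.40]
alphas = [adaboost_weight(e) for e in errors]

# Votes in {-1, +1}: +1 = nodule, -1 = false positive. Here only the most
# accurate network says "nodule", yet its weight carries the fused vote.
votes = np.array([+1, -1, -1])
fused = np.sign(np.dot(alphas, votes))
print(fused)   # 1.0: the weighted vote sides with CNN-1
```

This shows why weighted fusion differs from simple majority voting: the ensemble can follow its best member rather than the raw head count.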
Affiliation(s)
- Wenkai Huang: Center for Research on Leading Technology of Special Equipment, School of Mechanical & Electrical Engineering, Guangzhou University, Guangzhou, P.R. China
- Yihao Xue: School of Mechanical & Electrical Engineering, Guangzhou University, Guangzhou, P.R. China
- Yu Wu: Laboratory Center, Guangzhou University, Guangzhou, P.R. China
23
Weikert T, Akinci D'Antonoli T, Bremerich J, Stieltjes B, Sommer G, Sauter AW. Evaluation of an AI-Powered Lung Nodule Algorithm for Detection and 3D Segmentation of Primary Lung Tumors. Contrast Media Mol Imaging 2019; 2019:1545747. [PMID: 31354393] [DOI: 10.1155/2019/1545747]
Abstract
Automated detection and segmentation is a prerequisite for the deployment of image-based secondary analyses, especially for lung tumors. However, currently only applications for lung nodules ≤3 cm exist. Therefore, we tested the performance of a fully automated AI-based lung nodule algorithm for detection and 3D segmentation of primary lung tumors in the context of tumor staging using the CT component of FDG-PET/CT and including all T-categories (T1-T4). FDG-PET/CTs of 320 patients with histologically confirmed lung cancer performed between 01/2010 and 06/2016 were selected. First, the main primary lung tumor within each scan was manually segmented using the CT component of the PET/CTs as reference. Second, the CT series were transferred to a platform with AI-based algorithms trained on chest CTs for detection and segmentation of lung nodules. Detection and segmentation performance were analyzed. Factors influencing detection rates were explored with binominal logistic regression and radiomic analysis. We also processed 94 PET/CTs negative for pulmonary nodules to investigate frequency and reasons of false-positive findings. The ratio of detected tumors was best in the T1-category (90.4%) and decreased continuously: T2 (70.8%), T3 (29.4%), and T4 (8.8%). Tumor contact with the pleura was a strong predictor of misdetection. Segmentation performance was excellent for T1 tumors (r = 0.908, p < 0.001) and tumors without pleural contact (r = 0.971, p < 0.001). Volumes of larger tumors were systematically underestimated. There were 0.41 false-positive findings per exam. The algorithm tested facilitates a reliable detection and 3D segmentation of T1/T2 lung tumors on FDG-PET/CTs. The detection and segmentation of more advanced lung tumors is currently imprecise due to the conception of the algorithm for lung nodules <3 cm. 
Future efforts should therefore focus on this collective to facilitate segmentation of all tumor types and sizes to bridge the gap between CAD applications for screening and staging of lung cancer.
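The study's two headline metrics, per-T-category detection rate and the Pearson correlation between manually and automatically segmented volumes, can be sketched as follows. This is a toy illustration on made-up numbers, not the study's data:

```python
import numpy as np

# Hypothetical per-tumor records: T-category, whether the algorithm detected
# the tumor, and tumor volumes (mL) from manual vs. automated segmentation.
categories = np.array(["T1", "T1", "T1", "T2", "T2", "T3", "T4"])
detected   = np.array([1, 1, 0, 1, 0, 0, 0], dtype=bool)
vol_manual = np.array([1.2, 2.3, 3.1, 8.4, 12.0, 40.0, 95.0])
vol_auto   = np.array([1.1, 2.5, 0.0, 7.9, 0.0, 0.0, 0.0])

def detection_rate(cat):
    """Fraction of tumors in a T-category that the algorithm found."""
    mask = categories == cat
    return detected[mask].mean()

for cat in ["T1", "T2", "T3", "T4"]:
    print(cat, f"{detection_rate(cat):.2f}")

# Volume agreement (Pearson r) is meaningful only for detected tumors.
r = np.corrcoef(vol_manual[detected], vol_auto[detected])[0, 1]
print(f"Pearson r = {r:.3f}")
```

With real data, one would additionally regress misdetection on candidate predictors (e.g., pleural contact), as the authors did with binomial logistic regression.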
24
Zhang G, Yang Z, Gong L, Jiang S, Wang L. Classification of benign and malignant lung nodules from CT images based on hybrid features. Phys Med Biol 2019; 64:125011. [DOI: 10.1088/1361-6560/ab2544]
25
Abstract
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, a high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB) techniques, and multichannel (MC) techniques) and segmentation. 
Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
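The spatially varying fusion map described above can be sketched in a few lines. This is a minimal numpy stand-in, not the authors' network: per-pixel scores for each modality are turned into softmax weights, which then blend the modality-specific feature maps location by location:

```python
import numpy as np

# Stand-ins for CNN feature maps of the PET and CT branches.
rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
feat_pet = rng.normal(size=(H, W, C))
feat_ct  = rng.normal(size=(H, W, C))

# A 1x1-conv-like scoring: one scalar score per modality per pixel.
w_pet, w_ct = rng.normal(size=C), rng.normal(size=C)
score = np.stack([feat_pet @ w_pet, feat_ct @ w_ct])   # shape (2, H, W)

# Softmax over the modality axis: the spatially varying fusion map.
fusion = np.exp(score) / np.exp(score).sum(axis=0)

# Weight each modality's features by its map and sum to get the fused features.
fused = fusion[0, ..., None] * feat_pet + fusion[1, ..., None] * feat_ct
print(fused.shape)
```

In the paper the weights are learned end-to-end with supervision; here they are random, but the mechanics (per-location weights that sum to 1 across modalities) are the same.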
26
Zhang G, Yang Z, Gong L, Jiang S, Wang L, Cao X, Wei L, Zhang H, Liu Z. An Appraisal of Nodule Diagnosis for Lung Cancer in CT Images. J Med Syst 2019; 43:181. [PMID: 31093830] [DOI: 10.1007/s10916-019-1327-0]
Abstract
As "the second eyes" of radiologists, computer-aided diagnosis systems play a significant role in nodule detection and diagnosis for lung cancer. In this paper, we aim to provide a systematic survey of state-of-the-art techniques (both traditional and deep learning techniques) for nodule diagnosis from computed tomography images. This review first introduces the current progress and the popular structure used for nodule diagnosis. In particular, we provide a detailed overview of the five major stages in computer-aided diagnosis systems: data acquisition, nodule segmentation, feature extraction, feature selection, and nodule classification. Second, we provide a detailed report of the selected works and a comprehensive comparison among them. The selected papers are from the IEEE Xplore, Science Direct, PubMed, and Web of Science databases up to December 2018. Third, we discuss and summarize the better-performing techniques used in nodule diagnosis and outline remaining challenges in this field, such as improving the area under the receiver operating characteristic curve and accuracy, developing new deep learning-based diagnosis techniques, building efficient feature sets (fusing traditional and deep features), developing high-quality labeled databases of malignant and benign nodules, and promoting cooperation between medical organizations and academic institutions.
Affiliation(s)
- Guobin Zhang, Zhiyong Yang, Li Gong, Shan Jiang, Lu Wang, Xi Cao, Lin Wei, Hongyun Zhang, Ziqi Liu: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Shan Jiang also: Centre for Advanced Mechanisms and Robotics, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin, 300350, China
27
Basso Dias A, Zanon M, Altmayer S, Sartori Pacini G, Henz Concatto N, Watte G, Garcez A, Mohammed TL, Verma N, Medeiros T, Marchiori E, Irion K, Hochhegger B. Fluorine 18-FDG PET/CT and Diffusion-weighted MRI for Malignant versus Benign Pulmonary Lesions: A Meta-Analysis. Radiology 2018; 290:525-534. [PMID: 30480492] [DOI: 10.1148/radiol.2018181159]
Abstract
Purpose: To perform a meta-analysis of the literature comparing the diagnostic performance of fluorine 18 fluorodeoxyglucose (FDG) PET/CT and diffusion-weighted (DW) MRI in the differentiation of malignant and benign pulmonary nodules and masses.
Materials and Methods: Published English-language studies on the diagnostic accuracy of PET/CT and/or DW MRI in the characterization of pulmonary lesions were searched in relevant databases through December 2017. The primary focus was on studies in which joint DW MRI and PET/CT were performed in the entire study population, to reduce interstudy heterogeneity. For DW MRI, the lesion-to-spinal cord signal intensity ratio and apparent diffusion coefficient were evaluated; for PET/CT, the maximum standardized uptake value was evaluated. The pooled sensitivities, specificities, diagnostic odds ratios, and areas under the receiver operating characteristic curve (AUCs) for PET/CT and DW MRI were determined along with 95% confidence intervals (CIs).
Results: Thirty-seven studies met the inclusion criteria, with a total of 4224 participants and 4463 lesions (3090 malignant lesions [69.2%]). In the primary analysis of joint DW MRI and PET/CT studies (n = 6), DW MRI had a pooled sensitivity and specificity of 83% (95% CI: 75%, 89%) and 91% (95% CI: 80%, 96%), respectively, compared with 78% (95% CI: 70%, 84%) (P = .01 vs DW MRI) and 81% (95% CI: 72%, 88%) (P = .056 vs DW MRI) for PET/CT. DW MRI yielded an AUC of 0.93 (95% CI: 0.90, 0.95), versus 0.86 (95% CI: 0.83, 0.89) for PET/CT (P = .001). The diagnostic odds ratio of DW MRI (50 [95% CI: 19, 132]) was superior to that of PET/CT (15 [95% CI: 7, 32]) (P = .006).
Conclusion: The diagnostic performance of diffusion-weighted MRI is comparable or superior to that of FDG PET/CT in the differentiation of malignant and benign pulmonary lesions.
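The diagnostic odds ratio reported above follows directly from the pooled sensitivity and specificity: DOR = (sens / (1 − sens)) / ((1 − spec) / spec). Plugging in the pooled values gives approximately 49 and 15, consistent with the reported 50 and 15 given rounding of the pooled inputs:

```python
def diagnostic_odds_ratio(sens, spec):
    """Odds of a positive test in the diseased vs. the non-diseased."""
    return (sens / (1 - sens)) / ((1 - spec) / spec)

dor_dwmri = diagnostic_odds_ratio(0.83, 0.91)  # pooled DW MRI values
dor_petct = diagnostic_odds_ratio(0.78, 0.81)  # pooled PET/CT values
print(round(dor_dwmri), round(dor_petct))
```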
Affiliation(s)
- Adriano Basso Dias, Matheus Zanon, Stephan Altmayer, Gabriel Sartori Pacini, Natália Henz Concatto, Guilherme Watte, Anderson Garcez, Tan-Lucien Mohammed, Nupur Verma, Tássia Medeiros, Edson Marchiori, Klaus Irion, Bruno Hochhegger
- From the Medical Imaging Research Laboratory, LABIMED, Department of Radiology, Pavilhão Pereira Filho Hospital, Irmandade Santa Casa de Misericórdia de Porto Alegre, Av Independência 75, Porto Alegre, Brazil 90020160 (A.B.D., M.Z., S.A., G.S.P., G.W., B.H.); Department of Diagnostic Methods, Federal University of Health Sciences of Porto Alegre, Porto Alegre, Brazil (A.B.D., M.Z., S.A., G.S.P., B.H.); Department of Radiology, Hospital de Clínicas de Porto Alegre, Porto Alegre, Brazil (N.H.C.); Post-graduate Program in Collective Health, University of Vale do Rio dos Sinos, São Leopoldo, Brazil (A.G.); Department of Radiology, College of Medicine, University of Florida, Gainesville, Fla (T.L.M., N.V.); Department of Radiology, Pontificia Universidade Católica do Rio Grande do Sul, Porto Alegre, Brazil (T.M., B.H.); Department of Radiology, Federal University of Rio de Janeiro Medical School, Rio de Janeiro, Brazil (E.M.); and Department of Radiology, Central Manchester University Hospitals, NHS Foundation Trust-Trust Headquarters, Cobbett House, Manchester Royal Infirmary, Manchester, England (K.I.)
28
Xiao X, Zhao J, Qiang Y, Wang H, Xiao Y, Zhang X, Zhang Y. An Automated Segmentation Method for Lung Parenchyma Image Sequences Based on Fractal Geometry and Convex Hull Algorithm. Applied Sciences 2018; 8:832. [DOI: 10.3390/app8050832]
29

30
Giacomini G, Pavan ALM, Altemani JMC, Duarte SB, Fortaleza CMCB, Miranda JRDA, de Pina DR. Computed tomography-based volumetric tool for standardized measurement of the maxillary sinus. PLoS One 2018; 13:e0190770. [PMID: 29304130] [PMCID: PMC5755892] [DOI: 10.1371/journal.pone.0190770]
Abstract
Volume measurements of the maxillary sinus may be useful for identifying diseases affecting the paranasal sinuses. However, the literature shows a lack of consensus among studies measuring this volume, which may be attributable to differences in computed tomography data acquisition techniques, segmentation methods, and focuses of investigation, among other reasons. Furthermore, methods for volumetrically quantifying the maxillary sinus are commonly manual or semiautomated; these require substantial user expertise and are time-consuming. The purpose of the present study was to develop an automated tool for quantifying the total and air-free volume of the maxillary sinus based on computed tomography images. The quantification tool seeks to standardize maxillary sinus volume measurements, thus allowing better comparisons and determinations of factors that influence maxillary sinus size. The automated tool utilized image processing techniques (watershed, threshold, and morphological operators). The maxillary sinus volume was quantified in 30 patients. To evaluate the accuracy of the automated tool, the results were compared with manual segmentation performed by an experienced radiologist using a standard procedure. The mean percent differences between the automated and manual methods were 7.19% ± 5.83% and 6.93% ± 4.29% for total and air-free maxillary sinus volume, respectively. Linear regression and Bland-Altman statistics showed good agreement and low dispersion between the two methods. The present automated tool for maxillary sinus volume assessment was rapid, reliable, robust, accurate, and reproducible and may be applied in clinical practice. The tool may be used to standardize measurements of maxillary volume. Such standardization is extremely important for allowing comparisons between studies, providing a better understanding of the role of the maxillary sinus, and determining the factors that influence maxillary sinus size under normal and pathological conditions.
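The "total vs. air-free volume" step can be illustrated with a toy numpy sketch (not the authors' pipeline): given a CT sub-volume in Hounsfield units and a binary sinus mask, a simple threshold separates air from soft tissue/fluid. The −500 HU cutoff and the voxel spacing below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
hu = rng.uniform(-1000, 100, size=(10, 10, 10))   # synthetic HU values
sinus_mask = np.zeros_like(hu, dtype=bool)
sinus_mask[2:8, 2:8, 2:8] = True                   # pretend-segmented sinus

voxel_volume_mm3 = 0.5 * 0.5 * 1.0                 # assumed voxel spacing

air = (hu < -500) & sinus_mask                     # air voxels inside the sinus
total_volume = sinus_mask.sum() * voxel_volume_mm3
air_free_volume = (sinus_mask & ~air).sum() * voxel_volume_mm3
print(total_volume, air_free_volume)
```

In the actual tool, the mask itself comes from watershed and morphological operators rather than being hand-placed as here.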
Affiliation(s)
- Guilherme Giacomini, Ana Luiza Menegatti Pavan: Instituto de Biociências de Botucatu, Universidade Estadual Paulista (IBB-UNESP), Botucatu, São Paulo, Brazil
- Sergio Barbosa Duarte: Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Rio de Janeiro, Brazil
- Diana Rodrigues de Pina: Faculdade de Medicina de Botucatu, Universidade Estadual Paulista (FMB-UNESP), Botucatu, São Paulo, Brazil
31
Pan L, Qiang Y, Yuan J, Wu L. Rapid Retrieval of Lung Nodule CT Images Based on Hashing and Pruning Methods. Biomed Res Int 2016; 2016:3162649. [PMID: 27995140] [DOI: 10.1155/2016/3162649]
Abstract
The similarity-based retrieval of lung nodule computed tomography (CT) images is an important task in the computer-aided diagnosis of lung lesions. It can provide similar clinical cases for physicians and help them make reliable clinical diagnostic decisions. However, when handling large-scale lung images with a general-purpose computer, traditional image retrieval methods may not be efficient. In this paper, a new retrieval framework based on a hashing method for lung nodule CT images is proposed. This method can translate high-dimensional image features into a compact hash code, so the retrieval time and required memory space can be reduced greatly. Moreover, a pruning algorithm is presented to further improve the retrieval speed, and a pruning-based decision rule is presented to improve the retrieval precision. Finally, the proposed retrieval method is validated on 2,450 lung nodule CT images selected from the public Lung Image Database Consortium (LIDC) database. The experimental results show that the proposed pruning algorithm effectively reduces the retrieval time of lung nodule CT images and improves the retrieval precision. In addition, the retrieval framework is evaluated by differentiating benign and malignant nodules, and the classification accuracy can reach 86.62%, outperforming other commonly used classification methods.
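The core idea, compact binary codes compared by Hamming distance instead of raw high-dimensional features, can be sketched with random-projection hashing (a generic technique, not necessarily the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(42)
n_db, dim, n_bits = 1000, 128, 32

db_features = rng.normal(size=(n_db, dim))   # stand-in for image feature vectors
planes = rng.normal(size=(dim, n_bits))      # random hyperplanes

def hash_code(x):
    """One bit per hyperplane: which side of the plane the feature falls on."""
    return (x @ planes > 0).astype(np.uint8)

db_codes = hash_code(db_features)            # compact codes for the whole database

def retrieve(query, k=5):
    """Rank the database by Hamming distance to the query's code."""
    q = hash_code(query)
    dist = (db_codes != q).sum(axis=1)
    return np.argsort(dist)[:k]

nearest = retrieve(db_features[7])           # query with a known database item
print(nearest)
```

The pruning step in the paper goes further, skipping whole groups of candidates before the Hamming comparison; the sketch above shows only the hashing half.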
32
Liao X, Zhao J, Jiao C, Lei L, Qiang Y, Cui Q. A Segmentation Method for Lung Parenchyma Image Sequences Based on Superpixels and a Self-Generating Neural Forest. PLoS One 2016; 11:e0160556. [PMID: 27532214] [PMCID: PMC4988714] [DOI: 10.1371/journal.pone.0160556]
Abstract
Background: Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images at the top and bottom of the lung and images that contain lung nodules.
Method: Our proposed method first uses the position of lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The self-generating neural forest (SGNF), optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences.
Results: Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds per dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences.
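The superpixel idea behind SLIC-style methods can be illustrated with plain k-means over combined intensity + spatial features, so nearby, similar pixels form compact regions that later stages classify. This is a toy illustration of the general principle, not the paper's GSLIC or SGNF:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, k = 20, 20, 4
img = rng.random((H, W))                       # synthetic grayscale slice

# Feature = (intensity, scaled coordinates); the spatial term enforces compactness.
ys, xs = np.mgrid[0:H, 0:W]
feats = np.stack([img.ravel(), ys.ravel() / H, xs.ravel() / W], axis=1)

# Plain k-means: assign each pixel to its nearest center, then update centers.
centers = feats[rng.choice(H * W, k, replace=False)]
for _ in range(10):
    d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    centers = np.array([feats[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])

superpixels = labels.reshape(H, W)
print(superpixels.shape)
```

GSLIC additionally exploits gradients and the sequential structure of adjacent slices, and the SGNF then clusters the resulting superpixels; neither refinement is shown here.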
Affiliation(s)
- Xiaolei Liao, Juanjuan Zhao, Lei Lei, Yan Qiang, Qiang Cui: College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, 030024, China
- Cheng Jiao: PET/CT Center of Shanxi Coal Central Hospital, Taiyuan, Shanxi, 030024, China
33
Chen X, Xu L, Wang W, Li X, Sun Y, Politis C. Computer-aided design and manufacturing of surgical templates and their clinical applications: a review. Expert Rev Med Devices 2016; 13:853-64. [DOI: 10.1080/17434440.2016.1218758]