1. Chang S, Gao Y, Pomeroy MJ, Bai T, Zhang H, Lu S, Pickhardt PJ, Gupta A, Reiter MJ, Gould ES, Liang Z. Exploring Dual-Energy CT Spectral Information for Machine Learning-Driven Lesion Diagnosis in Pre-Log Domain. IEEE Transactions on Medical Imaging 2023; 42:1835-1845. [PMID: 37022248] [PMCID: PMC10238622] [DOI: 10.1109/tmi.2023.3240847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
In this study, we propose a computer-aided diagnosis (CADx) framework under dual-energy spectral CT (DECT), called CADxDE, which operates directly on the transmission data in the pre-log domain to explore the spectral information for lesion diagnosis. CADxDE includes material identification and machine learning (ML) based CADx. Benefiting from DECT's capability of performing virtual monoenergetic imaging with the identified materials, the responses of different tissue types (e.g., muscle, water, and fat) in lesions at each energy can be explored by ML for CADx. Without losing essential factors in the DECT scan, a pre-log domain model-based iterative reconstruction is adopted to obtain decomposed material images, which are then used to generate the virtual monoenergetic images (VMIs) at n selected energies. While these VMIs have the same anatomy, their contrast distribution patterns contain rich information across the n energies for tissue characterization. Thus, a corresponding ML-based CADx is developed to exploit the energy-enhanced tissue features for differentiating malignant from benign lesions. Specifically, both an image-driven multi-channel three-dimensional convolutional neural network (CNN) and an extracted lesion feature-based ML CADx method are developed to show the feasibility of CADxDE. Results from three pathologically proven clinical datasets showed 4.01% to 14.25% higher AUC (area under the receiver operating characteristic curve) scores than the scores of both the conventional DECT data (high and low energy spectra separately) and the conventional CT data. The mean gain of more than 9.13% in AUC scores indicates that the energy spectral-enhanced tissue features from CADxDE have great potential to improve lesion diagnosis performance.
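As an illustration of the kind of multi-channel input described above, the following is a minimal sketch (not the authors' CADxDE implementation) of a 3D CNN that treats virtual monoenergetic images at n energies as input channels; the layer sizes, channel count, and patch size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiEnergy3DCNN(nn.Module):
    """Toy multi-channel 3D CNN: each input channel is a VMI at one energy."""
    def __init__(self, n_energies=4, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_energies, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),   # global pooling over the lesion volume
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, n_energies, depth, height, width)
        return self.classifier(self.features(x).flatten(1))

# Two hypothetical lesion volumes, four VMI energies, 32^3 voxels each
vmis = torch.randn(2, 4, 32, 32, 32)
print(MultiEnergy3DCNN(n_energies=4)(vmis).shape)   # torch.Size([2, 2])
```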
Affiliation(s)
- Shaojie Chang, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Yongfeng Gao, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Marc J. Pomeroy, Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Ti Bai, Department of Radiation Oncology, University of Texas Southwestern Medical Centre, Dallas, TX 75390, USA
- Hao Zhang, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, NY 10065, USA
- Siming Lu, Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Perry J. Pickhardt, Department of Radiology, School of Medicine, University of Wisconsin, Madison, WI 53792, USA
- Amit Gupta, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Michael J. Reiter, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Elaine S. Gould, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Zhengrong Liang, Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
2. Kanipriya M, Hemalatha C, Sridevi N, SriVidhya S, Jany Shabu S. An improved capuchin search algorithm optimized hybrid CNN-LSTM architecture for malignant lung nodule detection. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
3. Wu L, Hu S, Liu C. MR brain segmentation based on DE-ResUnet combining texture features and background knowledge. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103541] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0]
4. Wu Z, Wang F, Cao W, Qin C, Dong X, Yang Z, Zheng Y, Luo Z, Zhao L, Yu Y, Xu Y, Li J, Tang W, Shen S, Wu N, Tan F, Li N, He J. Lung cancer risk prediction models based on pulmonary nodules: A systematic review. Thorac Cancer 2022; 13:664-677. [PMID: 35137543] [PMCID: PMC8888150] [DOI: 10.1111/1759-7714.14333] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3]
Abstract
BACKGROUND Screening with low-dose computed tomography (LDCT) is an efficient way to detect lung cancer at an earlier stage, but it has a high false-positive rate. Several pulmonary nodule risk prediction models have been developed to address this problem. This systematic review aimed to compare the quality and accuracy of these models. METHODS The keywords "lung cancer," "lung neoplasms," "lung tumor," "risk," "lung carcinoma," "predict," "assessment," and "nodule" were used to identify relevant articles published before February 2021. All studies with multivariate risk models developed and validated on human LDCT data were included. Informal publications or studies with incomplete procedures were excluded. Information was extracted from each publication and assessed. RESULTS A total of 41 articles and 43 models were included. External validation was performed for 23.2% (10/43) of the models. Deep learning algorithms were applied in 62.8% (27/43) of the models; 60.0% (15/25) of the deep learning-based studies compared their algorithms with traditional methods and achieved better discrimination. Models based on Asian and Chinese populations were usually built on single-center or small-sample retrospective studies, and the majority of the Asian models (12/15, 80.0%) were not validated using external datasets. CONCLUSION The existing models showed good discrimination for identifying high-risk pulmonary nodules but lacked external validation. Deep learning algorithms are increasingly being used with good performance. More research is required to improve the quality of deep learning models, particularly for the Asian population.
Affiliation(s)
- Zheng Wu, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Fei Wang, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wei Cao, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chao Qin, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xuesi Dong, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhuoyu Yang, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yadi Zheng, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zilin Luo, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Liang Zhao, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yiwen Yu, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yongjie Xu, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jiang Li, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Chinese Academy of Medical Sciences Key Laboratory for National Cancer Big Data Analysis and Implement, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wei Tang, PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Sipeng Shen, Department of Epidemiology, Center for Global Health, School of Public Health, Nanjing Medical University, Nanjing, China; Jiangsu Key Lab of Cancer Biomarkers, Prevention and Treatment, Collaborative Innovation Center for Cancer Personalized Medicine, Nanjing Medical University, Nanjing, China
- Ning Wu, PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Fengwei Tan, Department of Thoracic Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ni Li, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Chinese Academy of Medical Sciences Key Laboratory for National Cancer Big Data Analysis and Implement, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jie He, Office of Cancer Screening, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Department of Thoracic Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
5. Cheng X, Wen H, You H, Hua L, Xiaohua W, Qiuting C, Jiabao L. Recognition of Peripheral Lung Cancer and Focal Pneumonia on Chest Computed Tomography Images Based on Convolutional Neural Network. Technol Cancer Res Treat 2022; 21:15330338221085375. [PMID: 35293240] [PMCID: PMC8935416] [DOI: 10.1177/15330338221085375] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Introduction: Chest computed tomography (CT) is important for the early screening of lung diseases and for clinical diagnosis, particularly during the COVID-19 pandemic. We propose a method for classifying peripheral lung cancer and focal pneumonia on chest CT images and evaluate 5 window settings to study their effect on the artificial intelligence processing results. Methods: A retrospective collection of CT images from 357 patients with peripheral lung cancer presenting as a solitary solid nodule or focal pneumonia presenting as a solitary consolidation was used. We segmented and aligned the lung parenchyma using morphological methods and cropped this region with the minimum 3D bounding box. Using the 3D cropped volumes of all cases, we designed a 3D neural network to classify them into 2 categories. We also compared the classification results of 3 physicians with different experience levels on the same dataset. Results: We conducted experiments using 5 window settings. After cropping and alignment based on an automatic preprocessing procedure, our neural network achieved an average classification accuracy of 91.596% under 5-fold cross-validation in the full window, in which the area under the curve (AUC) was 0.946. The classification accuracy and AUC value were 90.48% and 0.957 for the junior physician, 94.96% and 0.989 for the intermediate physician, and 96.92% and 0.980 for the senior physician, respectively. After removing the erroneous predictions, the accuracy improved significantly, reaching 98.79% in the self-defined window2. Conclusion: Using the proposed neural network to separate peripheral lung cancer and focal pneumonia in chest CT data, we achieved an accuracy competitive with that of a junior physician. In a data ablation study, the proposed 3D CNN achieved slightly higher accuracy than the senior physician on the same subset. The self-defined window2 was the best for data training and evaluation.
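Since this study hinges on CT window settings, here is a minimal sketch of how a window level/width can be applied to Hounsfield units before feeding a network; the window values shown are assumptions, not the paper's five settings.

```python
import numpy as np

def apply_ct_window(hu_volume, level, width):
    """Clip HU values to [level - width/2, level + width/2] and rescale to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(hu_volume, lo, hi) - lo) / (hi - lo)

# Illustrative window settings (assumed values, not the paper's definitions)
volume = np.random.randint(-1000, 400, size=(64, 128, 128)).astype(np.float32)
lung_window = apply_ct_window(volume, level=-600, width=1500)
mediastinal_window = apply_ct_window(volume, level=40, width=400)
print(lung_window.min(), lung_window.max())   # values now lie in [0, 1]
```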
Affiliation(s)
- Xiaoyue Cheng, Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- He Wen, Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Hao You, Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Li Hua, Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Wu Xiaohua, Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Cao Qiuting, Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Liu Jiabao, Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
6. Shi J, Ye Y, Zhu D, Su L, Huang Y, Huang J. Comparative analysis of pulmonary nodules segmentation using multiscale residual U-Net and fuzzy C-means clustering. Computer Methods and Programs in Biomedicine 2021; 209:106332. [PMID: 34365313] [DOI: 10.1016/j.cmpb.2021.106332] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3]
Abstract
BACKGROUND AND OBJECTIVE Pulmonary nodules have different shapes and uneven density, and some nodules adhere to blood vessels, pleura and other anatomical structures, which increase the difficulty of nodule segmentation. The purpose of this paper is to use multiscale residual U-Net to accurately segment lung nodules with complex geometric shapes, while comparing it with fuzzy C-means clustering and manual segmentation. METHOD We selected 58 computed tomography (CT) scan images of patients with different lung nodules for image segmentation. This paper proposes an automatic segmentation algorithm for lung nodules based on multiscale residual U-Net. In order to verify the accuracy of the method, we also conducted comparative experiments, while comparing it with fuzzy C-means clustering. RESULTS Compared with the other two methods, the segmentation of lung nodules based on multiscale residual U-Net has a higher accuracy, with an accuracy rate of 94.57%. This method not only maintains a high accuracy rate, but also shortens the recognition time significantly with a segmentation time of 3.15 s. CONCLUSIONS The diagnosis method of lung nodules combined with deep learning has a good market prospect and can improve the efficiency of doctors in diagnosing benign and malignant lung nodules.
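For readers unfamiliar with residual U-Net building blocks, the sketch below shows a generic residual convolution unit of the sort such encoders and decoders stack; it is an illustrative assumption, not the paper's multiscale architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual convolution unit, the kind stacked inside a residual U-Net."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the identity path matches the output channel count
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

patch = torch.randn(1, 1, 64, 64)            # one single-channel CT slice patch
print(ResidualBlock(1, 32)(patch).shape)     # torch.Size([1, 32, 64, 64])
```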
Affiliation(s)
- Jianshe Shi, Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou Normal University, Quanzhou 362000, China; Department of General Surgery, Huaqiao University Affiliated Strait Hospital, Quanzhou, Fujian 362000, China; Key Laboratory of Intelligent Computing and Information Processing, Quanzhou Normal University, Fujian Province University, Quanzhou 362000, China
- Yuguang Ye, Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou Normal University, Quanzhou 362000, China; Key Laboratory of Intelligent Computing and Information Processing, Quanzhou Normal University, Fujian Province University, Quanzhou 362000, China; Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China
- Daxin Zhu, Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou Normal University, Quanzhou 362000, China; Key Laboratory of Intelligent Computing and Information Processing, Quanzhou Normal University, Fujian Province University, Quanzhou 362000, China; Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China
- Lianta Su, Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou Normal University, Quanzhou 362000, China; Key Laboratory of Intelligent Computing and Information Processing, Quanzhou Normal University, Fujian Province University, Quanzhou 362000, China; Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China
- Yifeng Huang, Department of Diagnostic Radiology, Huaqiao University Affiliated Strait Hospital, Quanzhou, Fujian 362000, China
- Jianlong Huang, Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou Normal University, Quanzhou 362000, China; Key Laboratory of Intelligent Computing and Information Processing, Quanzhou Normal University, Fujian Province University, Quanzhou 362000, China; Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China
7. Cao W, Liang Z, Gao Y, Pomeroy MJ, Han F, Abbasi A, Pickhardt PJ. A dynamic lesion model for differentiation of malignant and benign pathologies. Sci Rep 2021; 11:3485. [PMID: 33568762] [PMCID: PMC7875978] [DOI: 10.1038/s41598-021-83095-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8]
Abstract
Malignant lesions have a high tendency to invade their surrounding environment compared to benign ones. This paper proposes a dynamic lesion model and explores the 2nd order derivatives at each image voxel, which reflect the rate of change of image intensity, as a quantitative measure of the tendency. The 2nd order derivatives at each image voxel are usually represented by the Hessian matrix, but it is difficult to quantify a matrix field (or image) through the lesion space as a measure of the tendency. We conjecture that the three eigenvalues contain important information of the Hessian matrix and are chosen as the surrogate representation of the Hessian matrix. By treating the three eigenvalues as a vector, called Hessian vector, which is defined in a local coordinate formed by three orthogonal Hessian eigenvectors and further adapting the gray level occurrence computing method to extract the vector texture descriptors (or measures) from the Hessian vector, a quantitative presentation for the dynamic lesion model is completed. The vector texture descriptors were applied to differentiate malignant from benign lesions from two pathologically proven datasets: colon polyps and lung nodules. The classification results not only outperform four state-of-the-art methods but also three radiologist experts.
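The following sketch illustrates the core quantity the abstract describes, namely per-voxel eigenvalues of the 3D Hessian (second-derivative) matrix, computed here with plain NumPy finite differences; the paper's subsequent vector-texture descriptors are not reproduced.

```python
import numpy as np

def hessian_eigenvalues(volume):
    """Per-voxel eigenvalues of the 3D Hessian (matrix of second derivatives)."""
    grads = np.gradient(volume.astype(np.float64))       # first derivatives along z, y, x
    H = np.empty(volume.shape + (3, 3))
    for i, g in enumerate(grads):
        second = np.gradient(g)                          # derivatives of each gradient component
        for j in range(3):
            H[..., i, j] = second[j]
    H = 0.5 * (H + np.swapaxes(H, -1, -2))               # enforce symmetry for eigvalsh
    return np.linalg.eigvalsh(H)                         # shape: volume.shape + (3,)

lesion = np.random.rand(24, 24, 24)                      # stand-in for a segmented lesion volume
print(hessian_eigenvalues(lesion).shape)                 # (24, 24, 24, 3)
```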
Affiliation(s)
- Weiguo Cao, Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA
- Zhengrong Liang, Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA; Department of Biomedical Engineering, State University of New York at Stony Brook, Stony Brook, NY, USA
- Yongfeng Gao, Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA
- Marc J Pomeroy, Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA; Department of Biomedical Engineering, State University of New York at Stony Brook, Stony Brook, NY, USA
- Fangfang Han, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, Guangdong, People's Republic of China
- Almas Abbasi, Department of Radiology, State University of New York at Stony Brook, Stony Brook, NY, USA
- Perry J Pickhardt, Department of Radiology, School of Medicine, University of Wisconsin, Madison, WI, USA
8. Liew CJY. Medicine and artificial intelligence: a strategy for the future, employing Porter's classic framework. Singapore Med J 2020; 61:447. [PMID: 31197371] [PMCID: PMC7926585] [DOI: 10.11622/smedj.2019047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
9. Zhao J, Zhang C, Li D, Niu J. Combining multi-scale feature fusion with multi-attribute grading, a CNN model for benign and malignant classification of pulmonary nodules. J Digit Imaging 2020; 33:869-878. [PMID: 32285220] [PMCID: PMC7522130] [DOI: 10.1007/s10278-020-00333-1] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8]
Abstract
Lung cancer has the highest mortality rate of all cancers, and early detection can improve survival rates. In recent years, low-dose CT has been widely used to detect lung cancer. However, the diagnosis is limited by the subjective experience of doctors. Therefore, the main purpose of this study is to use a convolutional neural network to realize the benign and malignant classification of pulmonary nodules in CT images. We collected 1004 cases of pulmonary nodules from the LIDC-IDRI dataset, among which 554 cases were benign and 450 cases were malignant. According to the doctors' annotations of the nodule center coordinates, two 3D CT image patches of each pulmonary nodule at different scales were extracted. In this study, our work focuses on two aspects. Firstly, we constructed a multi-stream multi-task network (MSMT), which combined multi-scale features with multi-attribute classification for the first time, and applied it to the classification of benign and malignant pulmonary nodules. Secondly, we proposed a new loss function to balance the relationship between different attributes. The final experimental results showed that our model was effective compared with studies of the same type. The area under the ROC curve, accuracy, sensitivity, and specificity were 0.979, 93.92%, 92.60%, and 96.25%, respectively.
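A toy version of a multi-task objective of the kind described (a main malignancy loss plus auxiliary attribute losses) is sketched below; the fixed weighting used here is an assumption and is not the balancing loss proposed in the paper.

```python
import torch
import torch.nn.functional as F

def multi_task_loss(malig_logits, malig_target, attr_logits, attr_targets, attr_weight=0.5):
    """Benign/malignant loss plus a fixed-weight average of auxiliary attribute losses."""
    main = F.cross_entropy(malig_logits, malig_target)
    aux = sum(F.cross_entropy(logit, target)
              for logit, target in zip(attr_logits, attr_targets))
    return main + attr_weight * aux / max(len(attr_logits), 1)

# Toy tensors: a batch of 4 nodules, 2 malignancy classes, two 5-level attributes
m_logits, m_target = torch.randn(4, 2), torch.randint(0, 2, (4,))
a_logits = [torch.randn(4, 5), torch.randn(4, 5)]
a_targets = [torch.randint(0, 5, (4,)), torch.randint(0, 5, (4,))]
print(multi_task_loss(m_logits, m_target, a_logits, a_targets))
```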
Affiliation(s)
- Jumin Zhao, College of Information and Computer, Taiyuan University of Technology, Jinzhong, China; Technology Research Center of Spatial Information Network Engineering of Shanxi, Jinzhong, China
- Chen Zhang, College of Information and Computer, Taiyuan University of Technology, Jinzhong, China
- Dengao Li, Technology Research Center of Spatial Information Network Engineering of Shanxi, Jinzhong, China; College of Data Science, Taiyuan University of Technology, Jinzhong, China
- Jing Niu, College of Information and Computer, Taiyuan University of Technology, Jinzhong, China
10. Tan J, Gao Y, Liang Z, Cao W, Pomeroy MJ, Huo Y, Li L, Barish MA, Abbasi AF, Pickhardt PJ. 3D-GLCM CNN: A 3-Dimensional Gray-Level Co-Occurrence Matrix-Based CNN Model for Polyp Classification via CT Colonography. IEEE Transactions on Medical Imaging 2020; 39:2013-2024. [PMID: 31899419] [PMCID: PMC7269812] [DOI: 10.1109/tmi.2019.2963177] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4]
Abstract
Accurately classifying colorectal polyps, or differentiating malignant from benign ones, has a significant clinical impact on early detection and identifying optimal treatment of colorectal cancer. Convolution neural network (CNN) has shown great potential in recognizing different objects (e.g. human faces) from multiple slice (or color) images, a task similar to the polyp differentiation, given a large learning database. This study explores the potential of CNN learning from multiple slice (or feature) images to differentiate malignant from benign polyps from a relatively small database with pathological ground truth, including 32 malignant and 31 benign polyps represented by volumetric computed tomographic (CT) images. The feature image in this investigation is the gray-level co-occurrence matrix (GLCM). For each volumetric polyp, there are 13 GLCMs, computed from each of the 13 directions through the polyp volume. For comparison purpose, the CNN learning is also applied to the multi-slice CT images of the volumetric polyps. The comparison study is further extended to include Random Forest (RF) classification of the Haralick texture features (derived from the GLCMs). From the relatively small database, this study achieved scores of 0.91/0.93 (two-fold/leave-one-out evaluations) AUC (area under curve of the receiver operating characteristics) by using the CNN on the GLCMs, while the RF reached 0.84/0.86 AUC on the Haralick features and the CNN rendered 0.79/0.80 AUC on the multiple-slice CT images. The presented CNN learning from the GLCMs can relieve the challenge associated with relatively small database, improve the classification performance over the CNN on the raw CT images and the RF on the Haralick features, and have the potential to perform the clinical task of differentiating malignant from benign polyps with pathological ground truth.
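The sketch below computes a gray-level co-occurrence matrix of a 3D volume along one displacement and then stacks the 13 canonical directions mentioned in the abstract; quantization to 16 gray levels and the random test volume are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def _axis_slices(d):
    """Source and destination slices along one axis for displacement d."""
    if d >= 0:
        return slice(0, None if d == 0 else -d), slice(d, None)
    return slice(-d, None), slice(0, d)

def glcm_3d(volume, offset, levels=16):
    """Co-occurrence matrix of a quantized 3D volume along one (dz, dy, dx) offset."""
    q = np.floor(volume / (volume.max() + 1e-9) * (levels - 1)).astype(int)
    src_sl, dst_sl = zip(*(_axis_slices(d) for d in offset))
    src, dst = q[src_sl], q[dst_sl]
    glcm = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)        # count voxel-pair co-occurrences
    return glcm

# The 13 unique directions of the 26-voxel neighbourhood (up to sign)
directions = [(0, 0, 1), (0, 1, 0), (1, 0, 0),
              (0, 1, 1), (0, 1, -1), (1, 0, 1), (1, 0, -1), (1, 1, 0), (1, -1, 0),
              (1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]
polyp = np.random.rand(20, 20, 20)                        # stand-in for a polyp volume
glcms = np.stack([glcm_3d(polyp, d) for d in directions])
print(glcms.shape)                                        # (13, 16, 16)
```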
11. Han F, Yan L, Chen J, Teng Y, Chen S, Qi S, Qian W, Yang J, Moore W, Zhang S, Liang Z. Predicting Unnecessary Nodule Biopsies from a Small, Unbalanced, and Pathologically Proven Dataset by Transfer Learning. J Digit Imaging 2020; 33:685-696. [PMID: 32144499] [PMCID: PMC7256141] [DOI: 10.1007/s10278-019-00306-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4]
Abstract
This study explores an automatic diagnosis method to predict unnecessary nodule biopsy from a small, unbalanced, and pathologically proven database. The automatic diagnosis method is based on a convolutional neural network (CNN) model. Because of the small and unbalanced samples, the presented method aims to improve the transfer learning capability via the VGG16 architecture and to optimize the related transfer learning parameters. For comparison purposes, a traditional machine learning method is implemented, which extracts texture features and classifies them with a support vector machine (SVM). The database includes 68 biopsied nodules, of which 16 are pathologically proven benign and the remaining 52 are malignant. To account for the volumetric data in the CNN model, image slices from each nodule volume are selected randomly until all image slices of each nodule are utilized. Leave-one-out and 10-fold cross-validations are applied to train and test the randomly selected 68 image slices (one image slice from one nodule) in each experiment, respectively. The averages over all the experimental outcomes are the final results. The experiments revealed that features from both medical and natural images share a similar focus on simpler and less abstract objects, leading to the conclusion that transferring more convolutional layers does not necessarily yield better classification results. Transfer learning from other, larger datasets can supply additional information to small and unbalanced datasets to improve the classification performance. The presented method has shown the potential of adapting a CNN architecture to improve the prediction of unnecessary nodule biopsy from a small, unbalanced, and pathologically proven volumetric dataset.
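A minimal transfer-learning sketch in the spirit of the abstract is shown below: an ImageNet-pretrained VGG16 (recent torchvision API assumed; weights are downloaded on first use) with some convolutional blocks frozen and a new two-class head. Which layers to freeze is an assumption, not the paper's tuned configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained VGG16; freeze roughly the first three convolutional
# blocks and replace the final layer with a 2-class head.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in vgg.features[:17].parameters():
    param.requires_grad = False
vgg.classifier[6] = nn.Linear(4096, 2)                 # benign vs. malignant

optimizer = torch.optim.Adam(
    (p for p in vgg.parameters() if p.requires_grad), lr=1e-4)

# Nodule slices replicated to 3 channels to match the pretrained input format
slices = torch.randn(8, 3, 224, 224)
print(vgg(slices).shape)                               # torch.Size([8, 2])
```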
Affiliation(s)
- Fangfang Han, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China; College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, People's Republic of China
- Linkai Yan, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, People's Republic of China
- Junxin Chen, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, People's Republic of China
- Yueyang Teng, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, People's Republic of China
- Shuo Chen, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, People's Republic of China
- Shouliang Qi, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, People's Republic of China
- Wei Qian, College of Engineering, University of Texas at El Paso, El Paso, TX 79968, USA
- Jie Yang, Department of Family, Population and Preventive Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- William Moore, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Shu Zhang, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Zhengrong Liang, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
12. Mastouri R, Khlifa N, Neji H, Hantous-Zannad S. Deep learning-based CAD schemes for the detection and classification of lung nodules from CT images: A survey. Journal of X-Ray Science and Technology 2020; 28:591-617. [PMID: 32568165] [DOI: 10.3233/xst-200660] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2]
Abstract
BACKGROUND Lung cancer is the most common cancer in the world. Computed tomography (CT) is the standard medical imaging modality for early lung nodule detection and diagnosis, which improves patients' survival rates. Recently, deep learning algorithms, especially convolutional neural networks (CNNs), have become a preferred methodology for developing computer-aided detection and diagnosis (CAD) schemes for lung CT images. OBJECTIVE Several CNN-based research projects have been initiated to design robust and efficient CAD schemes for the detection and classification of lung nodules. This paper reviews recent work in this area and gives an insight into the technical progress. METHODS First, a brief overview of CNN models and their basic structures is presented. Then, we provide an analytic comparison of the existing approaches to identify recent trends and upcoming challenges. We also give an objective description of both handcrafted and deep learning features, as well as the types of nodules, the medical imaging modalities, the widely used databases, and related work from the last three years. The articles presented in this work were selected from various databases; about 57% of the reviewed articles were published in the last year. RESULTS Our analysis reveals that several methods achieved promising performance, with sensitivity rates ranging from 66% to 100% at false-positive rates ranging from 1 to 15 per CT scan. It can be noted that CNN models have contributed to the accurate detection and early diagnosis of lung nodules. CONCLUSIONS Through the critical discussion and an outline of prospective directions, this survey provides researchers with valuable information to master deep learning concepts and to deepen their knowledge of the trends and latest techniques in developing CAD schemes for lung CT images.
Affiliation(s)
- Rekka Mastouri, University of Tunis el Manar, Higher Institute of Medical Technologies of Tunis, Research Laboratory of Biophysics and Medical Technologies, 1006 Tunis, Tunisia
- Nawres Khlifa, University of Tunis el Manar, Higher Institute of Medical Technologies of Tunis, Research Laboratory of Biophysics and Medical Technologies, 1006 Tunis, Tunisia
- Henda Neji, University of Tunis el Manar, Faculty of Medicine of Tunis, 1007 Tunis, Tunisia; Department of Medical Imaging, Abderrahmen Mami Hospital, 2035 Ariana, Tunisia
- Saoussen Hantous-Zannad, University of Tunis el Manar, Faculty of Medicine of Tunis, 1007 Tunis, Tunisia; Department of Medical Imaging, Abderrahmen Mami Hospital, 2035 Ariana, Tunisia
13. Gao Y, Tan J, Liang Z, Li L, Huo Y. Improved computer-aided detection of pulmonary nodules via deep learning in the sinogram domain. Vis Comput Ind Biomed Art 2019; 2:15. [PMID: 32240409] [PMCID: PMC7099542] [DOI: 10.1186/s42492-019-0029-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7]
Abstract
Computer aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists’ diagnosis and alleviating interpretation burden for lung cancer. Current CADe systems, aiming at simulating radiologists’ examination procedure, are built upon computer tomography (CT) images with feature extraction for detection and diagnosis. Human visual perception in CT image is reconstructed from sinogram, which is the original raw data acquired from CT scanner. In this work, different from the conventional image based CADe system, we propose a novel sinogram based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research in this concept and unknown effective features in the sinogram domain, we design a new CADe system that utilizes the self-learning power of the convolutional neural network to learn and extract effective features from sinogram. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, with each case having at least one juxtapleural nodule annotation. Experimental results demonstrated that our proposed method obtained a value of 0.91 of the area under the curve (AUC) of receiver operating characteristic based on sinogram alone, comparing to 0.89 based on CT image alone. Moreover, a combination of sinogram and CT image could further improve the value of AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
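To make the sinogram domain concrete, the sketch below forward-projects a toy 2D slice with scikit-image's Radon transform and reconstructs it back; it only illustrates the data domains involved, not the proposed CADe network.

```python
import numpy as np
from skimage.transform import radon, iradon

# A crude 2D phantom standing in for a CT slice containing a structure of interest
slice_img = np.zeros((128, 128), dtype=np.float32)
slice_img[40:60, 50:80] = 1.0

angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(slice_img, theta=angles)       # projection (raw-data) domain
recon = iradon(sinogram, theta=angles)          # back to the image domain via FBP
print(sinogram.shape, recon.shape)              # (detector bins, 180) and (128, 128)
```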
Affiliation(s)
- Yongfeng Gao, Department of Radiology, State University of New York, Stony Brook, NY 11794, USA
- Jiaxing Tan, Department of Radiology, State University of New York, Stony Brook, NY 11794, USA; Department of Computer Science, City University of New York/CSI, Staten Island, NY 10314, USA
- Zhengrong Liang, Department of Radiology, State University of New York, Stony Brook, NY 11794, USA
- Lihong Li, Engineering and Environmental Science, City University of New York/CSI, Staten Island, NY 10314, USA
- Yumei Huo, Department of Computer Science, City University of New York/CSI, Staten Island, NY 10314, USA
14. Gao Y, Shi Y, Cao W, Zhang S, Liang Z. Energy enhanced tissue texture in spectral computed tomography for lesion classification. Vis Comput Ind Biomed Art 2019; 2:16. [PMID: 32226923] [PMCID: PMC7089716] [DOI: 10.1186/s42492-019-0028-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3]
Abstract
Tissue texture reflects the spatial distribution of contrasts of image voxel gray levels, i.e., the tissue heterogeneity, and has been recognized as important biomarkers in various clinical tasks. Spectral computed tomography (CT) is believed to be able to enrich tissue texture by providing different voxel contrast images using different X-ray energies. Therefore, this paper aims to address two related issues for clinical usage of spectral CT, especially the photon counting CT (PCCT): (1) texture enhancement by spectral CT image reconstruction, and (2) spectral energy enriched tissue texture for improved lesion classification. For issue (1), we recently proposed a tissue-specific texture prior in addition to low rank prior for the individual energy-channel low-count image reconstruction problems in PCCT under the Bayesian theory. Reconstruction results showed the proposed method outperforms existing methods of total variation (TV), low-rank TV and tensor dictionary learning in terms of not only preserving texture features but also suppressing image noise. For issue (2), this paper will investigate three models to incorporate the enriched texture by PCCT in accordance with three types of inputs: one is the spectral images, another is the co-occurrence matrices (CMs) extracted from the spectral images, and the third one is the Haralick features (HF) extracted from the CMs. Studies were performed on simulated photon counting data by introducing attenuation-energy response curve to the traditional CT images from energy integration detectors. Classification results showed the spectral CT enriched texture model can improve the area under the receiver operating characteristic curve (AUC) score by 7.3%, 0.42% and 3.0% for the spectral images, CMs and HFs respectively on the five-energy spectral data over the original single energy data only. The CM- and HF-inputs can achieve the best AUC of 0.934 and 0.927. This texture themed study shows the insight that incorporating clinical important prior information, e.g., tissue texture in this paper, into the medical imaging, such as the upstream image reconstruction, the downstream diagnosis, and so on, can benefit the clinical tasks.
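A simple sketch of extracting co-occurrence-based (Haralick-style) texture features per energy channel and concatenating them across channels is given below, assuming a recent scikit-image; the channel images are random stand-ins, and this is not the paper's reconstruction or classification pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def channel_texture_features(img, levels=16):
    """Contrast/homogeneity/correlation/energy from one energy-channel image."""
    q = np.floor(img / (img.max() + 1e-9) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "correlation", "energy"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Five random stand-ins for energy-channel images of the same lesion ROI
spectral_roi = [np.random.rand(32, 32) for _ in range(5)]
feature_vector = np.concatenate([channel_texture_features(c) for c in spectral_roi])
print(feature_vector.shape)      # 5 channels x 4 properties x 2 angles = (40,)
```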
Affiliation(s)
- Yongfeng Gao, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Yongyi Shi, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Institute of Image Processing and Pattern Recognition, Xi'an Jiaotong University, Xi'an 710049, Shanxi, China
- Weiguo Cao, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Shu Zhang, Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Zhengrong Liang, Departments of Radiology, Biomedical Engineering, Computer Science, and Electrical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
15. Smith MJ, Bean S. AI and Ethics in Medical Radiation Sciences. J Med Imaging Radiat Sci 2019; 50:S24-S26. [PMID: 31563532] [DOI: 10.1016/j.jmir.2019.08.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8]
Affiliation(s)
- Maxwell J Smith, Faculty of Health Sciences, School of Health Studies, Western University, London, Ontario, Canada
- Sally Bean, Health Ethics Alliance and Policy, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Dalla Lana School of Public Health & Institute of Health Policy, Management & Evaluation, University of Toronto, Toronto, Ontario, Canada
16. Gong J, Liu J, Hao W, Nie S, Wang S, Peng W. Computer-aided diagnosis of ground-glass opacity pulmonary nodules using radiomic features analysis. Phys Med Biol 2019; 64:135015. [PMID: 31167172] [DOI: 10.1088/1361-6560/ab2757] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5]
Abstract
This study aims to develop a CT-based radiomic features analysis approach for diagnosis of ground-glass opacity (GGO) pulmonary nodules, and also assess whether computer-aided diagnosis (CADx) performance changes in classifying between benign and malignant nodules associated with histopathological subtypes namely, adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IAC), respectively. The study involves 182 histopathology-confirmed GGO nodules collected from two cancer centers. Among them, 59 are benign, 50 are AIS, 32 are MIA, and 41 are IAC nodules. Four training/testing data sets-(1) all nodules, (2) benign and AIS nodules, (3) benign and MIA nodules, (4) benign and IAC nodules-are assembled based on their histopathological subtypes. We first segment pulmonary nodules depicted in CT images by using a 3D region growing and geodesic active contour level set algorithm. Then, we computed and extracted 1117 quantitative imaging features based on the 3D segmented nodules. After conducting radiomic features normalization process, we apply a leave-one-out cross-validation (LOOCV) method to build models by embedding with a Relief feature selection, synthetic minority oversampling technique (SMOTE) and three machine-learning classifiers namely, support vector machine classifier, logistic regression classifier and Gaussian Naïve Bayes classifier. When separately using four data sets to train and test three classifiers, the average areas under receiver operating characteristic curves (AUC) are 0.75, 0.55, 0.77 and 0.93, respectively. When testing on an independent data set, our scheme yields higher accuracy than two radiologists (61.3% versus radiologist 1: 53.1% and radiologist 2: 56.3%). This study demonstrates that: (1) the feasibility of using CT-based radiomic features analysis approach to distinguish between benign and malignant GGO nodules, (2) higher performance of CADx scheme in diagnosing GGO nodules comparing with radiologist, and (3) a consistently positive trend between classification performance and invasive grade of GGO nodules. Thus, to improve the CADx performance in diagnosing of GGO nodules, one should assemble an optimal training data set dominated with more nodules associated with non-invasive lung adenocarcinoma (i.e. AIS and MIA).
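The sketch below mirrors part of the described evaluation protocol, combining leave-one-out cross-validation with SMOTE oversampling and an SVM inside one pipeline so that resampling is fitted only on training folds; it assumes the imbalanced-learn package, uses random stand-in features, and omits the Relief feature selection step.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

# Random stand-ins for radiomic feature vectors (rows: nodules, columns: features)
rng = np.random.default_rng(0)
X = rng.normal(size=(91, 50))
y = np.r_[np.zeros(59, dtype=int), np.ones(32, dtype=int)]   # e.g. benign vs. one subtype

# Scaling and SMOTE are fitted inside each training fold, never on the held-out case
model = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),
    ("svm", SVC(kernel="rbf", probability=True)),
])
probs = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, probs))
```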
Affiliation(s)
- Jing Gong, Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China (Jing Gong and Jiyu Liu contributed equally to this work)
17. Zhang G, Yang Z, Gong L, Jiang S, Wang L. Classification of benign and malignant lung nodules from CT images based on hybrid features. Phys Med Biol 2019; 64:125011. [DOI: 10.1088/1361-6560/ab2544] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7]
18. Zhang G, Yang Z, Gong L, Jiang S, Wang L, Cao X, Wei L, Zhang H, Liu Z. An Appraisal of Nodule Diagnosis for Lung Cancer in CT Images. J Med Syst 2019; 43:181. [PMID: 31093830] [DOI: 10.1007/s10916-019-1327-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2]
Abstract
As "the second eyes" of radiologists, computer-aided diagnosis systems play a significant role in nodule detection and diagnosis for lung cancer. In this paper, we aim to provide a systematic survey of state-of-the-art techniques (both traditional techniques and deep learning techniques) for nodule diagnosis from computed tomography images. This review first introduces the current progress and the popular structure used for nodule diagnosis. In particular, we provide a detailed overview of the five major stages in the computer-aided diagnosis systems: data acquisition, nodule segmentation, feature extraction, feature selection and nodule classification. Second, we provide a detailed report of the selected works and make a comprehensive comparison between selected works. The selected papers are from the IEEE Xplore, Science Direct, PubMed, and Web of Science databases up to December 2018. Third, we discuss and summarize the better techniques used in nodule diagnosis and indicate the existing future challenges in this field, such as improving the area under the receiver operating characteristic curve and accuracy, developing new deep learning-based diagnosis techniques, building efficient feature sets (fusing traditional features and deep features), developing high-quality labeled databases with malignant and benign nodules and promoting the cooperation between medical organizations and academic institutions.
Affiliation(s)
- Guobin Zhang, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Zhiyong Yang, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Li Gong, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shan Jiang, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China; Centre for Advanced Mechanisms and Robotics, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin 300350, China
- Lu Wang, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Xi Cao, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Lin Wei, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Hongyun Zhang, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Ziqi Liu, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
19. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159] [DOI: 10.1148/radiol.2018180547] [Citation(s) in RCA: 308] [Impact Index Per Article: 51.3]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages that are entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning-specifically, the application of convolutional neural networks-to radiologic imaging that was focused on the following five major system organs: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion about current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang: From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
20. Tan J, Huo Y, Liang Z, Li L. Expert knowledge-infused deep learning for automatic lung nodule detection. Journal of X-Ray Science and Technology 2019; 27:17-35. [PMID: 30452432] [PMCID: PMC6453714] [DOI: 10.3233/xst-180426] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8]
Abstract
BACKGROUND Computer aided detection (CADe) of pulmonary nodules from computed tomography (CT) is crucial for early diagnosis of lung cancer. Self-learned features obtained by training datasets via deep learning have facilitated CADe of the nodules. However, the complexity of CT lung images renders a challenge of extracting effective features by self-learning only. This condition is exacerbated for limited size of datasets. On the other hand, the engineered features have been widely studied. OBJECTIVE We proposed a novel nodule CADe which aims to relieve the challenge by the use of available engineered features to prevent convolution neural networks (CNN) from overfitting under dataset limitation and reduce the running-time complexity of self-learning. METHODS The CADe methodology infuses adequately the engineered features, particularly texture features, into the deep learning process. RESULTS The methodology was validated on 208 patients with at least one juxta-pleural nodule from the public LIDC-IDRI database. Results demonstrated that the methodology achieves a sensitivity of 88% with 1.9 false positives per scan and a sensitivity of 94.01% with 4.01 false positives per scan. CONCLUSIONS The methodology shows high performance compared with the state-of-the-art results, in terms of accuracy and efficiency, from both existing CNN-based approaches and engineered feature-based classifications.
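A minimal sketch of the general idea of infusing engineered features into a CNN is shown below: learned image features and pre-computed texture features are concatenated before the classifier. The architecture and feature dimensions are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class HybridNoduleNet(nn.Module):
    """CNN image branch whose learned features are concatenated with
    pre-computed engineered (e.g. texture) features before classification."""
    def __init__(self, n_engineered=24, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_engineered, 64), nn.ReLU(inplace=True),
            nn.Linear(64, n_classes),
        )

    def forward(self, image, engineered):
        return self.head(torch.cat([self.cnn(image), engineered], dim=1))

patches = torch.randn(4, 1, 48, 48)       # candidate nodule patches
texture = torch.randn(4, 24)              # engineered texture features per candidate
print(HybridNoduleNet()(patches, texture).shape)   # torch.Size([4, 2])
```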
Affiliation(s)
- Jiaxing Tan, Department of Computer Science, City University of New York, the Graduate Center, NY, USA
- Yumei Huo, Department of Computer Science, City University of New York at CSI, NY, USA
- Zhengrong Liang, Department of Radiology, State University of New York at Stony Brook, NY, USA (corresponding author: Department of Radiology, Electrical and Computer Engineering, and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA)
- Lihong Li, Department of Engineering Science and Physics, City University of New York at CSI, NY, USA
21. Zhao X, Qi S, Zhang B, Ma H, Qian W, Yao Y, Sun J. Deep CNN models for pulmonary nodule classification: Model modification, model integration, and transfer learning. Journal of X-Ray Science and Technology 2019; 27:615-629. [PMID: 31227682] [DOI: 10.3233/xst-180490] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0]
Abstract
BACKGROUND Deep learning has made spectacular achievements in analysing natural images, but it faces challenges in medical applications, partly due to insufficient image data. OBJECTIVE Aiming to classify malignant and benign pulmonary nodules using CT images, we explore different strategies to utilize state-of-the-art deep convolutional neural networks (CNNs). METHODS Experiments are conducted using the Lung Image Database Consortium image collection (LIDC-IDRI), a public database containing 1018 cases. Three strategies are implemented: 1) modifying some state-of-the-art CNN architectures, 2) integrating different CNNs, and 3) adopting transfer learning. In total, 11 deep CNN models are compared using the same dataset. RESULTS The study demonstrates that, for the model modification scheme, a concise CifarNet performs better than the other modified CNNs with more complex architectures, achieving an area under the ROC curve of AUC = 0.90. Integrated CNN models do not significantly improve the classification performance, but the model complexity is reduced. Transfer learning outperforms the other two schemes, and ResNet with fine-tuning leads to the best performance with an AUC of 0.94, as well as a sensitivity of 91% and an overall accuracy of 88%. CONCLUSIONS Model modification, model integration, and transfer learning can all play important roles in identifying and generating optimal deep CNN models for classifying pulmonary nodules based on CT images efficiently. Transfer learning is preferred when applying deep learning to medical imaging applications.
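As a small illustration of the model-integration strategy mentioned above, the sketch below averages the class probabilities of two backbones with replaced two-class heads; the choice of backbones is an assumption, and weights=None keeps the example self-contained (pretrained weights would be loaded in practice).

```python
import torch
import torch.nn as nn
from torchvision import models

# Two backbones with replaced 2-class heads; their softmax outputs are averaged.
resnet = models.resnet18(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, 2)

densenet = models.densenet121(weights=None)
densenet.classifier = nn.Linear(densenet.classifier.in_features, 2)

def ensemble_probs(batch):
    resnet.eval()
    densenet.eval()
    with torch.no_grad():
        p1 = torch.softmax(resnet(batch), dim=1)
        p2 = torch.softmax(densenet(batch), dim=1)
    return (p1 + p2) / 2          # simple average as a form of model integration

nodule_rois = torch.randn(4, 3, 224, 224)      # nodule ROIs replicated to 3 channels
print(ensemble_probs(nodule_rois).shape)       # torch.Size([4, 2])
```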
Affiliation(s)
- Xinzhuo Zhao, Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China; Border Biomedical Research Center, University of Texas at El Paso, El Paso, USA
- Shouliang Qi, Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China; Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., Shenyang, China
- Baihua Zhang, Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China
- He Ma, Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China
- Wei Qian, Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China; College of Engineering, University of Texas at El Paso, El Paso, USA
- Yudong Yao, Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China; Electrical and Computer Engineering, Stevens Institute of Technology, USA
- Jianjun Sun, Border Biomedical Research Center, University of Texas at El Paso, El Paso, USA
22.
Affiliation(s)
- Eyal Klang, Department of Radiology, The Chaim Sheba Medical Center, Tel Hashomer, Israel
23. Gong J, Liu JY, Sun XW, Zheng B, Nie SD. Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules. Phys Med Biol 2018; 63:035036. [DOI: 10.1088/1361-6560/aaa610] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0]
24. Arai T, Nagashima C, Muramatsu Y, Murao K, Yamaguchi I, Ushio N, Hanai K, Kaneko M. Can radiological technologists serve as primary screeners of low-dose computed tomography for the diagnosis of lung cancer? Journal of X-Ray Science and Technology 2018; 26:909-917. [PMID: 30103369] [DOI: 10.3233/xst-180409] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1]
Abstract
BACKGROUND The Accreditation Council for Lung Cancer CT Screening of Japan established guidelines for the certification of Radiological Technologists in 2009. OBJECTIVE To analyze the trends in examination pass rates of the Radiological Technologists and discuss the underlying reasons. METHODS The cohort comprised 1593 Radiological Technologists (as examinees) based on 10 years of data (with a total of 17 examination runs). First, the examinees' written test results were analyzed. Second, an abnormal-findings detection test was conducted using >100 client PCs connected to a dedicated server containing low-dose lung cancer CT screening images of 60 cases. The passing criteria were a correct answer rate of >60% and a sensitivity (TP) of >90%, respectively. RESULTS Overall, 1243 examinees passed, giving an overall pass rate of 78%. The average pass rate for the written test was 91%, whereas that for the abnormal-findings detection test was 85%. There was a moderate correlation between the test pass rate and the average years of clinical experience of the examinees for the abnormal-findings detection test (R = 0.558), whereas no such correlation existed for the written test (R = 0.105). CONCLUSIONS In order for accredited Radiological Technologists to serve as primary screeners of low-dose computed tomography, it is important to revise the educational system according to current standard practices.
Affiliation(s)
- T Arai, Center Hospital of the National Center for Global Health and Medicine, Toyama, Shinjuku-ku, Tokyo, Japan
- C Nagashima, National Cancer Center Japan Tsukiji Campus, Tsukiji, Chuo-ku, Tokyo, Japan
- Y Muramatsu, National Cancer Center Japan Kashiwa Campus, Kashiwanoha, Kashiwa-shi, Chiba, Japan
- K Murao, National Institute of Informatics, Hitotsubashi, Chiyoda-ku, Tokyo, Japan
- I Yamaguchi, Butsuryo College of Osaka, Otorikitamachi, Sakai-shi, Osaka, Japan
- N Ushio, Shiga University of Medical Science Hospital, Otsu-shi, Seta Tsukinowa-cho, Shiga, Japan
- K Hanai, Fukujuji Hospital, Matsuyama, Kiyose-shi, Tokyo, Japan
- M Kaneko, Tokyo Health Service Association, Ichigayasadoharacho, Shinjuku-ku, Tokyo, Japan