1. Palani M, Rajagopal S, Chintanpalli AK. A systematic review on feature extraction methods and deep learning models for detection of cancerous lung nodules at an early stage - the recent trends and challenges. Biomed Phys Eng Express 2024; 11:012001. PMID: 39530659. DOI: 10.1088/2057-1976/ad9154.
Abstract
Lung cancer is one of the most common life-threatening cancers worldwide, affecting both male and female populations. The appearance of nodules in a scan image is an early indication of the development of cancer cells in the lung. Low-dose computed tomography screening is used for the early detection of cancerous nodules. With a growing number of computed tomography (CT) lung profiles, an automated lung nodule analysis system can therefore be built using image processing techniques and neural network algorithms. A CT image of the lung contains many elements, such as blood vessels, ribs, the sternum, bronchi and nodules. These nodules can be benign or malignant, where the latter leads to lung cancer. Detecting them at an earlier stage can increase life expectancy by up to 5 to 10 years. To analyse only the nodules in the profile, the respective features are extracted using image processing techniques. Based on this review, textural features are among the most promising for medical image analysis and for solving computer vision problems. Uncovering such hidden features allows deep learning (DL) algorithms to perform better, especially in medical imaging, where accuracy has improved as a result. Earlier detection of cancerous lung nodules is possible by combining multi-feature extraction and classification techniques on image data, and supplying the appropriate features in this way can be a breakthrough for deep learning. One of the greatest challenges is that incorrect identification of malignant nodules leads to a higher false positive rate during prediction; suitable features make the system more precise in prognosis. This paper gives an overview of lung cancer along with the publicly available datasets for research purposes.
It focuses mainly on recent research that combines feature extraction and deep learning algorithms to reduce the false positive rate in automated lung nodule detection. The primary objective is to convey the importance of textural features when combined with different deep learning models, giving insights into their advantages, disadvantages and limitations with respect to possible research gaps. The review compares recent deep learning models with and without feature extraction and concludes that DL models that include feature extraction outperform the others.
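Textural features of the kind this review emphasizes are classically derived from a gray-level co-occurrence matrix (GLCM). As an illustrative sketch only (not the review's own pipeline), the following NumPy code builds a GLCM for one pixel offset on a hypothetical quantized CT patch and computes three standard Haralick-style statistics:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to sum to 1."""
    dr, dc = offset
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[image[r, c], image[r + dr, c + dc]] += 1
    return P / P.sum()

def texture_features(P):
    """Haralick-style contrast, energy and homogeneity from a normalized GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast": float((P * (i - j) ** 2).sum()),
        "energy": float((P ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
    }

# Toy 4-level "CT patch": a perfectly uniform region yields zero contrast.
patch = np.zeros((8, 8), dtype=int)
feats = texture_features(glcm(patch, levels=4))
print(feats["contrast"])  # 0.0 for a perfectly uniform patch
```

In practice, libraries such as scikit-image provide optimized GLCM routines; the point here is only how contrast, energy and homogeneity summarize local gray-level structure before being handed to a classifier.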
Affiliation(s)
- Mathumetha Palani
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
- Sivakumar Rajagopal
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
- Anantha Krishna Chintanpalli
- Department of Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
2. Sun C, Zhu X, Ma C, Wang Z, Yue S, Fu K, Li X, Zhang H, Chen J. Electrical Properties of Human Lung Nodules In Vitro From 100 Hz to 100 MHz. IEEE Trans Biomed Eng 2024; 71:1355-1369. PMID: 38048236. DOI: 10.1109/tbme.2023.3334865.
Abstract
OBJECTIVE: The incidence of pulmonary nodules has been increasing over the past 30 years. Different types of nodules are associated with varying degrees of malignancy and call for different treatment approaches, so correct distinction is essential for optimal treatment and patient recovery. To date, commonly used medical imaging methods have limitations in distinguishing lung nodules. The electrical properties of lung nodules may provide a new approach to this problem; identifying differences, however, is the basis of correct distinction. This paper therefore aims to investigate the differences in electrical properties between various lung nodules. METHODS: Unlike existing studies, benign samples were included in the analysis. A total of 252 specimens were collected, comprising 126 normal tissues, 15 benign nodules, 76 adenocarcinomas, and 35 squamous cell carcinomas. The dispersion properties of each tissue were measured over a frequency range of 100 Hz to 100 MHz, the relaxation mechanism was analyzed by fitting the Cole-Cole plot, and the corresponding equivalent circuit was estimated accordingly. RESULTS: The results confirmed significant differences between malignant and normal tissue. Significant differences between benign and malignant lesions were observed in conductivity and relative permittivity. Adenocarcinomas and squamous cell carcinomas differ significantly in conductivity, first- and second-order differences of conductivity, α-band Cole-Cole plot parameters, and the capacitance of the equivalent circuit. Combining the different features increased the differences between tissue groups, as measured by Euclidean distance, by up to 94.7%. CONCLUSION AND SIGNIFICANCE: In conclusion, the four tissue groups differ in their electrical properties, a characteristic that may lend itself to future non-invasive diagnosis of lung cancer.
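The Cole-Cole analysis the authors describe is typically a nonlinear least-squares fit of a single-dispersion Cole model to the measured spectrum. A minimal SciPy sketch, using synthetic, hypothetical parameter values rather than the paper's measured data (tau is fitted on a log scale for better conditioning):

```python
import numpy as np
from scipy.optimize import curve_fit

def cole_impedance(f, r_inf, r0, log_tau, alpha):
    """Single-dispersion Cole model: Z = R_inf + (R0 - R_inf) / (1 + (j*w*tau)^alpha)."""
    w = 2 * np.pi * f
    tau = 10.0 ** log_tau  # fit tau on a log scale for better conditioning
    return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** alpha)

def stacked(f, r_inf, r0, log_tau, alpha):
    """Stack real and imaginary parts so curve_fit can handle the complex spectrum."""
    z = cole_impedance(f, r_inf, r0, log_tau, alpha)
    return np.concatenate([z.real, z.imag])

# Synthetic, noiseless spectrum over the paper's 100 Hz - 100 MHz range;
# the parameter values below are hypothetical, not measured tissue values.
freqs = np.logspace(2, 8, 60)
true_params = (50.0, 1000.0, -5.0, 0.8)  # R_inf, R0, log10(tau), alpha
data = stacked(freqs, *true_params)

popt, _ = curve_fit(stacked, freqs, data, p0=(80.0, 900.0, -5.5, 0.75))
print(np.round(popt, 3))  # recovers approximately [50, 1000, -5, 0.8]
```

On real measurements the fitted parameters (e.g. the α-band exponent and the equivalent-circuit capacitance derived from them) are the discriminative features the abstract refers to.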
3. Wu R, Liang C, Zhang J, Tan Q, Huang H. Multi-kernel driven 3D convolutional neural network for automated detection of lung nodules in chest CT scans. Biomed Opt Express 2024; 15:1195-1218. PMID: 38404310. PMCID: PMC10890889. DOI: 10.1364/boe.504875.
Abstract
The accurate position detection of lung nodules is crucial in early chest computed tomography (CT)-based lung cancer screening, which helps to improve the survival rate of patients. Deep learning methodologies have shown impressive feature extraction ability in CT image analysis tasks, but it remains a challenge to develop a robust nodule detection model because of the salient morphological heterogeneity of nodules and their complex surrounding environment. In this study, a multi-kernel driven 3D convolutional neural network (MK-3DCNN) is proposed for computerized nodule detection in CT scans. In the MK-3DCNN, a residual learning-based encoder-decoder architecture is introduced to exploit the multi-layer features of the deep model. Considering the various nodule sizes and shapes, a multi-kernel joint learning block is developed to capture 3D multi-scale spatial information from nodule CT images, which is conducive to improving nodule detection performance. Furthermore, a multi-mode mixed pooling strategy is designed to replace conventional single-mode pooling; it integrates max pooling, average pooling, and center-cropping pooling to obtain more comprehensive nodule descriptions from complicated CT images. Experimental results on the public dataset LUNA16 show that the proposed MK-3DCNN achieves more competitive nodule detection performance than several state-of-the-art algorithms, and results on our clinical dataset CQUCH-LND indicate that the MK-3DCNN holds good prospects for clinical practice.
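The multi-mode mixed pooling idea, blending max, average, and center-cropping pooling, can be illustrated outside a network. The sketch below is plain NumPy with hypothetical equal weights (in the MK-3DCNN the pooling operates on learned feature maps inside the model), applied over non-overlapping 3D windows:

```python
import numpy as np

def pool3d(x, k, mode):
    """Non-overlapping k*k*k pooling of a 3D volume: 'max', 'avg', or 'center' (center-crop)."""
    d, h, w = (s // k for s in x.shape)
    v = x[:d * k, :h * k, :w * k].reshape(d, k, h, k, w, k)
    if mode == "max":
        return v.max(axis=(1, 3, 5))
    if mode == "avg":
        return v.mean(axis=(1, 3, 5))
    # 'center': keep the central voxel of each window (center-cropping pooling)
    c = k // 2
    return v[:, c, :, c, :, c]

def mixed_pool3d(x, k=2, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Multi-mode mixed pooling: weighted blend of max, average and center-crop pooling."""
    wm, wa, wc = weights
    return wm * pool3d(x, k, "max") + wa * pool3d(x, k, "avg") + wc * pool3d(x, k, "center")

vol = np.arange(4 ** 3, dtype=float).reshape(4, 4, 4)
out = mixed_pool3d(vol, k=2)
print(out.shape)  # (2, 2, 2)
```

Max pooling keeps the strongest response, average pooling keeps context, and center cropping preserves exact spatial position; blending the three is what gives the "more comprehensive" descriptor the abstract mentions.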
Affiliation(s)
- Ruoyu Wu
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Changyu Liang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Jiuquan Zhang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- QiJuan Tan
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Hong Huang
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
4. Ruksakulpiwat S, Phianhasin L, Benjasirisan C, Schiltz NK. Using Neural Networks Algorithm in Ischemic Stroke Diagnosis: A Systematic Review. J Multidiscip Healthc 2023; 16:2593-2602. PMID: 37674890. PMCID: PMC10478777. DOI: 10.2147/jmdh.s421280.
Abstract
Objective: To evaluate the evidence on artificial neural network (NN) techniques for diagnosing ischemic stroke (IS) in adults. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed for this review. PubMed, MEDLINE, Web of Science, and CINAHL Plus Full Text were searched to identify studies published between 2018 and 2022 that reported the use of NNs in IS diagnosis. The Critical Appraisal Checklist for Diagnostic Test Accuracy Studies was adopted to evaluate the included studies. Results: Nine studies were included in this systematic review. Non-contrast computed tomography (NCCT) (n = 4 studies, 26.67%) and computed tomography angiography (CTA) (n = 4 studies, 26.67%) were among the most common input features. Five algorithms were used across the included studies. Deep convolutional neural networks (DCNNs) were the most commonly used for IS diagnosis (n = 3 studies, 33.33%). Other algorithms, including three-dimensional convolutional neural networks (3D-CNNs) (n = 2 studies, 22.22%), two-stage deep convolutional neural networks (two-stage DCNNs) (n = 2 studies, 22.22%), the local higher-order singular value decomposition denoising algorithm (GL-HOSVD) (n = 1 study, 11.11%), and a new deep learning-based deconvolution network model (AD-CNNnet) (n = 1 study, 11.11%), were also utilized for IS diagnosis. Conclusion: The number of studies demonstrating the effectiveness of NN algorithms in IS diagnosis has increased. Still, more feasibility and cost-effectiveness evaluations are needed to support the implementation of NNs for IS diagnosis in clinical settings.
Affiliation(s)
- Suebsarn Ruksakulpiwat
- Department of Medical Nursing, Faculty of Nursing, Mahidol University, Bangkok, Thailand
- Lalipat Phianhasin
- Department of Medical Nursing, Faculty of Nursing, Mahidol University, Bangkok, Thailand
- Nicholas K Schiltz
- Frances Payne Bolton School of Nursing, Case Western Reserve University, Cleveland, OH, USA
5. Gugulothu VK, Balaji S. An automatic classification of pulmonary nodules for lung cancer diagnosis using novel LLXcepNN classifier. J Cancer Res Clin Oncol 2023; 149:6049-6057. PMID: 36645508. DOI: 10.1007/s00432-022-04539-4.
Abstract
INTRODUCTION: A critical step toward improving diagnosis and extending patient survival is benign-malignant pulmonary nodule (PN) classification at early detection. Owing to the noise in computed tomography (CT) images, the prevailing lung nodule (LN) detection techniques exhibit wide variation in prediction accuracy. METHODS: A novel nodule detection and classification algorithm for the early diagnosis of lung cancer (LC) is therefore proposed. Initially, employing the Adaptive Mode Otsu Binarization (AMOB) technique, the lung volumes (LVs) are extracted from the image, and the extracted lung regions are pre-processed. Lung nodules are then detected and segmented using the Geodesic Fuzzy C-Means (GFCM) clustering segmentation algorithm. Next, the vital features are extracted, and the nodules are classified with the Logarithmic Layer Xception Neural Network (LLXcepNN) classifier based on the extracted features. RESULTS: The proposed classifier labels the nodules as benign nodules (BN) or malignant nodules (MN). Lastly, the lung CT images are scrutinized. DISCUSSION: When weighed against the prevailing techniques, the outcomes show that the proposed system achieves an enhanced classification accuracy rate.
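The AMOB step builds on Otsu binarization. As a point of reference only, a minimal NumPy implementation of classic (non-adaptive) Otsu thresholding, applied to a hypothetical bimodal intensity sample, looks like this:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Classic Otsu: pick the threshold that maximizes the between-class variance."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(levels))  # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # empty classes contribute nothing
    return int(np.argmax(sigma_b))

# Hypothetical bimodal sample standing in for a CT slice:
# a dark lung field (~30-50) and bright tissue (~190-210).
rng = np.random.default_rng(0)
img = np.concatenate([
    rng.integers(30, 50, 500),    # dark mode
    rng.integers(190, 210, 500),  # bright mode
]).astype(np.int64)
t = otsu_threshold(img)
mask = img > t  # the threshold lands in the gap, isolating the bright structures
```

The "adaptive mode" variant in the paper adjusts this binarization locally; the global version above shows the core criterion being optimized.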
Affiliation(s)
- Vijay Kumar Gugulothu
- Department of Computer Science & Engineering, Koneru Lakshmaiah Education Foundation, Deemed to be University and Govt. Polytechnic, Masab Tank, Hyderabad, Telangana, India
- Savadam Balaji
- Department of Computer Science & Engineering, Koneru Lakshmaiah Education Foundation, Deemed to be University and Govt. Polytechnic, Masab Tank, Hyderabad, Telangana, India
6. Javed MA, Bin Liaqat H, Meraj T, Alotaibi A, Alshammari M. Identification and Classification of Lungs Focal Opacity Using CNN Segmentation and Optimal Feature Selection. Comput Intell Neurosci 2023; 2023:6357252. PMID: 37538561. PMCID: PMC10396675. DOI: 10.1155/2023/6357252.
Abstract
Lung cancer is one of the deadliest cancers around the world, with a high mortality rate compared with other cancers. A lung cancer patient's survival probability in the late stages is very low, but if the disease can be detected early, the survival rate can be improved. Diagnosing lung cancer early is a complicated task because of the visual similarity of lung nodules to the trachea, vessels, and other surrounding tissues, which leads to the misclassification of lung nodules. Therefore, correct identification and classification of nodules is required. Previous studies have used noisy features, which compromises their results. To address this problem, a predictive model is proposed to accurately detect and classify lung nodules. In the proposed framework, semantic segmentation is first performed to identify the nodules in images from the Lung Image Database Consortium (LIDC) dataset. Optimal features for classification, including histograms of oriented gradients (HOG), local binary patterns (LBP), and geometric features, are extracted after nodule segmentation. The results show that the support vector machine identified nodules better than the other classifiers, achieving the highest accuracy of 97.8% with a sensitivity of 100%, specificity of 93%, and false positive rate of 6.7%.
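Of the optimal features listed, local binary patterns are the easiest to sketch directly. The following NumPy code is an illustrative toy, not the paper's implementation: it computes basic 8-neighbour LBP codes and a normalized histogram that could be fed, together with HOG and geometric features, to an SVM:

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    # Clockwise neighbours starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        nb = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        code |= ((nb >= c).astype(np.uint8) << bit)  # set bit where neighbour >= centre
    return code

def lbp_histogram(img, bins=256):
    """Normalized LBP-code histogram - a texture descriptor for a classifier such as an SVM."""
    h = np.bincount(lbp8(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()

flat = np.full((6, 6), 7, dtype=np.int32)
print(lbp8(flat)[0, 0])  # 255: every neighbour >= centre in a flat patch
```

In a pipeline like the one described, the LBP histogram would be concatenated with HOG and geometric descriptors per segmented nodule before SVM training.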
Affiliation(s)
- Hannan Bin Liaqat
- Department of Information Technology, Division of Science and Technology University of Education, Township Campus Lahore, Lahore, Pakistan
- Talha Meraj
- Department of Computer Science, COMSATS University Islamabad—Wah Campus, Wah Cantt, Rawalpindi 47040, Pakistan
- Aziz Alotaibi
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Majid Alshammari
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
7. Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. PMID: 36428662. PMCID: PMC9688236. DOI: 10.3390/cancers14225569.
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and in monitoring lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, however, including the inability to classify cancer images automatically, which makes them unsuitable for patients with other pathologies. It is therefore urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textural data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
8. Palm V, Norajitra T, von Stackelberg O, Heussel CP, Skornitzke S, Weinheimer O, Kopytova T, Klein A, Almeida SD, Baumgartner M, Bounias D, Scherer J, Kades K, Gao H, Jäger P, Nolden M, Tong E, Eckl K, Nattenmüller J, Nonnenmacher T, Naas O, Reuter J, Bischoff A, Kroschke J, Rengier F, Schlamp K, Debic M, Kauczor HU, Maier-Hein K, Wielpütz MO. AI-Supported Comprehensive Detection and Quantification of Biomarkers of Subclinical Widespread Diseases at Chest CT for Preventive Medicine. Healthcare (Basel) 2022; 10:2166. PMID: 36360507. PMCID: PMC9690402. DOI: 10.3390/healthcare10112166.
Abstract
Automated image analysis plays an increasing role in radiology by detecting and quantifying image features beyond the perception of the human eye. Common AI-based approaches address a single medical problem, although patients often present with multiple interacting, frequently subclinical medical conditions. A holistic imaging diagnostics tool based on artificial intelligence (AI) has the potential to provide an overview of multi-system comorbidities within a single workflow. An interdisciplinary, multicentric team of medical experts and computer scientists designed a pipeline comprising AI-based tools for the automated detection, quantification and characterization of the most common pulmonary, metabolic, cardiovascular and musculoskeletal comorbidities in chest computed tomography (CT). To provide a comprehensive evaluation of each patient, a multidimensional workflow was established with algorithms operating synchronously on a decentralized Joined Imaging Platform (JIP). The results for each patient are transferred to a dedicated database and summarized as a structured report with available reference values and annotated sample images of detected pathologies. This tool thus allows the comprehensive, large-scale analysis of imaging biomarkers of comorbidities in chest CT, first in research and then in clinical routine. Moreover, it accommodates the quantitative analysis and classification of each pathology, providing integral diagnostic and prognostic value, and subsequently leading to improved preventive patient care and further possibilities for future studies.
Affiliation(s)
- Viktoria Palm
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Tobias Norajitra
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, University Hospital of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
| | - Oyunbileg von Stackelberg
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Claus P. Heussel
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Stephan Skornitzke
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Oliver Weinheimer
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Taisiya Kopytova
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
| | - Andre Klein
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Medical Faculty, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
| | - Silvia D. Almeida
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Medical Faculty, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
| | - Michael Baumgartner
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
| | - Dimitrios Bounias
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Medical Faculty, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
| | - Jonas Scherer
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Medical Faculty, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
| | - Klaus Kades
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
| | - Hanno Gao
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
| | - Paul Jäger
- Interactive Machine Learning Research Group, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
| | - Marco Nolden
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, University Hospital of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
| | - Elizabeth Tong
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Kira Eckl
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Johanna Nattenmüller
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine Freiburg, University of Freiburg, Hugstetter Str. 55, 79106 Freiburg, Germany
| | - Tobias Nonnenmacher
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Omar Naas
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Julia Reuter
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Arved Bischoff
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Jonas Kroschke
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Fabian Rengier
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Kai Schlamp
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Manuel Debic
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Hans-Ulrich Kauczor
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
| | - Klaus Maier-Hein
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Division of Medical Imaging Computing, German Cancer Research Center Heidelberg, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, University Hospital of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Mark O. Wielpütz
- Department of Diagnostic and Interventional Radiology, Subdivision of Pulmonary Imaging, University Hospital of Heidelberg, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Translational Lung Research Center Heidelberg (TLRC), German Center for Lung Research (DZL), Im Neuenheimer Feld 156, 69120 Heidelberg, Germany
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik at the University Hospital of Heidelberg, Röntgenstr. 1, 69126 Heidelberg, Germany
9
Gao W, Wang C, Li Q, Zhang X, Yuan J, Li D, Sun Y, Chen Z, Gu Z. Application of medical imaging methods and artificial intelligence in tissue engineering and organ-on-a-chip. Front Bioeng Biotechnol 2022; 10:985692. [PMID: 36172022 PMCID: PMC9511994 DOI: 10.3389/fbioe.2022.985692] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Accepted: 08/08/2022] [Indexed: 12/02/2022] Open
Abstract
Organ-on-a-chip (OOC) is a new type of biochip technology. Various types of OOC systems have been developed rapidly in the past decade and have found important applications in drug screening and precision medicine. However, owing to the structural complexity of both the chip body and the engineered tissue inside it, the imaging and analysis of OOC remain a major challenge for biomedical researchers. Considering that medical imaging is moving toward higher spatial and temporal resolution and finding more applications in tissue engineering, this paper reviews medical imaging methods, including CT, micro-CT, MRI, small-animal MRI, and OCT, and introduces the application of 3D printing in tissue engineering and OOC, in which medical imaging plays an important role. The achievements of medical imaging-assisted tissue engineering are reviewed, and the potential applications of medical imaging in organoids and OOC are discussed. Moreover, artificial intelligence, especially deep learning, has demonstrated excellent performance in the analysis of medical images; we also present the application of artificial intelligence to the image analysis of 3D tissues, especially organoids developed in novel OOC systems.
Affiliation(s)
- Wanying Gao
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Chunyan Wang
- State Key Laboratory of Space Medicine Fundamentals and Application, Chinese Astronaut Science Researching and Training Center, Beijing, China
- Qiwei Li
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Xijing Zhang
- Central Research Institute, United Imaging Group, Shanghai, China
- Jianmin Yuan
- Central Research Institute, United Imaging Group, Shanghai, China
- Dianfu Li
- The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yu Sun
- International Children’s Medical Imaging Research Laboratory, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Zaozao Chen
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Zhongze Gu
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
10
Iqbal A, Usman M, Ahmed Z. An efficient deep learning-based framework for tuberculosis detection using chest X-ray images. Tuberculosis (Edinb) 2022; 136:102234. [PMID: 35872406 DOI: 10.1016/j.tube.2022.102234] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 06/15/2022] [Accepted: 07/13/2022] [Indexed: 10/17/2022]
Abstract
Early diagnosis of tuberculosis (TB) is an essential and challenging task for preventing disease, decreasing mortality risk, and stopping transmission to other people. The chest X-ray (CXR) is the top choice for lung disease screening in clinics because it is cost-effective and easily accessible in most countries. However, manual screening of CXR images is a heavy burden for radiologists, resulting in high inter-observer variance. Hence, proposing a cost-effective and accurate computer-aided diagnosis (CAD) system for TB is a challenge for researchers. In this research, we propose an efficient and straightforward deep learning network called TBXNet, which can accurately classify a large number of TB CXR images. The network is based on five dual-convolution blocks with filter sizes of 32, 64, 128, 256 and 512, respectively. The dual-convolution blocks are fused with a pre-trained layer in the fusion layer of the network, and the pre-trained layer is utilized for transferring pre-trained knowledge into the fusion layer. The proposed TBXNet achieved an accuracy of 98.98% on Dataset A and 99.17% on Dataset B. Furthermore, the generalizability of the proposed work was validated on Dataset C, which comprises normal, tuberculous, pneumonia, and COVID-19 CXR images. TBXNet obtained the highest results in Precision (95.67%), Recall (95.10%), F1-score (95.38%), and Accuracy (95.10%), comparatively better than all other state-of-the-art methods.
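The metrics reported above follow directly from the confusion matrix. As an illustrative sketch (not the authors' code), accuracy, precision, recall, and F1-score for binary labels can be computed like this:

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1-score for binary labels
    (1 = positive class, e.g. TB; 0 = normal)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

The same four quantities appear throughout the detection papers in this list, so the sketch applies beyond this one entry.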
Affiliation(s)
- Ahmed Iqbal
- Predictive Analytics Lab, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad, Pakistan.
- Muhammad Usman
- Predictive Analytics Lab, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad, Pakistan
- Zohair Ahmed
- Predictive Analytics Lab, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad, Pakistan
11
Zhang J, Tao X, Jiang Y, Wu X, Yan D, Xue W, Zhuang S, Chen L, Luo L, Ni D. Application of Convolution Neural Network Algorithm Based on Multicenter ABUS Images in Breast Lesion Detection. Front Oncol 2022; 12:938413. [PMID: 35898876 PMCID: PMC9310547 DOI: 10.3389/fonc.2022.938413] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Accepted: 05/30/2022] [Indexed: 11/24/2022] Open
Abstract
Objective: This study aimed to evaluate a convolutional neural network algorithm for breast lesion detection, developed on the basis of multicenter ABUS image data and YOLOv5. Methods: A total of 741 cases with 2,538 ABUS volumes, recruited from 7 hospitals between October 2016 and December 2020, were analyzed. Of these, 452 volumes from 413 cases were used as internal validation data, and 2,086 volumes from 328 cases were used as external validation data. There were 1,178 breast lesions in the 413 patients (161 malignant and 1,017 benign) and 1,936 lesions in the 328 patients (57 malignant and 1,879 benign). The efficiency and accuracy of the algorithm in detecting lesions were analyzed under different allowable false-positive values and lesion sizes, and the indicators on the internal and external validation data were compared. Results: The algorithm had high sensitivity for all categories of lesions on both internal and external validation data. The overall detection rate was 78.1% and 71.2% in the internal and external validation sets, respectively. The algorithm detected more lesions with increasing nodule size (87.4% for lesions ≥10 mm but less than 50% for lesions <10 mm). The detection rate of BI-RADS 4/5 lesions was higher than that of BI-RADS 3 or 2 lesions (96.5% vs 79.7% vs 74.7% internally; 95.8% vs 74.7% vs 88.4% externally). Furthermore, detection performance was better for malignant than benign nodules (98.1% vs 74.9% internally; 98.2% vs 70.4% externally). Conclusions: The algorithm showed good detection efficiency in the internal and external validation sets, especially for category 4/5 lesions and malignant lesions. However, deficiencies remain in detecting category 2 and 3 lesions and lesions smaller than 10 mm.
Affiliation(s)
- Jianxing Zhang
- Department of Medical Imaging Center, The First Affiliated Hospital, Jinan University, Guangzhou, China
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xing Tao
- Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China
- Yanhui Jiang
- Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China
- Xiaoxi Wu
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Dan Yan
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wen Xue
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Shulian Zhuang
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Ling Chen
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Liangping Luo
- Department of Medical Imaging Center, The First Affiliated Hospital, Jinan University, Guangzhou, China
- Dong Ni
- Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China
12
A hybrid approach for lung cancer diagnosis using optimized random forest classification and K-means visualization algorithm. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-022-00679-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
13
Huang H, Wu R, Li Y, Peng C. Self-Supervised Transfer Learning Based on Domain Adaptation for Benign-Malignant Lung Nodule Classification on Thoracic CT. IEEE J Biomed Health Inform 2022; 26:3860-3871. [PMID: 35503850 DOI: 10.1109/jbhi.2022.3171851] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Spatial heterogeneity is an important indicator of the malignancy of lung nodules in lung cancer diagnosis. Compared with 2D nodule CT images, 3D volumes containing the entire nodule hold richer discriminative information. However, for deep learning methods driven by massive data, effectively capturing the 3D discriminative features of nodules from limited labeled samples is a challenging task. Unlike previous approaches that apply transfer learning in a 2D pattern or train 3D models from scratch, we develop a self-supervised transfer learning based on domain adaptation (SSTL-DA) 3D CNN framework for benign-malignant lung nodule classification. First, a data pre-processing strategy termed adaptive slice selection (ASS) is developed to eliminate redundant noise from the input samples containing lung nodules. Then, a self-supervised learning network is constructed to learn robust image representations from CT images. Finally, a transfer learning method based on domain adaptation is designed to obtain discriminative features for classification. The proposed SSTL-DA method was assessed on the LIDC-IDRI benchmark dataset, obtaining an accuracy of 91.07% and an AUC of 95.84%. These results demonstrate that the SSTL-DA model achieves competitive classification performance compared with state-of-the-art approaches.
14
Zhang S, Lv B, Zheng X, Li Y, Ge W, Zhang L, Mo F, Qiu J. Dosimetric Study of Deep Learning-Guided ITV Prediction in Cone-beam CT for Lung Stereotactic Body Radiotherapy. Front Public Health 2022; 10:860135. [PMID: 35392465 PMCID: PMC8980420 DOI: 10.3389/fpubh.2022.860135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 02/21/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose: The purpose of this study was to evaluate the accuracy of a lung stereotactic body radiotherapy (SBRT) treatment plan targeting a newly predicted internal target volume (ITVpredict) and the feasibility of its clinical application. ITVpredict was automatically generated by our in-house deep learning model from a cone-beam CT (CBCT) image database. Method: A retrospective study of 45 patients who underwent SBRT was conducted, and a Mask R-CNN-based model was used to predict the internal target volume (ITV) from the CBCT image database. The geometric accuracy of ITVpredict was verified by the Dice Similarity Coefficient (DSC), 3D Motion Range (R3D), Relative Volume Index (RVI), and Hausdorff Distance (HD). PTVpredict was generated from ITVpredict, which was registered and then projected onto free-breathing CT (FBCT) images; PTVFBCT was expanded with a margin from the gross tumor volume on FBCT images (GTVFBCT). Treatment plans targeting PTVpredict and PTVFBCT were re-established, and the dosimetric parameters, including the ratio of the volume receiving at least the prescribed dose to the PTV volume (R100%), the ratio of the volume receiving at least 50% of the prescribed dose to the PTV volume as defined in the Radiation Therapy Oncology Group (RTOG) 0813 trial (R50%), the Gradient Index (GI), and the maximum dose 2 cm from the PTV (D2cm), were evaluated for Plan4DCT, the plan based on PTVpredict (Planpredict), and the plan based on PTVFBCT (PlanFBCT). Result: The geometric results showed a good correlation between ITVpredict and the ITV on 4-dimensional CT (ITV4DCT; DSC = 0.83 ± 0.18). However, the average volume of ITVpredict was 10% less than that of ITV4DCT (p = 0.333). No significant difference in dose coverage was found for V100% of the ITV: 99.98 ± 0.04% for ITV4DCT vs. 97.56 ± 4.71% for ITVpredict (p = 0.162). PTV dosimetry parameters, including R100%, R50%, GI, and D2cm, showed no statistically significant differences between the plans (p > 0.05). Conclusion: Dosimetric parameters of Planpredict are clinically comparable to those of the original Plan4DCT. This study confirmed that treatment plans based on the ITVpredict produced by our model can automatically meet clinical requirements. Thus, for patients undergoing lung SBRT, the model has great potential for ITV contouring from CBCT images in treatment planning.
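The DSC used for the geometric comparison above is a simple overlap ratio, 2|A ∩ B| / (|A| + |B|). A minimal sketch for two binary masks (illustrative, not the study's code):

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for two binary masks/volumes."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A DSC of 0.83, as reported above, therefore means the predicted and 4DCT volumes share most but not all of their voxels.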
15
Min Y, Hu L, Wei L, Nie S. Computer-aided detection of pulmonary nodules based on convolutional neural networks: a review. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac568e] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 02/18/2022] [Indexed: 02/08/2023]
Abstract
Computer-aided detection (CADe) technology has been proven to increase the detection rate of pulmonary nodules, which has important clinical significance for the early diagnosis of lung cancer. In this study, we systematically review the latest techniques in pulmonary nodule CADe based on deep learning models with convolutional neural networks in computed tomography images. First, brief descriptions and popular architectures of convolutional neural networks are introduced. Second, several common public databases and evaluation metrics are briefly described. Third, state-of-the-art approaches with excellent performance are selected. Subsequently, we combine the clinical diagnostic process and the traditional four steps of pulmonary nodule CADe into two stages, namely, data preprocessing and image analysis. Further, the major optimizations of deep learning models and algorithms are highlighted according to the progressive evaluation effect of each method, and some clinical evidence is added. Finally, the various methods are summarized and compared, and the innovative or valuable contributions of each are expected to guide future research directions. The analysis shows that deep learning-based methods have significantly transformed the detection of pulmonary nodules, and the design of these methods can be inspired by clinical imaging diagnostic procedures. Moreover, focusing on the image analysis stage yields the greatest returns; in particular, optimal results can be achieved by optimizing the candidate nodule generation and false positive reduction steps. End-to-end methods, with faster operation and lower computational cost, are superior to other methods in CADe of pulmonary nodules.
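The two-stage structure this review emphasizes (candidate nodule generation followed by false-positive reduction) can be sketched schematically. The intensity thresholding and the stand-in scoring function below are illustrative assumptions only; real systems use trained detectors and CNN classifiers for the two stages:

```python
import numpy as np

def generate_candidates(volume, intensity_thresh):
    """Stage 1 (candidate generation): flag voxels brighter than a threshold.
    Returns candidate coordinates; a real system would group voxels into nodules."""
    return list(zip(*np.where(volume > intensity_thresh)))

def reduce_false_positives(candidates, score_fn, score_thresh):
    """Stage 2 (false-positive reduction): keep candidates whose classifier
    score exceeds a threshold (score_fn stands in for a trained CNN)."""
    return [c for c in candidates if score_fn(c) >= score_thresh]

# Toy 2D "scan": two bright spots, one of which the stand-in scorer rejects.
scan = np.zeros((5, 5))
scan[1, 1] = 0.9   # true nodule
scan[3, 3] = 0.8   # vessel crossing that mimics a nodule
cands = generate_candidates(scan, 0.5)
kept = reduce_false_positives(cands, lambda c: 0.95 if c == (1, 1) else 0.2, 0.5)
```

The review's observation is that most of the achievable gain comes from improving exactly these two steps.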
16
Xu Y, Wang S, Sun X, Yang Y, Fan J, Jin W, Li Y, Su F, Zhang W, Cui Q, Hu Y, Wang S, Zhang J, Chen C. Identification of Benign and Malignant Lung Nodules in CT Images Based on Ensemble Learning Method. Interdiscip Sci 2022; 14:130-140. [PMID: 34727340 DOI: 10.1007/s12539-021-00472-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Revised: 08/02/2021] [Accepted: 08/03/2021] [Indexed: 12/19/2022]
Abstract
BACKGROUND AND OBJECTIVE Against the background of an urgent need for computer-aided technology to provide physicians with objective decision support, and aiming to reduce the false-positive rate of CT nodule detection and improve the accuracy of lung nodule recognition, this paper puts forward an ensemble learning method to distinguish between malignant and benign pulmonary nodules. METHODS First, a multi-layer feature-fusion YOLOv3 network trained on a public data set is used to detect lung nodules. Second, a CNN is trained to differentiate benign from malignant pulmonary nodules. Then, following the idea of ensemble learning, the confidence probabilities of the above two models and the labels of the training set are taken as features to build a logistic regression model. Finally, two test sets (a public data set and a private data set) are evaluated, and the confidence probabilities output by the two models are fused by the established logistic regression model to determine benign and malignant pulmonary nodules. RESULTS The trained YOLOv3 network detected 356 and 314 pulmonary nodules in the chest CT images of the public and private test sets, respectively; accuracy, sensitivity and specificity on the two test sets were 80.97%, 81.63%, 78.75% and 79.69%, 86.59%, 72.16%, respectively. When the CNN-trained benign-malignant discrimination model was applied to the two test sets, accuracy, sensitivity and specificity were 90.12%, 90.66%, 89.47% and 88.57%, 85.62%, 90.87%, respectively. The fused model based on the YOLOv3 network and the CNN, tested on the two test sets, achieved accuracy, sensitivity and specificity of 93.82%, 94.85%, 92.59% and 92.31%, 92.68%, 91.89%, respectively. CONCLUSION The ensemble learning model is more effective than the YOLOv3 network and the CNN alone in removing false positives, and its accuracy in identifying pulmonary nodules is higher than that of the other two networks.
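The fusion step described above, in which the confidence probabilities of the two base models become the input features of a logistic regression, can be sketched with a hand-rolled gradient-descent fit. This is an illustrative stand-in for the paper's stacking model; the toy data are invented:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit w, b for p = sigmoid(X @ w + b) by batch gradient descent.
    Each row of X holds the confidence scores of the two base models
    (a YOLOv3-style detector and a CNN classifier)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y                      # gradient of log-loss w.r.t. logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    """Fused decision: 1 = malignant, 0 = benign."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)

# Toy stacking data: columns = confidence of model A, confidence of model B.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.9],
              [0.2, 0.1], [0.1, 0.3], [0.3, 0.2]])
y = np.array([1, 1, 1, 0, 0, 0])
w, b = train_logistic(X, y)
```

The design point is that the meta-model only sees the base models' probabilities, so disagreements between detector and classifier are arbitrated by learned weights rather than a fixed rule.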
Affiliation(s)
- Yifei Xu
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- School of Information Engineering, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Shijie Wang
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Xiaoqian Sun
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Yanjun Yang
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Jiaxing Fan
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Wenwen Jin
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Yingyue Li
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Fangchu Su
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Weihua Zhang
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China
- Qingli Cui
- Henan Cancer Hospital, Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, 450004, Henan, China
- Yanhui Hu
- Henan Cancer Hospital, Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, 450004, Henan, China
- Sheng Wang
- Henan Cancer Hospital, Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, 450004, Henan, China
- Jianhua Zhang
- Medical Engineering Technology and Data Mining Institute, Zhengzhou University, Zhengzhou, 450001, Henan, China.
- Chuanliang Chen
- Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Zhengzhou, 450003, Henan, China.
17
Zhang H, Peng Y, Guo Y. Pulmonary nodules detection based on multi-scale attention networks. Sci Rep 2022; 12:1466. [PMID: 35087078 PMCID: PMC8795451 DOI: 10.1038/s41598-022-05372-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Accepted: 01/10/2022] [Indexed: 12/24/2022] Open
Abstract
Pulmonary nodules are the main manifestation of early lung cancer. Therefore, accurate detection of nodules in CT images is vital for lung cancer diagnosis. A 3D automatic detection system for pulmonary nodules based on multi-scale attention networks is proposed in this paper to exploit the multi-scale features of nodules and avoid network over-fitting. The system consists of two parts: nodule candidate detection (determining the locations of candidate nodules) and false positive reduction (minimizing the number of false positive nodules). Specifically, a 3D multi-scale attention block is designed using a Res2Net structure, pre-activation, and a convolutional quadruplet attention module. It makes full use of the multi-scale information of pulmonary nodules by extracting multi-scale features at a granular level, and it alleviates over-fitting through pre-activation. A U-Net-like encoder-decoder structure combined with multi-scale attention blocks serves as the backbone network of Faster R-CNN for detection of candidate nodules. Then a 3D deep convolutional neural network based on multi-scale attention blocks is designed for false positive reduction. Extensive experiments on the LUNA16 and TianChi competition datasets demonstrate that the proposed approach effectively improves detection sensitivity and controls the number of false positive nodules, giving it clinical application value.
Affiliation(s)
- Hui Zhang
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yanjun Peng
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China.
- Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China.
- Yanfei Guo
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
18
Kavithaa G, Balakrishnan P, Yuvaraj SA. Lung Cancer Detection and Improving Accuracy Using Linear Subspace Image Classification Algorithm. Interdiscip Sci 2021; 13:779-786. [PMID: 34351570 DOI: 10.1007/s12539-021-00468-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Revised: 07/15/2021] [Accepted: 07/23/2021] [Indexed: 06/13/2023]
Abstract
The ability to identify lung cancer at an early stage is critical because it can help patients live longer. However, predicting the affected area while diagnosing cancer is a major challenge. An intelligent computer-aided diagnostic system can be utilized to detect and diagnose lung cancer by locating the damaged region. The suggested Linear Subspace Image Classification Algorithm (LSICA) classifies images in a linear subspace. The methodology accurately identifies the damaged region in three steps: image enhancement, segmentation, and classification. A spatial image clustering technique is used to quickly segment and identify the affected area in the image, and LSICA is then used to determine the accuracy value of the affected region for classification. The result is a lung cancer detection system with classification-dependent image processing for lung CT imaging. All programs were implemented in MATLAB. The proposed system is designed to identify the affected region easily with the help of the classification technique and thereby obtain more accurate results.
Affiliation(s)
- G Kavithaa
- Department of Electronics and Communication Engineering, Government College of Engineering, Salem, Tamilnadu, India.
- P Balakrishnan
- Malla Reddy Engineering College for Women (Autonomous), Hyderabad, 500100, India
- S A Yuvaraj
- Department of ECE, GRT Institute of Engineering and Technology, Tiruttani, Tamilnadu, India
19
Zhang X, Liu X, Zhang B, Dong J, Zhang B, Zhao S, Li S. Accurate segmentation for different types of lung nodules on CT images using improved U-Net convolutional network. Medicine (Baltimore) 2021; 100:e27491. [PMID: 34622882 PMCID: PMC8500581 DOI: 10.1097/md.0000000000027491] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/27/2020] [Revised: 09/02/2021] [Accepted: 09/23/2021] [Indexed: 01/05/2023] Open
Abstract
ABSTRACT Since lung nodules on computed tomography (CT) images can have different shapes, contours, textures, or locations and may be attached to neighboring blood vessels or pleural surfaces, accurate segmentation is still challenging. In this study, we propose an accurate segmentation method based on an improved U-Net convolutional network for different types of lung nodules on CT images. The first phase is to segment the lung parenchyma and correct the lung contour by applying an α-hull algorithm. The second phase is to extract pairs of image patches containing lung nodules in the center together with the corresponding ground truth, and to build an improved U-Net network with batch normalization. Extensive experiments show that the Dice loss yields better segmentation performance than mean-square-error and binary cross-entropy losses. The α-hull algorithm and batch normalization improve segmentation performance effectively. Our best Dice similarity coefficient (0.8623) is also more competitive than other state-of-the-art segmentation algorithms. To segment different types of lung nodules accurately, we propose an improved U-Net network that improves segmentation accuracy effectively. Moreover, this work has practical value in helping radiologists segment lung nodules and diagnose lung cancer.
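Batch normalization, which the authors introduce into their improved U-Net, normalizes each feature over the batch before applying a learnable scale and shift. A minimal forward-pass sketch (illustrative only; deep learning frameworks also track running statistics for inference):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature (column) over the batch to zero mean and
    unit variance, then apply learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy batch of 3 samples with 2 features on very different scales.
x = np.array([[1.0, 50.0],
              [3.0, 60.0],
              [5.0, 70.0]])
out = batch_norm_forward(x, gamma=np.ones(2), beta=np.zeros(2))
```

By rescaling activations at each layer, this tends to stabilize training of deeper segmentation networks such as the improved U-Net described here.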
20
Quantitative analysis of metastatic breast cancer in mice using deep learning on cryo-image data. Sci Rep 2021; 11:17527. [PMID: 34471169 PMCID: PMC8410829 DOI: 10.1038/s41598-021-96838-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Accepted: 08/17/2021] [Indexed: 11/30/2022] Open
Abstract
Cryo-imaging sections and images a whole mouse, providing ~120 GB of microscopic 3D color anatomy and fluorescence images, which makes fully manual analysis of metastases an onerous task. A convolutional neural network (CNN)-based metastasis segmentation algorithm comprised three steps: candidate segmentation, candidate classification, and semi-automatic correction of the classification result. Candidate segmentation generated >5,000 candidates in each of the breast cancer-bearing mice. A random forest classifier with multi-scale CNN features and hand-crafted intensity and morphology features achieved sensitivity, specificity, and area under the receiver operating characteristic curve (ROC AUC) of 0.8645 ± 0.0858, 0.9738 ± 0.0074, and 0.9709 ± 0.0182, respectively, with fourfold cross-validation. Classification results guided manual correction by an expert using our in-house MATLAB software. Finally, 225, 148, 165, and 344 metastases were identified in the four cancer-bearing mice. With CNN-based segmentation, human intervention time was reduced from >12 h to ~2 h. We demonstrated that 4T1 breast cancer metastases spread to the lung, liver, bone, and brain. Assessing the size and distribution of metastases proves the usefulness and robustness of cryo-imaging and our software for evaluating new cancer imaging and therapeutics technologies. Application of the method, with only minor modification, to a pancreatic metastatic cancer model demonstrated generalizability to other tumor models.
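The ROC AUC reported above can be computed without tracing an explicit curve, via the Mann-Whitney rank identity: AUC equals the probability that a randomly chosen positive candidate scores higher than a randomly chosen negative one. A small sketch (not the study's MATLAB pipeline):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum identity: fraction of (positive, negative) pairs
    where the positive outscores the negative, counting ties as half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

An AUC near 0.97, as reported, means almost every true metastasis candidate is ranked above almost every false one.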
21
Zhang Y, Liu M, Yu F, Zeng T, Wang Y. An O-shape Neural Network With Attention Modules to Detect Junctions in Biomedical Images Without Segmentation. IEEE J Biomed Health Inform 2021; 26:774-785. [PMID: 34197332 DOI: 10.1109/jbhi.2021.3094187] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Junctions play an important role in biomedical research such as retinal biometric identification, retinal image registration, eye-related disease diagnosis, and neuron reconstruction. However, junction detection in original biomedical images is extremely challenging. For example, retinal images contain many tiny blood vessels with complicated structures and low contrast, which makes it difficult to detect junctions. In this paper, we propose an O-shape Network architecture with Attention modules (Attention O-Net), which includes a Junction Detection Branch (JDB) and a Local Enhancement Branch (LEB), to detect junctions in biomedical images without segmentation. In the JDB, a heatmap indicating the probabilities of junctions is estimated, and the positions with the locally highest values are then chosen as the junctions; however, this is difficult when the images contain weak filament signals. Therefore, the LEB is constructed to enhance the thin-branch foreground and make the network pay more attention to regions with low contrast, which helps to alleviate the foreground imbalance between thin and thick branches and to detect the junctions of thin branches. Furthermore, attention modules are utilized to introduce the feature maps from the LEB into the JDB, which establishes a complementary relationship and further integrates local features and contextual information between the two branches. The proposed method achieves the highest average F1-scores of 0.82, 0.73, and 0.94 on two retinal datasets and one neuron dataset, respectively. The experimental results confirm that Attention O-Net outperforms other state-of-the-art detection methods and is helpful for retinal biometric identification.
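The heatmap post-processing described above (taking positions with the locally highest value as junctions) amounts to thresholded local-maximum picking. A minimal 3x3-neighbourhood sketch (illustrative, not the authors' implementation; the threshold value is an assumption):

```python
import numpy as np

def pick_junctions(heatmap, thresh=0.5):
    """Select interior pixels that exceed a probability threshold and are the
    strict maximum of their 3x3 neighbourhood."""
    h, w = heatmap.shape
    pts = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = heatmap[i - 1:i + 2, j - 1:j + 2]
            center = heatmap[i, j]
            # strict maximum: the center value occurs exactly once in the patch
            if center >= thresh and center == patch.max() \
               and (patch == center).sum() == 1:
                pts.append((i, j))
    return pts

# Toy heatmap: one strong junction response and one weak distractor.
hm = np.zeros((5, 5))
hm[2, 2] = 0.9   # strong response -> kept
hm[1, 3] = 0.3   # weak response below threshold -> dropped
```

This is exactly where weak filament signals hurt: their heatmap peaks fall below the threshold, which motivates the paper's Local Enhancement Branch.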
|
22
|
Wang Z, Yin Z, Argyris YA. Detecting Medical Misinformation on Social Media Using Multimodal Deep Learning. IEEE J Biomed Health Inform 2021; 25:2193-2203. [PMID: 33170786 DOI: 10.1109/jbhi.2020.3037027] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In 2019, outbreaks of vaccine-preventable diseases reached the highest number in the US since 1992. Medical misinformation, such as antivaccine content propagating through social media, is associated with increases in vaccine delay and refusal. Our overall goal is to develop an automatic detector for antivaccine messages to counteract their negative impact on public health. Very few extant detection systems have considered the multimodality of social media posts (images, texts, and hashtags); instead, they focus on textual components, despite the rapid growth of photo-sharing applications (e.g., Instagram). As a result, existing systems are not sufficient for detecting antivaccine messages with heavy visual components (e.g., images) posted on these newer platforms. To solve this problem, we propose a deep learning network that leverages both visual and textual information. A new semantic- and task-level attention mechanism was created to help our model focus on the essential contents of a post that signal antivaccine messages. The proposed model, which consists of three branches, can generate comprehensive fused features for predictions. Moreover, an ensemble method is proposed to further improve the final prediction accuracy. To evaluate the proposed model's performance, a real-world social media dataset consisting of more than 30,000 samples was collected from Instagram between January 2016 and October 2019. Our experimental results demonstrate that the final network achieves above 97% testing accuracy and outperforms other relevant models, demonstrating that it can detect the large number of antivaccine messages posted daily. The implementation code is available at https://github.com/wzhings/antivaccine_detection.
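The three-branch fusion can be illustrated with a toy late-fusion sketch; the embedding dimensions and the untrained linear head are assumptions for illustration only, not the paper's architecture details:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings for one post (all dimensions are assumptions)
img_feat = rng.standard_normal(512)   # visual branch (e.g. CNN embedding)
txt_feat = rng.standard_normal(256)   # caption-text branch
tag_feat = rng.standard_normal(64)    # hashtag branch

# Fuse the three branches into one feature vector, then score it with
# an untrained linear head + sigmoid to get a probability-like output.
fused = np.concatenate([img_feat, txt_feat, tag_feat])   # shape (832,)
w = rng.standard_normal(fused.size)
score = 1.0 / (1.0 + np.exp(-(w @ fused) / np.sqrt(fused.size)))
print(fused.shape, 0.0 < score < 1.0)  # → (832,) True
```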
|
23
|
Farhangi MM, Sahiner B, Petrick N, Pezeshk A. Automatic lung nodule detection in thoracic CT scans using dilated slice-wise convolutions. Med Phys 2021; 48:3741-3751. [PMID: 33932241 DOI: 10.1002/mp.14915] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 04/08/2021] [Accepted: 04/15/2021] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Most state-of-the-art automated medical image analysis methods for volumetric data rely on adaptations of two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs). In this paper, we develop a novel unified CNN-based model that combines the benefits of 2D and 3D networks for analyzing volumetric medical images. METHODS In our proposed framework, multiscale contextual information is first extracted from 2D slices inside a volume of interest (VOI). This is followed by dilated 1D convolutions across slices to aggregate in-plane features in a slice-wise manner and encode the information in the entire volume. Moreover, we formalize a curriculum learning strategy for a two-stage system (i.e., a system that consists of screening and false positive reduction), where the training samples are presented to the network in a meaningful order to further improve the performance. RESULTS We evaluated the proposed approach by developing a computer-aided detection (CADe) system for lung nodules. Our results on 888 CT exams demonstrate that the proposed approach can effectively analyze volumetric data by achieving a sensitivity of > 0.99 in the screening stage and a sensitivity of > 0.96 at eight false positives per case in the false positive reduction stage. CONCLUSION Our experimental results show that the proposed method provides competitive results compared to state-of-the-art 3D frameworks. In addition, we illustrate the benefits of curriculum learning strategies in two-stage systems that are of common use in medical imaging applications.
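The slice-wise aggregation idea, dilated 1-D convolutions across per-slice features, can be sketched in plain NumPy; the kernel, dilation factor, and scalar per-slice features are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=2):
    """'Valid' 1-D convolution of per-slice features `x` with a kernel
    applied at `dilation`-spaced taps — a toy stand-in for aggregating
    in-plane features across the slices of a volume of interest."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field along slices
    out = []
    for start in range(len(x) - span + 1):
        taps = x[start:start + span:dilation]
        out.append(float(np.dot(taps, kernel)))
    return out

# 8 slices, each reduced to one scalar feature; smoothing kernel, dilation 2
slice_feats = np.arange(8, dtype=float)
out = dilated_conv1d(slice_feats, np.array([0.25, 0.5, 0.25]), dilation=2)
print(out)  # → [2.0, 3.0, 4.0, 5.0]
```

With dilation 2 and a 3-tap kernel, each output aggregates a 5-slice receptive field while touching only 3 slices, which is how dilation widens context cheaply.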
Affiliation(s)
- M Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
|
24
|
Lin TH, Jhang JY, Huang CR, Tsai YC, Cheng HC, Sheu BS. Deep Ensemble Feature Network for Gastric Section Classification. IEEE J Biomed Health Inform 2021; 25:77-87. [PMID: 32750926 DOI: 10.1109/jbhi.2020.2999731] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In this paper, we propose a novel deep ensemble feature (DEF) network to classify gastric sections from endoscopic images. Unlike recent deep ensemble learning methods, which need to train deep features and classifiers individually to obtain fused classification results, the proposed method can simultaneously learn the deep ensemble feature from an arbitrary number of convolutional neural networks (CNNs) and the decision classifier in an end-to-end trainable manner. It comprises two subnetworks: the ensemble feature network and the decision network. The former learns the deep ensemble feature from multiple CNNs to represent endoscopic images; the latter learns to obtain the classification labels from the deep ensemble feature. Both subnetworks are optimized based on the proposed ensemble feature loss and decision loss, which guide the learning of deep features and decisions. As shown in the experimental results, the proposed method outperforms state-of-the-art deep learning, ensemble learning, and deep ensemble learning methods.
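As a rough illustration of the "deep ensemble feature" idea — concatenating features from several backbones and feeding them to a single decision head — consider this toy sketch; the feature sizes, the three backbones, and the four-class head are assumptions, and the real model is trained end-to-end with the two losses described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-image feature vectors from three CNN backbones
feats = [rng.standard_normal(128) for _ in range(3)]

# Deep ensemble feature: concatenation of all backbone features,
# fed to a single decision head (here an untrained linear layer + softmax).
ensemble_feat = np.concatenate(feats)            # shape (384,)
W = rng.standard_normal((4, ensemble_feat.size)) # 4 classes (assumed)
logits = W @ ensemble_feat
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(ensemble_feat.shape, probs.shape)  # → (384,) (4,)
```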
|
25
|
Spiculation Sign Recognition in a Pulmonary Nodule Based on Spiking Neural P Systems. BIOMED RESEARCH INTERNATIONAL 2020; 2020:6619076. [PMID: 33426059 PMCID: PMC7775132 DOI: 10.1155/2020/6619076] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Revised: 12/04/2020] [Accepted: 12/11/2020] [Indexed: 11/18/2022]
Abstract
The spiculation sign is one of the main signs for distinguishing benign and malignant pulmonary nodules. To effectively extract the image features of a pulmonary nodule for spiculation sign recognition, a new recognition model is proposed based on the process doctors follow when diagnosing pulmonary nodules. A maximum density projection model is established to fuse local three-dimensional information into a two-dimensional image. The complete boundary of a pulmonary nodule is extracted by an improved Snake model, which takes full advantage of the parallel computation of Spiking Neural P Systems to build a new neural network structure. Our experiments show that the proposed algorithm can accurately extract the boundary of a pulmonary nodule and effectively improve the recognition rate of the spiculation sign.
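The maximum density projection step — collapsing a local 3-D region around the nodule onto a 2-D image by keeping the brightest voxel along one axis — can be sketched in NumPy (the toy volume and projection axis are assumptions for illustration):

```python
import numpy as np

def max_density_projection(volume, axis=0):
    """Collapse a 3-D CT sub-volume to a 2-D image by keeping the
    maximum voxel value along the chosen axis."""
    return volume.max(axis=axis)

# Toy 3x3x3 sub-volume: one bright voxel in the middle slice
vol = np.zeros((3, 3, 3))
vol[1, 1, 1] = 100.0
proj = max_density_projection(vol)
print(proj.shape, proj[1, 1])  # → (3, 3) 100.0
```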
|
26
|
A deep learning system that generates quantitative CT reports for diagnosing pulmonary Tuberculosis. APPL INTELL 2020. [DOI: 10.1007/s10489-020-02051-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
The purpose of this study was to establish and validate a new deep learning system that generates quantitative computed tomography (CT) reports for the diagnosis of pulmonary tuberculosis (PTB) in the clinic. 501 CT imaging datasets were collected from 223 patients with active PTB, while another 501 datasets, which served as negative samples, were collected from a healthy population. All the PTB datasets were labeled and classified manually by professional radiologists. Then, four state-of-the-art 3D convolutional neural network (CNN) models were trained and evaluated on the PTB CT images. The best model was selected to annotate the spatial location of lesions and classify them into miliary, infiltrative, caseous, tuberculoma, and cavitary types. The Noisy-Or Bayesian function was used to generate an overall infection probability for each case. The results showed that the recall and precision rates of detection, from the perspective of a single lesion region of PTB, were 85.9% and 89.2%, respectively. The overall recall and precision rates of detection, from the perspective of one PTB case, were 98.7% and 93.7%, respectively. Moreover, the precision rate of type classification of the PTB lesion was 90.9%. Finally, a quantitative diagnostic report of PTB was generated, including the infection probability, the locations of the lesions, and their types. This new method might serve as an effective reference for decision making by clinical doctors.
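The Noisy-Or combination of per-lesion probabilities into a case-level infection score follows directly from the description above; a minimal sketch, where the example probabilities are made up:

```python
def noisy_or(lesion_probs):
    """Noisy-Or combination: the case is positive unless every detected
    lesion is a false positive, so P(case) = 1 - prod(1 - p_i)."""
    p_all_negative = 1.0
    for p in lesion_probs:
        p_all_negative *= (1.0 - p)
    return 1.0 - p_all_negative

# Three lesion candidates with individual infection probabilities
print(round(noisy_or([0.6, 0.5, 0.3]), 4))  # → 0.86
```

Note that a single confident lesion dominates the score: any p_i = 1 forces the case-level probability to 1, while an empty candidate list yields 0.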
|
27
|
Cui S, Ming S, Lin Y, Chen F, Shen Q, Li H, Chen G, Gong X, Wang H. Development and clinical application of deep learning model for lung nodules screening on CT images. Sci Rep 2020; 10:13657. [PMID: 32788705 PMCID: PMC7423892 DOI: 10.1038/s41598-020-70629-3] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2020] [Accepted: 07/29/2020] [Indexed: 12/11/2022] Open
Abstract
Lung cancer screening based on low-dose CT (LDCT) has now been widely applied because of its effectiveness and ease of performance. Radiologists who evaluate large numbers of LDCT screening images face enormous challenges, including mechanically repetitive work, easy omission of small nodules, and a lack of consistent criteria. An efficient method is needed to help radiologists improve nodule detection accuracy with efficiency and cost-effectiveness. Many novel deep neural network-based systems have demonstrated potential for detecting lung nodules, but their effectiveness in clinical practice has not been fully recognized or proven. Therefore, the aim of this study was to develop and assess a deep learning (DL) algorithm for identifying pulmonary nodules (PNs) on LDCT and to investigate the prevalence of PNs in China. Radiologist and algorithm performance were assessed using the FROC score, ROC-AUC, and average time consumption. Agreement between the reference standard and the DL algorithm in detecting positive nodules was assessed per study by Bland-Altman analysis. The Lung Nodule Analysis (LUNA) public database was used as the external test set. The prevalence of NCPNs was investigated, as well as detailed information regarding the number of pulmonary nodules, their location, and their characteristics, as interpreted by two radiologists.
Affiliation(s)
- Sijia Cui
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- The Second Clinical Medical College, Zhejiang Chinese Medical University, Hangzhou, 310053, China
- Shuai Ming
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Yi Lin
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Fanghong Chen
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Qiang Shen
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Hui Li
- Hangzhou Yitu Healthcare Technology Co., Ltd, Hangzhou, 310000, China
- Gen Chen
- Hangzhou Yitu Healthcare Technology Co., Ltd, Hangzhou, 310000, China
- Xiangyang Gong
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
- Institute of Artificial Intelligence and Remote Imaging, Hangzhou Medical College, Hangzhou, 310000, China
- Haochu Wang
- Department of Radiology, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, 310013, China
|