1. Li Y, Hui L, Wang X, Zou L, Chua S. Lung nodule detection using a multi-scale convolutional neural network and global channel spatial attention mechanisms. Sci Rep 2025;15:12313. PMID: 40210738; PMCID: PMC11986029; DOI: 10.1038/s41598-025-97187-w.
Abstract
Early detection of lung nodules is crucial for the prevention and treatment of lung cancer. However, current methods face challenges such as missing small nodules, variations in nodule size, and high false positive rates. To address these challenges, we propose a Global Channel Spatial Attention Mechanism (GCSAM). Building upon it, we develop a Candidate Nodule Detection Network (CNDNet) and a False Positive Reduction Network (FPRNet). CNDNet employs Res2Net as its backbone network to capture multi-scale features of lung nodules, utilizing GCSAM to fuse global contextual information, adaptively adjust feature weights, and refine processing along the spatial dimension. Additionally, we design a Hierarchical Progressive Feature Fusion (HPFF) module to effectively combine deep semantic information with shallow positional information, enabling high-sensitivity detection of nodules of varying sizes. FPRNet significantly reduces the false positive rate by accurately distinguishing true nodules from similar structures. Experimental results on the LUNA16 dataset demonstrate that our method achieves a competition performance metric (CPM) value of 0.929 and a sensitivity of 0.977 at 2 false positives per scan. Compared to existing methods, our proposed method effectively reduces false positives while maintaining high sensitivity, achieving competitive results.
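Several entries in this list report the LUNA16 competition performance metric (CPM): the average sensitivity at seven predefined false-positive rates per scan (0.125, 0.25, 0.5, 1, 2, 4, and 8). As a reader's aid, here is a minimal sketch of how that score can be computed from a FROC curve; the function name and the linear interpolation between operating points are our assumptions, not code from any of the cited papers.

```python
import numpy as np

# LUNA16's seven FROC operating points (false positives per scan).
FP_POINTS = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]

def cpm(fp_per_scan, sensitivity):
    """Average sensitivity at the seven operating points.

    fp_per_scan / sensitivity describe a FROC curve sorted by increasing
    FP rate; sensitivities at the operating points are obtained by
    linear interpolation (an assumption of this sketch).
    """
    fp = np.asarray(fp_per_scan, dtype=float)
    sens = np.asarray(sensitivity, dtype=float)
    return float(np.mean(np.interp(FP_POINTS, fp, sens)))
```

If the curve already passes through all seven operating points, the score is simply the average of the seven sensitivities.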
Affiliation(s)
- Yongbin Li
- Faculty of Medical Information Engineering, Zunyi Medical University, 563000, Zunyi, Guizhou, China
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, 94300, Kota Samarahan, Sarawak, Malaysia
- Linhu Hui
- Faculty of Medical Information Engineering, Zunyi Medical University, 563000, Zunyi, Guizhou, China
- Xiaohua Wang
- Faculty of Medical Information Engineering, Zunyi Medical University, 563000, Zunyi, Guizhou, China
- Liping Zou
- Faculty of Medical Information Engineering, Zunyi Medical University, 563000, Zunyi, Guizhou, China
- Stephanie Chua
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, 94300, Kota Samarahan, Sarawak, Malaysia
2. Matta S, Lamard M, Zhang P, Le Guilcher A, Borderie L, Cochener B, Quellec G. A systematic review of generalization research in medical image classification. Comput Biol Med 2024;183:109256. PMID: 39427426; DOI: 10.1016/j.compbiomed.2024.109256.
Abstract
Numerous Deep Learning (DL) classification models have been developed for a large spectrum of medical image analysis applications, which promises to reshape various facets of medical practice. Despite early advances in DL model validation and implementation, which encourage healthcare institutions to adopt them, a fundamental question remains: how can these models effectively handle domain shift? This question is crucial for limiting the performance degradation of DL models. Medical data are dynamic and prone to domain shift due to multiple factors. Two main shift types can occur over time: (1) covariate shift, mainly arising from updates to medical equipment, and (2) concept shift, caused by inter-grader variability. To mitigate the problem of domain shift, existing surveys mainly focus on domain adaptation techniques, with an emphasis on covariate shift. More generally, no work has reviewed the state-of-the-art solutions while focusing on the shift types. This paper aims to explore existing domain generalization methods for DL-based classification models through a systematic review of the literature. It proposes a taxonomy based on the shift type the methods aim to solve. Papers were searched and gathered on Scopus until 10 April 2023; after eligibility screening and quality evaluation, 77 articles were identified. Exclusion criteria included lack of methodological novelty (e.g., reviews, benchmarks), experiments conducted on a single mono-center dataset, or articles not written in English. The results show that learning-based methods are emerging for both shift types. Finally, we discuss future challenges, including the need for improved evaluation protocols and benchmarks, and envisioned future developments to achieve robust, generalized models for medical image classification.
Affiliation(s)
- Sarah Matta
- Université de Bretagne Occidentale, Brest, Bretagne, 29200, France; Inserm, UMR 1101, Brest, F-29200, France
- Mathieu Lamard
- Université de Bretagne Occidentale, Brest, Bretagne, 29200, France; Inserm, UMR 1101, Brest, F-29200, France
- Philippe Zhang
- Université de Bretagne Occidentale, Brest, Bretagne, 29200, France; Inserm, UMR 1101, Brest, F-29200, France; Evolucare Technologies, Villers-Bretonneux, F-80800, France
- Béatrice Cochener
- Université de Bretagne Occidentale, Brest, Bretagne, 29200, France; Inserm, UMR 1101, Brest, F-29200, France; Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
3. Ravipati A, Elman SA. The state of artificial intelligence for systemic dermatoses: Background and applications for psoriasis, systemic sclerosis, and much more. Clin Dermatol 2024;42:487-491. PMID: 38909858; DOI: 10.1016/j.clindermatol.2024.06.019.
Abstract
Artificial intelligence (AI) has been steadily integrated into dermatology, with AI platforms already attempting to identify skin cancers and distinguish benign from malignant lesions. Although not as widely known, AI programs have also been utilized as diagnostic and prognostic tools for dermatologic conditions with systemic or extracutaneous involvement, especially for diseases with autoimmune etiologies. We provide a primer on commonly used AI platforms and the practical applicability of these algorithms to psoriasis, systemic sclerosis, and dermatomyositis as a microcosm for future directions in the field. With a rapidly changing landscape in dermatology and medicine as a whole, AI could be a versatile tool to support clinicians and enhance access to care.
Affiliation(s)
- Advaitaa Ravipati
- Dr. Phillip Frost Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, Miami, Florida, USA
- Scott A Elman
- Dr. Phillip Frost Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, Miami, Florida, USA
4. Ferreira A, Li J, Pomykala KL, Kleesiek J, Alves V, Egger J. GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy. Med Image Anal 2024;93:103100. PMID: 38340545; DOI: 10.1016/j.media.2024.103100.
Abstract
With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data is very important in medicine, as its uses range from disease diagnosis to therapy monitoring. When the dataset is sufficient, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable; for example, rare diseases and privacy issues can lead to restricted data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using Generative Adversarial Networks (GANs). Such generators are a particularly good asset in healthcare, as the data must be of good quality, realistic, and free of privacy issues; accordingly, most publications on volumetric GANs are within the medical domain. In this review, we summarize works that generate realistic volumetric synthetic data using GANs, outlining common architectures, loss functions, and evaluation metrics, including their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.
Affiliation(s)
- André Ferreira
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal; Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, 52074 Aachen, Germany
- Jianning Li
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, Essen, 45147, Germany; TU Dortmund University, Department of Physics, Otto-Hahn-Straße 4, 44227 Dortmund, Germany
- Victor Alves
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal
- Jan Egger
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 801, Austria
5. Wu R, Liang C, Zhang J, Tan Q, Huang H. Multi-kernel driven 3D convolutional neural network for automated detection of lung nodules in chest CT scans. Biomed Opt Express 2024;15:1195-1218. PMID: 38404310; PMCID: PMC10890889; DOI: 10.1364/boe.504875.
Abstract
The accurate position detection of lung nodules is crucial in early chest computed tomography (CT)-based lung cancer screening, which helps to improve the survival rate of patients. Deep learning methodologies have shown impressive feature extraction ability in the CT image analysis task, but it is still a challenge to develop a robust nodule detection model due to the salient morphological heterogeneity of nodules and complex surrounding environment. In this study, a multi-kernel driven 3D convolutional neural network (MK-3DCNN) is proposed for computerized nodule detection in CT scans. In the MK-3DCNN, a residual learning-based encoder-decoder architecture is introduced to employ the multi-layer features of the deep model. Considering the various nodule sizes and shapes, a multi-kernel joint learning block is developed to capture 3D multi-scale spatial information of nodule CT images, and this is conducive to improving nodule detection performance. Furthermore, a multi-mode mixed pooling strategy is designed to replace the conventional single-mode pooling manner, and it reasonably integrates the max pooling, average pooling, and center cropping pooling operations to obtain more comprehensive nodule descriptions from complicated CT images. Experimental results on the public dataset LUNA16 illustrate that the proposed MK-3DCNN method achieves more competitive nodule detection performance compared to some state-of-the-art algorithms. The results on our constructed clinical dataset CQUCH-LND indicate that the MK-3DCNN has a good prospect in clinical practice.
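The multi-mode mixed pooling described above combines max pooling, average pooling, and center-cropping pooling. A toy illustration of the idea on a single 3D patch follows; reducing the whole patch to one scalar and the equal-weight fusion are assumptions of this sketch, not the paper's configuration, which applies the strategy inside pooling layers.

```python
import numpy as np

def mixed_pool(patch, weights=(1.0, 1.0, 1.0)):
    """Fuse max pooling, average pooling, and the center-crop (center
    voxel) of a 3D patch into one descriptor via a weighted average.
    The weights are illustrative, not values from the paper.
    """
    patch = np.asarray(patch, dtype=float)
    w = np.asarray(weights, dtype=float)
    center = patch[tuple(s // 2 for s in patch.shape)]  # central voxel
    modes = np.array([patch.max(), patch.mean(), center])
    return float(np.dot(w, modes) / w.sum())
```

On a 3x3x3 patch this averages the patch maximum, the patch mean, and the central voxel's intensity.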
Affiliation(s)
- Ruoyu Wu
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Changyu Liang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Jiuquan Zhang
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- QiJuan Tan
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Hong Huang
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
6. Liu B, Song H, Li Q, Lin Y, Weng X, Su Z, Yang J. 3D ARCNN: An Asymmetric Residual CNN for False Positive Reduction in Pulmonary Nodule. IEEE Trans Nanobioscience 2024;23:18-25. PMID: 37216265; DOI: 10.1109/tnb.2023.3278706.
Abstract
Lung cancer has the highest morbidity and mortality among cancers, and detecting cancerous lesions early is essential for reducing mortality rates. Deep learning-based lung nodule detection techniques have shown better scalability than traditional methods. However, pulmonary nodule detection results often include a number of false positives. In this paper, we present a novel asymmetric residual network called 3D ARCNN that leverages 3D features and spatial information of lung nodules to improve classification performance. The proposed framework uses an internally cascaded multi-level residual model for fine-grained learning of lung nodule features and multi-layer asymmetric convolution to address the problems of large neural network parameter counts and poor reproducibility. We evaluate the framework on the LUNA16 dataset and achieve high detection sensitivities of 91.6%, 92.7%, 93.2%, and 95.8% at 1, 2, 4, and 8 false positives per scan, respectively, with an average CPM index of 0.912. Quantitative and qualitative evaluations demonstrate the superior performance of our framework compared to existing methods. The 3D ARCNN framework can effectively reduce false positive lung nodules in clinical practice.
7. Song W, Tang F, Marshall H, Fong KM, Liu F. An Improved Anchor-Free Nodule Detection System Using Feature Pyramid Network. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. PMID: 38082619; DOI: 10.1109/embc40787.2023.10340341.
Abstract
Lung cancer (LC) is the leading cause of cancer death. Detecting LC at the earliest stage facilitates curative treatment options and improves mortality rates. Computer-aided detection (CAD) systems can help improve LC diagnostic accuracy. In this work, we propose a deep-learning-based lung nodule detection method. The proposed CAD system is a 3D anchor-free nodule detection (AFND) method based on a feature pyramid network (FPN). The system has several novel properties: (1) it achieves region proposal and nodule classification in a single network, forming a one-step detection pipeline and reducing operation time; (2) an adaptive nodule modelling method was designed to detect nodules of various sizes; (3) the proposed AFND establishes a novel center point selection mechanism for better classification; and (4) based on the new nodule model, a composite loss function integrating cosine similarity (CS) loss and Smooth L1 loss was designed to further improve nodule detection accuracy. Experimental results show that the AFND outperforms other similar nodule detection systems on the LUNA16 dataset.
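The composite loss in item (4) pairs a cosine-similarity term with Smooth L1. For orientation, here is a sketch of the two standard ingredients; the `beta` transition point and the equal-weight combination in `composite_loss` are our assumptions, and the paper's exact weighting is not reproduced here.

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Smooth L1 loss: quadratic for |x| < beta, linear beyond it."""
    a = np.abs(np.asarray(x, dtype=float))
    return np.where(a < beta, 0.5 * a**2 / beta, a - 0.5 * beta)

def cosine_similarity_loss(a, b):
    """1 - cosine similarity: zero when the vectors point the same way."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def composite_loss(pred, target, w_cs=1.0, w_l1=1.0):
    """Illustrative weighted combination of the two terms."""
    diff = np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)
    return w_cs * cosine_similarity_loss(pred, target) + \
           w_l1 * float(np.mean(smooth_l1(diff)))
```

The cosine term rewards correct direction of the regressed vector while Smooth L1 penalizes its magnitude error without exploding on outliers.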
8. Wang X, Su R, Xie W, Wang W, Xu Y, Mann R, Han J, Tan T. 2.75D: Boosting learning by representing 3D Medical imaging to 2D features for small data. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104858.
9. Chen Y, Hou X, Yang Y, Ge Q, Zhou Y, Nie S. A Novel Deep Learning Model Based on Multi-Scale and Multi-View for Detection of Pulmonary Nodules. J Digit Imaging 2023;36:688-699. PMID: 36544067; PMCID: PMC10039158; DOI: 10.1007/s10278-022-00749-x.
Abstract
Lung cancer manifests as pulmonary nodules in its early stage, so the early and accurate detection of these nodules is crucial for improving the survival rate of patients. We propose a novel two-stage model for lung nodule detection. In the candidate nodule detection stage, a deep learning model based on 3D context information roughly segments and detects nodules in the preprocessed image to obtain candidate nodules. In this model, 3D image blocks are fed into the network, which learns the contextual information between the various slices in each 3D image block. The parameters of our model are equivalent to those of a 2D convolutional neural network (CNN), yet the model can effectively learn the 3D context information of the nodules. In the false-positive reduction stage, we propose a multi-scale shared convolutional structure model. Our detection model shows no significant increase in parameters or computation in either the multi-scale or the multi-view stage. The proposed model was evaluated using 888 computed tomography (CT) scans from the LIDC-IDRI dataset and achieved a competition performance metric (CPM) score of 0.957, with an average detection sensitivity of 0.971 at 1.0 FP per scan. Furthermore, an average detection sensitivity of 0.933 at 1.0 FP per scan was achieved on data from Shanghai Pulmonary Hospital. Our model exhibited higher detection sensitivity, a lower false-positive rate, and better generalization than current lung nodule detection methods. With fewer parameters and lower computational complexity, the method offers more possibilities for clinical application.
Affiliation(s)
- Yang Chen
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuewen Hou
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yifeng Yang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Qianqian Ge
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yan Zhou
- Department of Radiology, School of Medicine, Renji Hospital, Shanghai Jiao Tong University, Shanghai, 200127, China
- Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
10. Konz N, Buda M, Gu H, Saha A, Yang J, Chłędowski J, Park J, Witowski J, Geras KJ, Shoshan Y, Gilboa-Solomon F, Khapun D, Ratner V, Barkan E, Ozery-Flato M, Martí R, Omigbodun A, Marasinou C, Nakhaei N, Hsu W, Sahu P, Hossain MB, Lee J, Santos C, Przelaskowski A, Kalpathy-Cramer J, Bearce B, Cha K, Farahani K, Petrick N, Hadjiiski L, Drukker K, Armato SG, Mazurowski MA. A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis. JAMA Netw Open 2023;6:e230524. PMID: 36821110; PMCID: PMC9951043; DOI: 10.1001/jamanetworkopen.2023.0524.
Abstract
IMPORTANCE An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide.
OBJECTIVES To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods.
DESIGN, SETTING, AND PARTICIPANTS This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021.
MAIN OUTCOMES AND MEASURES The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes.
RESULTS A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926.
CONCLUSIONS AND RELEVANCE In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.
Affiliation(s)
- Nicholas Konz
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Mateusz Buda
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Hanxue Gu
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Ashirbani Saha
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Department of Oncology, McMaster University, Hamilton, Ontario, Canada
- Jakub Chłędowski
- Jagiellonian University, Kraków, Poland
- Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Jungkyu Park
- Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Jan Witowski
- Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Krzysztof J. Geras
- Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Yoel Shoshan
- Medical Image Analytics, IBM Research, Haifa, Israel
- Daniel Khapun
- Medical Image Analytics, IBM Research, Haifa, Israel
- Vadim Ratner
- Medical Image Analytics, IBM Research, Haifa, Israel
- Ella Barkan
- Medical Image Analytics, IBM Research, Haifa, Israel
- Robert Martí
- Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Akinyinka Omigbodun
- Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Chrysostomos Marasinou
- Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Noor Nakhaei
- Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- William Hsu
- Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Department of Bioengineering, University of California Los Angeles Samueli School of Engineering
- Pranjal Sahu
- Department of Computer Science, Stony Brook University, Stony Brook, New York
- Md Belayat Hossain
- Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Juhun Lee
- Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Carlos Santos
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Artur Przelaskowski
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Jayashree Kalpathy-Cramer
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown
- Benjamin Bearce
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown
- Kenny Cha
- US Food and Drug Administration, Silver Spring, Maryland
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, Maryland
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, Illinois
- Samuel G. Armato
- Department of Radiology, University of Chicago, Chicago, Illinois
- Maciej A. Mazurowski
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Department of Computer Science, Duke University, Durham, North Carolina
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
11. Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: A systematic review. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104104.
12. Gu Z, Li Y, Luo H, Zhang C, Du H. Cross attention guided multi-scale feature fusion for false-positive reduction in pulmonary nodule detection. Comput Biol Med 2022;151:106302. PMID: 36401972; DOI: 10.1016/j.compbiomed.2022.106302.
Abstract
False-positive reduction is a crucial step in computer-aided diagnosis (CAD) systems for pulmonary nodule detection and plays an important role in lung cancer diagnosis. In this paper, we propose a novel cross attention guided multi-scale feature fusion method for false-positive reduction in pulmonary nodule detection. Specifically, a 3D SENet50 fed with a candidate nodule cube is applied as the backbone to acquire multi-scale coarse features. Then, the coarse features are refined and fused by the multi-scale fusion part to achieve a better feature extraction result. Finally, a 3D spatial pyramid pooling module is used to enlarge the receptive field, and a distributed aligned linear classifier is applied to obtain the confidence score. In addition, each of five nodule cubes of different sizes centered on every test nodule position is fed into the proposed framework to obtain a confidence score separately, and a weighted fusion method is used to improve the generalization performance of the model. Extensive experiments demonstrate the effectiveness of the classification performance of the proposed model. The data used in our work are from the LUNA16 pulmonary nodule detection challenge, in which the number of true-positive pulmonary nodules is 1,557 and the number of false-positive candidates is 753,418. The new method achieves a competition performance metric (CPM) score of 84.8% on the LUNA16 dataset.
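The multi-crop score fusion described above can be pictured with a small helper; the uniform fallback weighting is an assumption of this sketch, not the weights used in the paper.

```python
import numpy as np

def fuse_multicrop_scores(scores, weights=None):
    """Weighted average of the confidence scores produced for the five
    differently sized cubes around one candidate nodule. With
    weights=None a uniform average is used (our assumption).
    """
    s = np.asarray(scores, dtype=float)
    w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
    return float(np.dot(s, w) / w.sum())
```

Averaging over differently sized crops makes the final score less sensitive to the receptive field of any single crop.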
Affiliation(s)
- Zhongxuan Gu
- Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Yueyang Li
- Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Haichi Luo
- College of Internet of Things Engineering, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Caidi Zhang
- Department of Respiration, The Affiliated Hospital of Jiangnan University, 1000 Hefeng Road, Wuxi, 214122, Jiangsu, China
- Hongqun Du
- Department of Respiration, The Affiliated Hospital of Jiangnan University, 1000 Hefeng Road, Wuxi, 214122, Jiangsu, China
13. Sekeroglu K, Soysal ÖM. Multi-Perspective Hierarchical Deep-Fusion Learning Framework for Lung Nodule Classification. Sensors (Basel) 2022;22:8949. PMID: 36433541; PMCID: PMC9697252; DOI: 10.3390/s22228949.
Abstract
Lung cancer is the leading cancer type causing mortality in both men and women. Computer-aided detection (CAD) and diagnosis systems can play a very important role in helping physicians with cancer treatment. This study proposes a hierarchical deep-fusion learning scheme in a CAD framework for the detection of nodules from computed tomography (CT) scans. In the proposed hierarchical approach, a decision is made at each level individually, employing the decisions from the previous level. Further, individual decisions are computed for several perspectives of a volume of interest. This study explores three different approaches to obtaining decisions in a hierarchical fashion: the first model utilizes raw images; the second uses a single type of feature image with salient content; the last employs multi-type feature images. All models learn their parameters by means of supervised learning. The proposed CAD frameworks were tested using lung CT scans from the LIDC/IDRI database. The experimental results showed that the proposed multi-perspective hierarchical fusion approach significantly improves classification performance: the hierarchical deep-fusion learning model achieved a sensitivity of 95% with only 0.4 FPs per scan.
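The level-by-level decision flow described above can be illustrated with a toy function. This is not the authors' fusion rule: the simple averaging, the carry-forward of the fused score, and the 0.5 threshold are all assumptions of this sketch.

```python
import numpy as np

def hierarchical_fuse(level_scores, threshold=0.5):
    """Toy hierarchical fusion: average the per-perspective scores of the
    first level, then at each subsequent level average that level's
    scores together with the decision carried up from the level below.
    Returns the final binary nodule decision.
    """
    fused = float(np.mean(level_scores[0]))
    for level in level_scores[1:]:
        fused = float(np.mean(list(level) + [fused]))
    return fused >= threshold
```

Each level thus sees its own perspectives plus one summary of everything below it, mirroring the idea that a decision at each level employs the decisions from the previous level.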
Affiliation(s)
- Kazim Sekeroglu
- Department of Computer Science, Southeastern Louisiana University, Hammond, LA 70402, USA
- Ömer Muhammet Soysal
- Department of Computer Science, Southeastern Louisiana University, Hammond, LA 70402, USA
- School of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, LA 70803, USA
14
Son JW, Hong JY, Kim Y, Kim WJ, Shin DY, Choi HS, Bak SH, Moon KM. How Many Private Data Are Needed for Deep Learning in Lung Nodule Detection on CT Scans? A Retrospective Multicenter Study. Cancers (Basel) 2022; 14:3174. [PMID: 35804946 PMCID: PMC9265117 DOI: 10.3390/cancers14133174]
Abstract
Simple Summary: The early detection of lung nodules is important for patient treatment and follow-up. Many researchers are investigating deep-learning-based lung nodule detection to ease the burden of lung nodule detection on radiologists. The purpose of this paper is to provide guidelines for collecting lung nodule data to facilitate research. We collected chest computed tomography scans reviewed by radiologists at three hospitals, and conducted several experiments using the large-scale open dataset LUNA16. The experiments demonstrate the value of using the collected data compared to using LUNA16, as well as the effectiveness of transfer learning from weights pre-trained on LUNA16. Finally, our study provides information on the amount of lung nodule data that must be collected to stabilize lung nodule detection performance.
Abstract: Early detection of lung nodules is essential for preventing lung cancer. However, the number of radiologists who can diagnose lung nodules is limited, and considerable effort and time are required. To address this problem, researchers are investigating the automation of deep-learning-based lung nodule detection. However, deep learning requires large amounts of data, which can be difficult to collect. Therefore, data collection should be optimized to facilitate experiments at the beginning of lung nodule detection studies. We collected chest computed tomography scans from 515 patients with lung nodules at three hospitals, with high-quality lung nodule annotations reviewed by radiologists. We conducted several experiments using the collected datasets and publicly available data from LUNA16. The object detection model YOLOX was used in the lung nodule detection experiment. Similar or better performance was obtained when training the model with the collected data rather than with the larger LUNA16 dataset. We also show that transfer learning of weights pre-trained on open data is very useful when it is difficult to collect large amounts of data; otherwise, good performance can be expected once more than 100 patients are included. This study offers valuable insights for guiding data collection in future lung nodule studies.
Affiliation(s)
- Ji Young Hong
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Chuncheon Sacred Heart Hospital, Hallym University Medical Center, Chuncheon 24253, Korea;
- Yoon Kim
- ZIOVISION, Chuncheon 24341, Korea; (J.W.S.); (Y.K.)
- Department of Computer Science and Engineering, College of IT, Kangwon National University, Chuncheon 24341, Korea
- Woo Jin Kim
- Department of Internal Medicine, Kangwon National University, Chuncheon 24341, Korea;
- Dae-Yong Shin
- KNU-Industry Cooperation Foundation, Kangwon National University, Chuncheon 24341, Korea;
- Hyun-Soo Choi
- ZIOVISION, Chuncheon 24341, Korea; (J.W.S.); (Y.K.)
- Department of Computer Science and Engineering, College of IT, Kangwon National University, Chuncheon 24341, Korea
- Correspondence: (H.-S.C.); (S.H.B.); (K.M.M.); Tel.: +82-33-250-8452 (H.-S.C.); +82-2-3010-3491 (S.H.B.); +82-33-610-3058 (K.M.M.)
- So Hyeon Bak
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
- Correspondence: (H.-S.C.); (S.H.B.); (K.M.M.); Tel.: +82-33-250-8452 (H.-S.C.); +82-2-3010-3491 (S.H.B.); +82-33-610-3058 (K.M.M.)
- Kyoung Min Moon
- Department of Pulmonary, Allergy and Critical Care Medicine, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung 25440, Korea
- Correspondence: (H.-S.C.); (S.H.B.); (K.M.M.); Tel.: +82-33-250-8452 (H.-S.C.); +82-2-3010-3491 (S.H.B.); +82-33-610-3058 (K.M.M.)
15
AFA: adversarial frequency alignment for domain generalized lung nodule detection. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06928-9]
16
Chen L, Liu K, Shen H, Ye H, Liu H, Yu L, Li J, Zhao K, Zhu W. Multi-Modality Attention-Guided Three-Dimensional Detection of Non-Small Cell Lung Cancer in 18F-FDG PET/CT Images. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2021.3072064]
17
Cui X, Zheng S, Heuvelmans MA, Du Y, Sidorenkov G, Fan S, Li Y, Xie Y, Zhu Z, Dorrius MD, Zhao Y, Veldhuis RNJ, de Bock GH, Oudkerk M, van Ooijen PMA, Vliegenthart R, Ye Z. Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. Eur J Radiol 2021; 146:110068. [PMID: 34871936 DOI: 10.1016/j.ejrad.2021.110068]
Abstract
OBJECTIVE To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. MATERIALS AND METHODS One hundred eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. Detection performance was evaluated by the free-response receiver operating characteristic (FROC) curve, sensitivity, and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid), and Lung-RADS. RESULTS The reference standard consisted of 262 nodules ≥ 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules of 4-6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9%; P = 0.001). Sixty-three nodules were identified only by the DL-CAD system, and 27 nodules only by double reading. The DL-CAD system reached performance similar to double reading for Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed higher sensitivity for Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001). CONCLUSIONS The DL-CAD system can accurately detect pulmonary nodules on LDCT, with an acceptable false-positive rate of 1 nodule per scan, and has higher detection performance than double reading. This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
Affiliation(s)
- Xiaonan Cui
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China; University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Sunyi Zheng
- Westlake University, Artificial Intelligence and Biomedical Image Analysis Lab, School of Engineering, Hangzhou, People's Republic of China; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, People's Republic of China; University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, the Netherlands
- Marjolein A Heuvelmans
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Yihui Du
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Grigory Sidorenkov
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Shuxuan Fan
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Yanju Li
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Yongsheng Xie
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Zhongyuan Zhu
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Monique D Dorrius
- University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Yingru Zhao
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Raymond N J Veldhuis
- University of Twente, Faculty of Electrical Engineering Mathematics and Computer Science, the Netherlands
- Geertruida H de Bock
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Matthijs Oudkerk
- University of Groningen, Faculty of Medical Sciences, the Netherlands
- Peter M A van Ooijen
- University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, the Netherlands; University of Groningen, University Medical Center Groningen, Machine Learning Lab, Data Science Center in Health, Groningen, the Netherlands
- Rozemarijn Vliegenthart
- University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Zhaoxiang Ye
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
18
Liu W, Liu X, Li H, Li M, Zhao X, Zhu Z. Integrating Lung Parenchyma Segmentation and Nodule Detection With Deep Multi-Task Learning. IEEE J Biomed Health Inform 2021; 25:3073-3081. [PMID: 33471772 DOI: 10.1109/jbhi.2021.3053023]
Abstract
Lung parenchyma segmentation is valuable for improving the performance of lung nodule detection in computed tomography (CT) images. Traditionally, the two tasks are performed separately. This paper proposes a deep multi-task learning (MTL) approach to integrate these tasks for better lung nodule detection. Three new ideas lead to our proposed approach. First, lung parenchyma segmentation is used as the attention module and is combined with nodule detection in a single deep network. Second, lung nodule detection is performed in an anchor-free manner by dividing it into two subtasks, nodule center identification and nodule size regression. Third, a novel pyramid dilated convolution block (PDCB) is proposed to utilize the advantage of dilated convolution and tackle its gridding problem for better lung parenchyma segmentation. Based on these ideas, we design our end-to-end deep network architecture and corresponding MTL method to achieve lung parenchyma segmentation and nodule detection simultaneously. We evaluate the proposed approach on the commonly used Lung Nodule Analysis 2016 (LUNA16) dataset. The experimental results show the value of our contributions and demonstrate that our approach can yield significant improvements compared with state-of-the-art counterparts.
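The anchor-free formulation above (nodule center identification plus size regression) can be sketched as follows. This is an illustrative decoding step only, not the paper's implementation; the array layouts and the 0.5 threshold are assumptions.

```python
import numpy as np

def decode_anchor_free(center_heatmap, size_map, threshold=0.5):
    """Decode an anchor-free detection head: every voxel whose center
    probability exceeds `threshold` and is a local maximum in its 3x3x3
    neighbourhood becomes a candidate, with its diameter read from the
    size-regression map. (Sketch; threshold and layouts are assumptions.)"""
    candidates = []
    for z, y, x in np.argwhere(center_heatmap > threshold):
        zlo, ylo, xlo = max(z - 1, 0), max(y - 1, 0), max(x - 1, 0)
        patch = center_heatmap[zlo:z + 2, ylo:y + 2, xlo:x + 2]
        if center_heatmap[z, y, x] == patch.max():
            candidates.append(((int(z), int(y), int(x)),
                               float(size_map[z, y, x])))
    return candidates
```

In a full system the heatmap and size map would come from the shared segmentation/detection network; here they are plain arrays so the decoding logic stands alone.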
19
Farhangi MM, Sahiner B, Petrick N, Pezeshk A. Automatic lung nodule detection in thoracic CT scans using dilated slice-wise convolutions. Med Phys 2021; 48:3741-3751. [PMID: 33932241 DOI: 10.1002/mp.14915]
Abstract
PURPOSE Most state-of-the-art automated medical image analysis methods for volumetric data rely on adaptations of two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs). In this paper, we develop a novel unified CNN-based model that combines the benefits of 2D and 3D networks for analyzing volumetric medical images. METHODS In our proposed framework, multiscale contextual information is first extracted from 2D slices inside a volume of interest (VOI). This is followed by dilated 1D convolutions across slices to aggregate in-plane features in a slice-wise manner and encode the information in the entire volume. Moreover, we formalize a curriculum learning strategy for a two-stage system (i.e., a system that consists of screening and false positive reduction), where the training samples are presented to the network in a meaningful order to further improve the performance. RESULTS We evaluated the proposed approach by developing a computer-aided detection (CADe) system for lung nodules. Our results on 888 CT exams demonstrate that the proposed approach can effectively analyze volumetric data by achieving a sensitivity of > 0.99 in the screening stage and a sensitivity of > 0.96 at eight false positives per case in the false positive reduction stage. CONCLUSION Our experimental results show that the proposed method provides competitive results compared to state-of-the-art 3D frameworks. In addition, we illustrate the benefits of curriculum learning strategies in two-stage systems that are of common use in medical imaging applications.
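The core aggregation idea (dilated 1D convolutions across slices) can be sketched minimally: per-slice feature vectors are combined by a 1D kernel whose taps are `dilation` slices apart. The function name, feature sizes, and 'valid' boundary handling are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dilated_conv1d_over_slices(slice_feats, weights, dilation):
    """slice_feats: (num_slices, feat_dim) per-slice features;
    weights: (k,) 1D kernel. Output position i aggregates slices
    i, i+d, ..., i+(k-1)*d ('valid' mode, no padding)."""
    n, _ = slice_feats.shape
    k = len(weights)
    span = (k - 1) * dilation  # receptive field minus one
    out = []
    for i in range(n - span):
        taps = [weights[j] * slice_feats[i + j * dilation] for j in range(k)]
        out.append(np.sum(taps, axis=0))
    return np.stack(out)
```

With dilation > 1 each output row sees a wider range of slices at no extra parameter cost, which is the point of encoding the whole volume slice-wise.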
Affiliation(s)
- M Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
20
Liang J, Ye G, Guo J, Huang Q, Zhang S. Reducing False-Positives in Lung Nodules Detection Using Balanced Datasets. Front Public Health 2021; 9:671070. [PMID: 34095073 PMCID: PMC8170487 DOI: 10.3389/fpubh.2021.671070]
Abstract
Malignant pulmonary nodules are one of the main manifestations of lung cancer in early CT image screening. Since lung cancer may show no obvious early symptoms, it is important to develop a computer-aided detection (CAD) system to assist doctors in detecting malignant pulmonary nodules in the early stage of lung cancer CT diagnosis. Following the recent successful applications of deep learning in image processing, more and more researchers are trying to apply it to the diagnosis of pulmonary nodules. However, because the ratio of nodule to non-nodule samples in the training and testing datasets usually differs from the practical ratio in lung cancer screening, CAD classification systems may easily produce more false positives when trained on such imbalanced datasets. This work introduces a filtering step to remove irrelevant images from the dataset, and the results show that false positives can be reduced and accuracy can exceed 98%. There are two steps in nodule detection. First, the images with pulmonary nodules are screened from the whole lung CT images of the patients. Second, the exact locations of pulmonary nodules are detected using Faster R-CNN. Final results show that this method can effectively detect pulmonary nodules in CT images and hence potentially assist doctors in the early diagnosis of lung cancer.
Affiliation(s)
- Jinglun Liang
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, China
- Guoliang Ye
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, China
- Jianwen Guo
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, China
- Qifan Huang
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, China; School of Electromechanical Engineering, Guangdong University of Technology, Guangzhou, China
- Shaohui Zhang
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, China
21
Sun L, Wang Z, Pu H, Yuan G, Guo L, Pu T, Peng Z. Attention-embedded complementary-stream CNN for false positive reduction in pulmonary nodule detection. Comput Biol Med 2021; 133:104357. [PMID: 33836449 DOI: 10.1016/j.compbiomed.2021.104357]
Abstract
False positive reduction plays a key role in computer-aided detection systems for pulmonary nodule detection in computed tomography (CT) scans. However, this remains a challenge owing to the heterogeneity and similarity of anisotropic pulmonary nodules. In this study, a novel attention-embedded complementary-stream convolutional neural network (AECS-CNN) is proposed to obtain more representative features of nodules for false positive reduction. The proposed network comprises three function blocks: 1) attention-guided multi-scale feature extraction, 2) complementary-stream block with an attention module for feature integration, and 3) classification block. The inputs of the network are multi-scale 3D CT volumes due to variations in nodule sizes. Subsequently, a gradual multi-scale feature extraction block with an attention module was applied to acquire more contextual information regarding the nodules. A subsequent complementary-stream integration block with an attention module was utilized to learn the significantly complementary features. Finally, the candidates were classified using a fully connected layer block. An exhaustive experiment on the LUNA16 challenge dataset was conducted to verify the effectiveness and performance of the proposed network. The AECS-CNN achieved a sensitivity of 0.92 with 4 false positives per scan. The results indicate that the attention mechanism can improve the network performance in false positive reduction, the proposed AECS-CNN can learn more representative features, and the attention module can guide the network to learn the discriminated feature channels and the crucial information embedded in the data, thereby effectively enhancing the performance of the detection system.
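The multi-scale 3D input preparation described above (cubes of several sizes cropped around each candidate, to handle variation in nodule size) can be sketched as below. The edge lengths and function name are illustrative assumptions; the paper's actual input sizes may differ.

```python
import numpy as np

def multiscale_patches(volume, center, sizes=(16, 24, 32)):
    """Crop cubic patches of several edge lengths around `center` (z, y, x),
    zero-padding the volume so border candidates still yield full cubes.
    The sizes here are illustrative, not the paper's settings."""
    patches = []
    for s in sizes:
        half = s // 2
        padded = np.pad(volume, [(half, half)] * 3, mode="constant")
        z, y, x = (c + half for c in center)  # shift center into padded frame
        patches.append(padded[z - half:z - half + s,
                              y - half:y - half + s,
                              x - half:x - half + s])
    return patches
```

Each cube would then feed one stream of the network, so that small nodules dominate the tight crop while surrounding context appears in the larger ones.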
Affiliation(s)
- Lingma Sun
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zhuoran Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Hong Pu
- Sichuan Provincial People's Hospital, Chengdu, Sichuan, 610072, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
- Guohui Yuan
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lu Guo
- Sichuan Provincial People's Hospital, Chengdu, Sichuan, 610072, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
- Tian Pu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zhenming Peng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
22
Zheng S, Cornelissen LJ, Cui X, Jing X, Veldhuis RNJ, Oudkerk M, van Ooijen PMA. Deep convolutional neural networks for multiplanar lung nodule detection: Improvement in small nodule identification. Med Phys 2021; 48:733-744. [PMID: 33300162 PMCID: PMC7986069 DOI: 10.1002/mp.14648]
Abstract
PURPOSE Early detection of lung cancer is important since it can increase patients' chances of survival. To detect nodules accurately during screening, radiologists commonly take the axial, coronal, and sagittal planes into account, rather than solely the axial plane used in clinical evaluation. Inspired by this clinical practice, the paper aims to develop an accurate deep learning framework for nodule detection from a combination of multiple planes. METHODS The nodule detection system is designed in two stages: multiplanar nodule candidate detection and multiscale false positive (FP) reduction. At the first stage, a deeply supervised encoder-decoder network is trained on axial, coronal, and sagittal slices for the candidate detection task, and all possible nodule candidates from the three planes are merged. To further refine the results, a three-dimensional multiscale dense convolutional neural network that extracts multiscale contextual information is applied to remove non-nodules. In the public LIDC-IDRI dataset, 888 computed tomography scans with 1186 nodules accepted by at least three of four radiologists are selected to train and evaluate the proposed system via a tenfold cross-validation scheme. The free-response receiver operating characteristic curve is used for performance assessment. RESULTS The proposed system achieves a sensitivity of 94.2% with 1.0 FP/scan and a sensitivity of 96.0% with 2.0 FPs/scan. Although small nodules (i.e., <6 mm) are difficult to detect, the designed CAD system reaches a sensitivity of 93.4% (95.0%) for these small nodules at an overall FP rate of 1.0 (2.0) FPs/scan. At the nodule candidate detection stage, results show that the multiplanar method is capable of detecting more nodules than a single plane. CONCLUSION Our approach achieves good performance not only for small nodules but also for large lesions on this dataset. This demonstrates the effectiveness of the developed CAD system for lung nodule detection.
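Extracting the three orthogonal views used by the multiplanar stage is a simple indexing operation on a (z, y, x)-ordered CT volume; a minimal sketch:

```python
import numpy as np

def multiplanar_slices(volume, z, y, x):
    """Return the axial, coronal, and sagittal slices of a (z, y, x)-ordered
    CT volume passing through voxel (z, y, x)."""
    axial    = volume[z, :, :]   # fixed z: the in-plane view radiologists scroll
    coronal  = volume[:, y, :]   # fixed y
    sagittal = volume[:, :, x]   # fixed x
    return axial, coronal, sagittal
```

In a multiplanar pipeline each view would be fed to the 2D candidate detector and the per-plane candidates merged afterwards.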
Affiliation(s)
- Sunyi Zheng
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9713 AV Groningen, The Netherlands
- Ludo J. Cornelissen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9713 AV Groningen, The Netherlands
- Xiaonan Cui
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, 300060 Tianjin, China
- Xueping Jing
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9713 AV Groningen, The Netherlands
- Matthijs Oudkerk
- Faculty of Medical Science, University of Groningen, 9713 AV Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9713 AV Groningen, The Netherlands
23
Multiscale CNN with compound fusions for false positive reduction in lung nodule detection. Artif Intell Med 2021; 113:102017. [PMID: 33685584 DOI: 10.1016/j.artmed.2021.102017]
Abstract
Pulmonary nodules are often benign at the early stage, but they can become malignant and metastasize to other locations in later stages. Morphological characteristics of these nodules vary widely in size, shape, and texture. Co-existing lung anatomical structures such as lung walls and blood vessels surrounding the nodules add complex contextual information. As a result, early diagnosis to enable decisive intervention using computer-aided diagnosis (CAD) systems faces serious challenges, especially at low false positive rates. In this paper, we propose a new convolutional neural network (CNN) architecture called Multiscale CNN with Compound Fusions (MCNN-CF) for this purpose, which uses multiscale 3D patches as inputs and fuses intermediate features at two different depths of the network in two diverse fashions. The network is trained by a new iterative training procedure adapted to circumvent the class imbalance problem and obtained a Competition Performance Metric (CPM) score of 0.948 when tested on the LUNA16 dataset. Experimental results illustrate the robustness of the proposed system, which increased the confidence of the prediction probabilities in detecting a wide variety of nodules.
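The CPM score reported above is, in the LUNA16 convention, the average sensitivity at seven operating points of the FROC curve (1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per scan). A minimal sketch of its computation from a measured FROC curve:

```python
import numpy as np

def cpm_score(fps_per_scan, sensitivities):
    """Competition Performance Metric: mean sensitivity at 1/8, 1/4, 1/2,
    1, 2, 4, and 8 false positives per scan, linearly interpolated from a
    FROC curve given as matched arrays sorted by fps_per_scan."""
    operating_points = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
    return float(np.mean(np.interp(operating_points, fps_per_scan, sensitivities)))
```

A system whose sensitivity is flat at, say, 0.9 across all seven points therefore has a CPM of exactly 0.9; the metric rewards curves that rise quickly at low FP rates.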
24
Zheng S, Cui X, Vonder M, Veldhuis RNJ, Ye Z, Vliegenthart R, Oudkerk M, van Ooijen PMA. Deep learning-based pulmonary nodule detection: Effect of slab thickness in maximum intensity projections at the nodule candidate detection stage. Comput Methods Programs Biomed 2020; 196:105620. [PMID: 32615493 DOI: 10.1016/j.cmpb.2020.105620]
Abstract
BACKGROUND AND OBJECTIVE To investigate the effect of the slab thickness in maximum intensity projections (MIPs) on the candidate detection performance of a deep learning-based computer-aided detection (DL-CAD) system for pulmonary nodule detection in CT scans. METHODS The public LUNA16 dataset includes 888 CT scans with 1186 nodules annotated by four radiologists. From those scans, MIP images were reconstructed with slab thicknesses of 5 to 50 mm (at 5 mm intervals) and 3 to 13 mm (at 2 mm intervals). The architecture in the nodule candidate detection part of the DL-CAD system was trained separately using MIP images with each slab thickness. Based on ten-fold cross-validation, the sensitivity and the F2 score were determined to evaluate the performance of each slab thickness at the nodule candidate detection stage. The free-response receiver operating characteristic (FROC) curve was used to assess the performance of the whole DL-CAD system, which combined the results from the 16 MIP slab thickness settings. RESULTS At the nodule candidate detection stage, the combination of results from the 16 MIP slab thickness settings showed a high sensitivity of 98.0% with 46 false positives (FPs) per scan. For a single MIP slab thickness of 10 mm, the highest sensitivity of 90.0% with 8 FPs/scan was reached before false positive reduction. The sensitivity increased (82.8% to 90.0%) for slab thicknesses of 1 to 10 mm and decreased (88.7% to 76.6%) for slab thicknesses of 15 to 50 mm. The number of FPs decreased with increasing slab thickness and was stable at 5 FPs/scan at a slab thickness of 30 mm or more. After false positive reduction, the DL-CAD system utilizing the 16 MIP slab thickness settings reached a sensitivity of 94.4% with 1 FP/scan. CONCLUSIONS The utilization of multi-MIP images improved the performance both at the nodule candidate detection stage and for the whole DL-CAD system. For a single slab thickness of 10 mm, similar to the slab thickness usually applied by radiologists, the highest sensitivity for pulmonary nodule detection was reached at the nodule candidate detection stage.
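Reconstructing MIP images over axial slabs is a voxel-wise maximum over consecutive slices; a minimal sketch (in a real pipeline, a slab thickness in mm would first be converted to a slice count using the scan's slice spacing, which this sketch takes as already done):

```python
import numpy as np

def mip_slabs(volume, slab_thickness_vox):
    """Sliding maximum intensity projection over the axial (first) axis:
    each output slice is the voxel-wise max over `slab_thickness_vox`
    consecutive input slices."""
    n = volume.shape[0] - slab_thickness_vox + 1
    return np.stack([volume[i:i + slab_thickness_vox].max(axis=0)
                     for i in range(n)])
```

Thicker slabs make vessels (elongated along the axis) blend into continuous structures while compact nodules stay bright spots, which is why slab thickness affects candidate detection.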
Collapse
Affiliation(s)
- Sunyi Zheng
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands.
| | - Xiaonan Cui
- Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Tianjin, China
| | - Marleen Vonder
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
| | | | - Zhaoxiang Ye
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Tianjin, China
| | - Rozemarijn Vliegenthart
- Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Peter M A van Ooijen
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
25
Lu X, Gu Y, Yang L, Zhang B, Zhao Y, Yu D, Zhao J, Gao L, Zhou T, Liu Y, Zhang W. Multi-level 3D Densenets for False-positive Reduction in Lung Nodule Detection Based on Chest Computed Tomography. Curr Med Imaging 2020; 16:1004-1021. [PMID: 33081662 DOI: 10.2174/1573405615666191113122840] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2019] [Revised: 10/11/2019] [Accepted: 10/19/2019] [Indexed: 12/31/2022]
Abstract
OBJECTIVE False-positive nodule reduction is a crucial part of a computer-aided detection (CADe) system, which assists radiologists in accurate lung nodule detection. In this research, a novel scheme using a multi-level 3D DenseNet framework is proposed for the false-positive nodule reduction task. METHODS Multi-level 3D DenseNet models were extended to differentiate lung nodules from false-positive candidates. First, different models were fed with 3D cubes of different sizes, encoding multi-level contextual information to meet the challenge of the large variations of lung nodules. In addition, image rotation and flipping were utilized to upsample positive samples, forming an enlarged positive sample set. Furthermore, the 3D DenseNets were designed to keep low-level information of nodules, as the densely connected structures in DenseNet can reuse features of lung nodules and thereby boost feature propagation. Finally, the optimal weighted linear combination of all model scores obtained the best classification result in this research. RESULTS The proposed method was evaluated on the LUNA16 dataset, which contains 888 thin-slice CT scans. The performance was validated via 10-fold cross-validation. Both the Free-response Receiver Operating Characteristic (FROC) curve and the Competition Performance Metric (CPM) score show that the proposed scheme achieves a satisfactory detection performance in the false-positive reduction track of the LUNA16 challenge. CONCLUSION The results show that the proposed scheme is effective for the false-positive nodule reduction task.
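The final step above, an optimal weighted linear combination of the scores of the multi-level models, amounts to a weighted average per candidate. A minimal sketch, not the authors' implementation; in practice the weights would be tuned on a validation set:

```python
import numpy as np

def combine_scores(model_scores, weights):
    """Weighted linear combination of per-candidate scores from several
    models. model_scores: (n_models, n_candidates) array-like; the
    weights are normalized so scores in [0, 1] stay in [0, 1]."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.average(np.asarray(model_scores, dtype=float), axis=0, weights=w)
```

With a probability per candidate from each DenseNet, the fused score can then be thresholded or fed directly into the FROC/CPM evaluation.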
Affiliation(s)
- Xiaoqi Lu
- College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China; Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
- Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Jianfeng Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Lixin Gao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Foreign Languages, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Tao Zhou
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Yang Liu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Wei Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
26
Chen S, Han Y, Lin J, Zhao X, Kong P. Pulmonary nodule detection on chest radiographs using balanced convolutional neural network and classic candidate detection. Artif Intell Med 2020; 107:101881. [DOI: 10.1016/j.artmed.2020.101881] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Revised: 04/05/2020] [Accepted: 05/12/2020] [Indexed: 12/21/2022]
27
Xu YM, Zhang T, Xu H, Qi L, Zhang W, Zhang YD, Gao DS, Yuan M, Yu TF. Deep Learning in CT Images: Automated Pulmonary Nodule Detection for Subsequent Management Using Convolutional Neural Network. Cancer Manag Res 2020; 12:2979-2992. [PMID: 32425607 PMCID: PMC7196793 DOI: 10.2147/cmar.s239927] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Accepted: 04/05/2020] [Indexed: 12/26/2022] Open
Abstract
PURPOSE The purpose of this study is to compare the detection performance of 3-dimensional convolutional neural network (3D CNN)-based computer-aided detection (CAD) models with that of radiologists of different levels of experience in detecting pulmonary nodules on thin-section computed tomography (CT). PATIENTS AND METHODS We retrospectively reviewed 1109 consecutive patients who underwent follow-up thin-section CT at our institution. The 3D CNN model for nodule detection was re-trained and complemented by expert augmentation. The annotations of a consensus panel consisting of two expert radiologists determined the ground truth. The detection performance of the re-trained CAD model and of three radiologists at different levels of experience was tested using free-response receiver operating characteristic (FROC) analysis in the test group. RESULTS The detection performance of the re-trained CAD model was significantly better than that of the pre-trained network (sensitivity: 93.09% vs 38.44%). The re-trained CAD model also had significantly better detection performance than the radiologists (average sensitivity: 93.09% vs 50.22%), without significantly increasing the number of false positives per scan (1.64 vs 0.68). In the training set, 922 nodules less than 3 mm in size in 211 high-risk patients were recommended for follow-up CT according to the Fleischner Society Guidelines. Fifteen of 101 solid nodules were confirmed to be lung cancer. CONCLUSION The re-trained 3D CNN-based CAD model, complemented by expert augmentation, was an accurate and efficient tool for identifying incidental pulmonary nodules for subsequent management.
Affiliation(s)
- Yi-Ming Xu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Teng Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Hai Xu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Liang Qi
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Wei Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Yu-Dong Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Da-Shan Gao
- 12sigma Technologies, San Diego, California, USA
- Mei Yuan
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
- Tong-Fu Yu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People’s Republic of China
28
Tan M, Wu F, Yang B, Ma J, Kong D, Chen Z, Long D. Pulmonary nodule detection using hybrid two-stage 3D CNNs. Med Phys 2020; 47:3376-3388. [PMID: 32239521 DOI: 10.1002/mp.14161] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 02/10/2020] [Accepted: 03/19/2020] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Early detection of pulmonary nodules is an effective way to improve patients' chances of survival. In this work, we propose a novel and efficient way to build a computer-aided detection (CAD) system for pulmonary nodules based on computed tomography (CT) scans. METHODS The system can be roughly divided into two steps: nodule candidate detection and false positive reduction. Considering the three-dimensional (3D) nature of nodules, the CAD system adopts 3D convolutional neural networks (CNNs) in both stages. Specifically, in the first stage, a segmentation-based 3D CNN with a hybrid loss is designed to segment nodules. According to the probability maps produced by the segmentation network, a threshold method and connected component analysis are applied to generate nodule candidates. In the second stage, we employ three classification-based 3D CNNs with different types of inputs to reduce false positives. In addition to simple raw data input, we also introduce hybrid inputs to make better use of the output of the previous segmentation network. In experiments, we use data augmentation and batch normalization to avoid overfitting. RESULTS We evaluate the system on 888 CT scans from the publicly available LIDC-IDRI dataset, and our method achieves the best performance among the state-of-the-art methods compared, with a high detection sensitivity of 97.5% at an average of only one false positive per scan. An additional evaluation on 115 CT scans from local hospitals was also performed. CONCLUSIONS Experimental results demonstrate that our method is highly suited to the detection of pulmonary nodules.
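The candidate-generation step described above (thresholding the segmentation probability map, then connected component analysis) can be sketched without any deep learning machinery. This is an illustrative pure-NumPy version, not the authors' code; it labels 26-connected components and reports their centroids as candidates:

```python
import numpy as np
from collections import deque

def candidates_from_probmap(prob, threshold=0.5):
    """Threshold a 3D probability map and return the centroid of each
    26-connected foreground component as a nodule candidate."""
    mask = prob >= threshold
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    candidates = []
    # all 26 neighbour offsets in 3D
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # already part of a labelled component
        next_label += 1
        labels[seed] = next_label
        component = [seed]
        queue = deque([seed])
        while queue:  # breadth-first flood fill of one component
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = next_label
                    component.append(n)
                    queue.append(n)
        candidates.append(tuple(np.mean(component, axis=0)))
    return candidates
```

In a real pipeline the probability map would come from the segmentation CNN, and a cube around each centroid would be cropped as input to the false-positive-reduction classifiers.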
Affiliation(s)
- Man Tan
- The School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Fa Wu
- The School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Bei Yang
- The School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Jinlian Ma
- The School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Dexing Kong
- The School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Zengsi Chen
- The College of Science, China Jiliang University, Hangzhou, Zhejiang, 310018, China
- Dan Long
- The Department of Radiology, Zhejiang Cancer Hospital, Hangzhou, Zhejiang, 310022, China
29
Farhangi MM, Petrick N, Sahiner B, Frigui H, Amini AA, Pezeshk A. Recurrent attention network for false positive reduction in the detection of pulmonary nodules in thoracic CT scans. Med Phys 2020; 47:2150-2160. [DOI: 10.1002/mp.14076] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2019] [Revised: 12/13/2019] [Accepted: 01/13/2020] [Indexed: 12/19/2022] Open
Affiliation(s)
- M. Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, MD 20993, USA
- Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, MD 20993, USA
- Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, MD 20993, USA
- Hichem Frigui
- Multimedia Laboratory, University of Louisville, Louisville, KY 40292, USA
- Amir A. Amini
- Medical Imaging Laboratory, University of Louisville, Louisville, KY 40292, USA
- Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, MD 20993, USA
30
Zheng S, Guo J, Cui X, Veldhuis RNJ, Oudkerk M, van Ooijen PMA. Automatic Pulmonary Nodule Detection in CT Scans Using Convolutional Neural Networks Based on Maximum Intensity Projection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:797-805. [PMID: 31425026 DOI: 10.1109/tmi.2019.2935553] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Accurate pulmonary nodule detection is a crucial step in lung cancer screening. Computer-aided detection (CAD) systems are not routinely used by radiologists for pulmonary nodule detection in clinical practice despite their potential benefits. Maximum intensity projection (MIP) images improve the detection of pulmonary nodules in radiological evaluation with computed tomography (CT) scans. Inspired by the clinical methodology of radiologists, we aim to explore the feasibility of applying MIP images to improve the effectiveness of automatic lung nodule detection using convolutional neural networks (CNNs). We propose a CNN-based approach that takes MIP images of different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices as input. Such an approach augments the two-dimensional (2-D) CT slice images with more representative spatial information that helps discriminate nodules from vessels through their morphologies. Our proposed method achieves a sensitivity of 92.7% with 1 false positive per scan and a sensitivity of 94.2% with 2 false positives per scan for lung nodule detection on 888 scans in the LIDC-IDRI dataset. The use of thick MIP images helps the detection of small pulmonary nodules (3 mm-10 mm) and results in fewer false positives. Experimental results show that utilizing MIP images can increase the sensitivity and lower the number of false positives, which demonstrates the effectiveness and significance of the proposed MIP-based CNN framework for automatic pulmonary nodule detection in CT scans. The proposed method also suggests that CNN-based nodule detection can benefit from incorporating the clinical reading procedure.
31
A Computer-Aided Detection System for the Detection of Lung Nodules Based on 3D-ResNet. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9245544] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
In recent years, the research into automatic aided detection systems for pulmonary nodules has been extremely active. Most of the existing studies are based on 2D convolution neural networks, which cannot make full use of computed tomography’s (CT) 3D spatial information. To address this problem, a computer-aided detection (CAD) system for lung nodules based on a 3D residual network (3D-ResNet) inspired by cognitive science is proposed in this paper. In this system, we feed the slice information extracted from three different axis planes into the U-NET network set and make a joint decision to generate a candidate nodule set, which is the input of the proposed 3D residual network after extraction. We extracted 3D samples with sides of 40, 44, 48, 52, and 56 mm from each candidate nodule in the candidate set, re-sampled each 3D sample to 48 × 48 × 48 mm³, and fed them into the trained residual network to obtain the probability of a positive nodule. Finally, a joint judgment is made based on the probabilities of the five 3D samples of different sizes to obtain the final result. Random rotation and translation and data amplification technology are used to prevent overfitting during network training. The detection sensitivity on the largest public dataset (the Lung Image Database Consortium and Image Database Resource Initiative, LIDC-IDRI) reached 86.5% and 92.3% at 1 false positive (FP) per scan and 4 FPs per scan, respectively, using our algorithm, which is better than most CAD systems using 2D convolutional neural networks. In addition, a 3D residual network and a multi-section 2D convolution neural network were tested on the unrelated Tianchi dataset. The results indicate that 3D-ResNet has better feature extraction ability than multi-section 2D-ConvNet and is more suitable for the reduction of false-positive nodules.
32
Gruetzemacher R, Gupta A, Paradice D. 3D deep learning for detecting pulmonary nodules in CT scans. J Am Med Inform Assoc 2019; 25:1301-1310. [PMID: 30137371 DOI: 10.1093/jamia/ocy098] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Accepted: 07/03/2018] [Indexed: 01/09/2023] Open
Abstract
Objective To demonstrate and test the validity of a novel deep-learning-based system for the automated detection of pulmonary nodules. Materials and Methods The proposed system uses two 3D deep learning models, one for each of the essential tasks of computer-aided nodule detection: candidate generation and false positive reduction. A total of 888 scans from the LIDC-IDRI dataset were used for training and evaluation. Results Results for candidate generation on the test data indicated a detection rate of 94.77% with 30.39 false positives per scan, while the test results for false positive reduction exhibited a sensitivity of 94.21% with 1.789 false positives per scan. The overall system detection rate on the test data was 89.29% with 1.789 false positives per scan. Discussion An extensive and rigorous validation was conducted to assess the performance of the proposed system. The system demonstrated a novel combination of 3D deep neural network architectures, using deep learning for both candidate generation and false positive reduction, and was evaluated on a substantial test dataset. The results strongly support the ability of deep learning pulmonary nodule detection systems to generalize to unseen data. The source code and trained model weights have been made available. Conclusion A novel deep-neural-network-based pulmonary nodule detection system is demonstrated and validated. The results allow the proposed deep-learning-based system to be compared with similar systems on the basis of performance.
Affiliation(s)
- Ross Gruetzemacher
- Department of Systems & Technology, Raymond J. Harbert College of Business, Auburn University, Auburn, AL, USA 36849
- Ashish Gupta
- Department of Systems & Technology, Raymond J. Harbert College of Business, Auburn University, Auburn, AL, USA 36849
- David Paradice
- Department of Systems & Technology, Raymond J. Harbert College of Business, Auburn University, Auburn, AL, USA 36849
33
Zuo W, Zhou F, He Y, Li X. Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network. Med Phys 2019; 46:5499-5513. [PMID: 31621916 DOI: 10.1002/mp.13867] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 10/01/2019] [Accepted: 10/02/2019] [Indexed: 12/19/2022] Open
Abstract
OBJECTIVE In an automatic lung nodule detection system, the authenticity of a large number of nodule candidates needs to be judged, which is a classification task. However, the variable shapes and sizes of lung nodules pose a great challenge to the classification of candidates. To solve this problem, we propose a method for classifying nodule candidates with a three-dimensional (3D) convolution neural network (ConvNet) model that is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS In this scheme, a novel 3D ConvNet model is pre-weighted with the weights of the trained 2D ConvNet model, and then the 3D ConvNet model is trained with 3D image volumes. In this way, the knowledge transfer method makes the 3D network easier to converge and makes full use of the spatial information of nodules with different sizes and shapes to improve the classification accuracy. RESULTS The experimental results on 551 065 pulmonary nodule candidates in the LUNA16 dataset show that our method gains a competitive average score on the false-positive reduction track in lung nodule detection, with sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS The proposed method maintains satisfactory classification accuracy even at extremely small false-positive rates, in the face of nodules of different sizes and shapes. Moreover, as a transfer learning idea, the method of transferring knowledge from a 2D ConvNet to a 3D ConvNet is the first attempt to carry out full migration of the parameters of all layers, including convolution layers, fully connected layers, and the classifier, between models of different dimensionality, which makes it easier to utilize existing 2D ConvNet resources and to generalize transfer learning schemes.
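One common way to pre-weight a 3D ConvNet from a trained 2D one is to replicate each 2D kernel along the new depth axis and rescale. This inflation scheme is shown only for illustration and is an assumption; the paper's exact migration of convolution, fully connected, and classifier parameters may differ:

```python
import numpy as np

def inflate_kernel(w2d, depth):
    """Inflate a 2D conv kernel of shape (out_ch, in_ch, kh, kw) into a
    3D kernel of shape (out_ch, in_ch, depth, kh, kw). Replicating along
    the depth axis and dividing by depth keeps the response to a
    depthwise-constant input equal to the 2D layer's response."""
    return np.repeat(w2d[:, :, None, :, :], depth, axis=2) / depth
```

Initializing every 3D convolution this way gives the 3D network a sensible starting point, after which it is fine-tuned on the 3D nodule volumes.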
Affiliation(s)
- Wangxia Zuo
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100083, China; College of Electrical Engineering, University of South China, Hengyang, Hunan, 421001, China
- Fuqiang Zhou
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100083, China
- Yuzhu He
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100083, China
- Xiaosong Li
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100083, China
34
Zhu H, Zhao H, Song C, Bian Z, Bi Y, Liu T, He X, Yang D, Cai W. MR-Forest: A Deep Decision Framework for False Positive Reduction in Pulmonary Nodule Detection. IEEE J Biomed Health Inform 2019; 24:1652-1663. [PMID: 31634145 DOI: 10.1109/jbhi.2019.2947506] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
With the development of deep learning methods such as the convolutional neural network (CNN), the accuracy of automated pulmonary nodule detection has been greatly improved. However, the high computational and storage costs of large-scale networks are a potential concern for future widespread clinical application. In this paper, an alternative Multi-ringed (MR)-Forest framework, set against resource-consuming neural network (NN)-based architectures, is proposed for false positive reduction in pulmonary nodule detection; it consists of three steps. First, a novel multi-ringed scanning method is used to extract the order ring facets (ORFs) from the surface voxels of the volumetric nodule models. Second, Mesh-LBP and mapping deformation are employed to estimate texture and shape features. By sliding and resampling the multi-ringed ORFs, feature volumes of different lengths are generated. Finally, the multi-level outputs are cascaded to predict the candidate class. On 1034 scans merging the dataset from the Affiliated Hospital of Liaoning University of Traditional Chinese Medicine (AH-LUTCM) and the LUNA16 Challenge dataset, our framework performs competitively with the state of the art in the false positive reduction task (CPM score of 0.865). Experimental results demonstrate that MR-Forest is a successful solution that satisfies both resource constraints and effectiveness for automated pulmonary nodule detection. The proposed MR-Forest is a general architecture for 3D target detection and can easily be extended to many other medical image analysis tasks in which the growth trend of the target object can be approximated as a spheroidal expansion.
35
Multi-Scale Heterogeneous 3D CNN for False-Positive Reduction in Pulmonary Nodule Detection, Based on Chest CT Images. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9163261] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Abstract
Currently, lung cancer has one of the highest mortality rates because it is often caught too late. Therefore, early detection is essential to reduce the risk of death. Pulmonary nodules are considered key indicators of primary lung cancer. Developing an efficient and accurate computer-aided diagnosis system for pulmonary nodule detection is an important goal. Typically, a computer-aided diagnosis system for pulmonary nodule detection consists of two parts: candidate nodule extraction and false-positive reduction of candidate nodules. The reduction of false positives (FPs) among candidate nodules remains an important challenge because of the large morphological variation of nodules and their similarity to other organs. In this study, we propose a novel multi-scale heterogeneous three-dimensional (3D) convolutional neural network (MSH-CNN) based on chest computed tomography (CT) images. The design has three main strategies: (1) multi-scale 3D nodule blocks with different levels of contextual information are used as inputs; (2) two different branches of 3D CNN are used to extract the expression features; (3) a set of weights determined by back propagation is used to fuse the expression features produced by step (2). To test the performance of the algorithm, we trained and tested on the Lung Nodule Analysis 2016 (LUNA16) dataset, achieving an average competition performance metric (CPM) score of 0.874 and a sensitivity of 91.7% at two FPs/scan. Moreover, our framework is universal and can easily be extended to other candidate false-positive reduction tasks in 3D object detection, as well as 3D object classification.
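The competition performance metric (CPM) reported here, and by several other systems in this list, is the average sensitivity at seven operating points of the FROC curve (1/8, 1/4, 1/2, 1, 2, 4, and 8 FPs per scan, as defined by the LUNA16 challenge). A minimal sketch of its computation from FROC points:

```python
import numpy as np

def cpm_score(fps_per_scan, sensitivity):
    """Competition Performance Metric: mean sensitivity at 1/8, 1/4,
    1/2, 1, 2, 4, and 8 false positives per scan, linearly interpolated
    from FROC points (fps_per_scan must be increasing)."""
    operating_points = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    return float(np.mean(np.interp(operating_points, fps_per_scan, sensitivity)))
```

A system's FROC curve is obtained by sweeping the score threshold; feeding the resulting (FPs/scan, sensitivity) pairs into this function yields the single CPM number used to rank methods.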
36
Shaukat F, Raja G, Frangi AF. Computer-aided detection of lung nodules: a review. J Med Imaging (Bellingham) 2019. [DOI: 10.1117/1.jmi.6.2.020901] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Affiliation(s)
- Furqan Shaukat
- University of Engineering and Technology, Department of Electrical Engineering, Taxila
- Gulistan Raja
- University of Engineering and Technology, Department of Electrical Engineering, Taxila
- Alejandro F. Frangi
- School of Computing and School of Medicine, University of Leeds, Woodhouse Lane, Leeds
37
Huang X, Sun W, Tseng TL(B, Li C, Qian W. Fast and fully-automated detection and segmentation of pulmonary nodules in thoracic CT scans using deep convolutional neural networks. Comput Med Imaging Graph 2019; 74:25-36. [DOI: 10.1016/j.compmedimag.2019.02.003] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Revised: 01/09/2019] [Accepted: 02/18/2019] [Indexed: 12/24/2022]
38
Kim BC, Yoon JS, Choi JS, Suk HI. Multi-scale gradual integration CNN for false positive reduction in pulmonary nodule detection. Neural Netw 2019; 115:1-10. [PMID: 30909118 DOI: 10.1016/j.neunet.2019.03.003] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Revised: 12/24/2018] [Accepted: 03/07/2019] [Indexed: 12/22/2022]
Abstract
Lung cancer is a global and dangerous disease, and its early detection is crucial for reducing the risk of mortality. In this regard, there has been great interest in developing computer-aided systems for detecting pulmonary nodules as early as possible on thoracic CT scans. In general, a nodule detection system involves two steps: (i) candidate nodule detection at a high sensitivity, which captures many false positives, and (ii) false positive reduction from the candidates. However, due to the high variation of nodule morphological characteristics and the possibility of mistaking them for neighboring organs, candidate nodule detection remains a challenge. In this study, we propose a novel Multi-scale Gradual Integration Convolutional Neural Network (MGI-CNN), designed with three main strategies: (1) to use multi-scale inputs with different levels of contextual information, (2) to use abstract information inherent in different input scales with gradual integration, and (3) to learn multi-stream feature integration in an end-to-end manner. To verify the efficacy of the proposed network, we conducted exhaustive experiments on the LUNA16 challenge datasets, comparing the performance of the proposed method with state-of-the-art methods in the literature. On two candidate subsets of the LUNA16 dataset, i.e., V1 and V2, our method achieved an average CPM of 0.908 (V1) and 0.942 (V2), outperforming comparable methods by a large margin. Our MGI-CNN is implemented in Python using TensorFlow and the source code is available from https://github.com/ku-milab/MGICNN.
Affiliation(s)
- Bum-Chae Kim
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Jun-Sik Choi
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea.
39
Spyridonos P, Gaitanis G, Likas A, Bassukas ID. Late fusion of deep and shallow features to improve discrimination of actinic keratosis from normal skin using clinical photography. Skin Res Technol 2019; 25:538-543. [PMID: 30762255 DOI: 10.1111/srt.12684] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2018] [Revised: 12/17/2018] [Accepted: 01/12/2019] [Indexed: 11/29/2022]
Abstract
BACKGROUND Actinic keratosis (AK) is a common premalignant skin lesion that can potentially progress to squamous cell carcinoma. Appropriate long-term management of AK requires close patient monitoring in addition to therapeutic interventions. Computer-aided diagnostic systems based on clinical photography might evolve in the future into valuable adjuncts to AK patient management. The present study proposes a late fusion approach of color-texture features (shallow features) and deep features extracted from pre-trained convolutional neural networks (CNNs) to boost AK detection accuracy on clinical photographs. MATERIALS AND METHODS The system uses a sliding rectangular window of 50 × 50 pixels and a classifier that assigns the window region to either the AK or the healthy skin class. In total, 6010 and 13 915 cropped regions of interest (ROIs) of 50 × 50 pixels of AK and healthy skin, respectively, from 22 patients were used for system implementation. Different support vector machine (SVM) classifiers employing shallow or deep features, and their late fusion using the max rule at the decision level, were compared with the McNemar test and Yule's Q-statistic. RESULTS Support vector machine classifiers based on deep and shallow features exhibited overall competitive performances with complementary improvements in detection accuracy. Late fusion yielded a significant improvement (6%) in both sensitivity (87%) and specificity (86%) compared to single-classifier performance. CONCLUSION The parallel improvement of sensitivity and specificity is encouraging, demonstrating the potential use of our system in evaluating AK burden. The latter might be of value in future clinical studies for the comparison of field-directed treatment interventions.
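Decision-level fusion with the max rule, as used above to combine the shallow-feature and deep-feature SVMs, can be sketched as follows. Here each classifier outputs per-class probabilities and each sample takes its decision from whichever classifier is more confident; this is one common reading of the max rule, not the authors' code:

```python
def late_fusion_max(probs_a, probs_b):
    """Max-rule late fusion at the decision level: for each sample,
    keep the class-probability vector of the more confident classifier
    (the one whose maximum class probability is larger)."""
    fused = []
    for pa, pb in zip(probs_a, probs_b):
        fused.append(pa if max(pa) >= max(pb) else pb)
    return fused
```

Because the two classifiers err in complementary ways, deferring to the more confident one is what allows sensitivity and specificity to improve together.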
Affiliation(s)
- Panagiota Spyridonos
- Department of Medical Physics, Faculty of Medicine, School of Health Sciences, University of Ioannina, Ioannina, Greece
- Georgios Gaitanis
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, Ioannina, Greece
- Aristidis Likas
- Department of Computer Science & Engineering, University of Ioannina, Ioannina, Greece
- Ioannis D Bassukas
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, Ioannina, Greece

40
NODULe: Combining constrained multi-scale LoG filters with densely dilated 3D deep convolutional neural network for pulmonary nodule detection. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.08.022] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
41
Gu Y, Lu X, Yang L, Zhang B, Yu D, Zhao Y, Gao L, Wu L, Zhou T. Automatic lung nodule detection using a 3D deep convolutional neural network combined with a multi-scale prediction strategy in chest CTs. Comput Biol Med 2018; 103:220-231. [PMID: 30390571 DOI: 10.1016/j.compbiomed.2018.10.011] [Citation(s) in RCA: 79] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Revised: 10/11/2018] [Accepted: 10/11/2018] [Indexed: 12/17/2022]
Abstract
OBJECTIVE A novel computer-aided detection (CAD) scheme for lung nodule detection using a 3D deep convolutional neural network combined with a multi-scale prediction strategy is proposed to assist radiologists by providing a second opinion on accurate lung nodule detection, which is a crucial step in early diagnosis of lung cancer. METHOD After the lungs were segmented from chest CT scans using a comprehensive segmentation method, a 3D deep convolutional neural network (CNN) with multi-scale prediction was used to detect lung nodules. Compared with a 2D CNN, a 3D CNN can utilize richer spatial 3D contextual information and generate more discriminative features after being trained with 3D samples that fully represent lung nodules. Furthermore, a multi-scale lung nodule prediction strategy, including multi-scale cube prediction and cube clustering, is also proposed to detect extremely small nodules. RESULT The proposed method was evaluated on 888 thin-slice scans with 1186 nodules in the LUNA16 database. All results were obtained via 10-fold cross-validation. Three options of the proposed scheme are provided for selection according to actual needs. The sensitivity of the proposed scheme with the primary option reached 87.94% and 92.93% at one and four false positives per scan, respectively. Meanwhile, the competition performance metric (CPM) score reached a satisfying 0.7967. CONCLUSION The experimental results demonstrate the outstanding detection performance of the proposed nodule detection scheme. In addition, the proposed scheme can be extended to other medical image recognition fields.
Affiliation(s)
- Yu Gu
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China; Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Xiaoqi Lu
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China; Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Lixin Gao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Foreign Languages, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Liang Wu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Tao Zhou
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China

42
Eun H, Kim D, Jung C, Kim C. Single-view 2D CNNs with fully automatic non-nodule categorization for false positive reduction in pulmonary nodule detection. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 165:215-224. [PMID: 30337076 DOI: 10.1016/j.cmpb.2018.08.012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Revised: 07/27/2018] [Accepted: 08/17/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE In pulmonary nodule detection, the first stage, candidate detection, aims to detect suspicious pulmonary nodules. However, detected candidates include many false positives, which must be reliably reduced in the subsequent false positive reduction stage. This task is challenging due to 1) the imbalance between the numbers of nodules and non-nodules and 2) the intra-class diversity of non-nodules. Although techniques using 3D convolutional neural networks (CNNs) have shown promising performance, they suffer from high computational complexity, which hinders constructing deep networks. To efficiently address these problems, we propose a novel framework using an ensemble of single-view 2D CNNs, which outperforms existing 3D CNN-based methods. METHODS Our ensemble of 2D CNNs utilizes single-view 2D patches to improve both computational and memory efficiency compared to previous techniques exploiting 3D CNNs. We first categorize non-nodules on the basis of features encoded by an autoencoder. Then, all 2D CNNs are trained with the same nodule samples but with different types of non-nodules. By extending the learning capability, this training scheme resolves the difficulty of extracting representative features from non-nodules with large appearance variations. Note that, instead of manual categorization requiring a heavy workload from radiologists, we propose to automatically categorize non-nodules based on the autoencoder and k-means clustering. RESULTS We performed extensive experiments to validate the effectiveness of our framework on the database of the Lung Nodule Analysis 2016 challenge. The superiority of our framework is demonstrated by comparing the performance of five frameworks trained with differently constructed training sets. Our proposed framework achieved state-of-the-art performance (a competition performance metric score of 0.922) with low computational demands (789K parameters and 1024M floating-point operations). CONCLUSION We presented a novel false positive reduction framework, an ensemble of single-view 2D CNNs with fully automatic non-nodule categorization, for pulmonary nodule detection. Unlike previous 3D CNN-based frameworks, we utilized 2D CNNs on single 2D views to improve computational efficiency. In addition, our training scheme using categorized non-nodules extends the learning capability to representative features of different non-nodules. Our framework achieved state-of-the-art performance with low computational complexity.
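The automatic non-nodule categorization step (autoencoder codes clustered with k-means) can be sketched as below. This is a stand-in illustration: randomly generated feature vectors replace real autoencoder bottleneck codes, and the category count is arbitrary, not the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans

def categorize_non_nodules(features, n_categories=3, seed=0):
    """Cluster encoded non-nodule features into pseudo-categories.

    features: (n_candidates, code_dim) array of bottleneck codes from a
    trained autoencoder (here, any feature matrix stands in for them).
    Each resulting cluster label would select which 2D CNN of the
    ensemble that non-nodule candidate is used to train.
    """
    km = KMeans(n_clusters=n_categories, n_init=10, random_state=seed)
    return km.fit_predict(features)

rng = np.random.default_rng(0)
# Stand-in "autoencoder codes" for 60 non-nodule candidates.
codes = rng.normal(size=(60, 16))
labels = categorize_non_nodules(codes, n_categories=3)
print(sorted(set(labels.tolist())))  # [0, 1, 2]
```

The point of the scheme is that each CNN in the ensemble sees all nodules but only one cluster of non-nodules, so no single network has to model the full appearance diversity of the negative class.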
Affiliation(s)
- Hyunjun Eun
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Republic of Korea
- Daeyeong Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Republic of Korea
- Chanho Jung
- Department of Electrical Engineering, Hanbat National University, Republic of Korea
- Changick Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Republic of Korea

43
Gupta A, Saar T, Martens O, Moullec YL. Automatic detection of multisize pulmonary nodules in CT images: Large-scale validation of the false-positive reduction step. Med Phys 2018; 45:1135-1149. [DOI: 10.1002/mp.12746] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2016] [Revised: 11/07/2017] [Accepted: 12/14/2017] [Indexed: 11/08/2022] Open
Affiliation(s)
- Anindya Gupta
- Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Tallinn 19086, Estonia
- Tonis Saar
- Eliko Tehnoloogia Arenduskeskus OÜ; Tallinn 12618 and OÜ Tallinn 10143 Estonia
- Olev Martens
- Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Tallinn 19086, Estonia
- Yannick Le Moullec
- Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Tallinn 19086, Estonia

44
Dalmış MU, Vreemann S, Kooi T, Mann RM, Karssemeijer N, Gubern-Mérida A. Fully automated detection of breast cancer in screening MRI using convolutional neural networks. J Med Imaging (Bellingham) 2018; 5:014502. [PMID: 29340287 PMCID: PMC5763014 DOI: 10.1117/1.jmi.5.1.014502] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Accepted: 12/18/2017] [Indexed: 11/14/2022] Open
Abstract
Current computer-aided detection (CADe) systems for contrast-enhanced breast MRI rely on both spatial information obtained from the early phase and temporal information obtained from the late phase of the contrast enhancement. However, late-phase information might not be available in a screening setting, such as in abbreviated MRI protocols, where acquisition is limited to early-phase scans. We used deep learning to develop a CADe system that exploits the spatial information obtained from the early-phase scans. This system uses three-dimensional (3-D) morphological information in the candidate locations and the symmetry information arising from the enhancement differences of the two breasts. We compared the proposed system to a previously developed system, which uses the full dynamic breast MRI protocol. For training and testing, we used 385 MRI scans, containing 161 malignant lesions. Performance was measured by averaging the sensitivity values between 1/8 and eight false positives per scan. In our experiments, the proposed system obtained a significantly ([Formula: see text]) higher average sensitivity ([Formula: see text]) compared with that of the previous CADe system ([Formula: see text]). In conclusion, we developed a CADe system that is able to exploit the spatial information obtained from the early-phase scans and can be used in screening programs where abbreviated MRI protocols are used.
Affiliation(s)
- Mehmet Ufuk Dalmış
- Radboud University Medical Center (RadboudUMC), Diagnostic Image Analysis Group (DIAG), Nijmegen, The Netherlands
- Suzan Vreemann
- Radboud University Medical Center (RadboudUMC), Diagnostic Image Analysis Group (DIAG), Nijmegen, The Netherlands
- Thijs Kooi
- Radboud University Medical Center (RadboudUMC), Diagnostic Image Analysis Group (DIAG), Nijmegen, The Netherlands
- Ritse M. Mann
- Radboud University Medical Center (RadboudUMC), Diagnostic Image Analysis Group (DIAG), Nijmegen, The Netherlands
- Nico Karssemeijer
- Radboud University Medical Center (RadboudUMC), Diagnostic Image Analysis Group (DIAG), Nijmegen, The Netherlands
- Albert Gubern-Mérida
- Radboud University Medical Center (RadboudUMC), Diagnostic Image Analysis Group (DIAG), Nijmegen, The Netherlands

45
Setio AAA, Traverso A, de Bel T, Berens MS, Bogaard CVD, Cerello P, Chen H, Dou Q, Fantacci ME, Geurts B, Gugten RVD, Heng PA, Jansen B, de Kaste MM, Kotov V, Lin JYH, Manders JT, Sóñora-Mengana A, García-Naranjo JC, Papavasileiou E, Prokop M, Saletta M, Schaefer-Prokop CM, Scholten ET, Scholten L, Snoeren MM, Torres EL, Vandemeulebroucke J, Walasek N, Zuidhof GC, Ginneken BV, Jacobs C. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Med Image Anal 2017; 42:1-13. [PMID: 28732268 DOI: 10.1016/j.media.2017.06.015] [Citation(s) in RCA: 441] [Impact Index Per Article: 55.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2016] [Revised: 05/18/2017] [Accepted: 06/29/2017] [Indexed: 12/17/2022]
46
Shaukat F, Raja G, Gooya A, Frangi AF. Fully automatic detection of lung nodules in CT images using a hybrid feature set. Med Phys 2017; 44:3615-3629. [DOI: 10.1002/mp.12273] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2016] [Revised: 02/15/2017] [Accepted: 03/28/2017] [Indexed: 11/06/2022] Open
Affiliation(s)
- Furqan Shaukat
- Department of Electrical Engineering, University of Engineering & Technology, Taxila, 47080, Pakistan
- Gulistan Raja
- Department of Electrical Engineering, University of Engineering & Technology, Taxila, 47080, Pakistan
- Ali Gooya
- Department of Electronic and Electrical Engineering, University of Sheffield, Mappin Street, Sheffield, S1 3JD, UK
- Alejandro F Frangi
- Department of Electronic and Electrical Engineering, University of Sheffield, Mappin Street, Sheffield, S1 3JD, UK

47
Ishihara K, Ogawa T, Haseyama M. Helicobacter Pylori infection detection from gastric X-ray images based on feature fusion and decision fusion. Comput Biol Med 2017; 84:69-78. [PMID: 28346875 DOI: 10.1016/j.compbiomed.2017.03.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Revised: 03/07/2017] [Accepted: 03/08/2017] [Indexed: 12/18/2022]
Abstract
In this paper, a fully automatic method for detection of Helicobacter pylori (H. pylori) infection is presented with the aim of constructing a computer-aided diagnosis (CAD) system. In order to realize a CAD system with good performance for detection of H. pylori infection, we focus on the following characteristic of stomach X-ray examination. The accuracy of X-ray examination differs depending on the symptom of H. pylori infection that is focused on and the position from which X-ray images are taken. Therefore, doctors have to comprehensively assess the symptoms and positions. In order to introduce the idea of doctors' assessment into the CAD system, we newly propose a method for detection of H. pylori infection based on the combined use of feature fusion and decision fusion. As a feature fusion scheme, we adopt Multiple Kernel Learning (MKL). Since MKL can combine several features with determination of their weights, it can represent the differences in symptoms. By constructing an MKL classifier for each position, we can obtain several detection results. Furthermore, we introduce confidence-based decision fusion, which can consider the relationship between the classifier's performance and the detection results. Consequently, accurate detection of H. pylori infection becomes possible by the proposed method. Experimental results obtained by applying the proposed method to real X-ray images show that our method has good performance, close to the results of detection by specialists, and indicate that the realization of a CAD system for determining the risk of H. pylori infection is possible.
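The kernel-combination idea behind Multiple Kernel Learning can be sketched as below. This is a simplification, not the paper's formulation: real MKL learns the per-kernel weights during training, whereas here a fixed convex combination of two base kernels is hand-set and fed to an SVM with a precomputed kernel; the data and weights are invented for the example:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

def combined_kernel(Xa, Xb, weights=(0.6, 0.4)):
    """Fixed-weight stand-in for MKL: a convex combination of two base
    kernels, one per feature view. Real MKL would learn `weights`."""
    return weights[0] * rbf_kernel(Xa, Xb) + weights[1] * linear_kernel(Xa, Xb)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))          # toy feature vectors
y = (X[:, 0] > 0).astype(int)         # toy binary labels

# SVC accepts a precomputed Gram matrix when kernel="precomputed".
clf = SVC(kernel="precomputed").fit(combined_kernel(X, X), y)
pred = clf.predict(combined_kernel(X, X))
print(pred.shape)  # (40,)
```

Building one such classifier per X-ray position, then fusing their decisions weighted by each classifier's confidence, mirrors the feature-fusion-plus-decision-fusion structure the abstract describes.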
Affiliation(s)
- Kenta Ishihara
- Graduate School of Information Science and Technology, Hokkaido University, Kita-14, Nishi-9, Sapporo-shi 060-0814, Japan
- Takahiro Ogawa
- Graduate School of Information Science and Technology, Hokkaido University, Kita-14, Nishi-9, Sapporo-shi 060-0814, Japan
- Miki Haseyama
- Graduate School of Information Science and Technology, Hokkaido University, Kita-14, Nishi-9, Sapporo-shi 060-0814, Japan

48
Dou Q, Chen H, Yu L, Qin J, Heng PA. Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection. IEEE Trans Biomed Eng 2017; 64:1558-1567. [PMID: 28113302 DOI: 10.1109/tbme.2016.2613502] [Citation(s) in RCA: 213] [Impact Index Per Article: 26.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE False positive reduction is one of the most crucial components in an automated pulmonary nodule detection system, which plays an important role in lung cancer diagnosis and early treatment. The objective of this paper is to effectively address the challenges in this task and therefore to accurately discriminate the true nodules from a large number of candidates. METHODS We propose a novel method employing three-dimensional (3-D) convolutional neural networks (CNNs) for false positive reduction in automated pulmonary nodule detection from volumetric computed tomography (CT) scans. Compared with its 2-D counterparts, the 3-D CNNs can encode richer spatial information and extract more representative features via their hierarchical architecture trained with 3-D samples. More importantly, we further propose a simple yet effective strategy to encode multilevel contextual information to meet the challenges coming with the large variations and hard mimics of pulmonary nodules. RESULTS The proposed framework has been extensively validated in the LUNA16 challenge held in conjunction with ISBI 2016, where we achieved the highest competition performance metric (CPM) score in the false positive reduction track. CONCLUSION Experimental results demonstrated the importance and effectiveness of integrating multilevel contextual information into 3-D CNN framework for automated pulmonary nodule detection in volumetric CT data. SIGNIFICANCE While our method is tailored for pulmonary nodule detection, the proposed framework is general and can be easily extended to many other 3-D object detection tasks from volumetric medical images, where the targeting objects have large variations and are accompanied by a number of hard mimics.
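The multilevel-context idea, nested 3D crops of increasing receptive field resampled to a common size so that each can feed a parallel 3D-CNN branch, can be sketched as follows. The crop sizes here are illustrative, not the paper's exact receptive fields:

```python
import numpy as np
from scipy.ndimage import zoom

def multilevel_contexts(volume, center, sizes=(20, 30, 40), out=20):
    """Crop nested cubes of increasing context around a candidate and
    resample each to a common (out, out, out) shape via trilinear
    interpolation, ready for separate 3D-CNN input branches."""
    z, y, x = center
    crops = []
    for s in sizes:
        h = s // 2
        crop = volume[z - h:z + h, y - h:y + h, x - h:x + h]
        crops.append(zoom(crop, out / s, order=1))
    return crops

vol = np.zeros((64, 64, 64), dtype=np.float32)
crops = multilevel_contexts(vol, center=(32, 32, 32))
print([c.shape for c in crops])  # [(20, 20, 20), (20, 20, 20), (20, 20, 20)]
```

The smallest crop preserves fine nodule texture at native resolution, while the larger crops trade resolution for surrounding anatomical context, which is what helps separate true nodules from hard mimics.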
49
Setio AAA, Ciompi F, Litjens G, Gerke P, Jacobs C, van Riel SJ, Wille MMW, Naqibullah M, Sanchez CI, van Ginneken B. Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1160-1169. [PMID: 26955024 DOI: 10.1109/tmi.2016.2536809] [Citation(s) in RCA: 531] [Impact Index Per Article: 59.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
We propose a novel Computer-Aided Detection (CAD) system for pulmonary nodules using multi-view convolutional networks (ConvNets), for which discriminative features are automatically learnt from the training data. The network is fed with nodule candidates obtained by combining three candidate detectors specifically designed for solid, subsolid, and large nodules. For each candidate, a set of 2-D patches from differently oriented planes is extracted. The proposed architecture comprises multiple streams of 2-D ConvNets, for which the outputs are combined using a dedicated fusion method to get the final classification. Data augmentation and dropout are applied to avoid overfitting. On 888 scans of the publicly available LIDC-IDRI dataset, our method reaches high detection sensitivities of 85.4% and 90.1% at 1 and 4 false positives per scan, respectively. An additional evaluation on independent datasets from the ANODE09 challenge and DLCST is performed. We showed that the proposed multi-view ConvNets is highly suited to be used for false positive reduction of a CAD system.
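Extracting 2D patches from differently oriented planes around a candidate, as the multi-view ConvNet streams consume, can be sketched for the three axis-aligned orthogonal planes (the paper uses nine oriented views; this sketch covers only the axial, coronal, and sagittal cases, and assumes the candidate is far enough from the volume border):

```python
import numpy as np

def extract_orthogonal_patches(volume, center, size=32):
    """Extract axial, coronal, and sagittal 2D patches around a candidate.

    volume: 3D array indexed (z, y, x); center: (z, y, x) voxel
    coordinates of the candidate. Returns three (size, size) patches,
    one per orthogonal plane, each of which would feed one 2D
    ConvNet stream before fusion.
    """
    z, y, x = center
    h = size // 2
    axial    = volume[z, y - h:y + h, x - h:x + h]  # fixed z slice
    coronal  = volume[z - h:z + h, y, x - h:x + h]  # fixed y slice
    sagittal = volume[z - h:z + h, y - h:y + h, x]  # fixed x slice
    return axial, coronal, sagittal

vol = np.zeros((64, 64, 64), dtype=np.float32)
views = extract_orthogonal_patches(vol, center=(32, 32, 32), size=32)
print([v.shape for v in views])  # [(32, 32), (32, 32), (32, 32)]
```

Each stream's output is then merged by a dedicated fusion layer to produce the final nodule probability, which is where the "multi-view" architecture differs from a single-view 2D CNN.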
50
Oğul H, Oğul BB, Ağıldere AM, Bayrak T, Sümer E. Eliminating rib shadows in chest radiographic images providing diagnostic assistance. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2016; 127:174-184. [PMID: 26775736 DOI: 10.1016/j.cmpb.2015.12.006] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/12/2015] [Revised: 10/30/2015] [Accepted: 12/17/2015] [Indexed: 06/05/2023]
Abstract
A major difficulty with chest radiographic analysis is the invisibility of abnormalities caused by the superimposition of normal anatomical structures, such as ribs, over the main tissue to be examined. Suppressing the ribs with no loss of information about the original tissue would therefore be helpful during manual identification or computer-aided detection of nodules on a chest radiographic image. In this study, we introduce a two-step algorithm for eliminating rib shadows in chest radiographic images. The algorithm first delineates the ribs using a novel hybrid self-template approach and then suppresses these delineated ribs using an unsupervised regression model that takes into account the change in proximal thickness (depth) of bone along the vertical axis. The performance of the system is evaluated on a benchmark set of real chest radiographic images. The experimental results show that the proposed method for rib delineation provides higher accuracy than existing methods. Knowledge of the rib delineation can remarkably improve the nodule detection performance of a current computer-aided diagnosis (CAD) system. It is also shown that the rib suppression algorithm can increase nodule visibility by eliminating rib shadows while largely preserving the nodule intensity.
Affiliation(s)
- Hasan Oğul
- Department of Computer Engineering, Başkent University, Ankara, Turkey
- A Muhteşem Ağıldere
- Department of Radiology, Faculty of Medicine, Başkent University, Ankara, Turkey
- Tuncay Bayrak
- Department of Computer Engineering, Başkent University, Ankara, Turkey; Medicines and Medical Devices Agency of Turkey, Ankara, Turkey
- Emre Sümer
- Department of Computer Engineering, Başkent University, Ankara, Turkey