1
Li Y, Hui L, Wang X, Zou L, Chua S. Lung nodule detection using a multi-scale convolutional neural network and global channel spatial attention mechanisms. Sci Rep 2025; 15:12313. [PMID: 40210738 PMCID: PMC11986029 DOI: 10.1038/s41598-025-97187-w]
Abstract
Early detection of lung nodules is crucial for the prevention and treatment of lung cancer. However, current methods face challenges such as missed small nodules, variations in nodule size, and high false positive rates. To address these challenges, we propose a Global Channel Spatial Attention Mechanism (GCSAM). Building on it, we develop a Candidate Nodule Detection Network (CNDNet) and a False Positive Reduction Network (FPRNet). CNDNet employs Res2Net as its backbone to capture multi-scale features of lung nodules, using GCSAM to fuse global contextual information, adaptively adjust feature weights, and refine processing along the spatial dimension. Additionally, we design a Hierarchical Progressive Feature Fusion (HPFF) module to effectively combine deep semantic information with shallow positional information, enabling high-sensitivity detection of nodules of varying sizes. FPRNet significantly reduces the false positive rate by accurately distinguishing true nodules from similar structures. Experimental results on the LUNA16 dataset demonstrate that our method achieves a competition performance metric (CPM) of 0.929 and a sensitivity of 0.977 at 2 false positives per scan. Compared with existing methods, the proposed method effectively reduces false positives while maintaining high sensitivity, achieving competitive results.
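The CPM reported here is the LUNA16 competition performance metric: sensitivity averaged over seven predefined false-positive rates (1/8, 1/4, 1/2, 1, 2, 4 and 8 FPs per scan) on the FROC curve. A minimal sketch of how it can be computed from FROC operating points is shown below; the FP/sensitivity values are illustrative placeholders, not results from this paper.

```python
# Sketch: LUNA16-style CPM from FROC operating points (illustrative values only).
import numpy as np

def cpm(fp_per_scan, sensitivity):
    """Average sensitivity at the 7 predefined FP/scan rates of the FROC curve."""
    targets = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
    # Linear interpolation between measured FROC points (assumed sorted by FP rate).
    return float(np.mean(np.interp(targets, fp_per_scan, sensitivity)))

# Hypothetical FROC points; the paper reports sensitivity 0.977 at 2 FPs/scan.
fp = np.array([0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
sens = np.array([0.85, 0.89, 0.92, 0.95, 0.977, 0.98, 0.985])
print(f"CPM = {cpm(fp, sens):.3f}")
```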
Affiliation(s)
- Yongbin Li
- Faculty of Medical Information Engineering, Zunyi Medical University, 563000, Zunyi, Guizhou, China
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, 94300, Kota Samarahan, Sarawak, Malaysia
- Linhu Hui
- Faculty of Medical Information Engineering, Zunyi Medical University, 563000, Zunyi, Guizhou, China
- Xiaohua Wang
- Faculty of Medical Information Engineering, Zunyi Medical University, 563000, Zunyi, Guizhou, China
- Liping Zou
- Faculty of Medical Information Engineering, Zunyi Medical University, 563000, Zunyi, Guizhou, China
- Stephanie Chua
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, 94300, Kota Samarahan, Sarawak, Malaysia
2
Okumura E, Kato H, Honmoto T, Suzuki N, Okumura E, Higashigawa T, Kitamura S, Ando J, Ishida T. [Segmentation of Mass in Mammogram Using Gaze Search Patterns]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2024; 80:487-498. [PMID: 38479883 DOI: 10.6009/jjrt.2024-1438]
Abstract
PURPOSE It is very difficult for a radiologist to correctly detect small lesions and lesions hidden in dense breast tissue on a mammogram. Computer-aided detection (CAD) systems have therefore been widely used in recent years to assist radiologists in interpreting images. In this study, we aimed to segment masses on mammograms with high accuracy by using focus images obtained from an eye-tracking device. METHODS We obtained focus images from two expert mammography radiologists and 19 mammography technologists for 8 abnormal and 8 normal mammograms published in the DDSM. Next, an auto-encoder, Pix2Pix, and UNIT learned the relationship between the actual mammograms and the focus images, and generated focus images for unknown mammograms. Finally, we segmented mass regions on the mammograms with U-Net for each focus image generated by the auto-encoder, Pix2Pix, and UNIT. RESULTS The Dice coefficient for UNIT was 0.64±0.14, higher than those for the auto-encoder and Pix2Pix, with a statistically significant difference (p<0.05). The Dice coefficient of the proposed method, which combines the focus images generated by UNIT with the original mammograms, was 0.66±0.15, equivalent to that of the method using the original mammograms alone. CONCLUSION In the future, it will be necessary to increase the number of cases and further improve the segmentation accuracy.
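The Dice coefficient used above measures the overlap between a predicted mass region and the reference region. A minimal sketch follows, using toy binary masks rather than DDSM data.

```python
# Sketch: Dice coefficient between two binary segmentation masks (toy data).
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two overlapping square regions stand in for predicted and reference masses.
pred = np.zeros((64, 64), dtype=np.uint8)
ref = np.zeros((64, 64), dtype=np.uint8)
pred[20:40, 20:40] = 1
ref[25:45, 25:45] = 1
print(f"Dice = {dice(pred, ref):.3f}")
```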
Affiliation(s)
- Eiichiro Okumura
- Department of Radiological Technology, Faculty of Health Sciences, Tsukuba International University
- Hideki Kato
- Department of Radiological Science, Faculty of Health Science, Gunma Paz University
- Tsuyoshi Honmoto
- Department of Radiological Technology, Ibaraki Children's Hospital
- Erika Okumura
- Department of Radiological Technology, Faculty of Health Sciences, Tsukuba International University
- Takuji Higashigawa
- Group of Visual Measurement, Department of Technology, Nac Image Technology
- Shigemi Kitamura
- Department of Radiological Technology, Faculty of Health Sciences, Tsukuba International University
- Jiro Ando
- Hospital Director, Tochigi Cancer Center
- Takayuki Ishida
- Division of Health Sciences, Graduate School of Medicine, Osaka University
3
Han L, Li F, Yu H, Xia K, Xin Q, Zou X. BiRPN-YOLOvX: A weighted bidirectional recursive feature pyramid algorithm for lung nodule detection. J Xray Sci Technol 2023; 31:301-317. [PMID: 36617767 DOI: 10.3233/xst-221310]
Abstract
BACKGROUND Lung cancer has the second highest cancer mortality rate in the world today. Although lung cancer screening using CT images is a common way for early lung cancer detection, accurately detecting lung nodules remains a challenging issue in clinical practice. OBJECTIVE This study aims to develop a new weighted bidirectional recursive pyramid algorithm to address the problems of the small size of lung nodules, the large proportion of background region, and complex lung structures in lung nodule detection on CT images. METHODS First, the weighted bidirectional recursive feature pyramid network (BiRPN) is proposed, which increases the ability of the network model to extract feature information and achieves multi-scale information fusion. Second, a CBAM_CSPDarknet53 structure is developed to incorporate an attention mechanism as a feature extraction module, which aggregates both the spatial and channel information of the feature map. Third, the weighted BiRPN and CBAM_CSPDarknet53 are applied to the YOLOvX model for lung nodule detection experiments, named BiRPN-YOLOvX, where YOLOvX represents different versions of YOLO. To verify the effectiveness of the weighted BiRPN and CBAM_CSPDarknet53, they are fused with the YOLOv3, YOLOv4 and YOLOv5 models, and extensive experiments are carried out on the publicly available lung nodule datasets LUNA16 and LIDC-IDRI. The training set of LUNA16 contains 949 images, and the validation and testing sets each contain 118 images. There are 1987, 248 and 248 images in the LIDC-IDRI training, validation and testing sets, respectively. RESULTS The sensitivity of lung nodule detection using BiRPN-YOLOv5 reaches 98.7% on LUNA16 and 96.2% on LIDC-IDRI. CONCLUSION This study demonstrates that the proposed method has the potential to help improve the sensitivity of lung nodule detection in future clinical practice.
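The weighted bidirectional fusion idea can be illustrated with a fast-normalized weighted fusion layer of the kind used in BiFPN-style pyramids: each input feature map receives a learned non-negative weight, the weights are normalized to sum to one, and the weighted sum is refined by a convolution. The sketch below is a generic illustration of that idea, not the authors' exact BiRPN layer; the class name and shapes are assumptions.

```python
# Sketch: fast-normalized weighted fusion of pyramid features (generic, not the paper's exact layer).
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, num_inputs: int, channels: int):
        super().__init__()
        # One learnable, non-negative weight per input feature map.
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: list of tensors with identical (N, C, H, W) shapes, already resized.
        w = torch.relu(self.weights)
        w = w / (w.sum() + 1e-4)                       # normalize weights to sum to 1
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.conv(fused)

# Hypothetical usage with two pyramid levels.
fuse = WeightedFusion(num_inputs=2, channels=64)
p1, p2 = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(fuse([p1, p2]).shape)  # torch.Size([1, 64, 32, 32])
```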
Affiliation(s)
- Liying Han
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
- Fugai Li
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
- Hengyong Yu
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
- Kewen Xia
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
- Qiyuan Xin
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
- Xiaoyu Zou
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
4
Zhao W, Ma J, Zhao L, Hou R, Qiu L, Fu X, Zhao J. PUNDIT: Pulmonary nodule detection with image category transformation. Med Phys 2022; 50:2914-2927. [PMID: 36576169 DOI: 10.1002/mp.16183]
Abstract
BACKGROUND Convolutional neural networks (CNNs) have achieved great success in pulmonary nodule detection, which plays an important role in lung cancer screening. PURPOSE In this paper, we proposed a novel strategy for pulmonary nodule detection by learning it from a harder task, which was to transform nodule images into normal images. We named this strategy pulmonary nodule detection with image category transformation (PUNDIT). METHODS Nodule detection comprised two steps: nodule candidate detection and false positive (FP) reduction. In the nodule candidate detection step, a segmentation-based framework was built for detection. We designed an image category transformation (ICT) task to translate nodule images into pixel-to-pixel normal images, and shared information between the detection and transformation tasks through multitask learning. As references for the transformation task, we introduced background consistency losses into standard cycle-consistent adversarial networks, which address the problem of uncontrolled background changes. A three-dimensional network was used in the FP reduction step. RESULTS PUNDIT was evaluated on two datasets: a cancer screening dataset (CSD) with 1186 nodules for cross-validation and a second dataset (CTD) with 3668 nodules for external testing. Results were mainly evaluated by the competition performance metric (CPM), the average sensitivity at seven predefined FP rates. The CPM was improved from 0.906 to 0.931 on CSD, and from 0.835 to 0.848 on CTD. CONCLUSIONS Experimental results showed that PUNDIT can effectively improve the performance of pulmonary nodule detection.
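The background consistency idea, as described, constrains the image translation so that regions outside the nodule stay unchanged. Below is a minimal sketch of one way such a penalty can be written, assuming a binary nodule mask is available; it illustrates the general idea and is not the authors' exact loss.

```python
# Sketch: an L1 "background consistency" penalty restricted to non-nodule pixels
# (an assumed formulation for illustration, not the paper's exact loss).
import torch

def background_consistency_loss(original, translated, nodule_mask):
    """Mean absolute difference over background pixels (nodule_mask == 0)."""
    background = (nodule_mask == 0).float()
    diff = torch.abs(original - translated) * background
    return diff.sum() / background.sum().clamp(min=1.0)

# Hypothetical (N, 1, H, W) CT patches and a binary nodule mask.
orig = torch.rand(2, 1, 64, 64)
trans = torch.rand(2, 1, 64, 64)
mask = torch.zeros(2, 1, 64, 64)
mask[:, :, 24:40, 24:40] = 1  # nodule region excluded from the penalty
print(background_consistency_loss(orig, trans, mask).item())
```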
Affiliation(s)
- Wangyuan Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jingchen Ma
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Lu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Runping Hou
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Lu Qiu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
5
Zhang Y, Jiang B, Zhang L, Greuter MJW, de Bock GH, Zhang H, Xie X. Lung Nodule Detectability of Artificial Intelligence-assisted CT Image Reading in Lung Cancer Screening. Curr Med Imaging 2021; 18:327-334. [PMID: 34365951 DOI: 10.2174/1573405617666210806125953]
Abstract
BACKGROUND Artificial intelligence (AI)-based automatic lung nodule detection systems improve the detection rate of nodules. It is important to evaluate the clinical value of an AI system by comparing AI-assisted nodule detection with actual radiology reports. OBJECTIVE To compare the detection rate of lung nodules between actual radiology reports and AI-assisted reading in lung cancer CT screening. METHODS Participants in chest CT screening from November to December 2019 were retrospectively included. In the real-world radiologist observation, 14 residents and 15 radiologists participated in finalizing the radiology reports. In AI-assisted reading, one resident and one radiologist reevaluated all subjects with the assistance of an AI system to locate and measure the detected lung nodules. A reading panel determined the type and number of lung nodules detected by these two methods. RESULTS In 860 participants (57±7 years), the reading panel confirmed 250 patients with >1 solid nodule, while radiologist observation identified 131, lower than the 247 identified by AI-assisted reading (p<0.001). The panel confirmed 111 patients with >1 non-solid nodule, whereas radiologist observation identified 28, lower than the 110 identified by AI-assisted reading (p<0.001). The accuracy and sensitivity of radiologist observation for solid nodules were 86.2% and 52.4%, lower than the 99.1% and 98.8% of AI-assisted reading, respectively. These metrics were 90.4% and 25.2% for non-solid nodules, lower than the 98.8% and 99.1% of AI-assisted reading, respectively. CONCLUSION Compared with the actual radiology reports, AI-assisted reading greatly improves the accuracy and sensitivity of nodule detection in chest CT, which benefits lung nodule detection, especially for non-solid nodules.
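The per-patient accuracy and sensitivity compared above can be computed against the reading-panel reference roughly as follows; the label vectors are toy values, not the study cohort, and the exact definitions used in the study may differ.

```python
# Sketch: per-patient sensitivity and accuracy against a reading-panel reference (toy labels).
import numpy as np

def sensitivity_accuracy(reference, observed):
    """reference/observed: binary arrays, 1 = patient has at least one nodule of the given type."""
    reference = np.asarray(reference, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    tp = np.sum(reference & observed)       # nodule patients also flagged by the reader
    tn = np.sum(~reference & ~observed)     # nodule-free patients correctly left unflagged
    sensitivity = tp / reference.sum()
    accuracy = (tp + tn) / reference.size
    return sensitivity, accuracy

panel = np.array([1, 1, 1, 0, 0, 1, 0, 1])
radiologist_report = np.array([1, 0, 0, 0, 0, 1, 0, 1])
sens, acc = sensitivity_accuracy(panel, radiologist_report)
print(f"sensitivity={sens:.2f}, accuracy={acc:.2f}")
```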
Affiliation(s)
- Yaping Zhang
- Department of Radiology, Shanghai General Hospital of Nanjing Medical University, Haining Rd.100, Shanghai 200080, China
- Beibei Jiang
- Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
- Lu Zhang
- Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
- Marcel J W Greuter
- Department of Radiology, University of Groningen, University Medical Center Groningen, Hanzeplein 1, 9713GZ Groningen, Netherlands
- Geertruida H de Bock
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Hanzeplein 1, 9713GZ Groningen, Netherlands
- Hao Zhang
- Department of Radiology, Shanghai General Hospital of Nanjing Medical University, Haining Rd.100, Shanghai 200080, China
- Xueqian Xie
- Department of Radiology, Shanghai General Hospital of Nanjing Medical University, Haining Rd.100, Shanghai 200080, China
6
Guo W, Gu X, Fang Q, Li Q. Comparison of performances of conventional and deep learning-based methods in segmentation of lung vessels and registration of chest radiographs. Radiol Phys Technol 2020; 14:6-15. [PMID: 32918159 DOI: 10.1007/s12194-020-00584-1]
Abstract
Conventional machine learning-based methods have been used in computer-aided diagnosis for more than 30 years and have been effective in assisting physicians in making accurate decisions. Recently, deep learning-based methods, and convolutional neural networks in particular, have rapidly become the preferred options in medical image analysis because of their state-of-the-art performance. However, the performances of conventional and deep learning-based methods cannot be compared reliably when they are evaluated on different datasets. Hence, we developed both conventional and deep learning-based methods for lung vessel segmentation and chest radiograph registration, and compared their performances on the same datasets. The results strongly indicated the superiority of the deep learning-based methods over their conventional counterparts.
Affiliation(s)
- Wei Guo
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- School of Computer, Shenyang Aerospace University, Shenyang, 110136, Liaoning, China
- Xiaomeng Gu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Qiming Fang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Qiang Li
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China