1
Horry MJ, Chakraborty S, Pradhan B, Paul M, Zhu J, Loh HW, Barua PD, Acharya UR. Development of Debiasing Technique for Lung Nodule Chest X-ray Datasets to Generalize Deep Learning Models. Sensors (Basel) 2023; 23:6585. [PMID: 37514877; PMCID: PMC10385599; DOI: 10.3390/s23146585] [Received: 06/14/2023; Revised: 07/16/2023; Accepted: 07/20/2023]
Abstract
Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
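The histogram-equalization step the abstract names for removing systematic brightness and contrast differences can be sketched as follows. This is an illustrative pure-Python version of the standard CDF-based equalization, not the authors' implementation; the paper itself does not reproduce code here.

```python
def equalize_histogram(image, levels=256):
    """Remap gray levels so the cumulative distribution is ~uniform.

    `image` is a list of rows of integer gray levels in [0, levels).
    Illustrative sketch of histogram equalization; real pipelines
    would typically use a library routine on array data instead.
    """
    flat = [p for row in image for p in row]
    n = len(flat)
    # Histogram of gray levels.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to equalize
        return [list(row) for row in image]
    # Standard equalization mapping to the full intensity range.
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in image]
```

Applied to a low-contrast patch such as `[[100, 100], [100, 101]]`, the mapping stretches the two occupied gray levels to the full range, which is the "systematic differences in brightness and contrast" removal the pipeline relies on before model training.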
Affiliation(s)
- Michael J Horry
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- IBM Australia Limited, Sydney, NSW 2000, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Earth Observation Center, Institute of Climate Change, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Manoranjan Paul
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia
- Jing Zhu
- Department of Radiology, Westmead Hospital, Westmead, NSW 2145, Australia
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
- Prabal Datta Barua
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, QLD 4300, Australia
2
Horry MJ, Chakraborty S, Paul M, Ulhaq A, Pradhan B, Saha M, Shukla N. COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data. IEEE Access 2020; 8:149808-149824. [PMID: 34931154; PMCID: PMC8668160; DOI: 10.1109/access.2020.3016780] [Received: 08/06/2020; Accepted: 08/11/2020]
Abstract
Detecting COVID-19 early may help in devising an appropriate treatment plan and making disease containment decisions. In this study, we demonstrate how transfer learning from deep learning models can be used to perform COVID-19 detection using images from the three most commonly used medical imaging modes: X-ray, ultrasound, and CT scan. The aim is to provide over-stressed medical professionals with a second pair of eyes through intelligent deep learning image classification models. We identify a suitable convolutional neural network (CNN) model through an initial comparative study of several popular CNN models. We then optimize the selected VGG19 model for each image modality to show how the models can be used with the highly scarce and challenging COVID-19 datasets. We highlight the challenges (including dataset size and quality) of utilizing currently available public COVID-19 datasets for developing useful deep learning models, and how these challenges adversely impact the trainability of complex models. We also propose an image pre-processing stage to create a trustworthy image dataset for developing and testing the deep learning models. This pre-processing reduces unwanted noise in the images so that the models can focus on detecting diseases from their specific features. Our results indicate that ultrasound images provide superior detection accuracy compared to X-ray and CT scans. The experimental results highlight that, with limited data, most of the deeper networks struggle to train well and provide less consistency across the three imaging modes. The selected VGG19 model, extensively tuned with appropriate parameters, performs at considerable levels of COVID-19 detection against pneumonia or normal cases for all three lung image modes, with precision of up to 86% for X-ray, 100% for ultrasound, and 84% for CT scans.
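The precision figures reported above follow the standard definition TP / (TP + FP) for the positive class. A minimal sketch of that metric, using hypothetical string class labels (the paper's actual label encoding is not shown here), is:

```python
def precision(y_true, y_pred, positive="covid"):
    """Precision = TP / (TP + FP) for the chosen positive class.

    Illustrative only: labels and the `positive` default are
    hypothetical stand-ins for the paper's class encoding.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    # Convention: define precision as 0.0 when nothing was predicted positive.
    return tp / (tp + fp) if (tp + fp) else 0.0
```

For example, if the model flags three cases as COVID-19 but only two truly are, precision is 2/3; the per-modality figures in the abstract (86%, 100%, 84%) are this quantity computed on the respective test sets.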
Affiliation(s)
- Michael J. Horry
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Information, Systems, and Modeling, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- IBM Australia Limited, Sydney, NSW 2065, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Information, Systems, and Modeling, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- Manoranjan Paul
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia
- Anwaar Ulhaq
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Information, Systems, and Modeling, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- Department of Energy and Mineral Resources Engineering, Sejong University, Seoul 05006, South Korea
- Manas Saha
- Manning Rural Referral Hospital, Taree, NSW 2430, Australia
- Nagesh Shukla
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Information, Systems, and Modeling, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia