1
Guo L, Zhou C, Xu J, Huang C, Yu Y, Lu G. Deep Learning for Chest X-ray Diagnosis: Competition Between Radiologists with or Without Artificial Intelligence Assistance. J Imaging Inform Med 2024. [PMID: 38332402] [DOI: 10.1007/s10278-024-00990-6]
Abstract
This study aimed to assess the performance of a deep learning algorithm in helping radiologists achieve improved efficiency and accuracy in chest radiograph diagnosis. We adopted a deep learning algorithm that concurrently detects the presence of normal findings and 13 different abnormalities in chest radiographs and evaluated its performance in assisting radiologists. Each competing radiologist had to determine the presence or absence of these signs, with the labels provided by the AI available in the assisted setting. The 100 radiographs were randomly divided into two sets for evaluation: one read without AI assistance (control group) and one read with AI assistance (test group). The accuracy, false-positive rate, false-negative rate, and analysis time of 111 radiologists (29 senior, 32 intermediate, and 50 junior) were evaluated. A radiologist was given an initial score of 14 points for each image read, with 1 point deducted for each incorrect answer and 0 points deducted for a correct answer; the final score for each reader was calculated automatically by the backend calculator. The mean scores of each radiologist in the control and test groups were then compared to evaluate performance with and without AI assistance. The average score of the 111 radiologists was 597 (587-605) in the control group and 619 (612-626) in the test group (P < 0.001). The time spent by the 111 radiologists on the control and test groups was 3279 (2972-3941) and 1926 (1710-2432) s, respectively (P < 0.001). The performance of the 111 radiologists in the two groups was evaluated by the area under the receiver operating characteristic curve (AUC). With AI assistance, the radiologists performed better at recognizing normal findings, pulmonary fibrosis, heart shadow enlargement, mass, pleural effusion, and pulmonary consolidation, with AUCs of 1.0, 0.950, 0.991, 1.0, 0.993, and 0.982, respectively. The radiologists alone performed better at recognizing aortic calcification (0.993), calcification (0.933), cavity (0.963), nodule (0.923), pleural thickening (0.957), and rib fracture (0.987). This competition verified the positive effect of deep learning methods in assisting radiologists in interpreting chest X-rays. AI assistance can help improve both the efficacy and efficiency of radiologists.
Affiliation(s)
- Lili Guo
- Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huai'an, 223300, China
- Changsheng Zhou
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
- Jingxu Xu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Chencui Huang
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Yizhou Yu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Guangming Lu
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
2
Katzman BD, Alabousi M, Islam N, Zha N, Patlas MN. Deep Learning for Pneumothorax Detection on Chest Radiograph: A Diagnostic Test Accuracy Systematic Review and Meta-Analysis. Can Assoc Radiol J 2024. [PMID: 38189265] [DOI: 10.1177/08465371231220885]
Abstract
BACKGROUND Pneumothorax is a common acute presentation in healthcare settings. A chest radiograph (CXR) is often necessary to make the diagnosis, and minimizing the time between presentation and diagnosis is critical to delivering optimal treatment. Deep learning (DL) algorithms have been developed to rapidly identify pathologic findings on various imaging modalities. PURPOSE The purpose of this systematic review and meta-analysis was to evaluate the overall performance of studies utilizing DL algorithms to detect pneumothorax on CXR. METHODS A study protocol was created and registered a priori (PROSPERO CRD42023391375). The search strategy included studies published up until January 10, 2023. Inclusion criteria were studies that included adult patients, utilized computer-aided detection of pneumothorax on CXR, had the dataset evaluated by a qualified physician, and presented sufficient data to create a 2 × 2 contingency table. Risk of bias was assessed using the QUADAS-2 tool. Bivariate random-effects meta-analyses and meta-regression modeling were performed. RESULTS Twenty-three studies were selected, including 34,011 patients and 34,075 CXRs. The pooled sensitivity and specificity were 87% (95% confidence interval, 81%-92%) and 95% (95% confidence interval, 92%-97%), respectively. The study design, use of an institutional or public dataset, and risk of bias had no significant effect on the sensitivity and specificity of pneumothorax detection. CONCLUSIONS The relatively high sensitivity and specificity of pneumothorax detection by deep learning showcase its vast potential for implementation in clinical settings, both to augment the workflow of radiologists and to enable more rapid diagnoses and subsequent patient treatment.
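The 2 × 2 contingency table required by the inclusion criteria above maps directly to the per-study sensitivity and specificity that such a meta-analysis pools. A minimal sketch (the counts are illustrative, not data from the review):

```python
# Sensitivity and specificity from one study's 2x2 contingency table,
# the per-study inputs pooled in a diagnostic test accuracy meta-analysis.
# Counts below are illustrative only.

def sens_spec(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) for a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all non-diseased
    return sensitivity, specificity

sens, spec = sens_spec(tp=87, fp=5, fn=13, tn=95)
print(sens, spec)  # 0.87 0.95
```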
Affiliation(s)
- Benjamin D Katzman
- Michael G. DeGroote School of Medicine, McMaster University, Hamilton, ON, Canada
- Mostafa Alabousi
- Department of Medical Imaging, McMaster University, Hamilton, ON, Canada
- Nabil Islam
- Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Nanxi Zha
- Department of Medical Imaging, McMaster University, Hamilton, ON, Canada
- Michael N Patlas
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
3
Tatar OC, Akay MA, Metin S. DraiNet: AI-driven decision support in pneumothorax and pleural effusion management. Pediatr Surg Int 2023; 40:30. [PMID: 38151565] [DOI: 10.1007/s00383-023-05609-5]
Abstract
OBJECTIVE This study presents DraiNet, a deep learning model developed to detect pneumothorax and pleural effusion in pediatric patients and aid in assessing the necessity for tube thoracostomy. The primary goal is to utilize DraiNet as a decision support tool to enhance clinical decision-making in the management of these conditions. METHODS DraiNet was trained on a diverse dataset of pediatric CT scans, carefully annotated by experienced surgeons. The model incorporated advanced object detection techniques and underwent evaluation using standard metrics, such as mean Average Precision (mAP), to assess its performance. RESULTS DraiNet achieved an impressive mAP score of 0.964, demonstrating high accuracy in detecting and precisely localizing abnormalities associated with pneumothorax and pleural effusion. The model's precision and recall further confirmed its ability to effectively predict positive cases. CONCLUSION The integration of DraiNet as an AI-driven decision support system marks a significant advancement in pediatric healthcare. By combining deep learning algorithms with clinical expertise, DraiNet provides a valuable tool for non-surgical teams and emergency room doctors, aiding them in making informed decisions about surgical interventions. With its remarkable mAP score of 0.964, DraiNet has the potential to enhance patient outcomes and optimize the management of critical conditions, including pneumothorax and pleural effusion.
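For readers unfamiliar with the mAP metric reported above: in object detection, a predicted box is counted as correct when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold, and average precision is then computed over the ranked detections. A minimal IoU sketch (boxes and values are illustrative, not taken from DraiNet):

```python
# Intersection-over-Union (IoU) between two axis-aligned boxes, the
# overlap criterion underlying mean Average Precision (mAP).
# Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1.

def iou(a: tuple, b: tuple) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if no overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175 ≈ 0.143
```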
Affiliation(s)
- Ozan Can Tatar
- Department of General Surgery, School of Medicine, Kocaeli University, 41000, Kocaeli, Turkey
- Information Systems Engineering, Faculty of Technology, Kocaeli University, Kocaeli, Turkey
- Mustafa Alper Akay
- Department of Pediatric Surgery, School of Medicine, Kocaeli University, Kocaeli, Turkey
- Semih Metin
- Department of Pediatric Surgery, School of Medicine, Kocaeli University, Kocaeli, Turkey
4
Wang CH, Lin T, Chen G, Lee MR, Tay J, Wu CY, Wu MC, Roth HR, Yang D, Zhao C, Wang W, Huang CH. Deep Learning-based Diagnosis and Localization of Pneumothorax on Portable Supine Chest X-ray in Intensive and Emergency Medicine: A Retrospective Study. J Med Syst 2023; 48:1. [PMID: 38048012] [PMCID: PMC10695857] [DOI: 10.1007/s10916-023-02023-1]
Abstract
PURPOSE To develop two deep learning-based systems for diagnosing and localizing pneumothorax on portable supine chest X-rays (SCXRs). METHODS For this retrospective study, images meeting the following inclusion criteria were included: (1) patient age ≥ 20 years; (2) portable SCXR; (3) imaging obtained in the emergency department or intensive care unit. Included images were temporally split into training (1571 images, between January 2015 and December 2019) and testing (1071 images, between January 2020 and December 2020) datasets. All images were annotated using pixel-level labels. Object detection and image segmentation were adopted to develop separate systems. For the detection-based system, EfficientNet-B2, DenseNet-121, and Inception-v3 were the architectures for the classification model; Deformable DETR, TOOD, and VFNet were the architectures for the localization model. Both the classification and localization models of the segmentation-based system shared the UNet architecture. RESULTS In diagnosing pneumothorax, performance was excellent for both the detection-based (area under the receiver operating characteristic curve [AUC]: 0.940, 95% confidence interval [CI]: 0.907-0.967) and segmentation-based (AUC: 0.979, 95% CI: 0.963-0.991) systems. For images with both predicted and ground-truth pneumothorax, lesion localization was highly accurate (detection-based Dice coefficient: 0.758, 95% CI: 0.707-0.806; segmentation-based Dice coefficient: 0.681, 95% CI: 0.642-0.721). The performance of the two deep learning-based systems declined as pneumothorax size diminished. Nonetheless, both systems were similar or superior to human readers in diagnosis and localization performance across all sizes of pneumothorax. CONCLUSIONS Both deep learning-based systems excelled when tested on a temporally distinct dataset with differing patient and image characteristics, showing favourable potential for external generalizability.
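The Dice coefficients reported above measure overlap between predicted and ground-truth lesion masks. A minimal sketch with toy flattened 0/1 masks (not data from the study):

```python
# Dice coefficient between a predicted and a ground-truth binary mask:
# 2*|A ∩ B| / (|A| + |B|). Masks here are flat 0/1 lists for illustration;
# in practice they would be per-pixel segmentation masks.

def dice(pred: list, truth: list) -> float:
    inter = sum(p & t for p, t in zip(pred, truth))  # overlapping positives
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0     # both empty -> perfect

pred  = [0, 1, 1, 0, 1, 0]
truth = [0, 1, 0, 0, 1, 1]
print(dice(pred, truth))  # 2*2/(3+3) ≈ 0.667
```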
Affiliation(s)
- Chih-Hung Wang
- Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei, 100, Taiwan
- Tzuching Lin
- Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd., Taipei, 106, Taiwan
- Guanru Chen
- Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd., Taipei, 106, Taiwan
- Meng-Rui Lee
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Joyce Tay
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei, 100, Taiwan
- Cheng-Yi Wu
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei, 100, Taiwan
- Meng-Che Wu
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei, 100, Taiwan
- Can Zhao
- NVIDIA Corporation, Bethesda, USA
- Weichung Wang
- Institute of Applied Mathematical Sciences, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd., Taipei, 106, Taiwan
- Chien-Hua Huang
- Department of Emergency Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Department of Emergency Medicine, National Taiwan University Hospital, No. 7, Zhongshan S. Rd., Zhongzheng Dist., Taipei, 100, Taiwan
5
Sugibayashi T, Walston SL, Matsumoto T, Mitsuyama Y, Miki Y, Ueda D. Deep learning for pneumothorax diagnosis: a systematic review and meta-analysis. Eur Respir Rev 2023; 32:220259. [PMID: 37286217] [DOI: 10.1183/16000617.0259-2022]
Abstract
BACKGROUND Deep learning (DL), a subset of artificial intelligence (AI), has been applied to pneumothorax diagnosis to aid physician diagnosis, but no meta-analysis has been performed. METHODS A search of multiple electronic databases through September 2022 was performed to identify studies that applied DL to pneumothorax diagnosis using imaging. A meta-analysis using a hierarchical model was performed to calculate the summary area under the curve (AUC) and the pooled sensitivity and specificity for both DL and physicians. Risk of bias was assessed using a modified Prediction Model Study Risk of Bias Assessment Tool. RESULTS In 56 of the 63 primary studies, pneumothorax was identified from chest radiography. The total AUC was 0.97 (95% CI 0.96-0.98) for both DL and physicians. The total pooled sensitivity was 84% (95% CI 79-89%) for DL and 85% (95% CI 73-92%) for physicians, and the pooled specificity was 96% (95% CI 94-98%) for DL and 98% (95% CI 95-99%) for physicians. More than half of the original studies (57%) had a high risk of bias. CONCLUSIONS Our review found that the diagnostic performance of DL models was similar to that of physicians, although the majority of studies had a high risk of bias. Further pneumothorax AI research is needed.
Affiliation(s)
- Takahiro Sugibayashi
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
- Yasuhito Mitsuyama
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
6
Brown MS, Wong KP, Shrestha L, Wahi-Anwar M, Daly M, Foster G, Abtin F, Ruchalski KL, Goldin JG, Enzmann D. Automated Endotracheal Tube Placement Check Using Semantically Embedded Deep Neural Networks. Acad Radiol 2023; 30:412-420. [DOI: 10.1016/j.acra.2022.04.022]
Abstract
RATIONALE AND OBJECTIVES To develop an artificial intelligence (AI) system that assists in checking endotracheal tube (ETT) placement on chest X-rays (CXRs) and to evaluate whether it can move into clinical validation as a quality improvement tool. MATERIALS AND METHODS A retrospective dataset including 2000 de-identified images from intensive care unit patients was split into 1488 for training and 512 for testing. The AI was developed to automatically identify the ETT, trachea, and carina using semantically embedded neural networks that combine a declarative knowledge base with deep neural networks. To check the ETT tip placement, a "safe zone" was computed as the region inside the trachea and 3-7 cm above the carina. Two AI outputs were evaluated: (1) an ETT overlay and (2) ETT misplacement alert messages. Clinically relevant performance metrics were compared against prespecified thresholds of > 85% overlay accuracy, positive predictive value (PPV) > 30%, and negative predictive value (NPV) > 95% for alerts to move into clinical validation. RESULTS An ETT was present in 285 of 512 test cases. The AI detected 95% (271/285) of ETTs, 233 (86%) of these with accurate tip localization. The system (correctly) did not generate an ETT overlay in 221/227 CXRs where the tube was absent, for an overall overlay accuracy of 89% (454/512). The alert messages indicating that the ETT was either misplaced or not detected had a PPV of 83% (265/320) and an NPV of 98% (188/192). CONCLUSION The chest X-ray AI met prespecified performance thresholds to move into clinical validation.
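Once the tube tip and carina have been localized, the "safe zone" rule described above reduces to a simple geometric check. A sketch with hypothetical landmark coordinates (the function name and inputs are assumptions for illustration; the paper's networks would supply the landmarks, and y is taken to increase toward the feet):

```python
# "Safe zone" check for an endotracheal tube tip: acceptable if the tip
# lies inside the trachea and 3-7 cm (30-70 mm) above the carina.
# Landmark coordinates in mm are hypothetical illustrative inputs.

def ett_in_safe_zone(tip_y_mm: float, carina_y_mm: float,
                     tip_in_trachea: bool) -> bool:
    # y grows downward, so (carina_y - tip_y) is the height above the carina
    height_above_carina = carina_y_mm - tip_y_mm
    return tip_in_trachea and 30.0 <= height_above_carina <= 70.0

print(ett_in_safe_zone(tip_y_mm=200.0, carina_y_mm=250.0, tip_in_trachea=True))  # True: 50 mm above
print(ett_in_safe_zone(tip_y_mm=240.0, carina_y_mm=250.0, tip_in_trachea=True))  # False: only 10 mm above
```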
Affiliation(s)
- Matthew S Brown
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Koon-Pong Wong
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Liza Shrestha
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Muhammad Wahi-Anwar
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Morgan Daly
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- George Foster
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Fereidoun Abtin
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Kathleen L Ruchalski
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Jonathan G Goldin
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Dieter Enzmann
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
7
Irmici G, Cè M, Caloro E, Khenkina N, Della Pepa G, Ascenti V, Martinenghi C, Papa S, Oliva G, Cellina M. Chest X-ray in Emergency Radiology: What Artificial Intelligence Applications Are Available? Diagnostics (Basel) 2023; 13:216. [PMID: 36673027] [PMCID: PMC9858224] [DOI: 10.3390/diagnostics13020216]
Abstract
Due to its widespread availability, low cost, feasibility at the patient's bedside and accessibility even in low-resource settings, chest X-ray is one of the most requested examinations in radiology departments. Whilst it provides essential information on thoracic pathology, it can be difficult to interpret and is prone to diagnostic errors, particularly in the emergency setting. The increasing availability of large chest X-ray datasets has allowed the development of reliable Artificial Intelligence (AI) tools to help radiologists in everyday clinical practice. AI integration into the diagnostic workflow would benefit patients, radiologists, and healthcare systems in terms of improved and standardized reporting accuracy, quicker diagnosis, more efficient management, and appropriateness of the therapy. This review article aims to provide an overview of the applications of AI for chest X-rays in the emergency setting, emphasizing the detection and evaluation of pneumothorax, pneumonia, heart failure, and pleural effusion.
Affiliation(s)
- Giovanni Irmici
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Elena Caloro
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Natallia Khenkina
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Gianmarco Della Pepa
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Velio Ascenti
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Carlo Martinenghi
- Radiology Department, San Raffaele Hospital, Via Olgettina 60, 20132 Milan, Italy
- Sergio Papa
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Giancarlo Oliva
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
8
ALCNN: Attention based lightweight convolutional neural network for pneumothorax detection in chest X-rays. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104126]
9
Hillis JM, Bizzo BC, Mercaldo S, Chin JK, Newbury-Chaet I, Digumarthy SR, Gilman MD, Muse VV, Bottrell G, Seah JC, Jones CM, Kalra MK, Dreyer KJ. Evaluation of an Artificial Intelligence Model for Detection of Pneumothorax and Tension Pneumothorax in Chest Radiographs. JAMA Netw Open 2022; 5:e2247172. [PMID: 36520432] [PMCID: PMC9856508] [DOI: 10.1001/jamanetworkopen.2022.47172]
Abstract
IMPORTANCE Early detection of pneumothorax, most often via chest radiography, can help determine need for emergent clinical intervention. The ability to accurately detect and rapidly triage pneumothorax with an artificial intelligence (AI) model could assist with earlier identification and improve care. OBJECTIVE To compare the accuracy of an AI model vs consensus thoracic radiologist interpretations in detecting any pneumothorax (incorporating both nontension and tension pneumothorax) and tension pneumothorax. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study was a retrospective standalone performance assessment using a data set of 1000 chest radiographs captured between June 1, 2015, and May 31, 2021. The radiographs were obtained from patients aged at least 18 years at 4 hospitals in the Mass General Brigham hospital network in the United States. Included radiographs were selected using 2 strategies from all chest radiography performed at the hospitals, including inpatient and outpatient. The first strategy identified consecutive radiographs with pneumothorax through a manual review of radiology reports, and the second strategy identified consecutive radiographs with tension pneumothorax using natural language processing. For both strategies, negative radiographs were selected by taking the next negative radiograph acquired from the same radiography machine as each positive radiograph. The final data set was an amalgamation of these processes. Each radiograph was interpreted independently by up to 3 radiologists to establish consensus ground-truth interpretations. Each radiograph was then interpreted by the AI model for the presence of pneumothorax and tension pneumothorax. This study was conducted between July and October 2021, with the primary analysis performed between October and November 2021. 
MAIN OUTCOMES AND MEASURES The primary end points were the areas under the receiver operating characteristic curves (AUCs) for the detection of pneumothorax and tension pneumothorax. The secondary end points were the sensitivities and specificities for the detection of pneumothorax and tension pneumothorax. RESULTS The final analysis included radiographs from 985 patients (mean [SD] age, 60.8 [19.0] years; 436 [44.3%] female patients), including 307 patients with nontension pneumothorax, 128 patients with tension pneumothorax, and 550 patients without pneumothorax. The AI model detected any pneumothorax with an AUC of 0.979 (95% CI, 0.970-0.987), sensitivity of 94.3% (95% CI, 92.0%-96.3%), and specificity of 92.0% (95% CI, 89.6%-94.2%) and tension pneumothorax with an AUC of 0.987 (95% CI, 0.980-0.992), sensitivity of 94.5% (95% CI, 90.6%-97.7%), and specificity of 95.3% (95% CI, 93.9%-96.6%). CONCLUSIONS AND RELEVANCE These findings suggest that the assessed AI model accurately detected pneumothorax and tension pneumothorax in this chest radiograph data set. The model's use in the clinical workflow could lead to earlier identification and improved care for patients with pneumothorax.
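The AUCs reported in this and the other entries have a useful probabilistic reading: the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (the Mann-Whitney statistic). A minimal sketch with illustrative scores, not data from the study:

```python
# AUC via the Mann-Whitney formulation: fraction of (positive, negative)
# pairs in which the positive case is scored higher, counting ties as 0.5.
# Scores below are illustrative.

def auc(pos_scores: list, neg_scores: list) -> float:
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ≈ 0.889
```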
Affiliation(s)
- James M. Hillis
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Department of Neurology, Massachusetts General Hospital, Boston
- Harvard Medical School, Boston, Massachusetts
| | - Bernardo C. Bizzo
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - Sarah Mercaldo
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - John K. Chin
- Data Science Office, Mass General Brigham, Boston, Massachusetts
| | | | - Subba R. Digumarthy
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - Matthew D. Gilman
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - Victorine V. Muse
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | | | | | - Catherine M. Jones
- Annalise-AI, Sydney, Australia
- I-MED Radiology Network, Brisbane, Australia
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia
| | - Mannudeep K. Kalra
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - Keith J. Dreyer
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| |
10
Deep Learning Approaches for Automatic Localization in Medical Images. Comput Intell Neurosci 2022; 2022:6347307. [PMID: 35814554] [PMCID: PMC9259335] [DOI: 10.1155/2022/6347307]
Abstract
Recent revolutionary advances in deep learning (DL) have fueled several breakthrough achievements in various complicated computer vision tasks. These successes began in 2012, when deep neural networks (DNNs) outperformed shallow machine learning models on a number of significant benchmarks, and significant advances in computer vision followed, with very complex image interpretation tasks performed at outstanding accuracy. These achievements have shown great promise in a wide variety of fields, especially in medical image analysis, by creating opportunities to diagnose and treat diseases earlier. In recent years, the application of DNNs to object localization has gained the attention of researchers owing to its success over conventional methods. As this has become a very broad and rapidly growing field, this study presents a short review of DNN implementations for medical images and validates their efficacy on benchmarks. This is the first review that focuses on object localization using DNNs in medical images. Its key aim is to summarize recent studies on DNN-based medical image localization and to highlight research gaps that can provide worthwhile ideas for future research on object localization tasks. It starts with an overview of the importance of medical image analysis and the existing technology in this space. The discussion then proceeds to the dominant DNNs utilized in the current literature. Finally, we conclude by discussing the challenges associated with applying DNNs to medical image localization, which can drive further studies toward identifying potential future developments in this field of study.
11
Gu H, Wang H, Qin P, Wang J. Chest L-Transformer: Local Features With Position Attention for Weakly Supervised Chest Radiograph Segmentation and Classification. Front Med (Lausanne) 2022; 9:923456. [PMID: 35721071] [PMCID: PMC9201450] [DOI: 10.3389/fmed.2022.923456]
Abstract
We consider the problem of weakly supervised segmentation on chest radiographs. The chest radiograph is the most common means of screening and diagnosing thoracic diseases, and weakly supervised deep learning models have gained increasing popularity in medical image segmentation. However, these models do not account for two critical characteristics of chest radiographs: their global symmetry and the dependencies between lesions and their positions. Because such models extract global features from the whole image to make the image-level decision, the global symmetry can lead them to misclassify lesions at symmetrical positions. Moreover, thoracic diseases often have particular disease-prone areas in chest radiographs, so there is a relationship between lesions and their positions. In this study, we propose a weakly supervised model, called Chest L-Transformer, that takes these characteristics into account. Chest L-Transformer classifies an image based on local features to avoid the misclassification caused by global symmetry. In addition, through a Transformer attention mechanism, Chest L-Transformer models the dependencies between lesions and their positions and pays more attention to the disease-prone areas. Chest L-Transformer is trained with only image-level annotations for lesion segmentation; thus, Log-Sum-Exp voting and its variant are proposed to unify the pixel-level prediction with the image-level prediction. We demonstrate a significant segmentation performance improvement over the current state of the art while achieving competitive classification performance.
Affiliation(s)
- Hong Gu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hongyu Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- *Correspondence: Pan Qin
- Jia Wang
- Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
12
Malhotra P, Gupta S, Koundal D, Zaguia A, Kaur M, Lee HN. Deep Learning-Based Computer-Aided Pneumothorax Detection Using Chest X-ray Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:2278. [PMID: 35336449 PMCID: PMC8955356 DOI: 10.3390/s22062278] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 02/21/2022] [Accepted: 03/01/2022] [Indexed: 06/14/2023]
Abstract
Pneumothorax is a thoracic condition that can lead to respiratory failure, cardiac arrest, or, in extreme cases, death. Chest X-ray (CXR) imaging is the primary technique for diagnosing pneumothorax. A computerized diagnosis system that detects pneumothorax in chest radiographs can provide substantial benefits in disease diagnosis. In the present work, a deep learning model is proposed to detect pneumothorax regions in chest X-ray images. The model combines the Mask Region-based Convolutional Neural Network (Mask RCNN) framework with transfer learning, using ResNet101 as the backbone of a feature pyramid network (FPN). The proposed model was trained on a pneumothorax dataset prepared by the Society for Imaging Informatics in Medicine in association with the American College of Radiology (SIIM-ACR). The present work compares the proposed Mask RCNN model with a ResNet101 FPN backbone against a conventional model with a ResNet50 FPN backbone. The proposed model achieved lower class loss, bounding-box loss, and mask loss than the ResNet50-based model. The two models were trained with learning rates of 0.0004 and 0.0006 for 10 and 12 epochs, respectively.
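The class, bounding-box, and mask losses compared above are the three terms of Mask RCNN's multi-task objective: typically cross-entropy for classification, smooth-L1 for box regression, and per-pixel binary cross-entropy for the mask head. A minimal NumPy sketch of the latter two terms, as generic illustrations rather than the paper's exact implementation:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth-L1 (Huber-style) loss, the usual choice for
    bounding-box regression in Mask RCNN."""
    d = np.abs(np.asarray(pred, dtype=float) - np.asarray(target, dtype=float))
    # Quadratic near zero, linear for large errors.
    return np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta).mean()

def binary_ce(prob, target, eps=1e-7):
    """Per-pixel binary cross-entropy, as used for the mask head."""
    p = np.clip(np.asarray(prob, dtype=float), eps, 1 - eps)
    t = np.asarray(target, dtype=float)
    return -(t * np.log(p) + (1 - t) * np.log(1 - p)).mean()

# The total Mask RCNN objective sums the three terms the abstract compares:
# total_loss = class_loss + box_loss + mask_loss
```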
Affiliation(s)
- Priyanka Malhotra
- Chitkara University Institute of Engineering and Technology, Chitkara University, Patiala 140401, Punjab, India
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Patiala 140401, Punjab, India
- Deepika Koundal
- Department of Systemics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand, India
- Atef Zaguia
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Manjit Kaur
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Korea
- Heung-No Lee
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Korea
13
Malhotra P, Gupta S, Koundal D, Zaguia A, Enbeyle W. Deep Neural Networks for Medical Image Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:9580991. [PMID: 35310182 PMCID: PMC8930223 DOI: 10.1155/2022/9580991] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 01/06/2022] [Accepted: 01/10/2022] [Indexed: 12/31/2022]
Abstract
Image segmentation is a branch of digital image processing with numerous applications in image analysis, augmented reality, machine vision, and more. The field of medical image analysis is growing, and segmentation of organs, diseases, or abnormalities in medical images has become increasingly demanding. Segmentation of medical images helps in monitoring the growth of diseases such as tumours, controlling the dosage of medicine, and limiting exposure to radiation. Medical image segmentation is a challenging task due to the various artefacts present in the images. Recently, deep neural models have been applied to a variety of image segmentation tasks; this significant growth is due to the achievements and high performance of deep learning strategies. This work presents a review of the literature on medical image segmentation using deep convolutional neural networks. The paper examines the widely used medical image datasets, the metrics used for evaluating segmentation tasks, and the performance of different CNN-based networks. In comparison with existing review and survey papers, the present work also discusses the challenges in medical image segmentation and the state-of-the-art solutions available in the literature.
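Among the evaluation metrics such reviews cover, the Dice similarity coefficient and Intersection-over-Union (IoU, or Jaccard index) are the most common for segmentation. A minimal NumPy sketch of both:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|)."""
    p = np.asarray(pred).astype(bool)
    t = np.asarray(target).astype(bool)
    inter = np.logical_and(p, t).sum()
    return (2.0 * inter + eps) / (p.sum() + t.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection-over-Union (Jaccard index):
    |intersection| / |union|."""
    p = np.asarray(pred).astype(bool)
    t = np.asarray(target).astype(bool)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return (inter + eps) / (union + eps)
```

The small `eps` term keeps both metrics defined when predicted and target masks are empty, a common convention in segmentation benchmarks.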
Affiliation(s)
- Priyanka Malhotra
- Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh, Punjab, India
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh, Punjab, India
- Deepika Koundal
- Department of Systemics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
- Atef Zaguia
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
14
Feng S, Liu Q, Patel A, Bazai SU, Jin C, Kim JS, Sarrafzadeh M, Azzollini D, Yeoh J, Kim E, Gordon S, Jang‐Jaccard J, Urschler M, Barnard S, Fong A, Simmers C, Tarr GP, Wilson B. Automated pneumothorax triaging in chest X‐rays in the New Zealand population using deep‐learning algorithms. J Med Imaging Radiat Oncol 2022; 66:1035-1043. [DOI: 10.1111/1754-9485.13393] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Accepted: 02/03/2022] [Indexed: 11/30/2022]
Affiliation(s)
- Sijing Feng
- Department of Radiology, Dunedin Hospital, Dunedin, New Zealand
- Qixiu Liu
- Counties Manukau Health, Auckland, New Zealand
- Aakash Patel
- Dunedin School of Medicine, Dunedin Hospital, Dunedin, New Zealand
- Sibghat Ullah Bazai
- School of Natural and Computational Sciences, Massey University, Palmerston North, New Zealand
- Ji Soo Kim
- Auckland District Health Board, Auckland, New Zealand
- Jason Yeoh
- Auckland District Health Board, Auckland, New Zealand
- Eve Kim
- Auckland District Health Board, Auckland, New Zealand
- Simon Gordon
- Waikato District Health Board, Hamilton, New Zealand
- Julian Jang‐Jaccard
- School of Natural and Computational Sciences, Massey University, Palmerston North, New Zealand
- Martin Urschler
- School of Computer Science, University of Auckland, Auckland, New Zealand
- Amy Fong
- Department of Radiology, Dunedin Hospital, Dunedin, New Zealand
- Cameron Simmers
- Department of Radiology, Dunedin Hospital, Dunedin, New Zealand
- Ben Wilson
- Department of Radiology, Dunedin Hospital, Dunedin, New Zealand
15
Wang H, Gu H, Qin P, Wang J. U-shaped GAN for Semi-Supervised Learning and Unsupervised Domain Adaptation in High Resolution Chest Radiograph Segmentation. Front Med (Lausanne) 2022; 8:782664. [PMID: 35096877 PMCID: PMC8792862 DOI: 10.3389/fmed.2021.782664] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 12/14/2021] [Indexed: 01/03/2023] Open
Abstract
Deep learning has achieved considerable success in medical image segmentation. However, applying deep learning in clinical environments often involves two problems: (1) scarcity of annotated data, as data annotation is time-consuming, and (2) varying attributes across datasets due to domain shift. To address these problems, we propose an improved generative adversarial network (GAN) segmentation model, called U-shaped GAN, for chest radiograph datasets with limited annotations. The semi-supervised learning approach and the unsupervised domain adaptation (UDA) approach are modeled in a unified framework for effective segmentation. We improve the GAN by replacing the traditional discriminator with a U-shaped net that predicts a label for each pixel. The U-shaped net is designed for high-resolution radiographs (1,024 × 1,024) to enable effective segmentation while keeping the computational burden in check. Pointwise convolution is applied in U-shaped GAN for dimensionality reduction, decreasing the number of feature maps while retaining their salient features. Moreover, we design the U-shaped net with a pretrained ResNet-50 as the encoder to avoid the computational burden of training the encoder from scratch. A semi-supervised learning approach is proposed to learn from limited annotated data while exploiting additional unannotated data with a pixel-level loss. U-shaped GAN is extended to UDA by treating the source-domain and target-domain data as the annotated and unannotated data of the semi-supervised approach, respectively. Compared with previous models that handle these problems separately, U-shaped GAN accommodates the varying data distributions of multiple medical centers, with efficient training and optimized performance, and can be generalized to chest radiograph segmentation for clinical deployment. We evaluate U-shaped GAN on two chest radiograph datasets and show that it significantly outperforms state-of-the-art models.
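The pointwise convolution used for dimensionality reduction is a 1 × 1 convolution: a linear map across channels applied independently at each pixel, which shrinks the number of feature maps without touching spatial resolution. A minimal NumPy sketch (illustrative; the paper's actual layer shapes are not specified here):

```python
import numpy as np

def pointwise_conv(feature_maps, weights):
    """1x1 (pointwise) convolution over a feature tensor.

    feature_maps: array of shape (C_in, H, W)
    weights:      array of shape (C_out, C_in)

    Each output pixel mixes all input channels at that location,
    so C_in maps can be reduced to C_out while preserving the
    salient per-pixel information.
    """
    return np.einsum('oc,chw->ohw', weights, feature_maps)
```

Because no spatial neighborhood is involved, the cost is linear in the number of pixels, which is what makes this attractive at 1,024 × 1,024 resolution.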
Affiliation(s)
- Hongyu Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hong Gu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Jia Wang
- Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
16
Moses DA. Deep learning applied to automatic disease detection using chest X-rays. J Med Imaging Radiat Oncol 2021; 65:498-517. [PMID: 34231311 DOI: 10.1111/1754-9485.13273] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 06/08/2021] [Indexed: 12/24/2022]
Abstract
Deep learning (DL) has shown rapid advancement and considerable promise when applied to the automatic detection of diseases using chest X-rays (CXRs). This is important given the widespread use of CXRs across the world for diagnosing significant pathologies and the shortage of trained radiologists to report them. This review article introduces the basic concepts of DL as applied to CXR image analysis, including basic deep neural network (DNN) structure, the use of transfer learning, and the application of data augmentation. It then reviews the recent literature on how DNN models have been applied to the detection of common CXR abnormalities (e.g. lung nodules, pneumonia, tuberculosis, and pneumothorax), including DL approaches for classifying multiple diseases (multi-class classification). The performance of different techniques and models, and their comparison with human observers, are presented. Some of the challenges facing DNN models, including their future implementation and their relationship to radiologists, are also discussed.
Affiliation(s)
- Daniel A Moses
- Graduate School of Biomedical Engineering, Faculty of Engineering, University of New South Wales, Sydney, New South Wales, Australia; Department of Medical Imaging, Prince of Wales Hospital, Sydney, New South Wales, Australia
17
Detection of the location of pneumothorax in chest X-rays using small artificial neural networks and a simple training process. Sci Rep 2021; 11:13054. [PMID: 34158562 PMCID: PMC8219779 DOI: 10.1038/s41598-021-92523-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 06/11/2021] [Indexed: 02/06/2023] Open
Abstract
The purpose of this study was to evaluate the diagnostic performance achieved by using fully connected small artificial neural networks (ANNs) and a simple training process, the Kim-Monte Carlo algorithm, to detect the location of pneumothorax in chest X-rays. A total of 1,000 chest X-ray images with pneumothorax were randomly drawn from the NIH (National Institutes of Health) public image database and used as the training and test sets. Each X-ray image with pneumothorax was divided into 49 boxes for pneumothorax localization. For the boxes in the test-set images, the area under the receiver operating characteristic (ROC) curve (AUC) was 0.882, and the sensitivity and specificity were 80.6% and 83.0%, respectively. In addition, a commonly used deep learning method for image recognition, the convolutional neural network (CNN), was applied to the same dataset for comparison. The fully connected small ANN performed better than the CNN. Among CNNs with different activation functions, the CNN with a sigmoid activation function for the fully connected hidden nodes performed better than the CNN with the rectified linear unit (ReLU) activation function. This study showed that our approach can accurately detect the location of pneumothorax in chest X-rays, significantly reduce the time delay in diagnosing urgent diseases such as pneumothorax, and increase the effectiveness of clinical practice and patient care.
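Dividing each radiograph into 49 boxes presumably corresponds to a 7 × 7 grid, although the abstract does not state the layout. A NumPy sketch of such a tiling under that assumption:

```python
import numpy as np

def split_into_boxes(image, grid=7):
    """Split a 2-D image into grid x grid equal boxes (49 for grid=7).

    The 7x7 layout is an assumption; the source only says 49 boxes.
    Returns an array of shape (grid*grid, h, w), with boxes ordered
    row by row, so each box can be scored for pneumothorax separately.
    """
    H, W = image.shape
    h, w = H // grid, W // grid
    img = image[:h * grid, :w * grid]  # drop remainder pixels, if any
    return (img.reshape(grid, h, grid, w)
               .transpose(0, 2, 1, 3)
               .reshape(grid * grid, h, w))
```

Per-box classification like this turns localization into 49 small binary decisions per image, which is what allows very small fully connected networks to handle the task.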
18
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622 DOI: 10.1016/j.media.2021.102125] [Citation(s) in RCA: 90] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/17/2021] [Accepted: 05/27/2021] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have led to promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality, and a variety of applications have been researched for them. The release of multiple large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation, and domain adaptation. Detailed descriptions of all publicly available datasets are included, and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems, and gaps in the current literature.
Affiliation(s)
- Erdi Çallı
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Ecem Sogancioglu
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Bram van Ginneken
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Kicky G van Leeuwen
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Keelin Murphy
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands