1
Lim WH, Kim H. Application of Artificial Intelligence in Thoracic Radiology: A Narrative Review. Tuberc Respir Dis (Seoul) 2025;88:278-291. PMID: 39689720; PMCID: PMC12010722; DOI: 10.4046/trd.2024.0062.
Abstract
Thoracic radiology has emerged as a primary field in which artificial intelligence (AI) is extensively researched. Recent advancements highlight the potential to enhance radiologists' performance through AI. AI aids in detecting and classifying abnormalities, and in quantifying both normal and abnormal anatomical structures. Additionally, it facilitates prognostication by leveraging these quantitative values. This review article will discuss the recent achievements of AI in thoracic radiology, focusing primarily on deep learning, and explore the current limitations and future directions of this cutting-edge technique.
Affiliation(s)
- Woo Hyeon Lim
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Hyungjin Kim
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
2
Yuan H, Hong C, Tran NTA, Xu X, Liu N. Leveraging anatomical constraints with uncertainty for pneumothorax segmentation. Health Care Sci 2024;3:456-474. PMID: 39735285; PMCID: PMC11671217; DOI: 10.1002/hcs2.119.
Abstract
Background: Pneumothorax is a medical emergency caused by the abnormal accumulation of air in the pleural space, the potential space between the lungs and chest wall. On 2D chest radiographs, pneumothorax occurs within the thoracic cavity and outside of the mediastinum, and we refer to this area as "lung + space." While deep learning (DL) has increasingly been utilized to segment pneumothorax lesions in chest radiographs, many existing DL models employ an end-to-end approach. These models directly map chest radiographs to clinician-annotated lesion areas, often neglecting the vital domain knowledge that pneumothorax is inherently location-sensitive. Methods: We propose a novel approach that incorporates the lung + space as a constraint during DL model training for pneumothorax segmentation on 2D chest radiographs. To circumvent the need for additional annotations and to prevent potential label leakage on the target task, our method utilizes external datasets and an auxiliary task of lung segmentation. This approach generates a specific constraint of lung + space for each chest radiograph. Furthermore, we have incorporated a discriminator to eliminate unreliable constraints caused by the domain shift between the auxiliary and target datasets. Results: Our results demonstrated considerable improvements, with average performance gains of 4.6%, 3.6%, and 3.3% in intersection over union, Dice similarity coefficient, and Hausdorff distance, respectively. These results were consistent across six baseline models built on three architectures (U-Net, LinkNet, or PSPNet) and two backbones (VGG-11 or MobileOne-S0). We further conducted an ablation study to evaluate the contribution of each component in the proposed method and undertook several robustness studies on hyper-parameter selection to validate the stability of our method. Conclusions: The integration of domain knowledge in DL models for medical applications has often been underemphasized. Our research underscores the significance of incorporating medical domain knowledge about the location-specific nature of pneumothorax to enhance DL-based lesion segmentation and further bolster clinicians' trust in DL tools. Beyond pneumothorax, our approach is promising for other thoracic conditions that possess location-relevant characteristics.
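To make the constraint idea concrete, here is a minimal sketch (our illustration, not the authors' code) of how a precomputed lung + space mask can act as a soft penalty during segmentation training; the tensor shapes, function names, and the penalty weight are assumptions.

```python
import torch
import torch.nn.functional as F

def constrained_loss(logits, target, lung_space_mask, penalty_weight=1.0):
    """logits: (B, 1, H, W) raw model outputs; target: (B, 1, H, W) floats in {0, 1};
    lung_space_mask: (B, 1, H, W) in {0, 1}, 1 inside the thoracic cavity."""
    # Standard supervised term against clinician annotations.
    bce = F.binary_cross_entropy_with_logits(logits, target)
    # Penalize probability mass predicted outside the anatomically
    # plausible region, where pneumothorax cannot occur.
    probs = torch.sigmoid(logits)
    outside = probs * (1.0 - lung_space_mask)
    return bce + penalty_weight * outside.mean()
```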
Affiliation(s)
- Han Yuan
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Chuan Hong
- Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina, USA
- Xinxing Xu
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Nan Liu
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore
- Institute of Data Science, National University of Singapore, Singapore
3
Yuan Y, Liu L, Yang X, Liu L, Huang Q. Multi-scale Lesion Feature Fusion and Location-Aware for Chest Multi-disease Detection. J Imaging Inform Med 2024;37:2752-2767. PMID: 38760643; PMCID: PMC11612080; DOI: 10.1007/s10278-024-01133-7.
Abstract
Accurately identifying and locating lesions in chest X-rays has the potential to significantly enhance diagnostic efficiency, quality, and interpretability. However, current methods primarily focus on detecting specific diseases in chest X-rays, disregarding the presence of multiple diseases in a single chest X-ray scan. Moreover, the diversity in lesion locations and attributes introduces complexity in accurately discerning specific traits for each lesion, leading to diminished accuracy when detecting multiple diseases. To address these issues, we propose a novel detection framework that enhances multi-scale lesion feature extraction and fusion, improving lesion position perception and subsequently boosting chest multi-disease detection performance. Initially, we construct a multi-scale lesion feature extraction network to tackle the uniqueness of various lesion features and locations, strengthening the global semantic correlation between lesion features and their positions. Following this, we introduce an instance-aware semantic enhancement network that dynamically amalgamates instance-specific features with high-level semantic representations across various scales. This adaptive integration effectively mitigates the loss of detailed information within lesion regions. Additionally, we perform lesion region feature mapping using candidate boxes to preserve crucial positional information, enhancing the accuracy of chest disease detection across multiple scales. Experimental results on the VinDr-CXR dataset reveal a 6% increment in mean average precision (mAP) and an 8.4% improvement in mean recall (mR) when compared to state-of-the-art baselines. This demonstrates the effectiveness of the model in accurately detecting multiple chest diseases by capturing specific features and location information.
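As an illustration of multi-scale lesion feature fusion, the following is a generic FPN-style sketch under our own assumptions (channel widths, module names), not the paper's exact network; the fused map would then feed a detection head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Project feature maps from several backbone stages to a common
    width and merge them top-down, so lesion cues at different scales
    contribute to a single representation."""
    def __init__(self, in_channels=(256, 512, 1024), width=256):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, width, 1) for c in in_channels])
        self.smooth = nn.Conv2d(width, width, 3, padding=1)

    def forward(self, feats):  # feats: list ordered high-res -> low-res
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        fused = laterals[-1]
        for lat in reversed(laterals[:-1]):
            # Upsample the coarser map and add the finer lateral map.
            fused = lat + F.interpolate(fused, size=lat.shape[-2:], mode="nearest")
        return self.smooth(fused)
```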
Affiliation(s)
- Yubo Yuan
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Lijun Liu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Key Laboratory of Application in Computer Technology in Yunnan Province, Kunming 650500, China
- Xiaobing Yang
- Department of State-Owned Assets and Laboratory Management, Kunming University of Science and Technology, Kunming 650500, China
- Li Liu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Qingsong Huang
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Department of State-Owned Assets and Laboratory Management, Kunming University of Science and Technology, Kunming 650500, China
4
Yang Z, Shi M, Gharbi Y, Qi Q, Shen H, Tao G, Xu W, Lyu W, Ji A. A Near-Infrared Imaging System for Robotic Venous Blood Collection. Sensors (Basel) 2024;24:7413. PMID: 39599189; PMCID: PMC11598678; DOI: 10.3390/s24227413.
Abstract
Venous blood collection is a widely used medical diagnostic technique, and with rapid advancements in robotics, robotic venous blood collection has the potential to replace traditional manual methods. The success of this robotic approach is heavily dependent on the quality of vein imaging. In this paper, we develop a vein imaging device based on the simulation analysis of vein imaging parameters and propose a U-Net+ResNet18 neural network for vein image segmentation. The U-Net+ResNet18 neural network integrates the residual blocks from ResNet18 into the encoder of the U-Net to form a new neural network. ResNet18 is pre-trained using the Bootstrap Your Own Latent (BYOL) framework, and its encoder parameters are transferred to the U-Net+ResNet18 neural network, enhancing the segmentation performance of vein images with limited labelled data. Furthermore, we optimize the AD-Census stereo matching algorithm by developing a variable-weight version, which improves its adaptability to image variations across different regions. Results show that, compared to U-Net, the BYOL+U-Net+ResNet18 method achieves an 8.31% reduction in Binary Cross-Entropy (BCE), a 5.50% reduction in Hausdorff Distance (HD), a 15.95% increase in Intersection over Union (IoU), and a 9.20% increase in the Dice coefficient (Dice), indicating improved image segmentation quality. The average error of the optimized AD-Census stereo matching algorithm is reduced by 25.69%, a clear improvement in stereo matching performance. Future research will explore the application of the vein imaging system in robotic venous blood collection to facilitate real-time puncture guidance.
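A compact sketch of the U-Net+ResNet18 idea, assuming torchvision's ResNet18 and input sizes divisible by 32; the BYOL pretraining itself is omitted, and only the weight-transfer point is marked. This is our simplification, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class UpBlock(nn.Module):
    def __init__(self, c_in, c_skip, c_out):
        super().__init__()
        self.up = nn.ConvTranspose2d(c_in, c_out, 2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(c_out + c_skip, c_out, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))

class ResNet18UNet(nn.Module):
    """U-Net with a ResNet18 encoder; self-supervised (e.g., BYOL)
    encoder weights can be loaded before fine-tuning on few labels."""
    def __init__(self, out_channels=1):
        super().__init__()
        r = models.resnet18(weights=None)  # load BYOL-pretrained weights here
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu)  # 64 ch, 1/2 scale
        self.pool = r.maxpool                              # 1/4 scale
        self.e1, self.e2, self.e3, self.e4 = r.layer1, r.layer2, r.layer3, r.layer4
        self.d4 = UpBlock(512, 256, 256)   # 1/32 -> 1/16
        self.d3 = UpBlock(256, 128, 128)   # 1/16 -> 1/8
        self.d2 = UpBlock(128, 64, 64)     # 1/8  -> 1/4
        self.head = nn.Conv2d(64, out_channels, 1)

    def forward(self, x):                  # x: (B, 3, H, W); replicate gray to 3 ch
        e1 = self.e1(self.pool(self.stem(x)))  # 1/4, 64 ch
        e2, e3 = self.e2(e1), None
        e3 = self.e3(e2)
        e4 = self.e4(e3)
        d = self.d2(self.d3(self.d4(e4, e3), e2), e1)
        return F.interpolate(self.head(d), size=x.shape[-2:],
                             mode="bilinear", align_corners=False)
```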
Affiliation(s)
- Zhikang Yang, Mao Shi, Yassine Gharbi, Qian Qi, Huan Shen
- Laboratory of Locomotion Bioinspiration and Intelligent Robots, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Gaojian Tao
- Department of Pain Medicine, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, China
- Wu Xu
- Department of Neurosurgery, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, China
- Wenqi Lyu
- Faculty of Sciences, Engineering and Technology (SET), University of Adelaide, Adelaide, SA 5005, Australia
- Aihong Ji
- Jiangsu Key Laboratory of Bionic Materials and Equipment, Nanjing 210016, China
- State Key Laboratory of Mechanics and Control for Aerospace Structures, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
5
Huemann Z, Tie X, Hu J, Bradshaw TJ. ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of Pneumothorax. J Imaging Inform Med 2024;37:1652-1663. PMID: 38485899; PMCID: PMC11300752; DOI: 10.1007/s10278-024-01051-8.
Abstract
Radiology narrative reports often describe characteristics of a patient's disease, including its location, size, and shape. Motivated by the recent success of multimodal learning, we hypothesized that this descriptive text could guide medical image analysis algorithms. We proposed a novel vision-language model, ConTEXTual Net, for the task of pneumothorax segmentation on chest radiographs. ConTEXTual Net extracts language features from physician-generated free-form radiology reports using a pre-trained language model. We then introduced cross-attention between the language features and the intermediate embeddings of an encoder-decoder convolutional neural network to enable language guidance for image analysis. ConTEXTual Net was trained on the CANDID-PTX dataset consisting of 3196 positive cases of pneumothorax with segmentation annotations from 6 different physicians as well as clinical radiology reports. Using cross-validation, ConTEXTual Net achieved a Dice score of 0.716±0.016, which was similar to the degree of inter-reader variability (0.712±0.044) computed on a subset of the data. It outperformed vision-only models (Swin UNETR: 0.670±0.015, ResNet50 U-Net: 0.677±0.015, GLoRIA: 0.686±0.014, and nnU-Net: 0.694±0.016) and a competing vision-language model (LAVT: 0.706±0.009). Ablation studies confirmed that it was the text information that led to the performance gains. Additionally, we show that certain augmentation methods degraded ConTEXTual Net's segmentation performance by breaking the image-text concordance. We also evaluated the effects of using different language models and activation functions in the cross-attention module, highlighting the efficacy of our chosen architectural design.
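The core mechanism, cross-attention from image positions to report tokens, can be sketched as follows; this is our simplification, with the residual fusion, dimensions, and head count as assumptions (embed_dim must be divisible by num_heads).

```python
import torch
import torch.nn as nn

class TextImageCrossAttention(nn.Module):
    """Each spatial position of an image feature map attends over
    report-token embeddings, injecting language cues into the CNN."""
    def __init__(self, img_channels, text_dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=img_channels,
                                          kdim=text_dim, vdim=text_dim,
                                          num_heads=num_heads, batch_first=True)
        self.norm = nn.LayerNorm(img_channels)

    def forward(self, img_feat, text_feat):
        # img_feat: (B, C, H, W); text_feat: (B, T, text_dim) from a
        # pre-trained language model run over the radiology report.
        B, C, H, W = img_feat.shape
        q = img_feat.flatten(2).transpose(1, 2)           # (B, H*W, C)
        attended, _ = self.attn(q, text_feat, text_feat)  # (B, H*W, C)
        q = self.norm(q + attended)                       # residual fusion
        return q.transpose(1, 2).reshape(B, C, H, W)
```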
Affiliation(s)
- Zachary Huemann
- Department of Radiology, University of Wisconsin-Madison, Madison, WI 53705, USA
- Xin Tie
- Department of Radiology, University of Wisconsin-Madison, Madison, WI 53705, USA
- Junjie Hu
- Departments of Biostatistics and Computer Science, University of Wisconsin-Madison, Madison, WI 53705, USA
- Tyler J Bradshaw
- Department of Radiology, University of Wisconsin-Madison, Madison, WI 53705, USA
6
Lin FCF, Wei CJ, Bai ZR, Chang CC, Chiu MC. Developing an explainable diagnosis system utilizing deep learning model: a case study of spontaneous pneumothorax. Phys Med Biol 2024;69:145017. PMID: 38955331; DOI: 10.1088/1361-6560/ad5e31.
Abstract
Objective: The trend in the medical field is towards intelligent detection-based medical diagnostic systems. However, these methods are often seen as "black boxes" due to their lack of interpretability. This situation presents challenges in identifying the reasons for misdiagnoses and improving accuracy, and it carries risks of misdiagnosis and delayed treatment. Enhancing the interpretability of diagnostic models is therefore crucial for improving patient outcomes and reducing treatment delays. So far, only limited research exists on deep learning-based prediction of spontaneous pneumothorax, a pulmonary disease that affects lung ventilation and venous return. Approach: This study develops an integrated medical image analysis system using an explainable deep learning model for image recognition and visualization to achieve an interpretable automatic diagnosis process. Main results: The system achieves an impressive 95.56% accuracy in pneumothorax classification, which emphasizes the significance of the blood vessel penetration defect in clinical judgment. Significance: This can improve model trustworthiness, reduce uncertainty, and support the accurate diagnosis of various lung diseases, leading to better medical outcomes for patients and better utilization of medical resources. Future research can focus on implementing new deep learning models to detect and diagnose other lung diseases and enhance the generalizability of this system.
Affiliation(s)
- Frank Cheau-Feng Lin
- Department of Thoracic Surgery, Chung Shan Medical University Hospital, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung 40201, Taiwan, R.O.C.
- School of Medicine, Chung Shan Medical University, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung 40201, Taiwan, R.O.C.
- Chia-Jung Wei
- Department of Industrial Engineering and Industrial Management, National Tsing Hua University, Engineering Building I, No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan, R.O.C.
- Zhe-Rui Bai
- Department of Industrial Engineering and Industrial Management, National Tsing Hua University, Engineering Building I, No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan, R.O.C.
- Chi-Chang Chang
- Department of Medical Informatics, Chung Shan Medical University, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung 402306, Taiwan, R.O.C.
- IT Office, Chung Shan Medical University Hospital, No. 110, Sec. 1, Jianguo N. Rd., South Dist., Taichung 402306, Taiwan, R.O.C.
- Department of Information Management, Ming Chuan University, No. 5, De Ming Rd., Taoyuan 333000, Taiwan, R.O.C.
- Ming-Chuan Chiu
- Department of Industrial Engineering and Industrial Management, National Tsing Hua University, Engineering Building I, No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan, R.O.C.
7
Ansari G, Mirza-Aghazadeh-Attari M, Mosier KM, Fakhry C, Yousem DM. Radiomics Features in Predicting Human Papillomavirus Status in Oropharyngeal Squamous Cell Carcinoma: A Systematic Review, Quality Appraisal, and Meta-Analysis. Diagnostics (Basel) 2024;14:737. PMID: 38611650; PMCID: PMC11011663; DOI: 10.3390/diagnostics14070737.
Abstract
We sought to determine the diagnostic accuracy of radiomics features in predicting HPV status in oropharyngeal squamous cell carcinoma (OPSCC) compared to routine paraclinical measures used in clinical practice. Twenty-six articles were included in the systematic review, and thirteen were used for the meta-analysis. The overall sensitivity of the included studies was 0.78, the overall specificity was 0.76, and the overall area under the ROC curve was 0.84. The diagnostic odds ratio (DOR) equaled 12 (8, 17). Subgroup analysis showed no significant difference between radiomics features extracted from CT or MR images. Overall, the studies were of low quality in regard to the radiomics quality score, although most had a low risk of bias based on the QUADAS-2 tool. Radiomics features showed good overall sensitivity and specificity in determining HPV status in OPSCC, though the low quality of the included studies poses problems for generalizability.
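For reference, the pooled DOR is broadly consistent with the pooled sensitivity and specificity: plugging the reported values into the standard formula gives roughly the reported estimate (pooled estimates from a meta-analysis need not match this plug-in arithmetic exactly).

```latex
\mathrm{DOR}
  = \frac{\mathrm{sens}/(1-\mathrm{sens})}{(1-\mathrm{spec})/\mathrm{spec}}
  = \frac{0.78/0.22}{0.24/0.76}
  \approx \frac{3.55}{0.32}
  \approx 11.2 \quad (\text{reported: } 12)
```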
Affiliation(s)
- Golnoosh Ansari
- Department of Radiology, Northwestern Hospital, Northwestern School of Medicine, Chicago, IL 60611, USA
- Mohammad Mirza-Aghazadeh-Attari
- Division of Interventional Radiology, Department of Radiology and Radiological Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA
- Kristine M. Mosier
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Carole Fakhry
- Department of Otolaryngology, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA
- David M. Yousem
- Division of Neuroradiology, Department of Radiology and Radiological Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA
8
Wilson JR, Prevedello LM, Witiw CD, Flanders AE, Colak E. Data Liberation and Crowdsourcing in Medical Research: The Intersection of Collective and Artificial Intelligence. Radiol Artif Intell 2024;6:e230006. PMID: 38231037; PMCID: PMC10831522; DOI: 10.1148/ryai.230006.
Abstract
In spite of an exponential increase in the volume of medical data produced globally, much of these data are inaccessible to those who might best use them to develop improved health care solutions through the application of advanced analytics such as artificial intelligence. Data liberation and crowdsourcing represent two distinct but interrelated approaches to bridging existing data silos and accelerating the pace of innovation internationally. In this article, we examine these concepts in the context of medical artificial intelligence research, summarizing their potential benefits, identifying potential pitfalls, and ultimately making a case for their expanded use going forward. A practical example of a crowdsourced competition using an international medical imaging dataset is provided.
Affiliation(s)
- Jefferson R. Wilson, Luciano M. Prevedello, Christopher D. Witiw, Adam E. Flanders, Errol Colak
- From the Division of Neurosurgery (J.R.W., C.D.W.) and Department of Medical Imaging (E.C.), St Michael's Hospital, 30 Bond St, Toronto, ON, Canada M5B 1W8; Department of Surgery (J.R.W., C.D.W.) and Department of Medical Imaging (E.C.), University of Toronto, Toronto, Canada; Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
9
Yoon MS, Kwon G, Oh J, Ryu J, Lim J, Kang BK, Lee J, Han DK. Effect of Contrast Level and Image Format on a Deep Learning Algorithm for the Detection of Pneumothorax with Chest Radiography. J Digit Imaging 2023;36:1237-1247. PMID: 36698035; PMCID: PMC10287877; DOI: 10.1007/s10278-022-00772-y.
Abstract
Given the black-box nature of deep learning models, it is uncertain how changes in contrast level and image format affect performance. We aimed to investigate the effect of contrast level and image format on the effectiveness of deep learning for diagnosing pneumothorax on chest radiographs. We collected 3316 images (1016 pneumothorax and 2300 normal images); all images were set to the standard contrast level (100%) and stored in the Digital Imaging and Communications in Medicine (DICOM) and Joint Photographic Experts Group (JPEG) formats. Data were randomly separated into 80% training and 20% test sets, and the contrast of images in the test set was changed to 5 levels (50%, 75%, 100%, 125%, and 150%). We trained the model to detect pneumothorax using ResNet-50 with 100% level images and tested it with the 5-level images in the two formats. When comparing overall performance between contrast levels in the two formats, the area under the receiver operating characteristic curve (AUC) was significantly different (all p < 0.001) except between 125% and 150% in JPEG format (p = 0.382). When comparing the two formats at the same contrast levels, AUC was significantly different (all p < 0.001) except at 50% and 100% (p = 0.079 and p = 0.082, respectively). The contrast level and format of medical images can influence the performance of a deep learning model. Training with various contrast levels and image formats, together with further image processing, is required to improve and maintain performance.
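A minimal sketch of this kind of contrast-robustness probe (our names; assumes a trained binary model and grayscale tensors in [0, 1]); per-level AUCs could then be computed from the returned scores, e.g. with scikit-learn.

```python
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def scores_at_contrast(model, loader, factor, device="cpu"):
    """factor 0.5/0.75/1.0/1.25/1.5 mirrors the 50%-150% test levels.
    Returns (probabilities, labels) over the whole loader."""
    model.eval()
    probs, labels = [], []
    for x, y in loader:
        # Perturb only the test-time contrast, as in the study design.
        x = TF.adjust_contrast(x.to(device), contrast_factor=factor)
        probs.append(torch.sigmoid(model(x)).squeeze(1).cpu())  # (B,) logits -> probs
        labels.append(y)
    return torch.cat(probs), torch.cat(labels)
```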
Affiliation(s)
- Myeong Seong Yoon
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Department of Radiological Science, Eulji University, 553 Sanseong-daero, Seongnam-si, Gyeonggi-do 13135, Republic of Korea
- Gitaek Kwon
- Department of Computer Science, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- VUNO, Inc., 479 Gangnam-daero, Seocho-gu, Seoul 06541, Republic of Korea
- Jaehoon Oh
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Jongbin Ryu
- Department of Software and Computer Engineering, Ajou University, 206 World cup-ro, Suwon-si, Gyeonggi-do 16499, Republic of Korea
- Jongwoo Lim
- Department of Computer Science, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Bo-Kyeong Kang
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Department of Radiology, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Juncheol Lee
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul 04763, Republic of Korea
- Dong-Kyoon Han
- Department of Radiological Science, Eulji University, 553 Sanseong-daero, Seongnam-si, Gyeonggi-do 13135, Republic of Korea
10
Garin SP, Parekh VS, Sulam J, Yi PH. Medical imaging data science competitions should report dataset demographics and evaluate for bias. Nat Med 2023;29:1038-1039. PMID: 37012552; DOI: 10.1038/s41591-023-02264-0.
Affiliation(s)
- Sean P Garin
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Vishwa S Parekh
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Jeremias Sulam
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Paul H Yi
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
11
Anomaly Detection in Chest X-rays Based on Dual-Attention Mechanism and Multi-Scale Feature Fusion. Symmetry (Basel) 2023. DOI: 10.3390/sym15030668.
Abstract
The efficient and automatic detection of chest abnormalities is vital for the auxiliary diagnosis of medical images. Many studies utilize computer vision and deep learning approaches involving symmetry and asymmetry concepts to detect chest abnormalities, and have achieved promising results. However, accurate instance-level and multi-label detection of abnormalities in chest X-rays remains a significant challenge. Here, a novel anomaly detection method for symmetric chest X-rays using dual attention and multi-scale feature fusion is proposed. Three aspects of our method should be noted in comparison with previous approaches. We improved the deep neural network with channel-dimensional and spatial-dimensional attention to capture abundant contextual features. We then used an optimized multi-scale learning framework for feature fusion to adapt to the scale variation in the abnormalities. Considering the influence of data imbalance and other factors, we introduced a seesaw loss function to flexibly adjust the sample weights and enhance the model learning efficiency. A rigorous experimental evaluation of a public chest X-ray dataset with fourteen different types of abnormalities demonstrates that our model has a mean average precision of 0.362 and outperforms existing methods.
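One common realization of a channel-plus-spatial dual-attention block is the CBAM-style module sketched below; this is our illustration under stated assumptions (reduction ratio, kernel size), and the paper's exact design may differ.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Channel attention followed by spatial attention over a feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                     # x: (B, C, H, W)
        avg = x.mean(dim=(2, 3))              # (B, C) global average
        mx = x.amax(dim=(2, 3))               # (B, C) global max
        ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * ca[:, :, None, None]          # reweight channels
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                         # reweight spatial positions
```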
12
Ibragimov B, Arzamasov K, Maksudov B, Kiselev S, Mongolin A, Mustafaev T, Ibragimova D, Evteeva K, Andreychenko A, Morozov S. A 178-clinical-center experiment of integrating AI solutions for lung pathology diagnosis. Sci Rep 2023;13:1135. PMID: 36670118; PMCID: PMC9859802; DOI: 10.1038/s41598-023-27397-7.
Abstract
In 2020, an experiment testing AI solutions for lung X-ray analysis on a multi-hospital network was conducted. The network linked 178 Moscow state healthcare centers, and all chest X-rays from the network were redirected to a research facility, analyzed with AI, and returned to the centers. The experiment was formulated as a public competition with monetary awards for participating industrial and research teams. The task was the binary detection of abnormalities from chest X-rays. For objective real-life evaluation, no training X-rays were provided to the participants. This paper presents one of the top-performing AI frameworks from this experiment. First, the framework used two EfficientNets, histograms of gradients, Haar feature ensembles, and local binary patterns to recognize whether an input image represents an acceptable lung X-ray sample, meaning the X-ray is not grayscale-inverted, is a frontal chest X-ray, and completely captures both lung fields. Second, the framework extracted the region with the lung fields and passed it to a multi-head DenseNet, where the heads recognized the patient's gender and age and the potential presence of abnormalities, and generated a heatmap with the abnormality regions highlighted. During one month of the experiment, from November 23 to December 25, 2020, the framework analyzed 17,888 cases, of which 11,902 had radiological reports with reference diagnoses that could be unequivocally parsed by the experiment organizers. The performance in terms of the area under the receiver operating characteristic curve (AUC) was 0.77. The AUC for individual diseases ranged from 0.55 for herniation to 0.90 for pneumothorax.
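The multi-head second stage can be sketched as a shared trunk with task-specific heads. This is our simplification: the abstract does not name the DenseNet variant (DenseNet121 assumed here), and heatmap generation is omitted.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiHeadDenseNet(nn.Module):
    """One DenseNet trunk with separate heads for gender, age, and
    abnormality presence, mirroring the multi-task design described."""
    def __init__(self):
        super().__init__()
        d = models.densenet121(weights=None)
        self.features = d.features                 # shared convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d(1)
        n = d.classifier.in_features               # 1024 for DenseNet121
        self.gender_head = nn.Linear(n, 2)         # classification logits
        self.age_head = nn.Linear(n, 1)            # regression
        self.abnormal_head = nn.Linear(n, 1)       # binary logit

    def forward(self, x):
        f = self.pool(torch.relu(self.features(x))).flatten(1)
        return self.gender_head(f), self.age_head(f), self.abnormal_head(f)
```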
Affiliation(s)
- Bulat Ibragimov
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Kirill Arzamasov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia
- Bulat Maksudov
- School of Electronic Engineering, Dublin City University, Dublin, Ireland
- Alexander Mongolin
- Innopolis University, Innopolis, Russia
- Nova Information Management School, Universidade Nova de Lisboa, Lisbon, Portugal
- Tamerlan Mustafaev
- Innopolis University, Innopolis, Russia
- University Clinic Kazan State University, Kazan, Russia
- Ksenia Evteeva
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia
- Anna Andreychenko
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia
- Sergey Morozov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia
- Osimis SA, Liege, Belgium
13
Ait Nasser A, Akhloufi MA. A Review of Recent Advances in Deep Learning Models for Chest Disease Detection Using Radiography. Diagnostics (Basel) 2023;13:159. PMID: 36611451; PMCID: PMC9818166; DOI: 10.3390/diagnostics13010159.
Abstract
Chest X-ray radiography (CXR) is among the most frequently used medical imaging modalities. It has a preeminent value in the detection of multiple life-threatening diseases. Radiologists can visually inspect CXR images for the presence of diseases, but most thoracic diseases have very similar patterns, which makes diagnosis prone to human error and leads to misdiagnosis. Computer-aided detection (CAD) of lung diseases in CXR images is among the popular topics in medical imaging research, and machine learning (ML) and deep learning (DL) provide techniques to make this task more efficient and faster. Numerous experiments in the diagnosis of various diseases have proved the potential of these techniques. In comparison to previous reviews, our study describes in detail several publicly available CXR datasets for different diseases. It presents an overview of recent deep learning models using CXR images to detect chest diseases, such as VGG, ResNet, DenseNet, Inception, EfficientNet, RetinaNet, and ensemble learning methods that combine multiple models. It summarizes the techniques used for CXR image preprocessing (enhancement, segmentation, bone suppression, and data augmentation) to improve image quality and address data imbalance issues, as well as the use of DL models to speed up the diagnosis process. This review also discusses the challenges present in the published literature and highlights the importance of interpretability and explainability to better understand the DL models' detections. In addition, it outlines a direction for researchers to help develop more effective models for early and automatic detection of chest diseases.
Affiliation(s)
- Moulay A. Akhloufi
- Perception, Robotics and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1C 3E9, Canada
14
Padash S, Mohebbian MR, Adams SJ, Henderson RDE, Babyn P. Pediatric chest radiograph interpretation: how far has artificial intelligence come? A systematic literature review. Pediatr Radiol 2022;52:1568-1580. PMID: 35460035; PMCID: PMC9033522; DOI: 10.1007/s00247-022-05368-w.
Abstract
Most artificial intelligence (AI) studies have focused primarily on adult imaging, with less attention to the unique aspects of pediatric imaging. The objectives of this study were to (1) identify all publicly available pediatric datasets and determine their potential utility and limitations for pediatric AI studies and (2) systematically review the literature to assess the current state of AI in pediatric chest radiograph interpretation. We searched PubMed, Web of Science and Embase to retrieve all studies from 1990 to 2021 that assessed AI for pediatric chest radiograph interpretation and abstracted the datasets used to train and test AI algorithms, approaches and performance metrics. Of 29 publicly available chest radiograph datasets, 2 datasets included solely pediatric chest radiographs, and 7 datasets included pediatric and adult patients. We identified 55 articles that implemented an AI model to interpret pediatric chest radiographs or pediatric and adult chest radiographs. Classification of chest radiographs as pneumonia was the most common application of AI, evaluated in 65% of the studies. Although many studies report high diagnostic accuracy, most algorithms were not validated on external datasets. Most AI studies for pediatric chest radiograph interpretation have focused on a limited number of diseases, and progress is hindered by a lack of large-scale pediatric chest radiograph datasets.
Affiliation(s)
- Sirwa Padash
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan S7N 0W8, Canada
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Mohammad Reza Mohebbian
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Scott J Adams
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan S7N 0W8, Canada
- Robert D E Henderson
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan S7N 0W8, Canada
- Paul Babyn
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Saskatchewan S7N 0W8, Canada
15
Gu H, Wang H, Qin P, Wang J. Chest L-Transformer: Local Features With Position Attention for Weakly Supervised Chest Radiograph Segmentation and Classification. Front Med (Lausanne) 2022;9:923456. PMID: 35721071; PMCID: PMC9201450; DOI: 10.3389/fmed.2022.923456.
Abstract
We consider the problem of weakly supervised segmentation on chest radiographs. The chest radiograph is the most common means of screening and diagnosing thoracic diseases. Weakly supervised deep learning models have gained increasing popularity in medical image segmentation. However, these models are not suited to two critical characteristics of chest radiographs: the global symmetry of the images and the dependencies between lesions and their positions. Such models extract global features from the whole image to make the image-level decision, so global symmetry can lead them to misclassify lesions at symmetric positions. Thoracic diseases also often have characteristic disease-prone areas in chest radiographs, creating a relationship between lesions and their positions. In this study, we propose a weakly supervised model, called Chest L-Transformer, that takes these characteristics into account. Chest L-Transformer classifies an image based on local features to avoid the misclassification caused by global symmetry. Moreover, through a Transformer attention mechanism, Chest L-Transformer models the dependencies between lesions and their positions and pays more attention to the disease-prone areas. Chest L-Transformer is trained with image-level annotations only for lesion segmentation. Thus, Log-Sum-Exp voting and its variant are proposed to unify the pixel-level prediction with the image-level prediction. We demonstrate a significant segmentation performance improvement over the current state-of-the-art while achieving competitive classification performance.
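Log-Sum-Exp pooling has a standard form: an image-level logit s = (1/r) log((1/N) Σ_p exp(r s_p)) over pixel logits s_p, which interpolates between mean-pooling (small r) and max-pooling (large r). A minimal sketch of that operation (the paper's voting variant may differ in detail):

```python
import math
import torch

def log_sum_exp_pool(pixel_logits, r=5.0):
    """(B, 1, H, W) pixel logits -> (B,) image-level logits.
    Large r approaches max-pooling; small r approaches mean-pooling."""
    flat = pixel_logits.flatten(1)                        # (B, N)
    n = flat.shape[1]
    return (torch.logsumexp(r * flat, dim=1) - math.log(n)) / r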
Affiliation(s)
- Hong Gu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hongyu Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Jia Wang
- Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
16
Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. Multimed Tools Appl 2022;81:25877-25911. PMID: 35350630; PMCID: PMC8948453; DOI: 10.1007/s11042-022-12100-1.
Abstract
Medical imaging refers to several different technologies that are used to view the human body to diagnose, monitor, or treat medical conditions. It requires significant expertise to efficiently and correctly interpret the images generated by each of these technologies, which among others include radiography, ultrasound, and magnetic resonance imaging. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of graphical user interfaces (GUI) and supporting instruments. The main contribution of this study is to provide an intensive review of the popular annotation tools and show their successful usage in annotating medical imaging datasets to guide researchers in this area.
Affiliation(s)
- Manar Aljabri
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, University of Miami Miller School of Medicine, Miami, FL, USA
17
Schultheis WG, Lakhani P. Using Deep Learning Segmentation for Endotracheal Tube Position Assessment. J Thorac Imaging 2022;37:125-131. PMID: 34292275; DOI: 10.1097/rti.0000000000000608.
Abstract
Purpose: The purpose of this study was to determine the efficacy of using deep learning segmentation for endotracheal tube (ETT) position assessment on frontal chest X-rays (CXRs). Materials and Methods: This was a retrospective trial involving 936 deidentified frontal CXRs divided into a training set (676), a validation set (50), and two test sets (210 total): an "internal test" set of 100 CXRs from the same institution and an "external test" set of 110 CXRs from a different institution. Each image was labeled by 2 radiologists with the ETT-carina distance. On the training images, 1 radiologist manually segmented the ETT tip and the inferior wall of the carina. A U-Net architecture was constructed to label each pixel of the CXR as belonging to either the ETT, the carina, or neither. This labeling allowed the distance between the ETT and carina to be compared with the average of the 2 radiologists. The interclass correlation coefficients and the means and SDs of the absolute differences between the U-Net and the radiologists were calculated. Results: The mean absolute differences between the U-Net and the average of radiologist measurements were 0.60±0.61 and 0.48±0.47 cm on the internal and external datasets, respectively. The interclass correlation coefficients were 0.87 (0.82, 0.91) and 0.92 (0.88, 0.94) on the internal and external datasets, respectively. Conclusion: The U-Net model had excellent reliability and performance similar to radiologists in assessing ETT-carina distance.
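Once the ETT and carina are segmented, the distance measurement reduces to simple mask geometry. A hedged sketch follows; the helper name and the tip/carina landmark conventions (lowest ETT pixel, nearest carina pixel) are our assumptions, not necessarily the authors' exact rule.

```python
import numpy as np

def ett_carina_distance_cm(ett_mask, carina_mask, mm_per_pixel):
    """ett_mask, carina_mask: (H, W) boolean arrays from the U-Net.
    Returns the ETT tip-to-carina distance in centimeters, or None."""
    ys, xs = np.nonzero(ett_mask)
    cys, cxs = np.nonzero(carina_mask)
    if ys.size == 0 or cys.size == 0:
        return None                                   # a structure was missed
    tip = np.array([ys.max(), xs[ys.argmax()]])       # most inferior ETT pixel
    carina = np.stack([cys, cxs], axis=1)             # all carina pixels
    d = np.min(np.linalg.norm(carina - tip, axis=1))  # nearest carina pixel
    return d * mm_per_pixel / 10.0                    # pixels -> mm -> cm
```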
Affiliation(s)
- Paras Lakhani
- Sidney Kimmel Medical College, Thomas Jefferson University
- Department of Radiology, Thomas Jefferson University Hospital, Sidney Kimmel Jefferson Medical College, Philadelphia, PA
18
Arun N, Gaw N, Singh P, Chang K, Aggarwal M, Chen B, Hoebel K, Gupta S, Patel J, Gidwani M, Adebayo J, Li MD, Kalpathy-Cramer J. Assessing the Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging. Radiol Artif Intell 2021;3:e200267. PMID: 34870212; PMCID: PMC8637231; DOI: 10.1148/ryai.2021200267.
Abstract
Purpose: To evaluate the trustworthiness of saliency maps for abnormality localization in medical imaging. Materials and Methods: Using two large publicly available radiology datasets (the Society for Imaging Informatics in Medicine-American College of Radiology Pneumothorax Segmentation dataset and the Radiological Society of North America Pneumonia Detection Challenge dataset), the performance of eight commonly used saliency map techniques was quantified with regard to (a) localization utility (segmentation and detection), (b) sensitivity to model weight randomization, (c) repeatability, and (d) reproducibility. Their performance was compared against baseline methods and localization network architectures, using area under the precision-recall curve (AUPRC) and structural similarity index measure (SSIM) as metrics. Results: All eight saliency map techniques failed at least one of the criteria and were inferior in performance compared with localization networks. For pneumothorax segmentation, the AUPRC ranged from 0.024 to 0.224, while a U-Net achieved a significantly superior AUPRC of 0.404 (P < .005). For pneumonia detection, the AUPRC ranged from 0.160 to 0.519, while a RetinaNet achieved a significantly superior AUPRC of 0.596 (P < .005). Five and two saliency methods (of eight) failed the model randomization test on the segmentation and detection datasets, respectively, suggesting that these methods are not sensitive to changes in model parameters. The repeatability and reproducibility of the majority of the saliency methods were worse than those of the localization networks on both the segmentation and detection datasets. Conclusion: The use of saliency maps in the high-risk domain of medical imaging warrants additional scrutiny; we recommend that detection or segmentation models be used if localization is the desired output of the network.
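The localization-utility criterion treats each pixel of a continuous saliency map as one prediction against the binary ground-truth mask. A minimal sketch of that AUPRC computation (scikit-learn assumed; the function name is ours):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def saliency_auprc(saliency_map, gt_mask):
    """saliency_map: (H, W) floats; gt_mask: (H, W) values in {0, 1}.
    Flattens both so every pixel is scored as one prediction,
    mirroring segmentation-style evaluation of saliency methods."""
    return average_precision_score(gt_mask.ravel().astype(int),
                                   saliency_map.ravel())
```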
Affiliation(s)
- Praveer Singh, Ken Chang, Mehak Aggarwal, Bryan Chen, Katharina Hoebel, Sharut Gupta, Jay Patel, Mishka Gidwani, Julius Adebayo, Matthew D. Li, Jayashree Kalpathy-Cramer
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129 (N.A., P.S., K.C., M.A., B.C., K.H., S.G., J.P., M.G., M.D.L., J.K.C.); Department of Computer Science, Shiv Nadar University, Greater Noida, India (N.A.); Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology, Wright-Patterson AFB, Dayton, Ohio (N.G.); and Massachusetts Institute of Technology, Cambridge, Mass (K.C., B.C., K.H., J.P., J.A.)
19
Automated Radiology Alert System for Pneumothorax Detection on Chest Radiographs Improves Efficiency and Diagnostic Performance. Diagnostics (Basel) 2021;11:1182. PMID: 34209844; PMCID: PMC8307391; DOI: 10.3390/diagnostics11071182.
Abstract
We aimed to set up an Automated Radiology Alert System (ARAS) for the detection of pneumothorax on chest radiographs using a deep learning model, and to compare its efficiency and diagnostic performance with the existing Manual Radiology Alert System (MRAS) at a tertiary medical center. This study retrospectively collected 1235 chest radiographs with pneumothorax labeling from 2013 to 2019 and 337 chest radiographs with negative findings in 2019, which were separated into training and validation datasets for the deep learning model of the ARAS. The efficiency before and after using the model was compared in terms of alert time and report time. During parallel running of the two systems from September to October 2020, chest radiographs prospectively acquired in the emergency department from patients older than 6 years served as the testing dataset for comparison of diagnostic performance. Efficiency improved after using the model: the mean alert time decreased from 8.45 min to 0.69 min and the mean report time from 2.81 days to 1.59 days. Comparison of the diagnostic performance of both systems using 3739 chest radiographs acquired during parallel running showed that the ARAS was better than the MRAS in sensitivity (recall), area under the receiver operating characteristic curve, and F1 score (0.837 vs. 0.256, 0.914 vs. 0.628, and 0.754 vs. 0.407, respectively), but worse in positive predictive value (PPV) (precision) (0.686 vs. 1.000). This study successfully designed a deep learning model for pneumothorax detection on chest radiographs and set up an ARAS with improved efficiency and overall diagnostic performance.
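As a quick consistency check on the reported metrics (our arithmetic, not the paper's code), the F1 scores follow directly from the precision/recall pairs:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.686, 0.837), 3))  # -> 0.754, matching the reported ARAS F1
print(round(f1(1.000, 0.256), 3))  # -> 0.408 (paper reports 0.407, a rounding
                                   #    difference from unrounded inputs)
```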
20
Detection of the location of pneumothorax in chest X-rays using small artificial neural networks and a simple training process. Sci Rep 2021;11:13054. PMID: 34158562; PMCID: PMC8219779; DOI: 10.1038/s41598-021-92523-2.
Abstract
The purpose of this study was to evaluate the diagnostic performance achieved by using fully-connected small artificial neural networks (ANNs) and a simple training process, the Kim-Monte Carlo algorithm, to detect the location of pneumothorax in chest X-rays. A total of 1,000 chest X-ray images with pneumothorax were taken randomly from the NIH (National Institutes of Health) public image database and used as the training and test sets. Each X-ray image with pneumothorax was divided into 49 boxes for pneumothorax localization. For the boxes in the chest X-ray images of the test set, the area under the receiver operating characteristic (ROC) curve (AUC) was 0.882, and the sensitivity and specificity were 80.6% and 83.0%, respectively. In addition, a commonly used deep-learning method for image recognition, the convolutional neural network (CNN), was applied to the same dataset for comparison purposes. The performance of the fully-connected small ANN was better than that of the CNN. Regarding the diagnostic performance of the CNN with different activation functions, the CNN with a sigmoid activation function for fully-connected hidden nodes was better than the CNN with the rectified linear unit (ReLU) activation function. This study showed that our approach can accurately detect the location of pneumothorax in chest X-rays, significantly reduce the time delay incurred when diagnosing urgent diseases such as pneumothorax, and increase the effectiveness of clinical practice and patient care.
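The 49-box setup amounts to tiling the radiograph into a 7x7 grid and scoring each tile with a small fully-connected network. A minimal sketch under our own shape assumptions (the Kim-Monte Carlo training procedure itself is not reproduced):

```python
import torch
import torch.nn as nn

class SmallBoxANN(nn.Module):
    """Tiny fully-connected network scoring one grid box for pneumothorax."""
    def __init__(self, box_pixels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(box_pixels, hidden), nn.Sigmoid(),
            nn.Linear(hidden, 1))

    def forward(self, box):                 # box: (B, box_pixels)
        return self.net(box)                # (B, 1) logit per box

def boxes_from_image(img, grid=7):
    """img: (H, W) tensor -> (grid*grid, box_pixels) flattened tiles."""
    H, W = img.shape
    bh, bw = H // grid, W // grid
    img = img[: bh * grid, : bw * grid]                 # drop remainder pixels
    patches = img.unfold(0, bh, bh).unfold(1, bw, bw)   # (grid, grid, bh, bw)
    return patches.reshape(grid * grid, bh * bw)
```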