1
Liang Z, Xue Z, Rajaraman S, Feng Y, Antani S. Automatic Quantification of COVID-19 Pulmonary Edema by Self-supervised Contrastive Learning. Medical Image Learning with Limited and Noisy Data: Second International Workshop, MILLanD 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8, 2023, Proceedings. 2023;14307:128-137. [PMID: 38415180] [PMCID: PMC10896252] [DOI: 10.1007/978-3-031-44917-8_12]
Abstract
We proposed a self-supervised machine learning method to automatically rate the severity of pulmonary edema in frontal chest X-ray radiographs (CXRs) potentially related to COVID-19 viral pneumonia, using the modified radiographic assessment of lung edema (mRALE) scoring system. The new model was first optimized with the simple Siamese network (SimSiam) architecture, with a ResNet-50 pretrained on the ImageNet database as the backbone. The encoder projected a 2048-dimensional embedding as representation features to a downstream fully connected deep neural network for mRALE score prediction. A 5-fold cross-validation with 2,599 frontal CXRs was used to examine the new model's performance in comparison with a non-pretrained SimSiam encoder and a ResNet-50 trained from scratch. The mean absolute error (MAE) of the new model is 5.05 (95% CI 5.03-5.08), the mean squared error (MSE) is 66.67 (95% CI 66.29-67.06), and the Spearman's correlation coefficient (Spearman ρ) with the expert-annotated scores is 0.77 (95% CI 0.75-0.79). All performance metrics of the new model are superior to those of the two comparators (P<0.01), while the MSE and Spearman ρ of the two comparators show no statistically significant difference (P>0.05). The model also achieved a prediction probability concordance of 0.811 and a quadratic weighted kappa of 0.739 with the medical expert annotations in external validation. We conclude that self-supervised contrastive learning is an effective strategy for automated mRALE scoring. It provides a new approach to improve machine learning performance and minimize expert knowledge involvement in quantitative medical image pattern learning.
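The core of the SimSiam objective used here is a symmetrized negative cosine similarity with a stop-gradient on the target branch. A minimal NumPy sketch of that loss (function names and the toy vectors are illustrative assumptions, not the authors' code):

```python
import numpy as np

def negative_cosine(p, z):
    # SimSiam treats z as a constant (stop-gradient), so only the
    # predictor branch p would receive gradients during training.
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(np.dot(p, z))

def simsiam_loss(p1, z1, p2, z2):
    # Symmetrized objective over two augmented views of the same CXR.
    return 0.5 * negative_cosine(p1, z2) + 0.5 * negative_cosine(p2, z1)

rng = np.random.default_rng(0)
v = rng.normal(size=2048)  # 2048-dimensional embedding, as in the abstract
print(round(simsiam_loss(v, v, v, v), 4))  # identical views give the minimum, -1.0
```

In the full method, the encoder output would feed the downstream fully connected regressor that predicts the mRALE score.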
Affiliation(s)
- Zhaohui Liang
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sivaramakrishnan Rajaraman
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Yang Feng
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
2
Schulz D, Rasch S, Heilmaier M, Abbassi R, Poszler A, Ulrich J, Steinhardt M, Kaissis GA, Schmid RM, Braren R, Lahmer T. A deep learning model enables accurate prediction and quantification of pulmonary edema from chest X-rays. Crit Care 2023;27:201. [PMID: 37237287] [DOI: 10.1186/s13054-023-04426-5]
Abstract
BACKGROUND A quantitative assessment of pulmonary edema is important because the clinical severity can range from mild impairment to life threatening. A quantitative, although invasive, surrogate measure for pulmonary edema is the extravascular lung water index (EVLWI) extracted from transpulmonary thermodilution (TPTD). To date, grading the severity of edema from chest X-rays has relied on the subjective classification of radiologists. In this work, we used machine learning to quantitatively predict the severity of pulmonary edema from chest radiographs. METHODS We retrospectively included 471 X-rays from 431 patients who underwent chest radiography and TPTD measurement within 24 h at our intensive care unit. The EVLWI extracted from the TPTD was used as a quantitative measure of pulmonary edema. We used a deep learning approach and binned the data into two, three, four and five classes, increasing the resolution of the EVLWI prediction from the X-rays. RESULTS The accuracy, area under the receiver operating characteristic curve (AUROC) and Matthews correlation coefficient (MCC) of the binary classification models (EVLWI < 15, ≥ 15) were 0.93, 0.98 and 0.86, respectively. In the three multiclass models, the accuracy ranged between 0.90 and 0.95, the AUROC between 0.97 and 0.99, and the MCC between 0.86 and 0.92. CONCLUSION Deep learning can quantify pulmonary edema as measured by EVLWI with high accuracy.
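The binning step described in the methods (continuous EVLWI mapped to two-to-five ordinal classes) can be sketched with `numpy.digitize`; only the binary threshold of 15 comes from the abstract, the finer edges and patient values below are illustrative assumptions:

```python
import numpy as np

def bin_evlwi(values, edges):
    """Map continuous EVLWI values to ordinal class labels via thresholds."""
    return np.digitize(values, edges)

evlwi = np.array([8.0, 14.9, 15.0, 22.3])       # hypothetical patient values
print(bin_evlwi(evlwi, [15]).tolist())          # binary split: [0, 0, 1, 1]
print(bin_evlwi(evlwi, [10, 15, 20]).tolist())  # four-class split: [0, 1, 2, 3]
```

A classifier trained on such labels then predicts EVLWI at whichever resolution the binning allows.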
Affiliation(s)
- Dominik Schulz
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Munich, Germany.
- III. Medizinische Klinik, Universitätsklinikum Augsburg, Augsburg, Germany.
- Sebastian Rasch
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Munich, Germany
- Markus Heilmaier
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Munich, Germany
- Rami Abbassi
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Munich, Germany
- Alexander Poszler
- Innere Medizin - Gastroenterologie, Krankenhaus Agatharied, Hausham, Germany
- Jörg Ulrich
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Munich, Germany
- Manuel Steinhardt
- Institute for Diagnostic and Interventional Radiology, Klinikum rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
- Georgios A Kaissis
- Institute for Diagnostic and Interventional Radiology, Klinikum rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
- Roland M Schmid
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Munich, Germany
- Rickmer Braren
- Institute for Diagnostic and Interventional Radiology, Klinikum rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
- Tobias Lahmer
- Klinik und Poliklinik für Innere Medizin II, Klinikum rechts der Isar, Munich, Germany
3
A Review of Recent Advances in Deep Learning Models for Chest Disease Detection Using Radiography. Diagnostics (Basel) 2023;13:159. [PMID: 36611451] [PMCID: PMC9818166] [DOI: 10.3390/diagnostics13010159]
Abstract
Chest X-ray radiography (CXR) is among the most frequently used medical imaging modalities and has preeminent value in the detection of multiple life-threatening diseases. Radiologists can visually inspect CXR images for the presence of diseases, but most thoracic diseases have very similar patterns, which makes diagnosis prone to human error and leads to misdiagnosis. Computer-aided detection (CAD) of lung diseases in CXR images is among the popular topics in medical imaging research, and machine learning (ML) and deep learning (DL) provide techniques to make this task more efficient and faster. Numerous experiments in the diagnosis of various diseases have proved the potential of these techniques. In comparison to previous reviews, our study describes in detail several publicly available CXR datasets for different diseases. It presents an overview of recent deep learning models using CXR images to detect chest diseases, such as VGG, ResNet, DenseNet, Inception, EfficientNet, RetinaNet, and ensemble learning methods that combine multiple models. It summarizes the techniques used for CXR image preprocessing (enhancement, segmentation, bone suppression, and data augmentation) to improve image quality and address data imbalance issues, as well as the use of DL models to speed up the diagnosis process. This review also discusses the challenges present in the published literature and highlights the importance of interpretability and explainability to better understand DL models' detections. In addition, it outlines a direction for researchers to help develop more effective models for early and automatic detection of chest diseases.
4
Saqib M, Iftikhar M, Neha F, Karishma F, Mumtaz H. Artificial intelligence in critical illness and its impact on patient care: a comprehensive review. Front Med (Lausanne) 2023;10:1176192. [PMID: 37153088] [PMCID: PMC10158493] [DOI: 10.3389/fmed.2023.1176192]
Abstract
Artificial intelligence (AI) has great potential to improve the field of critical care and enhance patient outcomes. This paper provides an overview of current and future applications of AI in critical illness and its impact on patient care, including its use in perceiving disease, predicting changes in pathological processes, and assisting in clinical decision-making. To realize this potential, the reasoning behind AI-generated recommendations must be comprehensible and transparent, and AI systems must be designed to be reliable and robust in the care of critically ill patients. These challenges must be addressed through research and the development of quality control measures to ensure that AI is used in a safe and effective manner. In conclusion, this paper highlights the numerous opportunities and potential applications of AI in critical care and provides guidance for future research and development in this field. By enabling the perception of disease, predicting changes in pathological processes, and assisting in clinical decision-making, AI has the potential to revolutionize care for critically ill patients and improve the efficiency of health systems.
Affiliation(s)
- Muhammad Saqib
- Khyber Medical College, Peshawar, Khyber Pakhtunkhwa, Pakistan
- Fnu Neha
- Ghulam Muhammad Mahar Medical College, Sukkur, Sindh, Pakistan
- Fnu Karishma
- Jinnah Sindh Medical University, Karachi, Sindh, Pakistan
- Hassan Mumtaz
- Health Services Academy, Islamabad, Pakistan
- Correspondence: Hassan Mumtaz
5
Kim MJ, Choi YH, Lee SB, Cho YJ, Lee SH, Shin CH, Shin SM, Cheon JE. Development and evaluation of deep-learning measurement of leg length discrepancy: bilateral iliac crest height difference measurement. Pediatr Radiol 2022;52:2197-2205. [PMID: 36121497] [DOI: 10.1007/s00247-022-05499-0]
Abstract
BACKGROUND Leg length discrepancy (LLD) is a common problem that can cause long-term musculoskeletal problems. However, measuring LLD on radiography is time-consuming and labor intensive, despite being a simple task. OBJECTIVE To develop and evaluate a deep-learning algorithm for measurement of LLD on radiographs. MATERIALS AND METHODS In this Health Insurance Portability and Accountability Act (HIPAA)-compliant retrospective study, radiographs were obtained to develop a deep-learning algorithm. The algorithm, built from two U-Net models, measures LLD as the difference between the bilateral iliac crest heights. For performance evaluation, 300 additional radiographs were collected, and LLD was measured by two radiologists, by the algorithm alone, and by a model-assisted method. Statistical analysis compared the measurement differences against the measurements of an experienced radiologist, considered the ground truth, and the time spent on each measurement was then compared. RESULTS Of the 300 cases, the deep-learning model successfully delineated both iliac crests in 284. The human measurements, the deep-learning model, and the model-assisted method all showed a significant correlation with ground truth measurements, with Pearson correlation coefficients and intraclass correlation coefficients (ICCs) decreasing in the order listed (Pearson correlation coefficients ranged from 0.880 to 0.996; ICCs ranged from 0.914 to 0.997). The mean absolute errors of the human measurement, the deep-learning-assisted method and the deep-learning-alone model were 0.7 ± 0.6 mm, 1.1 ± 1.1 mm and 2.3 ± 5.2 mm, respectively. The reading time was 7 h and 12 min on average for human reading, while the deep-learning measurement took 7 min and 26 s; the radiologist took 74 min to complete measurements in the deep-learning-assisted mode. CONCLUSION A deep-learning U-Net model measuring the iliac crest height difference was feasible on teleroentgenograms in children. LLD measurements assisted by the deep-learning algorithm saved time and labor while producing results comparable with human measurements.
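Given the two iliac-crest masks produced by the U-Net models, the discrepancy reduces to the vertical offset between the crest tops scaled by pixel spacing. A hedged NumPy sketch (function names, mask shapes and the spacing value are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def crest_top_row(mask):
    # Topmost image row containing the segmented iliac crest, or None
    # if the model failed to delineate it (as in 16 of 300 study cases).
    rows = np.where(mask.any(axis=1))[0]
    return int(rows.min()) if rows.size else None

def lld_mm(left_mask, right_mask, mm_per_pixel):
    top_l, top_r = crest_top_row(left_mask), crest_top_row(right_mask)
    if top_l is None or top_r is None:
        return None
    return abs(top_l - top_r) * mm_per_pixel

# Toy masks: left crest starts at row 30, right crest at row 34.
left = np.zeros((100, 60), dtype=bool);  left[30:40, 5:25] = True
right = np.zeros((100, 60), dtype=bool); right[34:44, 35:55] = True
print(lld_mm(left, right, 0.5))  # 4 px offset at 0.5 mm/px -> 2.0
```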
Affiliation(s)
- Min Jong Kim
- Department of Radiology, Seoul National University Hospital, 101 Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
- Young Hun Choi
- Department of Radiology, Seoul National University Hospital, 101 Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Seul Bi Lee
- Department of Radiology, Seoul National University Hospital, 101 Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Yeon Jin Cho
- Department of Radiology, Seoul National University Hospital, 101 Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Seung Hyun Lee
- Department of Radiology, Seoul National University Hospital, 101 Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Chang Ho Shin
- Division of Paediatric Orthopaedics, Seoul National University Children's Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Su-Mi Shin
- Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Jung-Eun Cheon
- Department of Radiology, Seoul National University Hospital, 101 Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
6
Koul A, Bawa RK, Kumar Y. Artificial Intelligence Techniques to Predict the Airway Disorders Illness: A Systematic Review. Arch Comput Methods Eng 2022;30:831-864. [PMID: 36189431] [PMCID: PMC9516534] [DOI: 10.1007/s11831-022-09818-4]
Abstract
Airway disease is a major healthcare issue that causes at least 3 million fatalities every year and is projected to be among the foremost causes of death worldwide by 2030. Numerous studies have demonstrated the latest advances in artificial intelligence algorithms to assist in identifying and classifying these diseases. This comprehensive review aims to summarise the state-of-the-art machine and deep learning-based systems for detecting airway disorders, envisage the trends of recent work in this domain, and analyze the difficulties and potential future paths. This systematic literature review covers one hundred fifty-five articles on airway diseases such as cystic fibrosis, emphysema, lung cancer, mesothelioma, COVID-19, pneumoconiosis, asthma, pulmonary edema, tuberculosis and pulmonary embolism, and highlights the automated learning techniques used to predict them. The study concludes with a discussion of the challenges in improving the efficiency of machine and deep learning-assisted airway disease detection applications.
Affiliation(s)
- Apeksha Koul
- Department of Computer Science and Engineering, Punjabi University, Patiala, Punjab, India
- Rajesh K. Bawa
- Department of Computer Science, Punjabi University, Patiala, Punjab, India
- Yogesh Kumar
- Department of Computer Science and Engineering, School of Technology, Pandit Deendayal Energy University, Gandhinagar, Gujarat, India
7
Akbar MN, Wang X, Erdogmus D, Dalal S. PENet: Continuous-Valued Pulmonary Edema Severity Prediction On Chest X-ray Using Siamese Convolutional Networks. Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:1834-1838. [PMID: 36086469] [DOI: 10.1109/embc48229.2022.9871153]
Abstract
For physicians to make rapid clinical decisions for patients with congestive heart failure, the assessment of pulmonary edema severity in chest radiographs is vital. Although deep learning has shown promise in detecting the presence, absence, or discrete severity grades of such edema, predicting continuous-valued severity remains a challenge. Here, we propose PENet, a Siamese convolutional neural network that assesses the continuous spectrum of lung edema severity from chest radiographs. We present different modes of implementing this network and demonstrate that our best model outperforms earlier work (mean AUC of 0.91 versus 0.87) while using only 1/16th the dimension of input images and 1/69th the size of training data, thus also saving expensive computation.
8
Nadkarni P, Merchant SA. Enhancing medical-imaging artificial intelligence through holistic use of time-tested key imaging and clinical parameters: Future insights. Artif Intell Med Imaging 2022;3:55-69. [DOI: 10.35711/aimi.v3.i3.55]
Abstract
Much of the published literature in Radiology-related Artificial Intelligence (AI) focuses on single tasks, such as identifying the presence or absence or severity of specific lesions. Progress comparable to that achieved for general-purpose computer vision has been hampered by the unavailability of large and diverse radiology datasets containing different types of lesions with possibly multiple kinds of abnormalities in the same image. Also, since a diagnosis is rarely achieved through an image alone, radiology AI must be able to employ diverse strategies that consider all available evidence, not just imaging information. Using key imaging and clinical signs will help improve their accuracy and utility tremendously. Employing strategies that consider all available evidence will be a formidable task; we believe that the combination of human and computer intelligence will be superior to either one alone. Further, unless an AI application is explainable, radiologists will not trust it to be either reliable or bias-free; we discuss some approaches aimed at providing better explanations, as well as regulatory concerns regarding explainability (“transparency”). Finally, we look at federated learning, which allows pooling data from multiple locales while maintaining data privacy to create more generalizable and reliable models, and quantum computing, still prototypical but potentially revolutionary in its computing impact.
Affiliation(s)
- Prakash Nadkarni
- College of Nursing, University of Iowa, Iowa City, IA 52242, United States
- Suleman Adam Merchant
- Department of Radiology, LTM Medical College & LTM General Hospital, Mumbai 400022, Maharashtra, India
9
Abstract
This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2022. Other selected articles can be found online at https://www.biomedcentral.com/collections/annualupdate2022. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from https://link.springer.com/bookseries/8901.
Affiliation(s)
- Joo Heung Yoon
- Division of Pulmonary, Allergy, and Critical Care Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Michael R. Pinsky
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Gilles Clermont
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA, USA
10
Liao R, Moyer D, Cha M, Quigley K, Berkowitz S, Horng S, Golland P, Wells WM. Multimodal Representation Learning via Maximization of Local Mutual Information. Med Image Comput Comput Assist Interv 2021;12902:273-283. [PMID: 36282980] [PMCID: PMC9576150] [DOI: 10.1007/978-3-030-87196-3_26]
Abstract
We propose and demonstrate a representation learning approach that maximizes the mutual information between local features of images and text. The goal is to learn useful image representations by taking advantage of the rich information contained in the free text that describes the findings in the image. Our method trains image and text encoders by encouraging the resulting representations to exhibit high local mutual information, making use of recent advances in mutual information estimation with neural network discriminators. We argue that the sum of local mutual information is typically a lower bound on the global mutual information. Our experimental results on downstream image classification tasks demonstrate the advantages of using local features for image-text representation learning. Our code is available at: https://github.com/RayRuizhiLiao/mutual_info_img_txt.
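Neural MI estimators of the kind referenced above typically train a discriminator to tighten a variational lower bound such as Donsker-Varadhan: I(X;Y) ≥ E_joint[T] − log E_marginals[exp T]. A toy NumPy sketch with a fixed (untrained) critic, purely illustrative of the bound rather than the paper's local-feature method:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.1 * rng.normal(size=5000)  # strongly dependent pair
T = lambda a, b: 0.1 * a * b         # fixed critic; a trained network would replace it

# Donsker-Varadhan lower bound on I(X;Y): E_joint[T] - log E_marginals[exp(T)].
joint_term = np.mean(T(x, y))                                  # samples from the joint
marg_term = np.log(np.mean(np.exp(T(x, rng.permutation(y)))))  # shuffling breaks dependence
dv_bound = joint_term - marg_term
print(dv_bound > 0)  # even this crude critic detects the dependence
```

The paper's contribution is to apply such estimation to local image and text features and to argue that the sum of local terms lower-bounds the global mutual information.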
Affiliation(s)
- Ruizhi Liao
- CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- Daniel Moyer
- CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- Miriam Cha
- MIT Lincoln Laboratory, Lexington, MA, USA
- Seth Berkowitz
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Steven Horng
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Polina Golland
- CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- William M Wells
- CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
11
Wang ZJ. Probing an AI regression model for hand bone age determination using gradient-based saliency mapping. Sci Rep 2021;11:10610. [PMID: 34012111] [PMCID: PMC8134559] [DOI: 10.1038/s41598-021-90157-y]
Abstract
Understanding how a neural network makes decisions holds significant value for users. For this reason, gradient-based saliency mapping was tested on an artificial intelligence (AI) regression model for determining hand bone age from X-ray radiographs. The partial derivative (PD) of the inferred age with respect to input image intensity at each pixel served as a saliency marker to find sensitive areas contributing to the outcome. The mean of the absolute PD values was calculated for five anatomical regions of interest, and one hundred test images were evaluated with this procedure. The PD maps suggested that the AI model employed a holistic approach in determining hand bone age, with the wrist area being the most important at early ages. However, this importance decreased with increasing age. The middle section of the metacarpal bones was the least important area for bone age determination. The muscular region between the first and second metacarpal bones also exhibited high PD values but contained no bone age information, suggesting a region of vulnerability in age determination. An end-to-end gradient-based saliency map can be obtained from a black box regression AI model and provide insight into how the model makes decisions.
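The saliency marker described here is the partial derivative of the predicted age with respect to each input pixel. For a differentiable model an autodiff framework returns it exactly; a finite-difference probe, sketched below on a toy linear stand-in for the bone-age CNN (all names are illustrative assumptions), approximates the same quantity:

```python
import numpy as np

def model(img, w):
    return float(np.sum(w * img))  # toy linear 'regressor' standing in for the CNN

def saliency(img, w, eps=1e-4):
    # |partial derivative of output w.r.t. each pixel| via finite differences.
    sal = np.zeros_like(img)
    for idx in np.ndindex(img.shape):
        bumped = img.copy()
        bumped[idx] += eps
        sal[idx] = (model(bumped, w) - model(img, w)) / eps
    return np.abs(sal)

rng = np.random.default_rng(0)
img, w = rng.random((4, 4)), rng.random((4, 4))
print(np.allclose(saliency(img, w), np.abs(w), atol=1e-3))  # exact for a linear model
```

Averaging such |PD| maps over anatomical regions of interest mirrors how the study scored per-region sensitivity.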
Affiliation(s)
- Zhiyue J Wang
- Department of Radiology, Children's Health and University of Texas Southwestern Medical Center, 1935 Medical District Drive, F1-02, Dallas, TX, 75235, USA.
12
Auffermann WF. Quantifying Pulmonary Edema on Chest Radiographs. Radiol Artif Intell 2021;3:e210004. [PMID: 33939783] [PMCID: PMC8043360] [DOI: 10.1148/ryai.2021210004]
Affiliation(s)
- William F. Auffermann
- From the Department of Radiology and Imaging Sciences, University of Utah School of Medicine, 30 North 1900 East, Room 1A71, Salt Lake City, UT 84132