1
Ozcan BB, Dogan BE, Xi Y, Knippa EE. Patient Perception of Artificial Intelligence Use in Interpretation of Screening Mammograms: A Survey Study. Radiol Imaging Cancer 2025;7:e240290. PMID: 40249272. DOI: 10.1148/rycan.240290.
Abstract
Purpose: To assess patient perceptions of artificial intelligence (AI) use in the interpretation of screening mammograms.
Materials and Methods: In a prospective, institutional review board-approved study, all patients undergoing screening mammography at the authors' institution between February 2023 and August 2023 were offered a 29-question survey. Age, race and ethnicity, education, income level, and history of breast cancer and biopsy were collected. Univariable and multivariable logistic regression analyses were used to identify independent factors associated with participants' acceptance of AI use.
Results: Of the 518 participants, the majority were between the ages of 40 and 69 years (377 of 518, 72.8%), at least college graduates (347 of 518, 67.0%), and non-Hispanic White (262 of 518, 50.6%). Participant-reported knowledge of AI was none or minimal in 76.5% (396 of 518). Stand-alone AI interpretation was accepted by 4.4% (23 of 518), whereas 71.0% (368 of 518) preferred AI to be used as a second reader. After an AI-reported abnormal screening result, 88.9% (319 of 359) would request radiologist review, versus 51.3% (184 of 359) who would request AI review after a radiologist recall (P < .001). In cases of discrepancy, a higher proportion of participants would undergo diagnostic examination for radiologist recalls than for AI recalls (94.2% [419 of 445] vs 92.6% [412 of 445]; P = .20). Higher education was associated with higher AI acceptance (odds ratio [OR] 2.05, 95% CI: 1.31, 3.20; P = .002). Concern for bias was higher in Hispanic versus non-Hispanic White participants (OR 3.32, 95% CI: 1.15, 9.61; P = .005) and in non-Hispanic Black versus non-Hispanic White participants (OR 4.31, 95% CI: 1.50, 12.39; P = .005).
Conclusion: AI use as a second reader of screening mammograms was accepted by participants. Participants' race and education level were significantly associated with AI acceptance.
Keywords: Breast, Mammography, Artificial Intelligence. Supplemental material is available for this article. Published under a CC BY 4.0 license.
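The key statistics here are odds ratios from multivariable logistic regression. A minimal sketch of how such ORs and 95% CIs can be derived with statsmodels; the data and column names below are synthetic illustrations, not the study's variables:

```python
# Sketch: multivariable logistic regression for AI acceptance, reporting
# odds ratios (OR) with 95% CIs. The DataFrame is synthetic; column
# names are illustrative only, not taken from the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 518
df = pd.DataFrame({
    "accepts_ai": rng.integers(0, 2, n),    # 1 = accepts AI use
    "college_grad": rng.integers(0, 2, n),  # 1 = college graduate or higher
    "age_40_69": rng.integers(0, 2, n),     # 1 = aged 40-69 years
})

# Fit the multivariable logit model on binary predictors.
model = smf.logit("accepts_ai ~ college_grad + age_40_69", data=df).fit(disp=0)

# Exponentiate coefficients and CI bounds to obtain ORs with 95% CIs.
ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p": model.pvalues,
})
print(or_table.round(3))
```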
Affiliation(s)
- B Bersu Ozcan
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Basak E Dogan
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Yin Xi
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX
- Emily E Knippa
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
2
Doo FX, Naranjo WG, Kapouranis T, Thor M, Chao M, Yang X, Marshall DC. Sex-Based Bias in Artificial Intelligence-Based Segmentation Models in Clinical Oncology. Clin Oncol (R Coll Radiol) 2025;39:103758. PMID: 39874747. PMCID: PMC11850178. DOI: 10.1016/j.clon.2025.103758.
Abstract
Artificial intelligence (AI) advancements have accelerated applications of imaging in clinical oncology, especially the safe and accurate delivery of state-of-the-art image-guided radiotherapy techniques. However, concerns are growing over the potential for sex-related bias and the omission of female-specific data in multi-organ segmentation algorithm development pipelines. Opportunities exist to address sex-specific data as a source of bias and to improve sex inclusion so that AI-based technologies are developed to be fair, generalizable, and equitably distributed. The goal of this review is to discuss the importance of biological sex for AI-based multi-organ image segmentation in routine clinical and radiation oncology; the sources of sex-based bias in data generation, model building, and implementation; and recommendations to ensure AI equity in this rapidly evolving domain.
Affiliation(s)
- F X Doo
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD, USA; University of Maryland-Institute for Health Computing (UM-IHC), University of Maryland, North Bethesda, MD, USA
- W G Naranjo
- Department of Medical Physics, Columbia University, New York, New York, USA; Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- T Kapouranis
- Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- M Thor
- Memorial Sloan Kettering Cancer Center, New York, New York, USA
- M Chao
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- X Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- D C Marshall
- Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
3
Faghani S, Moassefi M, Yadav U, Buadi FK, Kumar SK, Erickson BJ, Gonsalves WI, Baffour FI. Whole-body low-dose computed tomography in patients with newly diagnosed multiple myeloma predicts cytogenetic risk: a deep learning radiogenomics study. Skeletal Radiol 2025;54:267-273. PMID: 38937291. PMCID: PMC11652250. DOI: 10.1007/s00256-024-04733-0.
Abstract
OBJECTIVE: To develop a whole-body low-dose CT (WBLDCT) deep learning model and determine its accuracy in predicting the presence of cytogenetic abnormalities in multiple myeloma (MM).
MATERIALS AND METHODS: WBLDCT examinations of patients with MM performed within a year of diagnosis were included. Cytogenetic assessment of clonal plasma cells via fluorescence in situ hybridization (FISH) was used to risk-stratify patients as high risk (HR) or standard risk (SR). Presence of any of del(17p), t(14;16), t(4;14), or t(14;20) on FISH was defined as HR. The dataset was divided evenly into five groups (folds) at the individual patient level for model training. The mean and standard deviation (SD) of the area under the receiver operating characteristic curve (AUROC) across the folds were recorded.
RESULTS: One hundred fifty-one patients with MM were included in the study. The model performed best for t(4;14), with a mean (SD) AUROC of 0.874 (0.073); the lowest performance was for trisomies, with an AUROC of 0.717 (0.058). Observed 2- and 5-year survival rates for HR cytogenetics were 87% and 71%, respectively, compared with 91% and 79% for SR cytogenetics. Survival predictions by the WBLDCT deep learning model yielded 2- and 5-year survival rates for patients with HR cytogenetics of 87% and 71%, respectively, compared with 92% and 81% for SR cytogenetics.
CONCLUSION: A deep learning model trained on WBLDCT scans predicted the presence of cytogenetic abnormalities used for risk stratification in MM, with good to excellent classification of the various cytogenetic abnormalities.
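A minimal sketch of the evaluation scheme described here, patient-level 5-fold splitting with mean and SD of AUROC across folds, using scikit-learn; synthetic tabular features and a simple classifier stand in for the actual CT-based deep learning model:

```python
# Sketch: 5-fold cross-validated AUROC, reported as mean (SD) across
# folds. Synthetic features stand in for the WBLDCT deep learning model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=151, n_features=20, random_state=0)

aucs = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    probs = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], probs))

print(f"AUROC mean (SD): {np.mean(aucs):.3f} ({np.std(aucs, ddof=1):.3f})")
```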
Affiliation(s)
- Shahriar Faghani
- Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Mana Moassefi
- Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Udit Yadav
- Division of Hematology, Mayo Clinic, 13400 E. Shea Blvd, Scottsdale, AZ, 85259, USA
- Francis K Buadi
- Division of Hematology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Shaji K Kumar
- Division of Hematology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Bradley J Erickson
- Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Wilson I Gonsalves
- Division of Hematology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Francis I Baffour
- Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
4
Orzan F, Iancu ŞD, Dioşan L, Bálint Z. Textural analysis and artificial intelligence as decision support tools in the diagnosis of multiple sclerosis - a systematic review. Front Neurosci 2025;18:1457420. PMID: 39906910. PMCID: PMC11790655. DOI: 10.3389/fnins.2024.1457420.
Abstract
Introduction: Magnetic resonance imaging (MRI) is conventionally used for the detection and diagnosis of multiple sclerosis (MS), often complemented by lumbar puncture, a highly invasive procedure, to validate the diagnosis. MRI is also repeated periodically to monitor disease progression and treatment efficacy. Recent research has focused on the application of artificial intelligence (AI) and radiomics in medical image processing, diagnosis, and treatment planning.
Methods: A review of the current literature was conducted, analyzing the use of AI models and texture analysis for MS lesion segmentation and classification. The study emphasizes commonly used models, including U-Net, support vector machine, random forest, and k-nearest neighbors, alongside their evaluation metrics.
Results: The analysis revealed a fragmented research landscape, with significant variation in model architectures and performance. Evaluation metrics such as accuracy, Dice score, and sensitivity are commonly employed, and some models demonstrate robustness across multi-center datasets. However, most studies lack validation in clinical scenarios.
Discussion: The absence of consensus on the optimal model for MS lesion segmentation highlights the need for standardized methodologies and clinical validation. Future research should prioritize clinical trials to establish the real-world applicability of AI-driven decision support tools. This review provides a comprehensive overview of contemporary advances in AI and radiomics for analyzing and monitoring emerging MS lesions on MRI.
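The Dice score named among the common evaluation metrics is a simple overlap measure between a predicted and a reference lesion mask. A minimal NumPy sketch; the toy masks are illustrative:

```python
# Dice similarity coefficient between two binary lesion masks:
# Dice = 2*|A intersect B| / (|A| + |B|); 1.0 = perfect overlap.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 2D masks standing in for MS lesion segmentations.
pred = np.zeros((64, 64), dtype=bool); pred[10:20, 10:20] = True
truth = np.zeros((64, 64), dtype=bool); truth[12:22, 10:20] = True
print(f"Dice: {dice_score(pred, truth):.3f}")  # 0.800 for these masks
```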
Affiliation(s)
- Filip Orzan
- Department of Biomedical Physics, Faculty of Physics, Babeş-Bolyai University, Cluj-Napoca, Romania
- Ştefania D. Iancu
- Department of Biomedical Physics, Faculty of Physics, Babeş-Bolyai University, Cluj-Napoca, Romania
- Laura Dioşan
- Faculty of Mathematics and Computer Science, Babeş-Bolyai University, Cluj-Napoca, Romania
- Zoltán Bálint
- Department of Biomedical Physics, Faculty of Physics, Babeş-Bolyai University, Cluj-Napoca, Romania
5
Mastrodicasa D, van Assen M, Huisman M, Leiner T, Williamson EE, Nicol ED, Allen BD, Saba L, Vliegenthart R, Hanneman K, Atzen S. Use of AI in Cardiac CT and MRI: A Scientific Statement from the ESCR, EuSoMII, NASCI, SCCT, SCMR, SIIM, and RSNA. Radiology 2025;314:e240516. PMID: 39873607. PMCID: PMC11783164. DOI: 10.1148/radiol.240516.
Abstract
Artificial intelligence (AI) offers promising solutions for many steps of the cardiac imaging workflow, from patient and test selection through image acquisition, reconstruction, and interpretation, extending to prognostication and reporting. Despite the development of many cardiac imaging AI algorithms, AI tools are at various stages of development and face challenges for clinical implementation. This scientific statement, endorsed by several societies in the field, provides an overview of the current landscape and challenges of AI applications in cardiac CT and MRI. Each section is organized into questions and statements that address key steps of the cardiac imaging workflow, including ethical, legal, and environmental sustainability considerations. A technology readiness level range of 1 to 9 summarizes the maturity level of AI tools and reflects the progression from preliminary research to clinical implementation. This document aims to bridge the gap between burgeoning research developments and limited clinical applications of AI tools in cardiac CT and MRI.
Affiliation(s)
- From the Department of Radiology, University of Washington, UW Medical Center-Montlake, Seattle, Wash (D.M.); Department of Radiology, OncoRad/Tumor Imaging Metrics Core (TIMC), University of Washington, Seattle, Wash (D.M.); Department of Radiology and Imaging Sciences, Emory University, Atlanta, Ga (M.v.A.); Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands (M.H.); Department of Radiology, Mayo Clinic, Rochester, Minn (T.L., E.E.W.); Departments of Cardiology and Radiology, Royal Brompton Hospital, London, United Kingdom (E.D.N.); School of Biomedical Engineering and Imaging Sciences, King's College, London, United Kingdom (E.D.N.); Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Ill (B.D.A.); Department of Radiology, University of Cagliari, Cagliari, Italy (L.S.); Department of Radiology, University of Groningen, University Medical Center Groningen, Hanzeplein 1 Postbus 30 001, 9700 RB Groningen, the Netherlands (R.V.); Department of Medical Imaging, University Medical Imaging Toronto, University of Toronto, Toronto, Ontario, Canada (K.H.); and Toronto General Hospital Research Institute, University Health Network, University of Toronto, Toronto, Ontario, Canada (K.H.)
6
Mayfield JD, Murtagh R, Ciotti J, Robertson D, El Naqa I. Time-Dependent Deep Learning Prediction of Multiple Sclerosis Disability. J Imaging Inform Med 2024;37:3231-3249. PMID: 38871944. PMCID: PMC11612123. DOI: 10.1007/s10278-024-01031-y.
Abstract
The majority of deep learning models in medical image analysis concentrate on single-timepoint snapshots, such as the identification of current pathology on a given image or volume. This contrasts with diagnostic methodology in radiology, where presumed pathologic findings are correlated with prior studies and subsequent changes over time. For multiple sclerosis (MS), the current literature describes various forms of lesion segmentation, with few studies analyzing disability progression over time. For longitudinal, time-dependent analysis, we propose a combinatorial analysis of a video vision transformer (ViViT) benchmarked against a traditional recurrent architecture, the convolutional neural network-long short-term memory (CNN-LSTM), and a hybrid vision transformer-LSTM (ViT-LSTM) to predict long-term disability based on the Expanded Disability Status Scale (EDSS). The patient cohort was procured from a two-site institution and comprised multisequence, contrast-enhanced MRIs of the cervical spine from 703 patients between 2002 and 2023. Following a competitive performance analysis, a VGG16-based CNN-LSTM was compared with the ViViT, with an ablation analysis to determine the time dependency of the models. On an 80:20 hold-out test split, the VGG16-LSTM predicted trinary classification of EDSS score at 6 years with an AUC of 0.74, versus 0.84 for the ViViT (P < .001 per 5 × 2 cross-validation F-test). However, the VGG16-LSTM outperformed the ViViT when only patients with 2 years of MRIs were included (n = 94; AUC 0.75 vs 0.72, respectively). Exact EDSS classification was investigated for both models using both classification and regression strategies but showed collectively worse performance. Our experimental results demonstrate the ability of time-dependent deep learning models to predict disability in MS using trinary stratification of disability, mimicking clinical practice. Further work includes external validation and subsequent observational clinical trials.
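The study compares models with a 5 × 2 cross-validation F-test. A minimal sketch of that statistic (Alpaydin's combined 5×2cv F-test), given per-fold score differences; the numbers below are synthetic stand-ins for per-fold AUC differences, not the study's values:

```python
# Alpaydin's combined 5x2cv F-test: compares two models using the
# per-fold score differences from 5 repetitions of 2-fold CV.
# Under the null of equal performance, f follows F(10, 5).
import numpy as np
from scipy import stats

# diffs[i, j] = score(model A) - score(model B) in fold j of repetition i.
# Synthetic values standing in for per-fold AUC differences.
diffs = np.array([
    [0.10, 0.08],
    [0.09, 0.12],
    [0.07, 0.11],
    [0.10, 0.09],
    [0.12, 0.08],
])

p_bar = diffs.mean(axis=1)                        # mean difference per repetition
s2 = ((diffs - p_bar[:, None]) ** 2).sum(axis=1)  # per-repetition variance estimate
f = (diffs ** 2).sum() / (2.0 * s2.sum())         # combined F statistic
p_value = stats.f.sf(f, 10, 5)                    # upper tail of F(10, 5)
print(f"F = {f:.2f}, p = {p_value:.4f}")
```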
Affiliation(s)
- John D Mayfield
- USF Health Department of Radiology, 2 Tampa General Circle, STC 6103, Tampa, FL, 33612, USA
- Ryan Murtagh
- USF Health Department of Radiology, 2 Tampa General Circle, STC 6103, Tampa, FL, 33612, USA
- John Ciotti
- Department of Neurology, University of South Florida, Morsani College of Medicine, USF Multiple Sclerosis Center, 13330 USF Laurel Drive, Tampa, FL, 33612, USA
- Derrick Robertson
- Department of Neurology, James A. Haley VA Medical Center, 13000 Bruce B Downs Blvd, Tampa, FL, 33612, USA
- Issam El Naqa
- University of South Florida, College of Engineering, 12902 USF Magnolia Drive, Tampa, FL, 33612, USA
- H. Lee Moffitt Cancer Center, Department of Machine Learning, Tampa, FL, 33612, USA
7
Huisman M. When AUC-ROC and accuracy are not accurate: what everyone needs to know about evaluating artificial intelligence in radiology. Eur Radiol 2024;34:7892-7894. PMID: 38913248. DOI: 10.1007/s00330-024-10859-5.
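No abstract is indexed for this editorial, but its central caution is easy to demonstrate: on imbalanced data, a trivial classifier can post high accuracy while being clinically useless. A minimal synthetic illustration (not drawn from the editorial itself):

```python
# Why plain accuracy misleads: a classifier that always predicts
# "no disease" scores ~95% accuracy at 5% prevalence yet finds no cases.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.05).astype(int)  # 5% disease prevalence
y_pred = np.zeros_like(y_true)                    # always predict negative

print(f"Accuracy:    {accuracy_score(y_true, y_pred):.3f}")                     # ~0.95
print(f"Sensitivity: {recall_score(y_true, y_pred, zero_division=0):.3f}")      # 0.0
```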
Affiliation(s)
- Merel Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
8
Jung HK, Kim K, Park JE, Kim N. Image-Based Generative Artificial Intelligence in Radiology: Comprehensive Updates. Korean J Radiol 2024;25:959-981. PMID: 39473088. PMCID: PMC11524689. DOI: 10.3348/kjr.2024.0392.
Abstract
Generative artificial intelligence (AI) has been applied to images for image quality enhancement, domain transfer, and augmentation of training data for AI modeling in various medical fields. Image-generative AI can produce large amounts of unannotated imaging data, which facilitates multiple downstream deep learning tasks. However, the methods for evaluating generated images and their clinical utility have not been thoroughly reviewed. This article summarizes commonly used generative adversarial networks and diffusion models, and outlines their utility in clinical tasks in radiology, such as direct image utilization, lesion detection, segmentation, and diagnosis. It aims to guide readers in radiology practice and research using image-generative AI by 1) reviewing the basic theory of image-generative AI, 2) discussing the methods used to evaluate generated images, 3) outlining the clinical and research utility of generated images, and 4) discussing the issue of hallucinations.
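Among the evaluation methods for generated images that such reviews cover, the Fréchet inception distance (FID) is a common choice. A minimal sketch using torchmetrics; the library choice and the random tensors are illustrative assumptions, not the article's own pipeline:

```python
# Sketch: Frechet inception distance (FID) between "real" and generated
# image batches, a common metric for evaluating image-generative AI.
# Random uint8 tensors stand in for actual radiology images.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=64)  # small Inception feature layer for speed

real = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

fid.update(real, real=True)   # accumulate feature statistics of real images
fid.update(fake, real=False)  # accumulate feature statistics of generated images
print(f"FID: {fid.compute():.2f}")  # lower = distributions are more similar
```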
Affiliation(s)
- Ha Kyung Jung
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Kiduk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
9
Sourlos N, Vliegenthart R, Santinha J, Klontzas ME, Cuocolo R, Huisman M, van Ooijen P. Recommendations for the creation of benchmark datasets for reproducible artificial intelligence in radiology. Insights Imaging 2024;15:248. PMID: 39400639. PMCID: PMC11473745. DOI: 10.1186/s13244-024-01833-2.
Abstract
Various healthcare domains, including radiology, have witnessed successful preliminary implementation of artificial intelligence (AI) solutions, though limited generalizability hinders their widespread adoption. Currently, most research groups and industry have limited access to the data needed for external validation studies. The creation of accessible benchmark datasets to validate such solutions represents a critical step toward generalizability, and an array of aspects ranging from preprocessing to regulatory issues and biostatistical principles comes into play. In this article, the authors provide recommendations for the creation of benchmark datasets in radiology, explain current limitations in this realm, and explore potential new approaches.
CLINICAL RELEVANCE STATEMENT: Benchmark datasets, by facilitating validation of AI software performance, can contribute to the adoption of AI in clinical practice.
KEY POINTS: Benchmark datasets are essential for the validation of AI software performance. Factors like image quality and representativeness of cases should be considered. Benchmark datasets can aid adoption by increasing the trustworthiness and robustness of AI.
Affiliation(s)
- Nikos Sourlos
- Department of Radiology, University Medical Center of Groningen, Groningen, The Netherlands
- DataScience Center in Health, University Medical Center Groningen, Groningen, The Netherlands
- Rozemarijn Vliegenthart
- Department of Radiology, University Medical Center of Groningen, Groningen, The Netherlands
- DataScience Center in Health, University Medical Center Groningen, Groningen, The Netherlands
- Joao Santinha
- Digital Surgery LAB, Champalimaud Foundation, Champalimaud Clinical Centre, Lisbon, Portugal
- Michail E Klontzas
- Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Greece
- Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
- Renato Cuocolo
- Department of Medicine, Surgery, and Dentistry, University of Salerno, Baronissi, Italy
- Merel Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Peter van Ooijen
- DataScience Center in Health, University Medical Center Groningen, Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, The Netherlands
10
Sunnetci KM, Kaba E, Celiker FB, Alkan A. MR Image Fusion-Based Parotid Gland Tumor Detection. J Imaging Inform Med 2024 (online ahead of print). PMID: 39327379. DOI: 10.1007/s10278-024-01137-3.
Abstract
The differentiation of benign and malignant parotid gland tumors is of major significance, as it directly affects the treatment process and is vital for early, accurate diagnosis and for treatment planning. As with other diseases, differentiating tumor types involves several challenging, time-consuming, and laborious processes. In this study, magnetic resonance (MR) images of 114 patients with parotid gland tumors are used for training and testing via image fusion (IF). After the apparent diffusion coefficient (ADC), contrast-enhanced T1-weighted (T1C-w), and T2-weighted (T2-w) sequences are cropped, IF (ADC, T1C-w), IF (ADC, T2-w), IF (T1C-w, T2-w), and IF (ADC, T1C-w, T2-w) datasets are obtained for different combinations of these sequences using a two-dimensional discrete wavelet transform (DWT)-based fusion technique. For each of these four datasets, ResNet18, GoogLeNet, and DenseNet-201 architectures are trained separately, yielding 12 models. A graphical user interface (GUI) application containing the most successful trained architecture for each dataset was also designed to support users; it not only fuses different sequence images but also predicts whether the label of the fused image is benign or malignant. The results show that the DenseNet-201 models for IF (ADC, T1C-w), IF (ADC, T2-w), and IF (ADC, T1C-w, T2-w) outperform the others, with accuracies of 95.45%, 95.96%, and 92.93%, respectively. The most successful model for IF (T1C-w, T2-w) is ResNet18, with an accuracy of 94.95%.
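A minimal sketch of the two-dimensional DWT-based fusion step described here, using PyWavelets on two synthetic grayscale arrays. The fusion rule below, averaging approximation bands and keeping the larger-magnitude detail coefficients, is one common choice and an assumption, not necessarily the paper's exact rule:

```python
# Sketch: 2D DWT-based fusion of two co-registered MR sequences.
# Assumed fusion rule: average approximation bands, take the
# larger-magnitude detail coefficients, then inverse transform.
import numpy as np
import pywt

def fuse_dwt2(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)

    cA = (cA_a + cA_b) / 2.0  # average low-frequency content
    # Keep the detail coefficient with the larger magnitude (preserves edges).
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b))

    return pywt.idwt2((cA, details), wavelet)

# Synthetic stand-ins for cropped, co-registered ADC and T1C-w slices.
adc = np.random.rand(128, 128)
t1c = np.random.rand(128, 128)
fused = fuse_dwt2(adc, t1c)
print(fused.shape)  # (128, 128)
```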
Affiliation(s)
- Kubilay Muhammed Sunnetci
- Department of Electrical and Electronics Engineering, Osmaniye Korkut Ata University, Osmaniye, 80000, Turkey
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, 46050, Turkey
- Esat Kaba
- Department of Radiology, Recep Tayyip Erdogan University, Rize, 53100, Turkey
- Ahmet Alkan
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, 46050, Turkey
11
Mäenpää SM, Korja M. Diagnostic test accuracy of externally validated convolutional neural network (CNN) artificial intelligence (AI) models for emergency head CT scans - A systematic review. Int J Med Inform 2024;189:105523. PMID: 38901270. DOI: 10.1016/j.ijmedinf.2024.105523.
Abstract
BACKGROUND: The surge in emergency head CT imaging and advancements in artificial intelligence (AI), especially deep learning (DL) and convolutional neural networks (CNN), have accelerated the development of computer-aided diagnosis (CADx) for emergency imaging. External validation assesses model generalizability, providing preliminary evidence of clinical potential.
OBJECTIVES: This study systematically reviews externally validated CNN-CADx models for emergency head CT scans, critically appraises their diagnostic test accuracy (DTA), and assesses adherence to reporting guidelines.
METHODS: Studies comparing CNN-CADx model performance to a reference standard were eligible. The review was registered in PROSPERO (CRD42023411641) and conducted on Medline, Embase, EBM Reviews, and Web of Science following the PRISMA-DTA guideline. DTA and reporting data were systematically extracted and appraised using standardised checklists (STARD, CHARMS, CLAIM, TRIPOD, PROBAST, QUADAS-2).
RESULTS: Six of 5636 identified studies were eligible. The most common target condition was intracranial haemorrhage (ICH), and the intended workflow roles were auxiliary to experts. Owing to methodological and clinical between-study variation, meta-analysis was inappropriate. Scan-level sensitivity exceeded 90% in five of six studies, while specificity ranged from 58.0% to 97.7%. The SROC 95% predictive region was markedly broader than the confidence region, extending above 50% sensitivity and 20% specificity. All studies had unclear or high risk of bias and concern for applicability (QUADAS-2, PROBAST), and reporting adherence was below 50% for 20 of 32 TRIPOD items.
CONCLUSION: Only 0.1% of identified studies (6 of 5636) met the eligibility criteria. The evidence on the DTA of CNN-CADx models for emergency head CT scans remains limited in the scope of this review, as the reviewed studies were scarce, unsuitable for meta-analysis, and undermined by inadequate methodological conduct and reporting. Properly conducted external validation remains a preliminary step in evaluating the clinical potential of AI-CADx models; prospective, pragmatic clinical validation in comparative trials remains most crucial. Future AI-CADx research should be methodologically standardized and reported in a clinically meaningful way to avoid research waste.
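The scan-level sensitivity and specificity figures extracted in this review derive from simple confusion-matrix counts. A minimal sketch of how such values are computed; the counts below are synthetic illustrations:

```python
# Sensitivity and specificity from a 2x2 confusion matrix, the
# scan-level quantities appraised in this review. Counts are synthetic.
tp, fn = 90, 10    # ICH-positive scans: detected / missed
tn, fp = 580, 320  # ICH-negative scans: correctly cleared / falsely flagged

sensitivity = tp / (tp + fn)  # 0.90 -> "sensitivity exceeded 90%"
specificity = tn / (tn + fp)  # ~0.64, within the reported 58.0-97.7% range

print(f"Sensitivity: {sensitivity:.1%}, Specificity: {specificity:.1%}")
```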
Affiliation(s)
- Saana M Mäenpää
- Department of Neurosurgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Miikka Korja
- Department of Neurosurgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
12
Linguraru MG, Bakas S, Aboian M, Chang PD, Flanders AE, Kalpathy-Cramer J, Kitamura FC, Lungren MP, Mongan J, Prevedello LM, Summers RM, Wu CC, Adewole M, Kahn CE. Clinical, Cultural, Computational, and Regulatory Considerations to Deploy AI in Radiology: Perspectives of RSNA and MICCAI Experts. Radiol Artif Intell 2024;6:e240225. PMID: 38984986. PMCID: PMC11294958. DOI: 10.1148/ryai.240225.
Abstract
The Radiological Society of North America (RSNA) and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have led a series of joint panels and seminars focused on the present impact and future directions of artificial intelligence (AI) in radiology. These conversations have collected viewpoints from multidisciplinary experts in radiology, medical imaging, and machine learning on the current clinical penetration of AI technology in radiology and how it is affected by trust, reproducibility, explainability, and accountability. The collective points, both practical and philosophical, define the cultural changes needed for radiologists and AI scientists to work together and describe the challenges ahead for AI technologies to meet broad approval. This article presents the perspectives of experts from MICCAI and RSNA on the clinical, cultural, computational, and regulatory considerations, coupled with recommended reading materials, essential to adopting AI technology successfully in radiology and, more generally, in clinical practice. The report emphasizes the importance of collaboration to improve clinical deployment, highlights the need to integrate clinical and medical imaging data, and introduces strategies to ensure smooth and incentivized integration. Keywords: Adults and Pediatrics, Computer Applications-General (Informatics), Diagnosis, Prognosis. © RSNA, 2024.
Affiliation(s)
- Marius George Linguraru
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation,
Children’s National Hospital, Washington, DC (M.G.L.); Divisions of
Radiology and Pediatrics, George Washington University School of Medicine and
Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology,
Department of Pathology & Laboratory Medicine, School of Medicine,
Indiana University, Indianapolis, Ind (S.B.); Department of Radiology,
Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department
of Radiological Sciences, University of California Irvine, Irvine, Calif
(P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa
(A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical
Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI,
Diagnósticos da América SA (DasaInova), São Paulo, Brazil
(F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São
Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass
(M.P.L.); Department of Radiology and Biomedical Imaging and Center for
Intelligent Imaging, University of California San Francisco, San Francisco,
Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical
Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division
of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston,
Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos
College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology,
University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA
19104-6243 (C.E.K.)
| | - Spyridon Bakas
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation,
Children’s National Hospital, Washington, DC (M.G.L.); Divisions of
Radiology and Pediatrics, George Washington University School of Medicine and
Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology,
Department of Pathology & Laboratory Medicine, School of Medicine,
Indiana University, Indianapolis, Ind (S.B.); Department of Radiology,
Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department
of Radiological Sciences, University of California Irvine, Irvine, Calif
(P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa
(A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical
Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI,
Diagnósticos da América SA (DasaInova), São Paulo, Brazil
(F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São
Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass
(M.P.L.); Department of Radiology and Biomedical Imaging and Center for
Intelligent Imaging, University of California San Francisco, San Francisco,
Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical
Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division
of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston,
Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos
College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology,
University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA
19104-6243 (C.E.K.)
| | - Mariam Aboian
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation,
Children’s National Hospital, Washington, DC (M.G.L.); Divisions of
Radiology and Pediatrics, George Washington University School of Medicine and
Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology,
Department of Pathology & Laboratory Medicine, School of Medicine,
Indiana University, Indianapolis, Ind (S.B.); Department of Radiology,
Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department
of Radiological Sciences, University of California Irvine, Irvine, Calif
(P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa
(A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical
Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI,
Diagnósticos da América SA (DasaInova), São Paulo, Brazil
(F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São
Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass
(M.P.L.); Department of Radiology and Biomedical Imaging and Center for
Intelligent Imaging, University of California San Francisco, San Francisco,
Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical
Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division
of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston,
Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos
College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology,
University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA
19104-6243 (C.E.K.)
| | - Peter D. Chang
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation,
Children’s National Hospital, Washington, DC (M.G.L.); Divisions of
Radiology and Pediatrics, George Washington University School of Medicine and
Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology,
Department of Pathology & Laboratory Medicine, School of Medicine,
Indiana University, Indianapolis, Ind (S.B.); Department of Radiology,
Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department
of Radiological Sciences, University of California Irvine, Irvine, Calif
(P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa
(A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical
Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI,
Diagnósticos da América SA (DasaInova), São Paulo, Brazil
(F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São
Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass
(M.P.L.); Department of Radiology and Biomedical Imaging and Center for
Intelligent Imaging, University of California San Francisco, San Francisco,
Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical
Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division
of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston,
Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos
College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology,
University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA
19104-6243 (C.E.K.)
| | - Adam E. Flanders
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation,
Children’s National Hospital, Washington, DC (M.G.L.); Divisions of
Radiology and Pediatrics, George Washington University School of Medicine and
Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology,
Department of Pathology & Laboratory Medicine, School of Medicine,
Indiana University, Indianapolis, Ind (S.B.); Department of Radiology,
Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department
of Radiological Sciences, University of California Irvine, Irvine, Calif
(P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa
(A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical
Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI,
Diagnósticos da América SA (DasaInova), São Paulo, Brazil
(F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São
Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass
(M.P.L.); Department of Radiology and Biomedical Imaging and Center for
Intelligent Imaging, University of California San Francisco, San Francisco,
Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical
Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division
of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston,
Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos
College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology,
University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA
19104-6243 (C.E.K.)
| | - Jayashree Kalpathy-Cramer
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation,
Children’s National Hospital, Washington, DC (M.G.L.); Divisions of
Radiology and Pediatrics, George Washington University School of Medicine and
Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology,
Department of Pathology & Laboratory Medicine, School of Medicine,
Indiana University, Indianapolis, Ind (S.B.); Department of Radiology,
Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department
of Radiological Sciences, University of California Irvine, Irvine, Calif
(P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa
(A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical
Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI,
Diagnósticos da América SA (DasaInova), São Paulo, Brazil
(F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São
Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass
(M.P.L.); Department of Radiology and Biomedical Imaging and Center for
Intelligent Imaging, University of California San Francisco, San Francisco,
Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical
Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division
of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston,
Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos
College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology,
University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA
19104-6243 (C.E.K.)
| | - Felipe C. Kitamura
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation,
Children’s National Hospital, Washington, DC (M.G.L.); Divisions of
Radiology and Pediatrics, George Washington University School of Medicine and
Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology,
Department of Pathology & Laboratory Medicine, School of Medicine,
Indiana University, Indianapolis, Ind (S.B.); Department of Radiology,
Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department
of Radiological Sciences, University of California Irvine, Irvine, Calif
(P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa
(A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical
Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI,
Diagnósticos da América SA (DasaInova), São Paulo, Brazil
(F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São
Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass
(M.P.L.); Department of Radiology and Biomedical Imaging and Center for
Intelligent Imaging, University of California San Francisco, San Francisco,
Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical
Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division
of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston,
Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos
College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology,
University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA
19104-6243 (C.E.K.)
| | - Matthew P. Lungren
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation,
Children’s National Hospital, Washington, DC (M.G.L.); Divisions of
Radiology and Pediatrics, George Washington University School of Medicine and
Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology,
Department of Pathology & Laboratory Medicine, School of Medicine,
Indiana University, Indianapolis, Ind (S.B.); Department of Radiology,
Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department
of Radiological Sciences, University of California Irvine, Irvine, Calif
(P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa
(A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical
Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI,
Diagnósticos da América SA (DasaInova), São Paulo, Brazil
(F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São
Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass
(M.P.L.); Department of Radiology and Biomedical Imaging and Center for
Intelligent Imaging, University of California San Francisco, San Francisco,
Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical
Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences,
National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division
of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston,
Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos
College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology,
University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA
19104-6243 (C.E.K.)
| | - John Mongan
| | - Luciano M. Prevedello
| | - Ronald M. Summers
| | - Carol C. Wu
| | - Maruf Adewole
| | - Charles E. Kahn
| |
Collapse
|
13
|
Codipilly DC, Faghani S, Hagan C, Lewis J, Erickson BJ, Iyer PG. The Evolving Role of Artificial Intelligence in Gastrointestinal Histopathology: An Update. Clin Gastroenterol Hepatol 2024; 22:1170-1180. [PMID: 38154727 DOI: 10.1016/j.cgh.2023.11.044] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 11/20/2023] [Accepted: 11/21/2023] [Indexed: 12/30/2023]
Abstract
Significant advances in artificial intelligence (AI) over the past decade may lead to dramatic effects on clinical practice. Digitized histology represents an area ripe for AI implementation. We describe several current needs within gastrointestinal histopathology and outline, using currently studied models, how AI can potentially address them. We also highlight pitfalls as AI makes inroads into clinical practice.
Collapse
Affiliation(s)
- D Chamil Codipilly
- Barrett's Esophagus Unit, Division of Gastroenterology and Hepatology, Mayo Clinic Rochester, Rochester, Minnesota
| | - Shahriar Faghani
- Mayo Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | - Catherine Hagan
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
| | - Jason Lewis
- Department of Pathology, Mayo Clinic, Jacksonville, Florida
| | - Bradley J Erickson
- Mayo Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | - Prasad G Iyer
- Barrett's Esophagus Unit, Division of Gastroenterology and Hepatology, Mayo Clinic Rochester, Rochester, Minnesota.
| |
Collapse
|
14
|
Moassefi M, Faghani S, Khanipour Roshan S, Conte GM, Rassoulinejad Mousavi SM, Kaufmann TJ, Erickson BJ. Exploring the Impact of 3D Fast Spin Echo and Inversion Recovery Gradient Echo Sequences Magnetic Resonance Imaging Acquisition on Automated Brain Tumor Segmentation. Mayo Clin Proc Digit Health 2024; 2:231-240. [PMID: 40207177 PMCID: PMC11975840 DOI: 10.1016/j.mcpdig.2024.03.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 04/11/2025]
Abstract
Objective To conduct a study comparing the performance of automated segmentation techniques using 2 different contrast-enhanced T1-weighted (CET1) magnetic resonance imaging (MRI) acquisition protocols. Patients and Methods We collected 100 preoperative glioblastoma (GBM) MRIs consisting of 50 inversion recovery gradient echo (IR-GRE) and 50 3-dimensional fast spin echo (3D-FSE) image sets. Gold-standard tumor segmentation masks were created based on the expert opinion of a neuroradiologist. Cases were randomly divided into training and test sets. We used the no new UNet (nnUNet) architecture pretrained on a 501-image public data set containing IR-GRE sequence image sets, followed by 2 training rounds with the IR-GRE and 3D-FSE images, respectively. For each patient in the IR-GRE and 3D-FSE test sets, we had 2 prediction masks, one from the model fine-tuned with the IR-GRE training set and one from the model fine-tuned with the 3D-FSE training set. The Dice similarity coefficients (DSCs) of the 2 sets of results for each case in the test sets were compared using Wilcoxon tests. Results Models trained on 3D-FSE images outperformed IR-GRE models in lesion segmentation, with mean DSC differences of 0.057 and 0.022 in the respective test sets. For the 3D-FSE and IR-GRE test sets, the calculated P values comparing DSCs from the 2 models were .02 and .61, respectively. Conclusion Including 3D-FSE MRI in the training data set improves segmentation performance when segmenting 3D-FSE images.
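As a concrete illustration of the evaluation described above, the sketch below computes a per-case Dice similarity coefficient for a pair of binary masks and compares two sets of per-case DSCs with a paired Wilcoxon test, assuming NumPy and SciPy; the masks and scores are invented placeholders, not the study's data.

import numpy as np
from scipy.stats import wilcoxon

def dice(pred, truth, eps=1e-8):
    # DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

# One hypothetical case: overlapping square masks
pred = np.zeros((64, 64), dtype=int); pred[20:40, 20:40] = 1
truth = np.zeros((64, 64), dtype=int); truth[22:42, 22:42] = 1
print(f"example per-case DSC = {dice(pred, truth):.3f}")

# Hypothetical per-case DSCs for the two fine-tuned models on one test set
dsc_irgre_model = np.array([0.82, 0.78, 0.91, 0.85, 0.74])
dsc_3dfse_model = np.array([0.88, 0.83, 0.92, 0.90, 0.80])

stat, p = wilcoxon(dsc_irgre_model, dsc_3dfse_model)  # paired, nonparametric
print(f"mean DSC difference = {np.mean(dsc_3dfse_model - dsc_irgre_model):.3f}, P = {p:.3f}")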
Collapse
Affiliation(s)
- Mana Moassefi
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| | - Shahriar Faghani
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| | | | - Gian Marco Conte
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| | - Seyed Moein Rassoulinejad Mousavi
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| | | | - Bradley J. Erickson
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| |
Collapse
|
15
|
Guo Z, Zhao M, Liu Z, Zheng J, Gong Y, Huang L, Xue J, Zhou X, Li S. Feasibility of ultrasound radiomics based models for classification of liver fibrosis due to Schistosoma japonicum infection. PLoS Negl Trop Dis 2024; 18:e0012235. [PMID: 38870200 PMCID: PMC11207143 DOI: 10.1371/journal.pntd.0012235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2023] [Revised: 06/26/2024] [Accepted: 05/22/2024] [Indexed: 06/15/2024] Open
Abstract
BACKGROUND Schistosomiasis japonica represents a significant public health concern in South Asia. There is an urgent need to optimize existing schistosomiasis diagnostic techniques. This study aims to develop models for the different stages of liver fibrosis caused by Schistosoma infection utilizing ultrasound radiomics and machine learning techniques. METHODS From 2018 to 2022, we retrospectively collected data on 1,531 patients and 5,671 B-mode ultrasound images from the Second People's Hospital of Duchang City, Jiangxi Province, China. The datasets were screened based on inclusion and exclusion criteria suitable for radiomics models. Liver fibrosis due to Schistosoma infection (LFSI) was categorized into four stages: grade 0, grade 1, grade 2, and grade 3. The data were divided into six binary classification problems, such as group 1 (grade 0 vs. grade 1) and group 2 (grade 0 vs. grade 2). Key radiomic features were extracted using Pyradiomics, the Mann-Whitney U test, and the Least Absolute Shrinkage and Selection Operator (LASSO). Machine learning models were constructed using support vector machines (SVMs), and the contribution of different features in the models was described by applying Shapley Additive Explanations (SHAP). RESULTS This study ultimately included 1,388 patients and their corresponding images. A total of 851 radiomics features were extracted for each binary classification problem. Following feature selection, 18 to 76 features were retained for each group. The area under the receiver operating characteristic curve (AUC) for the validation cohorts was 0.834 (95% CI: 0.779-0.885) for LFSI grade 0 vs. grade 1, 0.771 (95% CI: 0.713-0.835) for LFSI grade 1 vs. grade 2, and 0.830 (95% CI: 0.762-0.885) for LFSI grade 2 vs. grade 3. CONCLUSION Machine learning models based on ultrasound radiomics are feasible for classifying different stages of liver fibrosis caused by Schistosoma infection.
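A minimal sketch of this style of pipeline (Mann-Whitney U screening, LASSO-based selection, then an SVM evaluated by AUC), assuming scikit-learn and SciPy; the feature matrix, labels, and thresholds below are synthetic stand-ins, not the study's data or exact settings.

import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 851))                 # 851 radiomics features per case
signal = X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=200)
y = (signal > 0).astype(int)                    # one synthetic binary grading problem

# 1) Mann-Whitney U screening: keep features differing between classes
keep = [j for j in range(X.shape[1])
        if mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < 0.05]

# 2) LASSO: keep screened features with nonzero coefficients
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X[:, keep]), y)
selected = [keep[j] for j in np.flatnonzero(lasso.coef_)]

# 3) SVM evaluated with cross-validated AUC
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
auc = cross_val_score(clf, X[:, selected], y, cv=5, scoring="roc_auc")
print(f"AUC = {auc.mean():.3f} +/- {auc.std():.3f}")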
Collapse
Affiliation(s)
- Zhaoyu Guo
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research); National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases; NHC Key Laboratory of Parasite and Vector Biology; WHO Collaborating Centre for Tropical Diseases; National Center for International Research on Tropical Diseases, Shanghai, China
| | - Miaomiao Zhao
- Department of Ultrasound, The Yancheng Clinical College of Xuzhou Medical University, The First People’s Hospital of Yancheng, Yancheng, Jiangsu, China
| | - Zhenhua Liu
- Department of Ultrasound, The Yancheng Clinical College of Xuzhou Medical University, The First People’s Hospital of Yancheng, Yancheng, Jiangsu, China
| | - Jinxin Zheng
- School of Global Health, Chinese Center for Tropical Diseases Research, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Yanfeng Gong
- School of Public Health, Fudan University, Shanghai, China
| | - Lulu Huang
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research); National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases; NHC Key Laboratory of Parasite and Vector Biology; WHO Collaborating Centre for Tropical Diseases; National Center for International Research on Tropical Diseases, Shanghai, China
| | - Jingbo Xue
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research); National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases; NHC Key Laboratory of Parasite and Vector Biology; WHO Collaborating Centre for Tropical Diseases; National Center for International Research on Tropical Diseases, Shanghai, China
- School of Global Health, Chinese Center for Tropical Diseases Research, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Xiaonong Zhou
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research); National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases; NHC Key Laboratory of Parasite and Vector Biology; WHO Collaborating Centre for Tropical Diseases; National Center for International Research on Tropical Diseases, Shanghai, China
- School of Global Health, Chinese Center for Tropical Diseases Research, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Shizhu Li
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research); National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases; NHC Key Laboratory of Parasite and Vector Biology; WHO Collaborating Centre for Tropical Diseases; National Center for International Research on Tropical Diseases, Shanghai, China
- School of Global Health, Chinese Center for Tropical Diseases Research, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| |
Collapse
|
16
|
Tejani AS, Ng YS, Xi Y, Rayan JC. Understanding and Mitigating Bias in Imaging Artificial Intelligence. Radiographics 2024; 44:e230067. [PMID: 38635456 DOI: 10.1148/rg.230067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/20/2024]
Abstract
Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference to a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. In contrast, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. While inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment, such as automation bias, a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI. Published under a CC BY 4.0 license. Test Your Knowledge questions for this article are available in the supplemental material. See the invited commentary by Rouzrokh and Erickson in this issue.
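One practical check for the differing subgroup performance described here is a post-deployment subgroup audit. The sketch below compares sensitivity across two hypothetical patient groups; the predictions, labels, and group assignments are invented for illustration.

import numpy as np

def sensitivity(y_true, y_pred):
    # true-positive rate; NaN if the group has no positive cases
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn) if (tp + fn) else float("nan")

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# A large gap between groups flags possible statistical bias worth investigating
for g in np.unique(group):
    mask = group == g
    print(f"group {g}: sensitivity = {sensitivity(y_true[mask], y_pred[mask]):.2f}")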
Collapse
Affiliation(s)
- Ali S Tejani
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
| | - Yee Seng Ng
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
| | - Yin Xi
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
| | - Jesse C Rayan
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
| |
Collapse
|
17
|
Mickley JP, Grove AF, Rouzrokh P, Yang L, Larson AN, Sanchez-Sotelo J, Maradit Kremers H, Wyles CC. A Stepwise Approach to Analyzing Musculoskeletal Imaging Data With Artificial Intelligence. Arthritis Care Res (Hoboken) 2024; 76:590-599. [PMID: 37849415 DOI: 10.1002/acr.25260] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 08/27/2023] [Accepted: 10/13/2023] [Indexed: 10/19/2023]
Abstract
The digitization of medical records and the expansion of electronic health records have created an era of "Big Data" with an abundance of available information ranging from clinical notes to imaging studies. In the field of rheumatology, medical imaging is used to guide both diagnosis and treatment of a wide variety of rheumatic conditions. Although there is an abundance of data to analyze, traditional methods of image analysis are human resource intensive. Fortunately, the growth of artificial intelligence (AI) may be a solution to handle large datasets. In particular, computer vision is a field within AI that analyzes images and extracts information. Computer vision has impressive capabilities and can be applied to rheumatologic conditions, making it necessary to understand how computer vision works. In this article, we provide an overview of AI in rheumatology and conclude with a five-step process to plan and conduct research in the field of computer vision. The five steps include (1) project definition, (2) data handling, (3) model development, (4) performance evaluation, and (5) deployment into clinical care.
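The five steps can be made concrete with a toy end-to-end example. The sketch below walks through data handling, model development, evaluation, and a deployment hook on synthetic images, assuming PyTorch; it illustrates the workflow only and is not the authors' implementation.

import torch
from torch import nn

# (1) Project definition: binary classification of single-channel images.
# (2) Data handling: synthetic 1-channel 64x64 "radiographs" with a toy label rule.
X = torch.randn(128, 1, 64, 64)
y = (X.mean(dim=(1, 2, 3)) > 0).float()
train_X, test_X, train_y, test_y = X[:96], X[96:], y[:96], y[96:]

# (3) Model development: a deliberately small CNN.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(train_X).squeeze(1), train_y)
    loss.backward()
    opt.step()

# (4) Performance evaluation: accuracy on held-out data.
with torch.no_grad():
    acc = ((model(test_X).squeeze(1) > 0).float() == test_y).float().mean()
print(f"held-out accuracy = {acc:.2f}")

# (5) Deployment: export weights for inference elsewhere.
torch.save(model.state_dict(), "cv_model.pt")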
Collapse
|
18
|
Rouzrokh P, Erickson BJ. Invited Commentary: The Double-edged Sword of Bias in Medical Imaging Artificial Intelligence. Radiographics 2024; 44:e230243. [PMID: 38635455 DOI: 10.1148/rg.230243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/20/2024]
Affiliation(s)
- Pouria Rouzrokh
- From the Mayo Clinic Artificial Intelligence Laboratory and Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
| | - Bradley J Erickson
- From the Mayo Clinic Artificial Intelligence Laboratory and Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
| |
Collapse
|
19
|
Faghani S, Erickson BJ. Bone Age Prediction under Stress. Radiol Artif Intell 2024; 6:e240137. [PMID: 38629960 PMCID: PMC11140503 DOI: 10.1148/ryai.240137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2024] [Revised: 03/24/2024] [Accepted: 04/01/2024] [Indexed: 04/19/2024]
Affiliation(s)
- Shahriar Faghani
- From the Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Bradley J. Erickson
- From the Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| |
Collapse
|
20
|
Faghani S, Moassefi M, Madhavan AA, Mark IT, Verdoorn JT, Erickson BJ, Benson JC. Identifying Patients with CSF-Venous Fistula Using Brain MRI: A Deep Learning Approach. AJNR Am J Neuroradiol 2024; 45:439-443. [PMID: 38423747 PMCID: PMC11288568 DOI: 10.3174/ajnr.a8173] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Accepted: 12/12/2023] [Indexed: 03/02/2024]
Abstract
BACKGROUND AND PURPOSE Spontaneous intracranial hypotension is an increasingly recognized condition. Spontaneous intracranial hypotension is caused by a CSF leak, which is commonly related to a CSF-venous fistula. In patients with spontaneous intracranial hypotension, multiple intracranial abnormalities can be observed on brain MR imaging, including dural enhancement, "brain sag," and pituitary engorgement. This study seeks to create a deep learning model for the accurate diagnosis of CSF-venous fistulas via brain MR imaging. MATERIALS AND METHODS A review of patients with clinically suspected spontaneous intracranial hypotension who underwent digital subtraction myelogram imaging preceded by brain MR imaging was performed. The patients were categorized as having a definite CSF-venous fistula, no fistula, or indeterminate findings on a digital subtraction myelogram. The dataset was split into 5 folds at the patient level and stratified by label. A 5-fold cross-validation was then used to evaluate the reliability of the model. The predictive value of the model in identifying patients with a CSF leak was assessed by using the area under the receiver operating characteristic curve for each validation fold. RESULTS A total of 129 patients were included in this study. The median age was 54 years, and 66 (51.2%) had a CSF-venous fistula. In discriminating between positive and negative cases for CSF-venous fistulas, the classifier demonstrated an average area under the receiver operating characteristic curve of 0.8668 with a standard deviation of 0.0254 across the folds. CONCLUSIONS This study developed a deep learning model that can predict the presence of a spinal CSF-venous fistula based on brain MR imaging in patients with suspected spontaneous intracranial hypotension. However, further model refinement and external validation are necessary before clinical adoption. This research highlights the substantial potential of deep learning in diagnosing CSF-venous fistulas by using brain MR imaging.
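The evaluation scheme described above (patient-level, label-stratified 5-fold cross-validation with per-fold AUC) can be sketched as follows, assuming scikit-learn's StratifiedGroupKFold; the features, labels, and classifier are synthetic placeholders rather than the study's pipeline.

import numpy as np
from sklearn.model_selection import StratifiedGroupKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 129
X = rng.normal(size=(n, 16))          # stand-in image-derived features
y = rng.integers(0, 2, size=n)        # fistula vs no fistula
patient_id = np.arange(n)             # one exam per patient in this toy setup

# Splitting on patient_id keeps all data from one patient in a single fold,
# while stratification keeps label balance across folds.
aucs = []
for train_idx, test_idx in StratifiedGroupKFold(n_splits=5).split(X, y, groups=patient_id):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
print(f"AUC = {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")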
Collapse
Affiliation(s)
- Shahriar Faghani
- From the Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | - Mana Moassefi
- From the Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | | | - Ian T. Mark
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | | | - Bradley J. Erickson
- From the Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | - John C. Benson
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
| |
Collapse
|
21
|
van Assen M, Beecy A, Gershon G, Newsome J, Trivedi H, Gichoya J. Implications of Bias in Artificial Intelligence: Considerations for Cardiovascular Imaging. Curr Atheroscler Rep 2024; 26:91-102. [PMID: 38363525 DOI: 10.1007/s11883-024-01190-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/16/2024] [Indexed: 02/17/2024]
Abstract
PURPOSE OF REVIEW Bias in artificial intelligence (AI) models can result in unintended consequences. In cardiovascular imaging, biased AI models used in clinical practice can negatively affect patient outcomes. Biased AI models result from decisions made when training and evaluating a model. This paper is a comprehensive guide for AI development teams to understand assumptions in datasets and chosen metrics for outcome/ground truth, and how these translate to real-world performance for cardiovascular disease (CVD). RECENT FINDINGS CVDs are the number one cause of mortality worldwide; however, the prevalence, burden, and outcomes of CVD vary across gender and race. Several biomarkers are also shown to vary among different populations and ethnic/racial groups. Inequalities in clinical trial inclusion, clinical presentation, diagnosis, and treatment are preserved in the health data that is ultimately used to train AI algorithms, leading to potential biases in model performance. Although AI models themselves can be biased, AI can also help to mitigate bias (e.g., through bias-auditing tools). In this review paper, we describe in detail implicit and explicit biases in the care of cardiovascular disease that may be present in existing datasets but are not obvious to model developers. We review disparities in CVD outcomes across different genders and race groups, differences in treatment of historically marginalized groups, and disparities in clinical trials for various cardiovascular diseases and outcomes. Thereafter, we summarize some CVD AI literature that shows bias in CVD AI, as well as approaches by which AI is being used to mitigate CVD bias.
Collapse
Affiliation(s)
- Marly van Assen
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA.
| | - Ashley Beecy
- Division of Cardiology, Department of Medicine, Weill Cornell Medicine, New York, NY, USA
- Information Technology, NewYork-Presbyterian, New York, NY, USA
| | - Gabrielle Gershon
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Janice Newsome
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Hari Trivedi
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Judy Gichoya
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| |
Collapse
|
22
|
Faghani S, Gamble C, Erickson BJ. Uncover This Tech Term: Uncertainty Quantification for Deep Learning. Korean J Radiol 2024; 25:395-398. [PMID: 38528697 PMCID: PMC10973738 DOI: 10.3348/kjr.2024.0108] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2024] [Revised: 02/05/2024] [Accepted: 02/06/2024] [Indexed: 03/27/2024] Open
Affiliation(s)
- Shahriar Faghani
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Cooper Gamble
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Bradley J Erickson
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN, USA.
| |
Collapse
|
23
|
Yang L, Oeding JF, de Marinis R, Marigi E, Sanchez-Sotelo J. Deep learning to automatically classify very large sets of preoperative and postoperative shoulder arthroplasty radiographs. J Shoulder Elbow Surg 2024; 33:773-780. [PMID: 37879598 DOI: 10.1016/j.jse.2023.09.021] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 09/06/2023] [Accepted: 09/10/2023] [Indexed: 10/27/2023]
Abstract
BACKGROUND Joint arthroplasty registries usually lack information on medical imaging owing to the laborious process of observing and recording, as well as the lack of standard methods to transfer the imaging information to the registries, which can limit the investigation of various research questions. Artificial intelligence (AI) algorithms can automate imaging-feature identification with high accuracy and efficiency. With the purpose of enriching shoulder arthroplasty registries with organized imaging information, it was hypothesized that an automated AI algorithm could be developed to classify and organize preoperative and postoperative radiographs from shoulder arthroplasty patients according to laterality, radiographic projection, and implant type. METHODS This study used a cohort of 2303 shoulder radiographs from 1724 shoulder arthroplasty patients. Two observers manually labeled all radiographs according to (1) laterality (left or right), (2) projection (anteroposterior, axillary, or lateral), and (3) whether the radiograph was a preoperative radiograph or showed an anatomic total shoulder arthroplasty or a reverse shoulder arthroplasty. All these labeled radiographs were randomly split into developmental and testing sets at the patient level and based on stratification. By use of 10-fold cross-validation, a 3-task deep-learning algorithm was trained on the developmental set to classify the 3 aforementioned characteristics. The trained algorithm was then evaluated on the testing set using quantitative metrics and visual evaluation techniques. RESULTS The trained algorithm perfectly classified laterality (F1 scores [harmonic mean values of precision and sensitivity] of 100% on the testing set). When classifying the imaging projection, the algorithm achieved F1 scores of 99.2%, 100%, and 100% on anteroposterior, axillary, and lateral views, respectively. When classifying the implant type, the model achieved F1 scores of 100%, 95.2%, and 100% on preoperative radiographs, anatomic total shoulder arthroplasty radiographs, and reverse shoulder arthroplasty radiographs, respectively. Visual evaluation using integrated maps showed that the algorithm focused on the relevant patient body and prosthesis parts for classification. It took the algorithm 20.3 seconds to analyze 502 images. CONCLUSIONS We developed an efficient, accurate, and reliable AI algorithm to automatically identify key imaging features of laterality, imaging view, and implant type in shoulder radiographs. This algorithm represents the first step to automatically classify and organize shoulder radiographs on a large scale in very little time, which will profoundly enrich shoulder arthroplasty registries.
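A three-task classifier of this kind can share one image encoder with separate output heads. The sketch below, assuming PyTorch, mirrors the laterality, projection, and implant-type tasks described above; the tiny encoder and input sizes are illustrative stand-ins for the study's network.

import torch
from torch import nn

class ThreeTaskClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: one feature vector per radiograph
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.laterality = nn.Linear(16, 2)   # left / right
        self.projection = nn.Linear(16, 3)   # AP / axillary / lateral
        self.implant = nn.Linear(16, 3)      # preop / anatomic TSA / reverse TSA

    def forward(self, x):
        z = self.encoder(x)
        return self.laterality(z), self.projection(z), self.implant(z)

model = ThreeTaskClassifier()
logits = model(torch.randn(4, 1, 224, 224))
print([t.shape for t in logits])  # three logit tensors: (4, 2), (4, 3), (4, 3)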
Collapse
Affiliation(s)
- Linjun Yang
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA; Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
| | - Jacob F Oeding
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
| | - Rodrigo de Marinis
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
| | - Erick Marigi
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
| | - Joaquin Sanchez-Sotelo
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA.
| |
Collapse
|
24
|
Al Mohammad B, Aldaradkeh A, Gharaibeh M, Reed W. Assessing radiologists' and radiographers' perceptions on artificial intelligence integration: opportunities and challenges. Br J Radiol 2024; 97:763-769. [PMID: 38273675 PMCID: PMC11027289 DOI: 10.1093/bjr/tqae022] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Revised: 09/30/2023] [Accepted: 01/21/2024] [Indexed: 01/27/2024] Open
Abstract
OBJECTIVES The objective of this study was to evaluate radiologists' and radiographers' opinions and perspectives on artificial intelligence (AI) and its integration into the radiology department. Additionally, we investigated the most common challenges and barriers that radiologists and radiographers face when learning about AI. METHODS A nationwide, online descriptive cross-sectional survey was distributed to radiologists and radiographers working in hospitals and medical centres from May 29, 2023 to July 30, 2023. The questionnaire examined the participants' opinions, feelings, and predictions regarding AI and its applications in the radiology department. Descriptive statistics were used to report the participants' demographics and responses. Five-point Likert-scale data were reported using divergent stacked bar graphs to highlight any central tendencies. RESULTS Responses were collected from 258 participants, revealing a positive attitude towards implementing AI. Both radiologists and radiographers predicted breast imaging would be the subspecialty most impacted by the AI revolution. MRI, mammography, and CT were identified as the primary modalities with significant importance in the field of AI application. The major barrier encountered by radiologists and radiographers when learning about AI was the lack of mentorship, guidance, and support from experts. CONCLUSION Participants demonstrated a positive attitude towards learning about AI and implementing it in radiology practice. However, radiologists and radiographers encounter several barriers when learning about AI, such as the absence of support and direction from experienced professionals. ADVANCES IN KNOWLEDGE Radiologists and radiographers reported several barriers to AI learning, with the most significant being the lack of mentorship and guidance from experts, followed by the lack of funding and investment in new technologies.
Collapse
Affiliation(s)
- Badera Al Mohammad
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
| | - Afnan Aldaradkeh
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
| | - Monther Gharaibeh
- Department of Special Surgery, Faculty of Medicine, The Hashemite University, Zarqa 13133, Jordan
| | - Warren Reed
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney 2006, Sydney, NSW, Australia
| |
Collapse
|
25
|
Drukker K, Sahiner B, Hu T, Kim GH, Whitney HM, Baughan N, Myers KJ, Giger ML, McNitt-Gray M. MIDRC-MetricTree: a decision tree-based tool for recommending performance metrics in artificial intelligence-assisted medical image analysis. J Med Imaging (Bellingham) 2024; 11:024504. [PMID: 38576536 PMCID: PMC10990563 DOI: 10.1117/1.jmi.11.2.024504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 02/16/2024] [Accepted: 03/18/2024] [Indexed: 04/06/2024] Open
Abstract
Purpose The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource to assist researchers in evaluating the performance of their medical image analysis ML algorithms. Approach An interactive decision tree, called MIDRC-MetricTree, has been developed, organized by the type of task that the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of the algorithm output and (2) based on the user input, recommendations are provided regarding appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos. Results Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. As an example, the classification branch of the decision tree includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability. Conclusions The publicly available decision tree is a resource to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks.
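At its simplest, the decision-tree idea reduces to a lookup from task type to recommended metrics. The mapping below is an invented, illustrative subset and not MIDRC-MetricTree's actual rule set, which also conditions on the reference standard and output type.

RECOMMENDED_METRICS = {
    "classification": ["AUC", "sensitivity/specificity", "calibration"],
    "detection/localization": ["FROC", "average precision"],
    "segmentation": ["Dice similarity coefficient", "Hausdorff distance"],
    "time-to-event": ["concordance index (C-index)"],
    "estimation": ["bias", "mean squared error", "Bland-Altman limits"],
}

def recommend(task: str) -> list[str]:
    # Return the illustrative metric list for a task type
    try:
        return RECOMMENDED_METRICS[task]
    except KeyError:
        raise ValueError(f"unknown task type: {task!r}") from None

print(recommend("segmentation"))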
Collapse
Affiliation(s)
- Karen Drukker
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Berkman Sahiner
- U.S. Food and Drug Administration, Bethesda, Maryland, United States
| | - Tingting Hu
- U.S. Food and Drug Administration, Bethesda, Maryland, United States
| | - Grace Hyun Kim
- University of California Los Angeles, Los Angeles, California, United States
| | - Heather M. Whitney
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Natalie Baughan
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | | | - Maryellen L. Giger
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Michael McNitt-Gray
- University of California Los Angeles, Los Angeles, California, United States
| |
Collapse
|
26
|
Faghani S, Nicholas RG, Patel S, Baffour FI, Moassefi M, Rouzrokh P, Khosravi B, Powell GM, Leng S, Glazebrook KN, Erickson BJ, Tiegs-Heiden CA. Development of a deep learning model for the automated detection of green pixels indicative of gout on dual energy CT scan. RESEARCH IN DIAGNOSTIC AND INTERVENTIONAL IMAGING 2024; 9:100044. [PMID: 39076582 PMCID: PMC11265492 DOI: 10.1016/j.redii.2024.100044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 02/24/2024] [Indexed: 07/31/2024]
Abstract
Background Dual-energy CT (DECT) is a non-invasive way to determine the presence of monosodium urate (MSU) crystals in the workup of gout. Color-coding distinguishes MSU from calcium following material decomposition and post-processing. Most software labels MSU as green and calcium as blue. There are limitations in the current image processing methods of segmenting green-encoded pixels. Additionally, identifying green foci is tedious, and automated detection would improve workflow. This study aimed to determine the optimal deep learning (DL) algorithm for segmenting green-encoded pixels of MSU crystals on DECT. Methods DECT images of positive and negative gout cases were retrospectively collected. The dataset was split into train (N = 28) and held-out test (N = 30) sets. To perform cross-validation, the train set was split into seven folds. The images were presented to two musculoskeletal radiologists, who independently identified green-encoded voxels. Two 3D U-Net-based DL models, SegResNet and SwinUNETR, were trained, and the Dice similarity coefficient (DSC), sensitivity, and specificity were reported as the segmentation metrics. Results SegResNet showed superior performance, achieving a DSC of 0.9999 for the background pixels, 0.7868 for the green pixels, and an average DSC of 0.8934 across both types of pixels. According to the post-processed results, SegResNet reached voxel-level sensitivity and specificity of 98.72% and 99.98%, respectively. Conclusion In this study, we compared two DL-based segmentation approaches for detecting MSU deposits in a DECT dataset. SegResNet yielded superior performance metrics. The developed algorithm provides a potentially fast, consistent, highly sensitive and specific computer-aided diagnosis tool. Ultimately, such an algorithm could be used by radiologists to streamline DECT workflow and improve accuracy in the detection of gout.
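A sketch of instantiating one of the named models, assuming MONAI's SegResNet and DiceLoss are available; the tensor shapes and two-class setup (background vs green-encoded voxels) are illustrative, not the study's training configuration.

import torch
from monai.networks.nets import SegResNet
from monai.losses import DiceLoss

# 3D network: one input channel, two output classes
model = SegResNet(spatial_dims=3, in_channels=1, out_channels=2)

volume = torch.randn(1, 1, 64, 64, 64)             # one DECT patch
label = torch.randint(0, 2, (1, 1, 64, 64, 64))    # voxel-wise green mask

logits = model(volume)                             # shape (1, 2, 64, 64, 64)
# Dice loss with softmax over the class channel and one-hot targets
loss = DiceLoss(to_onehot_y=True, softmax=True)(logits, label)
print(f"Dice loss = {loss.item():.3f}")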
Collapse
Affiliation(s)
- Shahriar Faghani
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Rhodes G Nicholas
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Soham Patel
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Francis I Baffour
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Mana Moassefi
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Pouria Rouzrokh
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Bardia Khosravi
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Garret M Powell
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Shuai Leng
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Katrina N Glazebrook
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | | | - Christin A Tiegs-Heiden
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| |
Collapse
|
27
|
Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. [PMID: 38193315 DOI: 10.1161/cir.0000000000001202] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/10/2024]
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by a value chain analysis to identify the activities in which AI might create the greatest incremental value. The various perspectives that should be considered are highlighted, including those of clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the most appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
Collapse
|
28
|
Ueda D, Ehara S, Yamamoto A, Walston SL, Shimono T, Miki Y. Challenges of using artificial intelligence to detect valvular heart disease from chest radiography - Authors' reply. Lancet Digit Health 2024; 6:e10. [PMID: 38123250 DOI: 10.1016/s2589-7500(23)00224-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Accepted: 10/25/2023] [Indexed: 12/23/2023]
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Osaka Metropolitan University, Osaka, 545-8585, Japan; Graduate School of Medicine, and Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, 545-8585, Japan.
| | - Shoichi Ehara
- Department of Intensive Care Medicine, Osaka Metropolitan University, Osaka, 545-8585, Japan
| | - Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Osaka Metropolitan University, Osaka, 545-8585, Japan
| | - Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Osaka Metropolitan University, Osaka, 545-8585, Japan
| | - Taro Shimono
- Department of Diagnostic and Interventional Radiology, Osaka Metropolitan University, Osaka, 545-8585, Japan
| | - Yukio Miki
- Department of Diagnostic and Interventional Radiology, Osaka Metropolitan University, Osaka, 545-8585, Japan
| |
Collapse
|
29
|
Cen HS, Dandamudi S, Lei X, Weight C, Desai M, Gill I, Duddalwar V. Diversity in Renal Mass Data Cohorts: Implications for Urology AI Researchers. Oncology 2023; 102:574-584. [PMID: 38104555 PMCID: PMC11178677 DOI: 10.1159/000535841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2023] [Accepted: 12/08/2023] [Indexed: 12/19/2023]
Abstract
INTRODUCTION We examine the heterogeneity and distribution of the cohort populations in two publicly used radiological image cohorts, the Cancer Genome Atlas Kidney Renal Clear Cell Carcinoma (TCIA TCGA KIRC) collection and the 2019 MICCAI Kidney Tumor Segmentation Challenge (KiTS19), and their deviations from real-world population renal cancer data in the National Cancer Database (NCDB) Participant User Data File (PUF) and tertiary center data. PUF data are used as an anchor for prevalence rate bias assessment. Specific gene expression and, therefore, biology of RCC differ by self-reported race, especially between the African American and Caucasian populations. AI algorithms learn from datasets, but if the dataset misrepresents the population, reinforcing bias may occur. Ignoring these demographic features may lead to inaccurate downstream effects, thereby limiting the translation of these analyses to clinical practice. Consciousness of model training biases is vital to patient care decisions when using models in clinical settings. METHODS Data elements evaluated included gender, demographics, reported pathologic grading, and cancer staging. American Urological Association risk levels were used. Poisson regression was performed to obtain population-based and sample-specific prevalence rate estimates with corresponding 95% confidence intervals. SAS 9.4 was used for data analysis. RESULTS Compared to PUF, KiTS19 and TCGA KIRC oversampled Caucasian patients by 9.5% (95% CI, -3.7 to 22.7%) and 15.1% (95% CI, 1.5 to 28.8%), respectively, and undersampled African American patients by -6.7% (95% CI, -10% to -3.3%) and -5.5% (95% CI, -9.3% to -1.8%), respectively. The tertiary cohort also undersampled African American patients by -6.6% (95% CI, -8.7% to -4.6%) and largely undersampled aggressive cancers by -14.7% (95% CI, -20.9% to -8.4%). No statistically significant difference was found among PUF, TCGA, and KiTS19 in the rate of aggressive cancers; however, heterogeneities in risk are notable. CONCLUSION Heterogeneities between cohorts need to be considered in future AI training and cross-validation for renal masses.
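A prevalence-rate comparison of this kind can be fit as a Poisson regression with cohort size as the exposure, assuming statsmodels (the study itself used SAS 9.4); the counts below are invented for illustration.

import numpy as np
import statsmodels.api as sm

# events = cases observed per cohort, exposure = cohort sizes (hypothetical)
events = np.array([262, 186])
exposure = np.array([518, 445])
cohort = sm.add_constant(np.array([0.0, 1.0]))   # indicator: cohort B vs cohort A

# Poisson GLM with log link; exposure enters as a log offset
res = sm.GLM(events, cohort, family=sm.families.Poisson(), exposure=exposure).fit()
rate_ratio = np.exp(res.params[1])
ci_low, ci_high = np.exp(res.conf_int()[1])
print(f"rate ratio = {rate_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")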
Collapse
Affiliation(s)
- Harmony Selena Cen
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA,
| | - Siddhartha Dandamudi
- College of Human Medicine, Michigan State University, East Lansing, Michigan, USA
| | - Xiaomeng Lei
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
| | - Chris Weight
- Urologic Oncology, Cleveland Clinic, Cleveland, Ohio, USA
| | - Mihir Desai
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
| | - Inderbir Gill
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
| | - Vinay Duddalwar
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
| |
Collapse
|
30
|
Khosravi B, Rouzrokh P, Mickley JP, Faghani S, Mulford K, Yang L, Larson AN, Howe BM, Erickson BJ, Taunton MJ, Wyles CC. Few-shot biomedical image segmentation using diffusion models: Beyond image generation. Comput Methods Programs Biomed 2023; 242:107832. [PMID: 37778140 DOI: 10.1016/j.cmpb.2023.107832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 09/12/2023] [Accepted: 09/25/2023] [Indexed: 10/03/2023]
Abstract
BACKGROUND Medical image analysis pipelines often involve segmentation, which requires large amounts of annotated training data that are time-consuming and costly to produce. To address this issue, we proposed leveraging generative models to achieve few-shot image segmentation. METHODS We trained a denoising diffusion probabilistic model (DDPM) on 480,407 pelvis radiographs to generate 256 × 256 px synthetic images. The DDPM was conditioned on demographic and radiologic characteristics and was rigorously validated by domain experts and objective image quality metrics (Fréchet inception distance [FID] and inception score [IS]). For the next step, three landmarks (greater trochanter [GT], lesser trochanter [LT], and obturator foramen [OF]) were annotated on 45 real-patient radiographs: 25 for training and 20 for testing. To extract features, each image was passed through the pre-trained DDPM at three timesteps, and for each pass, features from specific blocks were extracted. The features were concatenated with the real image to form an image with 4225 channels. The feature set was broken into random patches, which were fed to a U-Net. The Dice similarity coefficient (DSC) was used to compare the performance with a vanilla U-Net trained on radiographs. RESULTS Expert accuracy was 57.5% in determining real versus generated images, while the model reached an FID of 7.2 and an IS of 210. The segmentation U-Net trained on the 20 feature sets achieved DSCs of 0.90, 0.84, and 0.61 for OF, GT, and LT segmentation, respectively, which was at least 0.30 points higher than that of the naively trained model. CONCLUSION We demonstrated the applicability of DDPMs as feature extractors, facilitating medical image segmentation with few annotated samples.
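The feature-extraction step can be sketched with forward hooks that capture intermediate activations of a pretrained denoiser at several timesteps, assuming PyTorch. The tiny stand-in network, block names, timesteps, and noising rule below are all assumptions for illustration, not the trained DDPM described above.

import torch
from torch import nn

class FeatureGrabber:
    """Collects outputs of chosen modules during a forward pass."""
    def __init__(self, model, block_names):
        self.features = {}
        for name, module in model.named_modules():
            if name in block_names:
                module.register_forward_hook(self._hook(name))

    def _hook(self, name):
        def fn(_module, _inputs, output):
            self.features[name] = output.detach()
        return fn

# Tiny stand-in denoiser; a real DDPM U-Net would replace this.
pretrained_unet = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
grabber = FeatureGrabber(pretrained_unet, block_names=["0", "2"])

x = torch.randn(1, 1, 256, 256)                  # one radiograph
feats = []
for t in (50, 250, 650):                         # assumed timesteps
    noisy = x + 0.01 * t * torch.randn_like(x)   # crude noising stand-in
    pretrained_unet(noisy)                       # hooks fill grabber.features
    feats.extend(grabber.features.values())

# Concatenate the image with captured features along the channel axis
stacked = torch.cat([x] + feats, dim=1)
print(stacked.shape)  # channel count grows with blocks x timesteps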
Collapse
Affiliation(s)
- Bardia Khosravi
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA; Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Pouria Rouzrokh
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA; Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - John P Mickley
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA
| | | | - Kellen Mulford
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA
| | - Linjun Yang
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA
| | - A Noelle Larson
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA
| | | | | | - Michael J Taunton
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA
| | - Cody C Wyles
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA; Department of Clinical Anatomy, Mayo Clinic, Rochester, MN, USA.
| |
Collapse
|
31
|
Whitney HM, Baughan N, Myers KJ, Drukker K, Gichoya J, Bower B, Chen W, Gruszauskas N, Kalpathy-Cramer J, Koyejo S, Sá RC, Sahiner B, Zhang Z, Giger ML. Longitudinal assessment of demographic representativeness in the Medical Imaging and Data Resource Center open data commons. J Med Imaging (Bellingham) 2023; 10:61105. [PMID: 37469387 PMCID: PMC10353566 DOI: 10.1117/1.jmi.10.6.061105] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 06/21/2023] [Accepted: 06/23/2023] [Indexed: 07/21/2023] Open
Abstract
Purpose The Medical Imaging and Data Resource Center (MIDRC) open data commons was launched to accelerate the development of artificial intelligence (AI) algorithms to help address the COVID-19 pandemic. The purpose of this study was to quantify the longitudinal representativeness of the demographic characteristics of the primary MIDRC dataset compared with the United States general population (US Census) and COVID-19 positive case counts from the Centers for Disease Control and Prevention (CDC). Approach The Jensen-Shannon distance (JSD), a measure of the similarity of two distributions, was used to longitudinally measure the representativeness of (1) the distribution of all unique patients in the MIDRC data relative to the 2020 US Census and (2) the distribution of all unique COVID-19 positive patients in the MIDRC data relative to the case counts reported by the CDC. The distributions were evaluated in the demographic categories of age at index, sex, race, ethnicity, and the combination of race and ethnicity. Results Representativeness of the MIDRC data by ethnicity and the combination of race and ethnicity was impacted by the percentage of CDC case counts for which this information was not reported. The distributions by sex and race have retained their level of representativeness over time. Conclusion The representativeness of the open medical imaging datasets in the curated public data commons at MIDRC has evolved over time as the number of contributing institutions and the overall number of subjects have grown. The use of metrics such as the JSD to support measurement of representativeness is one step needed for fair and generalizable AI algorithm development.
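For readers who want to reproduce the core metric, below is a minimal sketch of the Jensen-Shannon distance computation on two demographic distributions using SciPy; the proportions are illustrative, not MIDRC or census data.

```python
# A minimal sketch of the representativeness metric described above: the
# Jensen-Shannon distance between a dataset's demographic distribution and a
# reference population. The proportions below are hypothetical.
from scipy.spatial.distance import jensenshannon

# Distributions over the same ordered categories (e.g., age bins).
dataset_dist = [0.10, 0.25, 0.35, 0.30]   # proportions in the imaging dataset
census_dist  = [0.15, 0.20, 0.35, 0.30]   # proportions in the reference census

# jensenshannon returns the JS distance (square root of the JS divergence);
# with base=2 it ranges from 0 (identical) to 1 (maximally different).
jsd = jensenshannon(dataset_dist, census_dist, base=2)
print(f"Jensen-Shannon distance: {jsd:.4f}")
```

Tracking this value as the dataset grows is exactly the longitudinal assessment the study performs, one JSD per demographic category per time point.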
Affiliation(s)
- Heather M. Whitney: University of Chicago, Chicago, Illinois, United States; The Medical Imaging and Data Resource Center (midrc.org)
- Natalie Baughan: University of Chicago, Chicago, Illinois, United States; The Medical Imaging and Data Resource Center (midrc.org)
- Kyle J. Myers: The Medical Imaging and Data Resource Center (midrc.org); Puente Solutions LLC, Phoenix, Arizona, United States
- Karen Drukker: University of Chicago, Chicago, Illinois, United States; The Medical Imaging and Data Resource Center (midrc.org)
- Judy Gichoya: The Medical Imaging and Data Resource Center (midrc.org); Emory University, Atlanta, Georgia, United States
- Brad Bower: The Medical Imaging and Data Resource Center (midrc.org); National Institutes of Health, Bethesda, Maryland, United States
- Weijie Chen: The Medical Imaging and Data Resource Center (midrc.org); United States Food and Drug Administration, Silver Spring, Maryland, United States
- Nicholas Gruszauskas: University of Chicago, Chicago, Illinois, United States; The Medical Imaging and Data Resource Center (midrc.org)
- Jayashree Kalpathy-Cramer: The Medical Imaging and Data Resource Center (midrc.org); University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States
- Sanmi Koyejo: The Medical Imaging and Data Resource Center (midrc.org); Stanford University, Stanford, California, United States
- Rui C. Sá: The Medical Imaging and Data Resource Center (midrc.org); National Institutes of Health, Bethesda, Maryland, United States; University of California, San Diego, La Jolla, California, United States
- Berkman Sahiner: The Medical Imaging and Data Resource Center (midrc.org); United States Food and Drug Administration, Silver Spring, Maryland, United States
- Zi Zhang: The Medical Imaging and Data Resource Center (midrc.org); Jefferson Health, Philadelphia, Pennsylvania, United States
- Maryellen L. Giger: University of Chicago, Chicago, Illinois, United States; The Medical Imaging and Data Resource Center (midrc.org)

32
Drukker K, Chen W, Gichoya J, Gruszauskas N, Kalpathy-Cramer J, Koyejo S, Myers K, Sá RC, Sahiner B, Whitney H, Zhang Z, Giger M. Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment. J Med Imaging (Bellingham) 2023; 10:061104. [PMID: 37125409 PMCID: PMC10129875 DOI: 10.1117/1.jmi.10.6.061104] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 04/03/2023] [Indexed: 05/02/2023] Open
Abstract
Purpose There is increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. These tools are intended to improve traditional human decision-making in medical imaging. However, biases introduced along the path to clinical deployment may impede their intended function, potentially exacerbating inequities. Specifically, medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in systematic differences in the treatment of different groups. Recognizing and addressing these sources of bias is essential for algorithmic fairness and trustworthiness and contributes to a just and equitable deployment of AI in medical imaging. Approach Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML and mitigation strategies for these biases, and we developed recommendations for best practices in medical imaging AI/ML development. Results Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, or bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, as well as mitigation strategies. Conclusions Our findings provide a valuable resource to researchers, clinicians, and the public at large.
Affiliation(s)
- Karen Drukker: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Weijie Chen: US Food and Drug Administration, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Judy Gichoya: Emory University, Department of Radiology, Atlanta, Georgia, United States
- Nicholas Gruszauskas: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Sanmi Koyejo: Stanford University, Department of Computer Science, Stanford, California, United States
- Kyle Myers: Puente Solutions LLC, Phoenix, Arizona, United States
- Rui C. Sá: National Institutes of Health, Bethesda, Maryland, United States; University of California, San Diego, La Jolla, California, United States
- Berkman Sahiner: US Food and Drug Administration, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Heather Whitney: The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Zi Zhang: Jefferson Health, Philadelphia, Pennsylvania, United States
- Maryellen Giger: The University of Chicago, Department of Radiology, Chicago, Illinois, United States

33
Le Bouthillier ME, Hrynkiw L, Beauchamp A, Duong L, Ratté S. Automated detection of regions of interest in cartridge case images using deep learning. J Forensic Sci 2023; 68:1958-1971. [PMID: 37435904 DOI: 10.1111/1556-4029.15319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Revised: 05/23/2023] [Accepted: 06/16/2023] [Indexed: 07/13/2023]
Abstract
This paper explores a deep-learning approach to evaluate the position of circular delimiters in cartridge case images. These delimiters define two regions of interest (ROIs), corresponding to the breech face and firing pin impressions, and are placed manually or by an image-processing algorithm. Their positioning has a significant impact on the performance of image-matching algorithms for firearm identification, so an automated evaluation method would benefit any computerized system. Our contribution consists in optimizing and training U-Net segmentation models on digital images of cartridge cases to locate ROIs automatically. For the experiments, we used high-resolution 2D images from 1195 samples of cartridge cases fired by different 9 mm firearms. Our results show that the segmentation models, trained on augmented data sets, achieve 95.6% IoU (intersection over union) and 99.3% DC (Dice coefficient) with a loss of 0.014 for the breech face images, and 95.9% IoU and 99.5% DC with a loss of 0.011 for the firing pin images. We observed that the natural shapes of the predicted circles reduce the measured performance relative to the perfect circles of the ground truth masks, suggesting that our method provides a more accurate segmentation of the real ROI shape. In practice, we believe these results could be useful for firearm identification. In future work, the predictions may be used to evaluate the quality of delimiters on specimens in a database, or to determine the region of interest on a cartridge case image.
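The two reported overlap metrics are straightforward to compute on binary masks; a minimal NumPy sketch (not the authors' code) follows.

```python
# A minimal sketch of the IoU and Dice metrics reported above, computed on
# binary masks with NumPy. Inputs are boolean arrays of identical shape.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0   # define IoU = 1 for two empty masks

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient: 2|A and B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

# Example on toy 4x4 masks:
a = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
b = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
print(iou(a, b), dice(a, b))  # 0.75, ~0.857
```

Dice weights the intersection twice, so it is always at least as large as IoU on the same pair of masks, which is consistent with the DC values exceeding the IoU values in the results above.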
Affiliation(s)
- Marie-Eve Le Bouthillier: École de technologie supérieure, ÉTS, Montréal, Québec, Canada; Ultra Electronics Forensic Technology, Inc., St-Laurent, Québec, Canada
- Lynne Hrynkiw: École de technologie supérieure, ÉTS, Montréal, Québec, Canada; Ultra Electronics Forensic Technology, Inc., St-Laurent, Québec, Canada
- Alain Beauchamp: Ultra Electronics Forensic Technology, Inc., St-Laurent, Québec, Canada
- Luc Duong: École de technologie supérieure, ÉTS, Montréal, Québec, Canada
- Sylvie Ratté: École de technologie supérieure, ÉTS, Montréal, Québec, Canada

34
Vera-Garcia DV, Nugen F, Padash S, Khosravi B, Mickley JP, Erickson BJ, Wyles CC, Taunton MJ. Educational Overview of the Concept and Application of Computer Vision in Arthroplasty. J Arthroplasty 2023; 38:1954-1958. [PMID: 37633507 PMCID: PMC10616773 DOI: 10.1016/j.arth.2023.08.046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 08/10/2023] [Accepted: 08/11/2023] [Indexed: 08/28/2023] Open
Abstract
Image data have grown exponentially as systems have increased their ability to collect and store them. Unfortunately, human resources to fully interpret and manage those data are limited in both time and knowledge. Computer vision (CV) has grown in popularity as a discipline for better understanding visual data and has become a powerful tool for imaging analytics in orthopedic surgery, allowing computers to evaluate large volumes of image data with greater nuance than previously possible. Nevertheless, even with the growing number of uses in medicine, literature on the fundamentals of CV and its implementation is oriented mainly toward computer scientists rather than clinicians, rendering CV unapproachable for most orthopedic surgeons as a tool for clinical practice and research. The purpose of this article is to summarize and review the fundamental concepts of CV application for the orthopedic surgeon and musculoskeletal researcher.
Affiliation(s)
- Diana Victoria Vera-Garcia: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN; Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
- Fred Nugen: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN; Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
- Sirwa Padash: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN; Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
- Bardia Khosravi: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN; Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
- John P. Mickley: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN; Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN
- Bradley J. Erickson: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
- Cody C. Wyles: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN; Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN
- Michael J. Taunton: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN; Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN

35
Gichoya JW, Thomas K, Celi LA, Safdar N, Banerjee I, Banja JD, Seyyed-Kalantari L, Trivedi H, Purkayastha S. AI pitfalls and what not to do: mitigating bias in AI. Br J Radiol 2023; 96:20230023. [PMID: 37698583 PMCID: PMC10546443 DOI: 10.1259/bjr.20230023] [Citation(s) in RCA: 54] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2023] [Revised: 08/10/2023] [Accepted: 08/14/2023] [Indexed: 09/13/2023] Open
Abstract
Various forms of artificial intelligence (AI) applications are being deployed and used in many healthcare systems. As the use of these applications increases, we are learning how these models fail and how they can perpetuate bias. With these lessons, we need to prioritize bias evaluation and mitigation for radiology applications, while not ignoring changes in the larger enterprise AI deployment that may have downstream effects on model performance. In this paper, we provide an updated review of known pitfalls causing AI bias and discuss strategies for mitigating these biases within the context of AI deployment in the larger healthcare enterprise. We describe these pitfalls by framing them within the larger AI lifecycle, from problem definition through data set selection and curation to model training and deployment, emphasizing that bias exists across a spectrum and is a sequela of a combination of human and machine factors.
Affiliation(s)
- Kaesha Thomas: Department of Radiology, Emory University, Atlanta, United States
- Nabile Safdar: Department of Radiology, Emory University, Atlanta, United States
- Imon Banerjee: School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, United States
- John D Banja: Emory University Center for Ethics, Emory University, Atlanta, United States
- Laleh Seyyed-Kalantari: Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, North York, Canada
- Hari Trivedi: Department of Radiology, Emory University, Atlanta, United States
- Saptarshi Purkayastha: School of Informatics and Computing, Indiana University Purdue University, Indianapolis, United States

36
Nugen F, Vera Garcia DV, Sohn S, Mickley JP, Wyles CC, Erickson BJ, Taunton MJ. Application of Natural Language Processing in Total Joint Arthroplasty: Opportunities and Challenges. J Arthroplasty 2023; 38:1948-1953. [PMID: 37619802 DOI: 10.1016/j.arth.2023.08.047] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 08/10/2023] [Accepted: 08/11/2023] [Indexed: 08/26/2023] Open
Abstract
Total joint arthroplasty is becoming one of the most common surgeries in the United States, creating an abundance of analyzable data with which to improve patient experience and outcomes. Unfortunately, the large majority of these data are concealed in electronic health records, accessible only by manual extraction, which takes extensive time and resources. Natural language processing (NLP), a field within artificial intelligence, may offer a viable alternative to manual extraction. Using NLP, a researcher can analyze written and spoken data and extract information in an organized manner suitable for future research and clinical use. This article first discusses common subtasks involved in an NLP pipeline, including data preparation, modeling, analysis, and external validation, followed by examples of NLP projects. Challenges and limitations of NLP are discussed, closing with future directions of NLP projects, including large language models.
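As a concrete, if deliberately simple, illustration of the extraction idea, the sketch below pulls structured fields from an operative note with regular expressions. Production pipelines, and the projects discussed above, typically rely on trained models rather than hand-written rules; the note text and patterns here are illustrative only.

```python
# A minimal rule-based sketch of clinical text extraction: pulling laterality
# and an implant size from a (fabricated) arthroplasty operative note.
import re

note = "Patient underwent right total hip arthroplasty; 52 mm acetabular cup placed."

laterality = re.search(r"\b(left|right|bilateral)\b", note, re.IGNORECASE)
cup_size = re.search(r"(\d+)\s*mm\s+acetabular cup", note, re.IGNORECASE)

record = {
    "laterality": laterality.group(1).lower() if laterality else None,
    "acetabular_cup_mm": int(cup_size.group(1)) if cup_size else None,
}
print(record)  # {'laterality': 'right', 'acetabular_cup_mm': 52}
```

Rule-based extraction like this often serves as the baseline against which trained NLP models are validated.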
Affiliation(s)
- Fred Nugen: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota; Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Diana V Vera Garcia: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota; Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Sunghwan Sohn: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota; Department of Health Sciences Research, Mayo Clinic, Rochester, Minnesota
- John P Mickley: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota; Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota
- Cody C Wyles: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota; Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota
- Bradley J Erickson: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Michael J Taunton: Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota; Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota

37
Armato SG, Drukker K, Hadjiiski L. AI in medical imaging grand challenges: translation from competition to research benefit and patient care. Br J Radiol 2023; 96:20221152. [PMID: 37698542 PMCID: PMC10546459 DOI: 10.1259/bjr.20221152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 05/24/2023] [Accepted: 07/11/2023] [Indexed: 09/13/2023] Open
Abstract
Artificial intelligence (AI), in one form or another, has been a part of medical imaging for decades. The recent evolution of AI into approaches such as deep learning has dramatically accelerated the application of AI across a wide range of radiologic settings. Despite the promises of AI, developers and users of AI technology must be fully aware of its potential biases and pitfalls, and this knowledge must be incorporated throughout the AI system development pipeline of training, validation, and testing. Grand challenges offer an opportunity to advance the development of AI methods for targeted applications and provide a mechanism for both directing and facilitating the development of AI systems. In the process, a grand challenge centralizes with the challenge organizers the burden of providing a valid benchmark test set to assess the performance and generalizability of participants' models, as well as the collection and curation of image metadata, clinical/demographic information, and the required reference standard. The most relevant grand challenges are those designed to maximize the open-science nature of the competition, with code and trained models deposited for future public access. The ultimate goal of AI grand challenges is to foster the translation of AI systems from competition to research benefit and patient care. Rather than catalog the many medical imaging grand challenges that have been organized by groups such as MICCAI, RSNA, AAPM, and grand-challenge.org, this review assesses the role of grand challenges in promoting AI technologies for research advancement and eventual clinical implementation, including their promises and limitations.
Affiliation(s)
- Samuel G Armato: Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Karen Drukker: Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Lubomir Hadjiiski: Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA

38
Hathaway QA, Lakhani DA. Fostering Artificial Intelligence Education within Radiology Residencies: A Two-Tiered Approach. Acad Radiol 2023; 30:2097-2098. [PMID: 36549992 DOI: 10.1016/j.acra.2022.12.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2022] [Revised: 11/30/2022] [Accepted: 12/03/2022] [Indexed: 12/24/2022]
Affiliation(s)
- Dhairya A Lakhani: Department of Radiology, West Virginia University, Morgantown, WV, USA

39
Alkhulaifat D, Rafful P, Khalkhali V, Welsh M, Sotardi ST. Implications of Pediatric Artificial Intelligence Challenges for Artificial Intelligence Education and Curriculum Development. J Am Coll Radiol 2023; 20:724-729. [PMID: 37352995 DOI: 10.1016/j.jacr.2023.04.013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2023] [Revised: 03/22/2023] [Accepted: 04/06/2023] [Indexed: 06/25/2023]
Abstract
Several radiology artificial intelligence (AI) courses are offered by a variety of institutions and educators, and the major radiology societies have developed AI curricula focused on basic AI principles and practices. However, a specific AI curriculum focused on pediatric radiology is needed to offer targeted educational material on AI model development and performance evaluation. There are inherent differences between pediatric and adult practice patterns that may hinder the application of adult AI models to pediatric cohorts. Such differences include differing imaging modality utilization, image acquisition parameters, lower radiation doses, the rapid growth of children and the resulting changes in body composition, and the presence of unique pathologies and diseases that differ in prevalence from adults. Thus, to enhance radiologists' knowledge of the applications of AI models in pediatric patients, curricula should be structured with the unique pediatric setting and its challenges in mind, along with methods to overcome those challenges and pediatric-specific data governance and ethical considerations. In this report, the authors highlight the salient aspects of pediatric radiology necessary for AI education in the pediatric setting, including the challenges for research investigation and clinical implementation.
Affiliation(s)
- Dana Alkhulaifat: Department of Pediatric Radiology, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Patricia Rafful: Department of Pediatric Radiology, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Vahid Khalkhali: Department of Pediatric Radiology, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Michael Welsh: Department of Pediatric Radiology, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Susan T Sotardi: Director, CHOP Radiology Informatics and Artificial Intelligence, Department of Pediatric Radiology, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania

40
Faghani S, Khosravi B, Moassefi M, Conte GM, Erickson BJ. A Comparison of Three Different Deep Learning-Based Models to Predict the MGMT Promoter Methylation Status in Glioblastoma Using Brain MRI. J Digit Imaging 2023; 36:837-846. [PMID: 36604366 PMCID: PMC10287882 DOI: 10.1007/s10278-022-00757-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Revised: 12/07/2022] [Accepted: 12/08/2022] [Indexed: 01/06/2023] Open
Abstract
Glioblastoma (GBM) is the most common primary malignant brain tumor in adults. The standard treatment for GBM consists of surgical resection followed by concurrent chemoradiotherapy and adjuvant temozolomide. O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through analysis of tumor tissue. Given the complications of tissue-based methods, an imaging-based approach is preferred. This study aimed to compare three deep learning-based approaches for predicting MGMT promoter methylation status. We obtained 576 T2-weighted images (T2WI) with their corresponding tumor masks and MGMT promoter methylation status from the Brain Tumor Segmentation (BraTS) 2021 dataset. We developed three models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, methylated and unmethylated tumor masks were assigned voxel values of 1 and 2, respectively, with 0 as background. We converted each T2WI into 32 × 32 × 32 patches and trained a 3D V-Net model for tumor segmentation. After inference, we reconstructed the whole brain volume from the patch coordinates. The final prediction of MGMT methylation status was made by majority voting over the predicted voxel values of the largest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction, then used majority voting for the final prediction. For the whole-brain approach, we trained a 3D DenseNet121 for prediction. Whole-brain, slice-wise, and voxel-wise accuracies were 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively.
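A minimal sketch of the voxel-wise voting step is shown below: isolate the largest connected component of the predicted tumor and take a majority vote over its voxel labels. The random volume stands in for a real model prediction; this illustrates the described logic, not the authors' code.

```python
# A minimal sketch of majority voting over the largest connected component of
# a predicted label volume, where 1 = methylated and 2 = unmethylated voxels.
# The random volume below is purely illustrative.
import numpy as np
from scipy import ndimage

pred = np.random.choice([0, 1, 2], size=(64, 64, 64), p=[0.9, 0.06, 0.04])

tumor = pred > 0                                   # any predicted tumor voxel
labeled, n = ndimage.label(tumor)                  # connected components
if n:
    sizes = ndimage.sum(tumor, labeled, range(1, n + 1))
    biggest = labeled == (np.argmax(sizes) + 1)    # mask of largest component
    votes = pred[biggest]                          # voxel labels inside it
    status = "methylated" if (votes == 1).sum() >= (votes == 2).sum() else "unmethylated"
    print(status)
```

Restricting the vote to the largest connected component discards small spurious predictions elsewhere in the volume, which is presumably why the authors chose it over voting across all labeled voxels.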
Affiliation(s)
- Shahriar Faghani: Radiology Informatics Lab, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55905, USA
- Bardia Khosravi: Radiology Informatics Lab, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55905, USA
- Mana Moassefi: Radiology Informatics Lab, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55905, USA
- Gian Marco Conte: Radiology Informatics Lab, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55905, USA
- Bradley J Erickson: Radiology Informatics Lab, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55905, USA

41
Radiomics Applications in Head and Neck Tumor Imaging: A Narrative Review. Cancers (Basel) 2023; 15:cancers15041174. [PMID: 36831517 PMCID: PMC9954362 DOI: 10.3390/cancers15041174] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 01/31/2023] [Accepted: 02/08/2023] [Indexed: 02/16/2023] Open
Abstract
Recent advances in machine learning and artificial intelligence have enabled automated evaluation of medical images and, as a result, the creation of quantifiable diagnostic and prognostic biomarkers. In this paper, we discuss radiomics applications for the head and neck region, with special consideration given to molecular characterization, classification, prognosis, and therapy recommendation. In a narrative manner, we outline the fundamental technological principles, the overall idea and usual workflow of radiomic analysis, and the present and potential challenges in routine clinical practice. The aim in clinical oncology is informed decision support for personalized and effective cancer treatment. Head and neck cancers present a unique set of diagnostic and therapeutic challenges, brought on by the complicated anatomy and heterogeneity of the region under investigation. Radiomics has the potential to address these barriers, but future research must be interdisciplinary, focus on specific oncologic functions and outcomes, and incorporate external validation and multi-institutional cooperation.
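As a concrete illustration of the first step in a typical radiomics workflow, the sketch below computes a handful of first-order intensity features within a tumor mask. Dedicated toolkits compute hundreds of standardized features; the volume and mask here are random placeholders.

```python
# A minimal sketch of first-order radiomic feature extraction: summary
# statistics of voxel intensities within a tumor mask. The image and mask
# below are synthetic stand-ins for a real scan and segmentation.
import numpy as np
from scipy import stats

image = np.random.normal(100, 20, size=(64, 64, 64))   # synthetic intensity volume
mask = np.zeros_like(image, dtype=bool)
mask[20:40, 20:40, 20:40] = True                        # synthetic tumor ROI

voxels = image[mask]
features = {
    "mean": voxels.mean(),
    "std": voxels.std(),
    "skewness": stats.skew(voxels),
    "kurtosis": stats.kurtosis(voxels),
    "p10": np.percentile(voxels, 10),
    "p90": np.percentile(voxels, 90),
}
print(features)
```

Features like these, along with shape and texture descriptors, are what downstream radiomic models use as quantitative biomarkers.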
42
Moassefi M, Faghani S, Khosravi B, Rouzrokh P, Erickson BJ. Artificial Intelligence in Radiology: Overview of Application Types, Design, and Challenges. Semin Roentgenol 2023; 58:170-177. [PMID: 37087137 DOI: 10.1053/j.ro.2023.01.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 01/16/2023] [Accepted: 01/18/2023] [Indexed: 02/17/2023]
43
Rouzrokh P, Khosravi B, Vahdati S, Moassefi M, Faghani S, Mahmoudi E, Chalian H, Erickson BJ. Machine Learning in Cardiovascular Imaging: A Scoping Review of Published Literature. CURRENT RADIOLOGY REPORTS 2022; 11:34-45. [PMID: 36531124 PMCID: PMC9742664 DOI: 10.1007/s40134-022-00407-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/17/2022] [Indexed: 12/14/2022]
Abstract
Purpose of Review In this study, we planned and carried out a scoping review of the literature to learn how machine learning (ML) has been investigated in cardiovascular imaging (CVI). Recent Findings During our search, we found numerous studies that developed new ML models or utilized existing ones for segmentation, classification, object detection, generation, and regression applications involving cardiovascular imaging data. We first quantitatively investigated study characteristics, data handling, model development, and performance evaluation across all included studies. We then supplemented these findings with a qualitative synthesis to highlight common themes in the studied literature and provided recommendations to pave the way for upcoming research. Summary ML is a subfield of artificial intelligence (AI) that enables computers to learn human-like decision-making from data. Due to its novel applications, ML is gaining increasing attention from researchers in the healthcare industry. Cardiovascular imaging is an active area of research in medical imaging with substantial room for incorporating new technologies like ML. Supplementary Information The online version contains supplementary material available at 10.1007/s40134-022-00407-8.
Affiliation(s)
- Pouria Rouzrokh: Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA; Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Bardia Khosravi: Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA; Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Sanaz Vahdati: Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA; Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Mana Moassefi: Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA; Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Shahriar Faghani: Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA; Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Elham Mahmoudi: Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA; Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Hamid Chalian: Department of Radiology, Cardiothoracic Imaging, University of Washington, Seattle, WA, USA
- Bradley J. Erickson: Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA; Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA

44
Khosravi B, Rouzrokh P, Faghani S, Moassefi M, Vahdati S, Mahmoudi E, Chalian H, Erickson BJ. Machine Learning and Deep Learning in Cardiothoracic Imaging: A Scoping Review. Diagnostics (Basel) 2022; 12:2512. [PMID: 36292201 PMCID: PMC9600598 DOI: 10.3390/diagnostics12102512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 10/14/2022] [Accepted: 10/15/2022] [Indexed: 01/17/2023] Open
Abstract
Machine-learning (ML) and deep-learning (DL) algorithms belong to a group of modeling algorithms that learn the hidden patterns in data through a training process, enabling them to extract complex information from input data. In the past decade, these algorithms have been increasingly used for image processing, specifically in the medical domain. Cardiothoracic imaging was an early adopter of ML/DL research, and the COVID-19 pandemic brought further research focus to the feasibility and applications of ML/DL in cardiothoracic imaging. In this scoping review, we systematically searched available peer-reviewed medical literature on cardiothoracic imaging and quantitatively extracted key data elements to obtain a broad picture of how ML/DL have been used in this rapidly evolving field. Throughout this report, we provide insights on different applications of ML/DL and some nuances pertaining to this specific field of research. Finally, we provide general suggestions on how researchers can move their work beyond proof-of-concept and toward clinical adoption.
Affiliation(s)
- Bardia Khosravi: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA; Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN 55905, USA
- Pouria Rouzrokh: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA; Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN 55905, USA
- Shahriar Faghani: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Mana Moassefi: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Sanaz Vahdati: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Elham Mahmoudi: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Hamid Chalian: Department of Radiology, Cardiothoracic Imaging, University of Washington, Seattle, WA 98195, USA
- Bradley J. Erickson: Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA

45
Kahn CE. Hitting the Mark: Reducing Bias in AI Systems. Radiol Artif Intell 2022; 4:e220171. [PMID: 36204534 PMCID: PMC9530777 DOI: 10.1148/ryai.220171] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 08/15/2022] [Indexed: 05/24/2023]
Affiliation(s)
- Charles E. Kahn: Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104