1
Wahid KA, Dede C, El-Habashy DM, Kamel S, Rooney MK, Khamis Y, Abdelaal MRA, Ahmed S, Corrigan KL, Chang E, Dudzinski SO, Salzillo TC, McDonald BA, Mulder SL, McCullum L, Alakayleh Q, Sjogreen C, He R, Mohamed ASR, Lai SY, Christodouleas JP, Schaefer AJ, Naser MA, Fuller CD. Overview of the Head and Neck Tumor Segmentation for Magnetic Resonance Guided Applications (HNTS-MRG) 2024 Challenge. Head and Neck Tumor Segmentation for MR-Guided Applications: First MICCAI Challenge, HNTS-MRG 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 17, 2024, Proceedings 2025; 15273:1-35. PMID: 40115167; PMCID: PMC11925392; DOI: 10.1007/978-3-031-83274-1_1.
Abstract
Magnetic resonance (MR)-guided radiation therapy (RT) is enhancing head and neck cancer (HNC) treatment through superior soft tissue contrast and longitudinal imaging capabilities. However, manual tumor segmentation remains a significant challenge, spurring interest in artificial intelligence (AI)-driven automation. To accelerate innovation in this field, we present the Head and Neck Tumor Segmentation for MR-Guided Applications (HNTS-MRG) 2024 Challenge, a satellite event of the 27th International Conference on Medical Image Computing and Computer Assisted Intervention. This challenge addresses the scarcity of large, publicly available AI-ready adaptive RT datasets in HNC and explores the potential of incorporating multi-timepoint data to enhance RT auto-segmentation performance. Participants tackled two HNC segmentation tasks: automatic delineation of primary gross tumor volume (GTVp) and gross metastatic regional lymph nodes (GTVn) on pre-RT (Task 1) and mid-RT (Task 2) T2-weighted scans. The challenge provided 150 HNC cases for training and 50 for final testing, hosted on grand-challenge.org using a Docker submission framework. In total, 19 independent teams from across the world qualified by submitting both their algorithms and corresponding papers, resulting in 18 submissions for Task 1 and 15 submissions for Task 2. Evaluation using the mean aggregated Dice Similarity Coefficient showed top-performing AI methods achieved scores of 0.825 in Task 1 and 0.733 in Task 2. These results surpassed clinician interobserver variability benchmarks, marking significant strides in automated tumor segmentation for MR-guided RT applications in HNC.
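Illustrative note: the ranking metric named above, the mean aggregated Dice Similarity Coefficient, pools overlap across the whole test cohort rather than averaging per-case Dice. The sketch below shows one common aggregated-Dice formulation under that assumption; the function name, toy masks, and handling of the empty-cohort case are illustrative, and the challenge's exact definition is specified in the overview paper itself.

```python
import numpy as np

def aggregated_dice(preds, refs):
    """One common aggregated Dice: pool intersections and volumes across all
    cases of a cohort before taking the ratio, which avoids per-case
    division-by-zero when a structure is absent. Inputs are lists of binary
    numpy arrays, one prediction/reference pair per case."""
    inter = sum(np.logical_and(p, r).sum() for p, r in zip(preds, refs))
    total = sum(p.sum() + r.sum() for p, r in zip(preds, refs))
    return 2.0 * inter / total if total > 0 else 1.0

# Toy example: two hypothetical cases with small 3D binary masks.
rng = np.random.default_rng(0)
preds = [rng.integers(0, 2, (4, 4, 4)) for _ in range(2)]
refs = [rng.integers(0, 2, (4, 4, 4)) for _ in range(2)]
print(f"DSCagg = {aggregated_dice(preds, refs):.3f}")
```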
Affiliation(s)
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Cem Dede
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Dina M El-Habashy
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Transitional Year Program, Corewell Health William Beaumont, Royal Oak, MI, USA
- Serageldin Kamel
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Michael K Rooney
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Yomna Khamis
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD, USA
- Department of Clinical Oncology and Nuclear Medicine, Faculty of Medicine, Alexandria University, Alexandria, Egypt
- Moamen R A Abdelaal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Kelsey L Corrigan
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Enoch Chang
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Stephanie O Dudzinski
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Travis C Salzillo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Brigid A McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Samuel L Mulder
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Lucas McCullum
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- UT MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Qusai Alakayleh
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Carlos Sjogreen
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Radiation Oncology, Baylor College of Medicine, Houston, TX, USA
- Stephen Y Lai
- Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Andrew J Schaefer
- Department of Computational Applied Mathematics and Operations Research, Rice University, Houston, TX, USA
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
2
Armato SG, Drukker K, Hadjiiski L, Wu CC, Kalpathy-Cramer J, Shih G, Giger ML, Baughan N, Bearce B, Flanders AE, Ball RL, Myers KJ, Whitney HM, MIDRC Grand Challenge Working Group. MIDRC mRALE Mastermind Grand Challenge: AI to predict COVID severity on chest radiographs. J Med Imaging (Bellingham) 2025; 12:024505. PMID: 40276098; PMCID: PMC12014941; DOI: 10.1117/1.jmi.12.2.024505.
Abstract
Purpose: The Medical Imaging and Data Resource Center (MIDRC) mRALE Mastermind Grand Challenge fostered the development of artificial intelligence (AI) techniques for the automated assignment of mRALE (modified radiographic assessment of lung edema) scores to portable chest radiographs from patients known to have COVID-19. Approach: The challenge utilized 2079 training cases obtained from the publicly available MIDRC data commons, with validation and test cases sampled from not-yet-public MIDRC cases that were inaccessible to challenge participants. The reference standard mRALE scores for the challenge cases were established by a pool of 22 radiologist annotators. Using the MedICI challenge platform, participants submitted their trained algorithms encapsulated in Docker containers. Algorithms were evaluated by the challenge organizers on 814 test cases through two performance assessment metrics: quadratic-weighted kappa and prediction probability concordance. Results: Nine AI algorithms were submitted to the challenge for assessment against the test set cases. The algorithm that demonstrated the highest agreement with the reference standard had a quadratic-weighted kappa of 0.885 and a prediction probability concordance of 0.875. Substantial variability in mRALE scores assigned by the annotators and output by the AI algorithms was observed. Conclusions: The MIDRC mRALE Mastermind Grand Challenge revealed the potential of AI to assess COVID-19 severity from portable CXRs, demonstrating promising performance against the reference standard. The observed variability in mRALE scores highlights the challenges in standardizing severity assessment. These findings contribute to ongoing efforts to develop AI technologies for potential use in clinical practice and offer insights for the enhancement of COVID-19 severity assessment.
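Illustrative note: of the two metrics named above, quadratic-weighted kappa is the standard agreement measure for ordinal scores such as mRALE, penalizing large disagreements more heavily than small ones. A minimal scikit-learn sketch is shown below; the score vectors are hypothetical, and the challenge's full evaluation pipeline (including prediction probability concordance) is not reproduced here.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical integer mRALE scores (0-24) for a handful of cases:
# reference-standard scores vs. one algorithm's predictions.
reference = [0, 3, 8, 12, 15, 20, 24, 6, 9, 18]
predicted = [1, 3, 7, 14, 15, 19, 22, 6, 10, 17]

# Quadratic weighting suits an ordinal severity score like mRALE.
kappa = cohen_kappa_score(reference, predicted, weights="quadratic")
print(f"quadratic-weighted kappa = {kappa:.3f}")
```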
Affiliation(s)
- Samuel G. Armato
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Karen Drukker
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Lubomir Hadjiiski
- University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States
- Carol C. Wu
- University of Texas MD Anderson Cancer Center, Department of Thoracic Imaging, Houston, Texas, United States
- Jayashree Kalpathy-Cramer
- University of Colorado Anschutz Medical Campus, Department of Ophthalmology, Aurora, Colorado, United States
- George Shih
- Weill Cornell Medicine, Department of Radiology, New York, New York, United States
- Maryellen L. Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Natalie Baughan
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Benjamin Bearce
- University of Colorado Anschutz Medical Campus, Department of Ophthalmology, Aurora, Colorado, United States
- Adam E. Flanders
- Thomas Jefferson University, Department of Radiology, Philadelphia, Pennsylvania, United States
- Robyn L. Ball
- The Jackson Laboratory, Bar Harbor, Maine, United States
- Heather M. Whitney
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- the MIDRC Grand Challenge Working Group
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States
- University of Texas MD Anderson Cancer Center, Department of Thoracic Imaging, Houston, Texas, United States
- University of Colorado Anschutz Medical Campus, Department of Ophthalmology, Aurora, Colorado, United States
- Weill Cornell Medicine, Department of Radiology, New York, New York, United States
- Thomas Jefferson University, Department of Radiology, Philadelphia, Pennsylvania, United States
- The Jackson Laboratory, Bar Harbor, Maine, United States
- Puente Solutions, Phoenix, Arizona, United States
3
Ruff C, Bombach P, Roder C, Weinbrenner E, Artzner C, Zerweck L, Paulsen F, Hauser TK, Ernemann U, Gohla G. Multidisciplinary quantitative and qualitative assessment of IDH-mutant gliomas with full diagnostic deep learning image reconstruction. Eur J Radiol Open 2024; 13:100617. PMID: 39717474; PMCID: PMC11664152; DOI: 10.1016/j.ejro.2024.100617.
Abstract
Rationale and Objectives: Diagnostic accuracy and therapeutic decision-making for IDH-mutant gliomas in tumor board reviews are based on MRI and multidisciplinary interactions. This study explores the feasibility of deep learning-based reconstruction (DLR) in MRI for IDH-mutant gliomas. Materials and Methods: The research utilizes a multidisciplinary approach, engaging neuroradiologists, neurosurgeons, neuro-oncologists, and radiotherapists to evaluate qualitative aspects of DLR and conventional reconstructed (CR) sequences. Furthermore, quantitative image quality and tumor volumes according to Response Assessment in Neuro-Oncology (RANO) 2.0 standards were assessed. Results: All DLR sequences consistently outperformed CR sequences (median of 4 for all) in qualitative image quality across all raters (p < 0.001 for all) and revealed higher SNR and CNR values (p < 0.001 for all). Preference for DLR over CR sequences was overwhelming, with ratings of 84% from the neuroradiologist, 100% from the neurosurgeon, 92% from the neuro-oncologist, and 84% from the radiation oncologist. The RANO 2.0-compliant measurements showed no significant difference between the CR and DLR sequences (p = 0.142). Conclusion: This study demonstrates the clinical feasibility of DLR in MR imaging of IDH-mutant gliomas, with average time savings of 29.6% and image quality non-inferior to CR. DLR sequences received strong multidisciplinary preference, underscoring their potential for enhancing neuro-oncological decision-making and suitability for clinical implementation.
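Illustrative note: the SNR and CNR comparisons above are ROI-based measurements. The sketch below shows one common definition (mean signal over background-noise standard deviation, and difference of tissue means over that noise); the study's exact measurement protocol may differ, and the ROI samples here are synthetic.

```python
import numpy as np

def snr(roi_signal, roi_noise):
    """ROI-based SNR: mean signal intensity divided by the standard
    deviation measured in a background/noise region."""
    return roi_signal.mean() / roi_noise.std(ddof=1)

def cnr(roi_a, roi_b, roi_noise):
    """ROI-based CNR: absolute difference of two tissue means divided by
    the noise standard deviation."""
    return abs(roi_a.mean() - roi_b.mean()) / roi_noise.std(ddof=1)

# Hypothetical ROI voxel samples from an MR slice (arbitrary units).
rng = np.random.default_rng(42)
tumor = rng.normal(300, 20, 500)         # tumor ROI voxels
white_matter = rng.normal(200, 15, 500)  # normal-appearing tissue ROI
background = rng.normal(0, 10, 500)      # air/background ROI

print(f"SNR(tumor) = {snr(tumor, background):.1f}")
print(f"CNR(tumor vs WM) = {cnr(tumor, white_matter, background):.1f}")
```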
Affiliation(s)
- Christer Ruff
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, Tuebingen D-72076, Germany
- Paula Bombach
- Department of Neurology and Interdisciplinary Neuro-Oncology, University Hospital Tuebingen, Tuebingen D-72076, Germany
- Hertie Institute for Clinical Brain Research, Eberhard Karls University Tuebingen Center of Neuro-Oncology, Tuebingen D-72076, Germany
- Center for Neuro-Oncology, Comprehensive Cancer Center Tuebingen-Stuttgart, University Hospital of Tuebingen, Eberhard Karls University of Tuebingen, Tuebingen D-72070, Germany
- Constantin Roder
- Center for Neuro-Oncology, Comprehensive Cancer Center Tuebingen-Stuttgart, University Hospital of Tuebingen, Eberhard Karls University of Tuebingen, Tuebingen D-72070, Germany
- Department of Neurosurgery, University of Tuebingen, Tuebingen D-72076, Germany
- Eliane Weinbrenner
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, Tuebingen D-72076, Germany
- Christoph Artzner
- Department of Diagnostic and Interventional Radiology, Diakonie Klinikum Stuttgart, Stuttgart D-70176, Germany
- Leonie Zerweck
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, Tuebingen D-72076, Germany
- Frank Paulsen
- Department of Radiation Oncology, University Hospital Tuebingen, Tuebingen D-72076, Germany
- Till-Karsten Hauser
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, Tuebingen D-72076, Germany
- Ulrike Ernemann
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, Tuebingen D-72076, Germany
- Georg Gohla
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, Tuebingen D-72076, Germany
4
Wahid KA, Dede C, El-Habashy DM, Kamel S, Rooney MK, Khamis Y, Abdelaal MRA, Ahmed S, Corrigan KL, Chang E, Dudzinski SO, Salzillo TC, McDonald BA, Mulder SL, McCullum L, Alakayleh Q, Sjogreen C, He R, Mohamed AS, Lai SY, Christodouleas JP, Schaefer AJ, Naser MA, Fuller CD. Overview of the Head and Neck Tumor Segmentation for Magnetic Resonance Guided Applications (HNTS-MRG) 2024 Challenge. arXiv 2024: arXiv:2411.18585v2. PMID: 39650598; PMCID: PMC11623708.
Abstract
Magnetic resonance (MR)-guided radiation therapy (RT) is enhancing head and neck cancer (HNC) treatment through superior soft tissue contrast and longitudinal imaging capabilities. However, manual tumor segmentation remains a significant challenge, spurring interest in artificial intelligence (AI)-driven automation. To accelerate innovation in this field, we present the Head and Neck Tumor Segmentation for MR-Guided Applications (HNTS-MRG) 2024 Challenge, a satellite event of the 27th International Conference on Medical Image Computing and Computer Assisted Intervention. This challenge addresses the scarcity of large, publicly available AI-ready adaptive RT datasets in HNC and explores the potential of incorporating multi-timepoint data to enhance RT auto-segmentation performance. Participants tackled two HNC segmentation tasks: automatic delineation of primary gross tumor volume (GTVp) and gross metastatic regional lymph nodes (GTVn) on pre-RT (Task 1) and mid-RT (Task 2) T2-weighted scans. The challenge provided 150 HNC cases for training and 50 for testing, hosted on grand-challenge.org using a Docker submission framework. In total, 19 independent teams from across the world qualified by submitting both their algorithms and corresponding papers, resulting in 18 submissions for Task 1 and 15 submissions for Task 2. Evaluation using the mean aggregated Dice Similarity Coefficient showed top-performing AI methods achieved scores of 0.825 in Task 1 and 0.733 in Task 2. These results surpassed clinician interobserver variability benchmarks, marking significant strides in automated tumor segmentation for MR-guided RT applications in HNC.
Affiliation(s)
- Kareem A. Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Cem Dede
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Dina M. El-Habashy
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Transitional Year Program, Corewell Health William Beaumont, Royal Oak, MI, USA
- Serageldin Kamel
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Michael K. Rooney
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Yomna Khamis
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD, USA
- Department of Clinical Oncology and Nuclear Medicine, Faculty of Medicine, Alexandria University, Alexandria, Egypt
- Moamen R. A. Abdelaal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kelsey L. Corrigan
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Enoch Chang
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Stephanie O. Dudzinski
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Travis C. Salzillo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Brigid A. McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Samuel L. Mulder
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Lucas McCullum
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- UT MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, Texas, USA
- Qusai Alakayleh
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Carlos Sjogreen
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Abdallah S.R. Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Radiation Oncology, Baylor College of Medicine, Houston, TX, USA
- Stephen Y. Lai
- Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Andrew J. Schaefer
- Department of Computational Applied Mathematics and Operations Research, Rice University, Houston, TX, USA
- Mohamed A. Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
5
Avanzo M, Stancanello J, Pirrone G, Drigo A, Retico A. The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning. Cancers (Basel) 2024; 16:3702. PMID: 39518140; PMCID: PMC11545079; DOI: 10.3390/cancers16213702.
Abstract
Artificial intelligence (AI), the wide spectrum of technologies aiming to give machines or computers the ability to perform human-like cognitive functions, began in the 1940s with the first abstract models of intelligent machines. Soon after, in the 1950s and 1960s, machine learning algorithms such as neural networks and decision trees ignited significant enthusiasm. More recent advancements include the refinement of learning algorithms, the development of convolutional neural networks to efficiently analyze images, and methods to synthesize new images. This renewed enthusiasm was also due to the increase in computational power with graphical processing units and the availability of large digital databases to be mined by neural networks. AI soon began to be applied in medicine, first through expert systems designed to support the clinician's decision and later with neural networks for the detection, classification, or segmentation of malignant lesions in medical images. A recent prospective clinical trial demonstrated the non-inferiority of AI alone compared with a double reading by two radiologists on screening mammography. Natural language processing, recurrent neural networks, transformers, and generative models have both improved the capabilities of making an automated reading of medical images and moved AI to new domains, including the text analysis of electronic health records, image self-labeling, and self-reporting. The availability of open-source and free libraries, as well as powerful computing resources, has greatly facilitated the adoption of deep learning by researchers and clinicians. Key concerns surrounding AI in healthcare include the need for clinical trials to demonstrate efficacy, the perception of AI tools as 'black boxes' that require greater interpretability and explainability, and ethical issues related to ensuring fairness and trustworthiness in AI systems. Thanks to its versatility and impressive results, AI is one of the most promising resources for frontier research and applications in medicine, in particular for oncological applications.
Affiliation(s)
- Michele Avanzo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Giovanni Pirrone
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Annalisa Drigo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
6
Jiang B, Ozkara BB, Zhu G, Boothroyd D, Allen JW, Barboriak DP, Chang P, Chan C, Chaudhari R, Chen H, Chukus A, Ding V, Douglas D, Filippi CG, Flanders AE, Godwin R, Hashmi S, Hess C, Hsu K, Lui YW, Maldjian JA, Michel P, Nalawade SS, Patel V, Raghavan P, Sair HI, Tanabe J, Welker K, Whitlow CT, Zaharchuk G, Wintermark M. Assessing the Performance of Artificial Intelligence Models: Insights from the American Society of Functional Neuroradiology Artificial Intelligence Competition. AJNR Am J Neuroradiol 2024; 45:1276-1283. PMID: 38663992; PMCID: PMC11392353; DOI: 10.3174/ajnr.a8317.
Abstract
BACKGROUND AND PURPOSE: Artificial intelligence models in radiology are frequently developed and validated using data sets from a single institution and are rarely tested on independent, external data sets, raising questions about their generalizability and applicability in clinical practice. The American Society of Functional Neuroradiology (ASFNR) organized a multicenter artificial intelligence competition to evaluate the proficiency of developed models in identifying various pathologies on NCCT, assessing age-based normality and estimating medical urgency. MATERIALS AND METHODS: In total, 1201 anonymized, full-head NCCT clinical scans from 5 institutions were pooled to form the data set. The data set encompassed studies with normal findings as well as those with pathologies, including acute ischemic stroke, intracranial hemorrhage, traumatic brain injury, and mass effect (detection of these, task 1). NCCTs were also assessed to determine if findings were consistent with expected brain changes for the patient's age (task 2: age-based normality assessment) and to identify any abnormalities requiring immediate medical attention (task 3: evaluation of findings for urgent intervention). Five neuroradiologists labeled each NCCT, with consensus interpretations serving as the ground truth. The competition was announced online, inviting academic institutions and companies. Independent central analysis assessed the performance of each model. Accuracy, sensitivity, specificity, positive and negative predictive values, and receiver operating characteristic (ROC) curves were generated for each artificial intelligence model, along with the area under the ROC curve. RESULTS: Four teams processed 1177 studies. The median age of patients was 62 years, with an interquartile range of 33 years. Nineteen teams from various academic institutions registered for the competition. Of these, 4 teams submitted their final results. No commercial entities participated in the competition. For task 1, areas under the ROC curve ranged from 0.49 to 0.59. For task 2, two teams completed the task with area under the ROC curve values of 0.57 and 0.52. For task 3, teams had little-to-no agreement with the ground truth. CONCLUSIONS: To assess the performance of artificial intelligence models in real-world clinical scenarios, we analyzed their performance in the ASFNR Artificial Intelligence Competition. The first ASFNR Competition underscored the gap between expectation and reality; and the models largely fell short in their assessments. As the integration of artificial intelligence tools into clinical workflows increases, neuroradiologists must carefully recognize the capabilities, constraints, and consistency of these technologies. Before institutions adopt these algorithms, thorough validation is essential to ensure acceptable levels of performance in clinical settings.
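Illustrative note: the per-task areas under the ROC curve reported above are computed from a model's continuous scores against the consensus labels. A minimal scikit-learn sketch for one binary finding is given below; the labels and scores are hypothetical and do not reflect any competition data.

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical binary ground truth (e.g., "intracranial hemorrhage present")
# and one model's continuous output scores for a handful of NCCT studies.
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_score = [0.10, 0.35, 0.40, 0.80, 0.30, 0.55, 0.25, 0.70, 0.45, 0.60]

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc:.2f}")
print("operating points (FPR, TPR):", list(zip(fpr.round(2), tpr.round(2))))
```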
Affiliation(s)
- Bin Jiang
- From the Department of Radiology (B.J., C.C., R.C., A.C., D.D., S.H., G.Zaharchuk), Neuroradiology Division, Stanford University, Stanford, California
- Burak B Ozkara
- Department of Neuroradiology (B.B.O., H.C., M.W.), MD Anderson Cancer Center, Houston, Texas
- Guangming Zhu
- Department of Neurology (G.Zhu), The University of Arizona, Tucson, Arizona
- Derek Boothroyd
- Department of Medicine (D.B., V.D.), Stanford University School of Medicine, Stanford, California
- Jason W Allen
- Department of Radiology and Imaging Sciences (J.W.A.), Indiana University School of Medicine, Indianapolis, Indiana
- Daniel P Barboriak
- Department of Radiology (D.P.B.), Duke University Medical Center, Durham, North Carolina
- Peter Chang
- Department of Radiological Sciences (P.C.), University of California, Irvine, Irvine, California
- Cynthia Chan
- From the Department of Radiology (B.J., C.C., R.C., A.C., D.D., S.H., G.Zaharchuk), Neuroradiology Division, Stanford University, Stanford, California
- Ruchir Chaudhari
- From the Department of Radiology (B.J., C.C., R.C., A.C., D.D., S.H., G.Zaharchuk), Neuroradiology Division, Stanford University, Stanford, California
- Sutter Imaging (R.C.), Sutter Health, Sacramento, California
- Hui Chen
- Department of Neuroradiology (B.B.O., H.C., M.W.), MD Anderson Cancer Center, Houston, Texas
- Anjeza Chukus
- From the Department of Radiology (B.J., C.C., R.C., A.C., D.D., S.H., G.Zaharchuk), Neuroradiology Division, Stanford University, Stanford, California
- Victoria Ding
- Department of Medicine (D.B., V.D.), Stanford University School of Medicine, Stanford, California
- David Douglas
- From the Department of Radiology (B.J., C.C., R.C., A.C., D.D., S.H., G.Zaharchuk), Neuroradiology Division, Stanford University, Stanford, California
- Adam E Flanders
- Department of Radiology (A.E.F.), Thomas Jefferson University, Philadelphia, Pennsylvania
- Ryan Godwin
- Department of Radiology (R.G.), University of Alabama at Birmingham, Birmingham, Alabama
- Syed Hashmi
- From the Department of Radiology (B.J., C.C., R.C., A.C., D.D., S.H., G.Zaharchuk), Neuroradiology Division, Stanford University, Stanford, California
- Christopher Hess
- Department of Radiology and Biomedical Imaging (C.H.), University of California, San Francisco, San Francisco, California
- Kevin Hsu
- Department of Radiology (K.H., Y.W.L.), New York University Grossman School of Medicine, New York, New York
- Yvonne W Lui
- Department of Radiology (K.H., Y.W.L.), New York University Grossman School of Medicine, New York, New York
- Joseph A Maldjian
- Department of Radiology (J.A.M., S.S.N.), University of Texas Southwestern Medical Center, Dallas, Texas
- Patrik Michel
- Department of Clinical Neurosciences (P.M.), Lausanne University Hospital, Lausanne, Switzerland
- Sahil S Nalawade
- Department of Radiology (J.A.M., S.S.N.), University of Texas Southwestern Medical Center, Dallas, Texas
- Vishal Patel
- Department of Radiology (V.P.), Mayo Clinic, Jacksonville, Florida
- Prashant Raghavan
- Department of Diagnostic Radiology and Nuclear Medicine (P.R.), University of Maryland School of Medicine, Baltimore, Maryland
- Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science (H.I.S.), Johns Hopkins University, Baltimore, Maryland
- The Malone Center for Engineering in Healthcare (H.I.S.), Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland
- Jody Tanabe
- Department of Radiology (J.T.), University of Colorado, Aurora, Colorado
- Kirk Welker
- Department of Radiology (K.W.), Mayo Clinic, Rochester, Minnesota
- Christopher T Whitlow
- Department of Radiology (C.T.W.), Wake Forest University School of Medicine, Winston-Salem, North Carolina
- Greg Zaharchuk
- From the Department of Radiology (B.J., C.C., R.C., A.C., D.D., S.H., G.Zaharchuk), Neuroradiology Division, Stanford University, Stanford, California
- Max Wintermark
- Department of Neuroradiology (B.B.O., H.C., M.W.), MD Anderson Cancer Center, Houston, Texas
7
Gohla G, Hauser TK, Bombach P, Feucht D, Estler A, Bornemann A, Zerweck L, Weinbrenner E, Ernemann U, Ruff C. Speeding Up and Improving Image Quality in Glioblastoma MRI Protocol by Deep Learning Image Reconstruction. Cancers (Basel) 2024; 16:1827. PMID: 38791906; PMCID: PMC11119715; DOI: 10.3390/cancers16101827.
Abstract
A fully diagnostic MRI glioma protocol is key to monitoring therapy assessment but is time-consuming and especially challenging in critically ill and uncooperative patients. Artificial intelligence demonstrated promise in reducing scan time and improving image quality simultaneously. The purpose of this study was to investigate the diagnostic performance, the impact on acquisition acceleration, and the image quality of a deep learning optimized glioma protocol of the brain. Thirty-three patients with histologically confirmed glioblastoma underwent standardized brain tumor imaging according to the glioma consensus recommendations on a 3-Tesla MRI scanner. Conventional and deep learning-reconstructed (DLR) fluid-attenuated inversion recovery, and T2- and T1-weighted contrast-enhanced Turbo spin echo images with an improved in-plane resolution, i.e., super-resolution, were acquired. Two experienced neuroradiologists independently evaluated the image datasets for subjective image quality, diagnostic confidence, tumor conspicuity, noise levels, artifacts, and sharpness. In addition, the tumor volume was measured in the image datasets according to Response Assessment in Neuro-Oncology (RANO) 2.0, as well as compared between both imaging techniques, and various clinical-pathological parameters were determined. The average time saving of DLR sequences was 30% per MRI sequence. Simultaneously, DLR sequences showed superior overall image quality (all p < 0.001), improved tumor conspicuity and image sharpness (all p < 0.001, respectively), and less image noise (all p < 0.001), while maintaining diagnostic confidence (all p > 0.05), compared to conventional images. Regarding RANO 2.0, the volume of non-enhancing non-target lesions (p = 0.963), enhancing target lesions (p = 0.993), and enhancing non-target lesions (p = 0.951) did not differ between reconstruction types. The feasibility of the deep learning-optimized glioma protocol was demonstrated with a 30% reduction in acquisition time on average and an increased in-plane resolution. The evaluated DLR sequences improved subjective image quality and maintained diagnostic accuracy in tumor detection and tumor classification according to RANO 2.0.
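Illustrative note: the reader-score comparisons above are paired, since the same patients are rated under both conventional and DLR reconstructions. A Wilcoxon signed-rank test is one standard choice for such ordinal paired data; the sketch below uses hypothetical ratings and does not claim to reproduce the study's statistical workflow.

```python
from scipy.stats import wilcoxon

# Hypothetical paired 5-point image-quality ratings for the same patients,
# once for conventional and once for DLR reconstructions.
conventional = [3, 3, 4, 3, 2, 3, 4, 3, 3, 2, 3, 4]
dlr = [4, 5, 5, 4, 3, 4, 5, 4, 4, 3, 4, 5]

# Paired, non-parametric comparison; suitable for ordinal rating scales.
stat, p_value = wilcoxon(conventional, dlr)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")
```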
Affiliation(s)
- Georg Gohla
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tübingen, 72076 Tübingen, Germany
- Till-Karsten Hauser
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tübingen, 72076 Tübingen, Germany
- Paula Bombach
- Department of Neurology and Interdisciplinary Neuro-Oncology, University Hospital Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Hertie Institute for Clinical Brain Research, Eberhard Karls University Tübingen Center of Neuro-Oncology, Ottfried-Müller-Straße 27, 72076 Tübingen, Germany
- Center for Neuro-Oncology, Comprehensive Cancer Center Tübingen-Stuttgart, University Hospital of Tuebingen, Eberhard Karls University of Tübingen, Herrenberger Straße 23, 72070 Tübingen, Germany
- Daniel Feucht
- Department of Neurosurgery, University Hospital Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Arne Estler
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tübingen, 72076 Tübingen, Germany
- Antje Bornemann
- Department of Neuropathology, Institute of Pathology and Neuropathology, University Hospital Tübingen, Calwerstraße 3, 72076 Tübingen, Germany
- Leonie Zerweck
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tübingen, 72076 Tübingen, Germany
- Eliane Weinbrenner
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tübingen, 72076 Tübingen, Germany
- Ulrike Ernemann
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tübingen, 72076 Tübingen, Germany
- Christer Ruff
- Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tübingen, 72076 Tübingen, Germany
8
Dadon Z, Rav Acha M, Orlev A, Carasso S, Glikson M, Gottlieb S, Alpert EA. Artificial Intelligence-Based Left Ventricular Ejection Fraction by Medical Students for Mortality and Readmission Prediction. Diagnostics (Basel) 2024; 14:767. PMID: 38611680; PMCID: PMC11011323; DOI: 10.3390/diagnostics14070767.
Abstract
INTRODUCTION: Point-of-care ultrasound has become a universal practice, employed by physicians across various disciplines, contributing to diagnostic processes and decision-making. AIM: To assess the association between reduced (<50%) left-ventricular ejection fraction (LVEF), as assessed prospectively by medical students using an artificial intelligence (AI) tool on point-of-care ultrasound, and a 1-year primary composite outcome of mortality and readmission for cardiovascular-related causes. METHODS: Eight trained medical students used a hand-held ultrasound device (HUD) equipped with an AI-based tool for automatic evaluation of the LVEF of non-selected patients hospitalized in a cardiology department from March 2019 through March 2020. RESULTS: The study included 82 patients (72 males aged 58.5 ± 16.8 years), of whom 34 (41.5%) were diagnosed with AI-based reduced LVEF. The rates of the composite outcome were higher among patients with reduced systolic function compared to those with preserved LVEF (41.2% vs. 16.7%, p = 0.014). Adjusting for pertinent variables, reduced LVEF independently predicted the composite outcome (HR 2.717, 95% CI 1.083-6.817, p = 0.033). As compared to those with LVEF ≥ 50%, patients with reduced LVEF had a longer length of stay and higher rates of the secondary composite outcome, including in-hospital death, advanced ventilatory support, shock, and acute decompensated heart failure. CONCLUSION: AI-based assessment of reduced systolic function in the hands of medical students independently predicted 1-year mortality and cardiovascular-related readmission and was associated with unfavorable in-hospital outcomes. AI utilization by novice users may be an important tool for risk stratification for hospitalized patients.
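Illustrative note: the adjusted hazard ratio quoted above (HR 2.717, 95% CI 1.083-6.817) is the kind of estimate a Cox proportional-hazards model produces. The sketch below uses the lifelines library with a small hypothetical dataset and illustrative covariates; it is not the study's actual model, data, or covariate set.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient data: follow-up time in days, whether the
# composite outcome occurred, the AI-based reduced-LVEF flag, and two
# illustrative adjustment covariates.
df = pd.DataFrame({
    "followup_days":   [365, 120, 365, 300, 90, 365, 200, 365, 45, 330],
    "composite_event": [0,   1,   0,   1,   1,  0,   0,   1,   1,  0],
    "reduced_lvef":    [0,   1,   0,   0,   1,  1,   1,   0,   1,  0],
    "age":             [55,  70,  48,  66,  74, 52,  68,  59,  80, 61],
    "male":            [1,   1,   0,   1,   1,  1,   0,   1,   1,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_days", event_col="composite_event")
# exp(coef) for `reduced_lvef` is the adjusted hazard ratio with its 95% CI.
cph.print_summary()
```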
Affiliation(s)
- Ziv Dadon
- Jesselson Integrated Heart Center, Eisenberg R&D Authority, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem 9112102, Israel
- Moshe Rav Acha
- Jesselson Integrated Heart Center, Eisenberg R&D Authority, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem 9112102, Israel
- Amir Orlev
- Jesselson Integrated Heart Center, Eisenberg R&D Authority, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem 9112102, Israel
- Shemy Carasso
- Jesselson Integrated Heart Center, Eisenberg R&D Authority, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Azrieli Faculty of Medicine, Bar-Ilan University, Safed 1311502, Israel
- Michael Glikson
- Jesselson Integrated Heart Center, Eisenberg R&D Authority, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem 9112102, Israel
- Shmuel Gottlieb
- Jesselson Integrated Heart Center, Eisenberg R&D Authority, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Evan Avraham Alpert
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem 9112102, Israel
- Department of Emergency Medicine, Hadassah Medical Center - Ein Kerem, Jerusalem 9112001, Israel
9
Naqa IE, Drukker K. AI in imaging and therapy: innovations, ethics, and impact - introductory editorial. Br J Radiol 2023; 96:20239004. PMID: 38011226; PMCID: PMC10546442; DOI: 10.1259/bjr.20239004.