1
Ratiphunpong P, Inmutto N, Angkurawaranon S, Wantanajittikul K, Suwannasak A, Yarach U. A Pilot Study on Deep Learning With Simplified Intravoxel Incoherent Motion Diffusion-Weighted MRI Parameters for Differentiating Hepatocellular Carcinoma From Other Common Liver Masses. Top Magn Reson Imaging 2025; 34:e0316. [PMID: 40249154; DOI: 10.1097/rmr.0000000000000316]
Abstract
OBJECTIVES To develop and evaluate a deep learning technique for the differentiation of hepatocellular carcinoma (HCC) using "simplified intravoxel incoherent motion (IVIM) parameters" derived from only 3 b-value images. MATERIALS AND METHODS Retrospective magnetic resonance imaging data from 98 patients were collected (68 men, 30 women; mean age 59 ± 14 years), including T2-weighted imaging with fat suppression, in-phase, out-of-phase, and diffusion-weighted imaging (b = 0, 100, 800 s/mm2). Ninety percent of the data were used for stratified 10-fold cross-validation. After data preprocessing, the diffusion-weighted images were used to compute simplified IVIM and apparent diffusion coefficient (ADC) maps. A 17-layer 3D convolutional neural network (3D-CNN) was implemented, and its input channels were modified for the different input-image strategies. RESULTS The 3D-CNN with IVIM maps (ADC, f, and D*) demonstrated superior performance compared with the other strategies, achieving an accuracy of 83.25 ± 6.24% and an area under the receiver operating characteristic curve of 92.70 ± 8.24%, significantly surpassing the 50% baseline (P < 0.05) and outperforming the other strategies on all evaluation metrics. This underscores the effectiveness of simplified IVIM parameters combined with a 3D-CNN architecture for enhancing HCC differentiation accuracy. CONCLUSIONS Simplified IVIM parameters derived from 3 b-values, when integrated with a 3D-CNN architecture, offer a robust framework for HCC differentiation.
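A minimal sketch of the segmented ("simplified") IVIM computation from the three acquired b-values, assuming the standard two-compartment IVIM signal model; the paper's exact formulas (including how D* is obtained) are not reproduced here, and the array names are illustrative:

```python
import numpy as np

def simplified_ivim(s0, s100, s800, eps=1e-6):
    """Segmented (simplified) IVIM estimates from b = 0, 100, 800 s/mm^2 images.

    Returns ADC, D (tissue diffusion), and f (perfusion fraction) maps.
    """
    s0 = np.clip(np.asarray(s0, dtype=float), eps, None)
    s100 = np.clip(np.asarray(s100, dtype=float), eps, None)
    s800 = np.clip(np.asarray(s800, dtype=float), eps, None)

    # Monoexponential ADC over the full b-range (0 -> 800).
    adc = np.log(s0 / s800) / 800.0

    # Tissue diffusion D from the high-b pair (100 -> 800), where the
    # pseudo-diffusion (perfusion) signal is assumed to have fully decayed.
    d = np.log(s100 / s800) / (800.0 - 100.0)

    # Perfusion fraction f: extrapolate the high-b decay back to b = 0
    # and compare the intercept with the measured b = 0 signal.
    s0_tissue = s100 * np.exp(100.0 * d)
    f = np.clip(1.0 - s0_tissue / s0, 0.0, 1.0)
    return adc, d, f
```

With only three b-values there is no full biexponential fit: D comes from the high-b pair and f from the extrapolated intercept, which is what makes the approach "simplified"; estimating D* requires an additional closed-form assumption at the low b-value that is not reproduced here.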
Affiliation(s)
- Phimphitcha Ratiphunpong
- Department of Radiologic Technology, Faculty of Associated Medical Science, Chiang Mai University, Chiang Mai, Thailand
- Radiological Technology School, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Nakarin Inmutto
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Salita Angkurawaranon
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Kittichai Wantanajittikul
- Department of Radiologic Technology, Faculty of Associated Medical Science, Chiang Mai University, Chiang Mai, Thailand
- Atita Suwannasak
- Department of Radiologic Technology, Faculty of Associated Medical Science, Chiang Mai University, Chiang Mai, Thailand
- Uten Yarach
- Department of Radiologic Technology, Faculty of Associated Medical Science, Chiang Mai University, Chiang Mai, Thailand
2
Khosravi B, Dapamede T, Li F, Chisango Z, Bikmal A, Garg S, Owosela B, Khosravi A, Chavoshi M, Trivedi HM, Wyles CC, Purkayastha S, Erickson BJ, Gichoya JW. Role of Model Size and Prompting Strategies in Extracting Labels from Free-Text Radiology Reports with Open-Source Large Language Models. J Imaging Inform Med 2025. [PMID: 40325326; DOI: 10.1007/s10278-025-01505-7]
Abstract
Extracting accurate labels from radiology reports is essential for training medical image analysis models. Large language models (LLMs) show promise for automating this process. The purpose of this study is to evaluate how model size and prompting strategies affect label extraction accuracy and downstream performance in open-source LLMs. Three open-source LLMs (Llama-3, Phi-3 mini, and Zephyr-beta) were used to extract labels from 227,827 MIMIC-CXR radiology reports. Performance was evaluated against human annotations on 2000 MIMIC-CXR reports, and through training image classifiers for pneumothorax and rib fracture detection tested on the CANDID-PTX dataset (n = 19,237). LLM-based labeling outperformed the CheXpert labeler, with the best LLM achieving 95% sensitivity for fracture detection versus CheXpert's 51%. Larger models showed better sensitivity, while chain-of-thought prompting had variable effects. Image classifiers showed resilience to labeling noise when tested externally. The choice of test set labeling schema significantly affected reported performance: a classifier trained on Llama-3 with chain-of-thought labels achieved AUCs of 0.96 and 0.84 for pneumothorax and fracture detection, respectively, when evaluated against human annotations, compared with 0.91 and 0.73 when evaluated on CheXpert labels. Open-source LLMs effectively extract labels from radiology reports at scale. While larger pre-trained models generally perform better, the choice of model size and prompting strategy should be task-specific. Careful consideration of evaluation methods is critical for interpreting classifier performance.
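The per-report label-extraction loop described here can be sketched with an open-source instruction-tuned model via the Hugging Face transformers chat pipeline; the model ID, prompt wording, and label set below are illustrative assumptions, not the authors' exact protocol:

```python
from transformers import pipeline

# Illustrative model choice; the study compared Llama-3, Phi-3 mini, and Zephyr-beta.
generator = pipeline("text-generation", model="microsoft/Phi-3-mini-4k-instruct")

LABELS = ["pneumothorax", "rib fracture"]

def extract_labels(report: str, chain_of_thought: bool = False) -> str:
    instructions = (
        "You are labeling a chest radiograph report. For each finding "
        f"({', '.join(LABELS)}), answer present, absent, or uncertain."
    )
    if chain_of_thought:
        # Chain-of-thought variant: ask the model to justify before answering.
        instructions += " Quote the relevant report sentence before each answer."
    messages = [
        {"role": "system", "content": instructions},
        {"role": "user", "content": report},
    ]
    out = generator(messages, max_new_tokens=200, do_sample=False)
    # The pipeline returns the full chat; the last message is the model's reply.
    return out[0]["generated_text"][-1]["content"]
```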
Affiliation(s)
- Bardia Khosravi
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Theo Dapamede
- Department of Radiology, Emory University, 101 Woodruff Circle, Atlanta, GA, 30322, USA
- Frank Li
- Department of Radiology, Emory University, 101 Woodruff Circle, Atlanta, GA, 30322, USA
- Sara Garg
- Emory School of Medicine, Atlanta, GA, USA
- Amirali Khosravi
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Mohammadreza Chavoshi
- Department of Radiology, Emory University, 101 Woodruff Circle, Atlanta, GA, 30322, USA
- Hari M Trivedi
- Department of Radiology, Emory University, 101 Woodruff Circle, Atlanta, GA, 30322, USA
- Cody C Wyles
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Judy W Gichoya
- Department of Radiology, Emory University, 101 Woodruff Circle, Atlanta, GA, 30322, USA
3
Ozcan BB, Dogan BE, Xi Y, Knippa EE. Patient Perception of Artificial Intelligence Use in Interpretation of Screening Mammograms: A Survey Study. Radiol Imaging Cancer 2025; 7:e240290. [PMID: 40249272; DOI: 10.1148/rycan.240290]
Abstract
Purpose To assess patient perceptions of artificial intelligence (AI) use in the interpretation of screening mammograms. Materials and Methods In a prospective, institutional review board-approved study, all patients undergoing mammography screening at the authors' institution between February 2023 and August 2023 were offered a 29-question survey. Age, race and ethnicity, education, income level, and history of breast cancer and biopsy were collected. Univariable and multivariable logistic regression analyses were used to identify the independent factors associated with participants' acceptance of AI use. Results Of the 518 participants, the majority were between the ages of 40 and 69 years (377 of 518, 72.8%), at least college graduates (347 of 518, 67.0%), and non-Hispanic White (262 of 518, 50.6%). Participant-reported knowledge of AI was none or minimal in 76.5% (396 of 518). Stand-alone AI interpretation was accepted by 4.44% (23 of 518), whereas 71.0% (368 of 518) preferred AI to be used as a second reader. After an AI-reported abnormal screening, 88.9% (319 of 359) would request radiologist review, versus 51.3% (184 of 359) who would request AI review of a radiologist recall (P < .001). In cases of discrepancy, a higher rate of participants would undergo diagnostic examination for radiologist recalls than for AI recalls (94.2% [419 of 445] vs 92.6% [412 of 445]; P = .20). Higher education was associated with higher AI acceptance (odds ratio [OR] 2.05, 95% CI: 1.31, 3.20; P = .002). Race was associated with higher concern for bias in Hispanic versus non-Hispanic White participants (OR 3.32, 95% CI: 1.15, 9.61; P = .005) and non-Hispanic Black versus non-Hispanic White participants (OR 4.31, 95% CI: 1.50, 12.39; P = .005). Conclusion AI use as a second reader of screening mammograms was accepted by participants. Participants' race and education level were significantly associated with AI acceptance. Keywords: Breast, Mammography, Artificial Intelligence. Supplemental material is available for this article. Published under a CC BY 4.0 license.
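The univariable and multivariable logistic regression analysis described can be sketched with statsmodels; the file and column names below are hypothetical stand-ins for the survey variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey table: accept_ai (0/1) plus categorical predictors.
df = pd.read_csv("survey_responses.csv")

# Multivariable model; univariable ORs come from fitting one predictor at a time.
model = smf.logit(
    "accept_ai ~ C(education) + C(race_ethnicity) + C(age_group)", data=df
).fit()

# Exponentiate log-odds coefficients to report odds ratios with 95% CIs.
or_table = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
or_table["OR"] = np.exp(model.params)
print(or_table)
```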
Affiliation(s)
- B Bersu Ozcan
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Basak E Dogan
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Yin Xi
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
- Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Tex
- Emily E Knippa
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, MC 8896, Dallas, TX 75390-8896
4
Moassefi M, Faghani S, Colak C, Sheedy SP, Andrieu PLC, Wang SS, McPhedran RL, Flicek KT, Suman G, Takahashi H, Bookwalter CA, Burnett TL, Erickson BJ, VanBuren WM. Advancing endometriosis detection in daily practice: a deep learning-enhanced multi-sequence MRI analytical model. Abdom Radiol (NY) 2025. [PMID: 40232413; DOI: 10.1007/s00261-025-04942-8]
Abstract
BACKGROUND AND PURPOSE Endometriosis affects 5-10% of women of reproductive age. Despite its prevalence, diagnosing endometriosis through imaging remains challenging. Advances in deep learning (DL) are revolutionizing the diagnosis and management of complex medical conditions. This study aims to evaluate DL tools in enhancing the accuracy of multi-sequence MRI-based detection of endometriosis. METHODS We gathered a patient cohort from our institutional database, composed of patients with pathologically confirmed endometriosis from 2015 to 2024. We created an age-matched control group that underwent a similar MR protocol without an endometriosis diagnosis. We used sagittal fat-saturated T1-weighted (T1W FS) pre- and post-contrast and T2-weighted (T2W) MRIs. Our dataset was split at the patient level, allocating 12.5% for testing and conducting seven-fold cross-validation on the remainder. Seven abdominal radiologists with experience in endometriosis MRI and complex surgical planning and one women's imaging fellow with specific training in endometriosis MRI reviewed a random selection of images and documented their endometriosis detection. RESULTS A total of 395 and 356 patients were included in the case and control groups, respectively. The final 3D-DenseNet-121 classifier model demonstrated robust performance. Our findings indicated that the most accurate predictions were obtained using T2W, T1W FS pre-, and post-contrast images. Using an ensemble technique on the test set resulted in an F1 score of 0.881, AUROC of 0.911, sensitivity of 0.976, and specificity of 0.720. Radiologists achieved 84.48% and 87.93% sensitivity without and with AI assistance, respectively, in detecting endometriosis. The agreement among radiologists in predicting labels for endometriosis was measured as a Fleiss' kappa of 0.5718 without AI assistance and 0.6839 with AI assistance. CONCLUSION This study introduced the first DL model to use multi-sequence MRI on a large cohort, showing results equivalent to human detection by trained readers in identifying endometriosis.
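Inter-reader agreement of the kind reported (Fleiss' kappa across the eight readers) can be computed with statsmodels; the small rating matrix below is purely illustrative:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per case, one column per reader; 0 = no endometriosis, 1 = endometriosis.
ratings = np.array([
    [1, 1, 0, 1, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
])

# aggregate_raters converts per-reader ratings to per-category counts per case.
counts, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(counts))
```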
5
Faghani S, Tiegs-Heiden CA, Moassefi M, Powell GM, Ringler MD, Erickson BJ, Rhodes NG. Expanded AI Learning: AI as a Tool for Human Learning. Acad Radiol 2025:S1076-6332(25)00284-3. [PMID: 40210520; DOI: 10.1016/j.acra.2025.03.040]
Abstract
RATIONALE AND OBJECTIVES To demonstrate that a deep learning (DL) model can be employed as a teaching tool to improve radiologists' ability to perform a subsequent imaging task without additional artificial intelligence (AI) assistance at the time of image interpretation. METHODS AND MATERIALS Three human readers were tasked with categorizing 50 frontal knee radiographs by male and female sex before and after reviewing data derived from our DL model. The model's high accuracy in performing this task was revealed to the human subjects, who were also supplied with the DL model's resultant occlusion interpretation maps ("heat maps") to study as a teaching tool before final testing. Two weeks later, the three human readers performed the same task with a new set of 50 radiographs. RESULTS The average accuracy of the three human readers was initially 0.59 (95% CI: 0.59-0.65), not statistically different from guessing given our sample skew. The DL model categorized sex with 0.96 accuracy. After studying the AI-derived "heat maps" and associated radiographs, the average accuracy of the human readers on the new set of radiographs, without the direct help of AI, increased to 0.80 (95% CI: 0.73-0.86), a significant improvement (p = 0.0270). CONCLUSION AI-derived data can be used as a teaching tool to improve radiologists' own ability to perform an imaging task. This is an idea that we have not before seen advanced in the radiology literature. SUMMARY STATEMENT AI can be used as a teaching tool to improve the intrinsic accuracy of radiologists, even without the concurrent use of AI.
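Occlusion interpretation maps of the kind the readers studied are typically produced by sliding a masking patch across the image and recording the drop in the model's predicted probability; a minimal PyTorch sketch, with the patch size and fill value as assumptions:

```python
import torch

@torch.no_grad()
def occlusion_map(model, image, target_class, patch=16, stride=8, fill=0.0):
    """image: (1, C, H, W) tensor; returns a map of probability drops."""
    model.eval()
    base = torch.softmax(model(image), dim=1)[0, target_class].item()
    _, _, h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    for i in range(rows):
        for j in range(cols):
            occluded = image.clone()
            y, x = i * stride, j * stride
            occluded[:, :, y:y + patch, x:x + patch] = fill
            prob = torch.softmax(model(occluded), dim=1)[0, target_class].item()
            heat[i, j] = base - prob  # large drop => region drives the prediction
    return heat
```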
Affiliation(s)
- Shahriar Faghani
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
- Mana Moassefi
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
- Garret M Powell
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
- Michael D Ringler
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
- Bradley J Erickson
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
- Nicholas G Rhodes
- Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
6
Rao A. A Radiologist's Perspective of Medical Annotations for AI Programs: The Entire Journey from Its Planning to Execution, Challenges Faced. Indian J Radiol Imaging 2025; 35:246-253. [PMID: 40297121; PMCID: PMC12034397; DOI: 10.1055/s-0044-1800860]
Abstract
Artificial intelligence (AI) in radiology and medical science is finding increasing applications, with annotation being an integral part of AI development. While annotation may be perceived as the passive work of labeling a given anatomy, the radiologist's role extends well beyond marking the required structures: it includes planning the anatomies and pathologies needed, the types of annotations to be done, the choice of annotation tool, the training of annotators, and the duration of the annotation effort. Close interaction with the technical team is a key factor in the success of the annotations. Quality checks of both internally and externally annotated data, building a team of good annotators, training them, and periodically reviewing data quality are likewise integral parts of the radiologist's work. Documentation related to the annotation work is another important area in which the clinician plays an integral role in complying with Food and Drug Administration requirements focused on clinically explainable and validated AI algorithms. Thus, the clinician becomes integral to the ideation, design, implementation, and quality control of annotations. This article summarizes the experience gained in planning and executing multiple annotation projects involving various imaging modalities and pathologies.
Affiliation(s)
- Anuradha Rao
- Department of Clinical Science, Philips Innovation Campus, Bangalore, Karnataka, India
7
Savage CH, Chaudhari G, Smith AD, Sohn JH. RadSearch, a Semantic Search Model for Accurate Radiology Report Retrieval with Large Language Model Integration. Radiology 2025; 315:e240686. [PMID: 40232140; DOI: 10.1148/radiol.240686]
Abstract
Background Current radiology report search tools are limited to keyword searches, which lack semantic understanding of underlying clinical conditions and are prone to false positives. Semantic search models address this issue, but their development requires scalable methods for generating radiology-specific training data. Purpose To develop a scalable method for training semantic search models for radiology reports and to evaluate a model, RadSearch, trained using this method. Materials and Methods In this retrospective study, a scalable method for generating training examples for semantic search was applied to CT and MRI reports generated between December 2021 and January 2022, and was used to train the model RadSearch. RadSearch performance was evaluated using four internal test sets (including one subset) and one external test set from another large tertiary medical center, including chest, abdomen, and head CT reports generated between December 2015 and June 2023. Performance was evaluated for findings-to-impression matching, retrieving reports with the same examination type, retrieving reports relevant to free-text queries, and improving the ability of a large language model (LLM) (Llama 3.1 8B Instruct) to provide accurate diagnoses from report finding descriptions. RadSearch performance was compared with that of other embedding models specialized for symmetric (All MPNet Base) and asymmetric (MS MARCO DistilBERT Base) semantic search and a state-of-the-art semantic search model (GTE-large). A reference set of 100 diagnoses with common radiologic descriptions was used for the LLM evaluation. Findings-to-impression matching and free-text query accuracy P values were calculated using χ2 and McNemar tests. Results The training set included 16 690 reports; the internal test sets included 13 598, 6178, and 9954 reports; and the external test set included 13 958 reports. For simulated free-text clinical queries, RadSearch successfully retrieved reports containing the specified findings for 83.0% (498 of 600) of reports and matching location for 89.8% (521 of 580) of reports, outperforming GTE-large, with performance at 65.7% (394 of 600; P < .001) and 58.8% (341 of 580; P < .001), respectively. For 100 report finding descriptions, the baseline accuracy of Llama 3.1 8B Instruct in providing the correct diagnosis without any embedding model search assistance was 30% (30 of 100), improving to 61% (61 of 100) with RadSearch integration (P < .001), which outperformed GTE-large integration (47% [47 of 100]; P = .03). Conclusion A semantic search model trained with scalable methods achieved state-of-the-art performance in retrieving reports with relevant findings and improved LLM diagnostic accuracy. © RSNA, 2025 Supplemental material is available for this article. See also the editorial by Yasaka and Abe in this issue.
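The asymmetric retrieval RadSearch performs (embedding reports and free-text queries, then ranking by similarity) can be sketched with the sentence-transformers library; the embedding model below is a generic stand-in, since RadSearch itself is not reproduced here:

```python
from sentence_transformers import SentenceTransformer, util

# Generic embedding model as a stand-in for a radiology-specific one.
model = SentenceTransformer("all-mpnet-base-v2")

reports = [
    "Impression: Right apical pneumothorax, unchanged from prior.",
    "Impression: No acute cardiopulmonary abnormality.",
    "Impression: Displaced fracture of the left seventh rib.",
]
corpus_emb = model.encode(reports, convert_to_tensor=True, normalize_embeddings=True)

query = "pneumothorax on the right"
query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

# Rank reports by cosine similarity to the query embedding.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), reports[hit["corpus_id"]])
```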
Affiliation(s)
- Cody H Savage
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Md
- Department of Radiology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, Ala
- Gunvant Chaudhari
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, 505 Parnassus Ave, San Francisco, CA 94143
- Andrew D Smith
- Department of Radiology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, Ala
- Department of Diagnostic Imaging, St Jude Children's Research Hospital, Memphis, Tenn
- Jae Ho Sohn
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, 505 Parnassus Ave, San Francisco, CA 94143
8
Obeagu EI, Ezeanya CU, Ogenyi FC, Ifu DD. Big data analytics and machine learning in hematology: Transformative insights, applications and challenges. Medicine (Baltimore) 2025; 104:e41766. [PMID: 40068020; PMCID: PMC11902945; DOI: 10.1097/md.0000000000041766]
Abstract
The integration of big data analytics and machine learning (ML) into hematology has ushered in a new era of precision medicine, offering transformative insights into disease management. By leveraging vast and diverse datasets, including genomic profiles, clinical laboratory results, and imaging data, these technologies enhance diagnostic accuracy, enable robust prognostic modeling, and support personalized therapeutic interventions. Advanced ML algorithms, such as neural networks and ensemble learning, facilitate the discovery of novel biomarkers and refine risk stratification for hematological disorders, including leukemias, lymphomas, and coagulopathies. Despite these advancements, significant challenges persist, particularly in the realms of data integration, algorithm validation, and ethical concerns. The heterogeneity of hematological datasets and the lack of standardized frameworks complicate their application, while the "black-box" nature of ML models raises issues of reliability and clinical trust. Moreover, safeguarding patient privacy in an era of data-driven medicine remains paramount, necessitating the development of secure and ethical analytical practices. Addressing these challenges is critical to ensuring equitable and effective implementation of these technologies. Collaborative efforts between hematologists, data scientists, and bioinformaticians are pivotal in translating these innovations into real-world clinical practice. Emphasis on developing explainable artificial intelligence models, integrating real-time analytics, and adopting federated learning approaches will further enhance the utility and adoption of these technologies. As big data analytics and ML continue to evolve, their potential to revolutionize hematology and improve patient outcomes remains immense.
Affiliation(s)
- Fabian Chukwudi Ogenyi
- Department of Electrical, Telecommunication and Computer Engineering, Kampala International University, Kampala, Uganda
- Deborah Domini Ifu
- Department of Biomedical and Laboratory Science, Africa University, Mutare, Zimbabwe
9
Koçak B, Ponsiglione A, Stanzione A, Bluethgen C, Santinha J, Ugga L, Huisman M, Klontzas ME, Cannella R, Cuocolo R. Bias in artificial intelligence for medical imaging: fundamentals, detection, avoidance, mitigation, challenges, ethics, and prospects. Diagn Interv Radiol 2025; 31:75-88. [PMID: 38953330; PMCID: PMC11880872; DOI: 10.4274/dir.2024.242854]
Abstract
Although artificial intelligence (AI) methods hold promise for medical imaging-based prediction tasks, their integration into medical practice may present a double-edged sword due to bias (i.e., systematic error). AI algorithms have the potential to mitigate cognitive biases in human interpretation, but extensive research has highlighted the tendency of AI systems to internalize biases within their models. Whether intended or not, this may ultimately lead to adverse consequences in the clinical setting, potentially compromising patient outcomes. This concern is particularly important in medical imaging, where AI has been embraced more rapidly and widely than in any other medical field. A comprehensive understanding of bias at each stage of the AI pipeline is therefore essential for developing AI solutions that are not only less biased but also widely applicable. This international collaborative review effort aims to increase awareness within the medical imaging community of the importance of proactively identifying and addressing AI bias so that its negative consequences can be prevented before they are realized. The authors begin with the fundamentals of bias, explaining its different definitions and delineating its potential sources. Strategies for detecting and identifying bias are then outlined, followed by a review of techniques for its avoidance and mitigation. Finally, ethical dimensions, challenges encountered, and prospects are discussed.
Affiliation(s)
- Burak Koçak
- University of Health Sciences Başakşehir Çam and Sakura City Hospital, Clinic of Radiology, İstanbul, Türkiye
- Andrea Ponsiglione
- University of Naples Federico II, Department of Advanced Biomedical Sciences, Naples, Italy
- Arnaldo Stanzione
- University of Naples Federico II, Department of Advanced Biomedical Sciences, Naples, Italy
- Christian Bluethgen
- University of Zurich, University Hospital Zurich, Diagnostic and Interventional Radiology, Zurich, Switzerland
- João Santinha
- Digital Surgery LAB, Champalimaud Research, Champalimaud Foundation; Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Lorenzo Ugga
- University of Naples Federico II, Department of Advanced Biomedical Sciences, Naples, Italy
- Merel Huisman
- Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen, Netherlands
- Michail E. Klontzas
- University of Crete School of Medicine, Department of Radiology; University Hospital of Heraklion, Department of Medical Imaging, Crete, Greece; Karolinska Institute, Department of Clinical Science, Intervention and Technology (CLINTEC), Division of Radiology, Solna, Sweden
- Roberto Cannella
- University of Palermo, Department of Biomedicine, Neuroscience and Advanced Diagnostics, Section of Radiology, Palermo, Italy
- Renato Cuocolo
- University of Salerno, Department of Medicine, Surgery and Dentistry, Baronissi, Italy
10
Armato SG, Drukker K, Hadjiiski L, Wu CC, Kalpathy-Cramer J, Shih G, Giger ML, Baughan N, Bearce B, Flanders AE, Ball RL, Myers KJ, Whitney HM; MIDRC Grand Challenge Working Group. MIDRC mRALE Mastermind Grand Challenge: AI to predict COVID severity on chest radiographs. J Med Imaging (Bellingham) 2025; 12:024505. [PMID: 40276098; PMCID: PMC12014941; DOI: 10.1117/1.jmi.12.2.024505]
Abstract
Purpose The Medical Imaging and Data Resource Center (MIDRC) mRALE Mastermind Grand Challenge fostered the development of artificial intelligence (AI) techniques for the automated assignment of mRALE (modified radiographic assessment of lung edema) scores to portable chest radiographs from patients known to have COVID-19. Approach The challenge utilized 2079 training cases obtained from the publicly available MIDRC data commons, with validation and test cases sampled from not-yet-public MIDRC cases that were inaccessible to challenge participants. The reference standard mRALE scores for the challenge cases were established by a pool of 22 radiologist annotators. Using the MedICI challenge platform, participants submitted their trained algorithms encapsulated in Docker containers. Algorithms were evaluated by the challenge organizers on 814 test cases through two performance assessment metrics: quadratic-weighted kappa and prediction probability concordance. Results Nine AI algorithms were submitted to the challenge for assessment against the test set cases. The algorithm that demonstrated the highest agreement with the reference standard had a quadratic-weighted kappa of 0.885 and a prediction probability concordance of 0.875. Substantial variability in mRALE scores assigned by the annotators and output by the AI algorithms was observed. Conclusions The MIDRC mRALE Mastermind Grand Challenge revealed the potential of AI to assess COVID-19 severity from portable CXRs, demonstrating promising performance against the reference standard. The observed variability in mRALE scores highlights the challenges in standardizing severity assessment. These findings contribute to ongoing efforts to develop AI technologies for potential use in clinical practice and offer insights for the enhancement of COVID-19 severity assessment.
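The quadratic-weighted kappa used to score agreement with the ordinal mRALE reference standard is available directly in scikit-learn; the score vectors below are illustrative:

```python
from sklearn.metrics import cohen_kappa_score

# Reference-standard and algorithm-assigned mRALE scores (illustrative values).
reference = [0, 6, 12, 24, 3, 9, 15]
predicted = [0, 8, 12, 20, 3, 6, 18]

# Quadratic weighting penalizes large ordinal disagreements more heavily.
qwk = cohen_kappa_score(reference, predicted, weights="quadratic")
print(f"quadratic-weighted kappa = {qwk:.3f}")
```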
Affiliation(s)
- Samuel G. Armato
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Karen Drukker
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Lubomir Hadjiiski
- University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States
- Carol C. Wu
- University of Texas MD Anderson Cancer Center, Department of Thoracic Imaging, Houston, Texas, United States
- Jayashree Kalpathy-Cramer
- University of Colorado Anschutz Medical Campus, Department of Ophthalmology, Aurora, Colorado, United States
- George Shih
- Weill Cornell Medicine, Department of Radiology, New York, New York, United States
- Maryellen L. Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Natalie Baughan
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Benjamin Bearce
- University of Colorado Anschutz Medical Campus, Department of Ophthalmology, Aurora, Colorado, United States
- Adam E. Flanders
- Thomas Jefferson University, Department of Radiology, Philadelphia, Pennsylvania, United States
- Robyn L. Ball
- The Jackson Laboratory, Bar Harbor, Maine, United States
- Heather M. Whitney
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- the MIDRC Grand Challenge Working Group
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States
- University of Texas MD Anderson Cancer Center, Department of Thoracic Imaging, Houston, Texas, United States
- University of Colorado Anschutz Medical Campus, Department of Ophthalmology, Aurora, Colorado, United States
- Weill Cornell Medicine, Department of Radiology, New York, New York, United States
- Thomas Jefferson University, Department of Radiology, Philadelphia, Pennsylvania, United States
- The Jackson Laboratory, Bar Harbor, Maine, United States
- Puente Solutions, Phoenix, Arizona, United States
11
Gamble C, Faghani S, Erickson BJ. Applying Conformal Prediction to a Deep Learning Model for Intracranial Hemorrhage Detection to Improve Trustworthiness. Radiol Artif Intell 2025; 7:e240032. [PMID: 39601654; DOI: 10.1148/ryai.240032]
Abstract
Purpose To apply conformal prediction to a deep learning (DL) model for intracranial hemorrhage (ICH) detection and evaluate model performance in detection as well as model accuracy in identifying challenging cases. Materials and Methods This was a retrospective (November-December 2017) study of 491 noncontrast head CT volumes from the CQ500 dataset, in which three senior radiologists annotated sections containing ICH. The dataset was split into definite and challenging (uncertain) subsets, in which challenging images were defined as those in which there was disagreement among readers. A DL model was trained on patients from the definite data (training dataset) to perform ICH localization and classification into five classes. To develop an uncertainty-aware DL model, 1546 sections of the definite data (calibration dataset) were used for Mondrian conformal prediction (MCP). The uncertainty-aware DL model was tested on 8401 definite and challenging sections to assess its ability to identify challenging sections. The difference in predictive performance (P value) and ability to identify challenging sections (accuracy) were reported. Results The study included 146 patients (mean age, 45.7 years ± 9.9 [SD]; 76 [52.1%] men, 70 [47.9%] women). After the MCP procedure, the model achieved an F1 score of 0.919 for localization and classification. Additionally, it correctly identified patients with challenging cases with 95.3% (143 of 150) accuracy. It did not incorrectly label any definite sections as challenging. Conclusion The uncertainty-aware MCP-augmented DL model achieved high performance in ICH detection and high accuracy in identifying challenging sections, suggesting its usefulness in automated ICH detection and potential to increase trustworthiness of DL models in radiology. Keywords: CT, Head and Neck, Brain, Brain Stem, Hemorrhage, Feature Detection, Diagnosis, Supervised Learning Supplemental material is available for this article. © RSNA, 2025 See also commentary by Ngum and Filippi in this issue.
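Mondrian (class-conditional) conformal prediction of the kind applied here calibrates a separate nonconformity threshold for each class from a held-out calibration set; a minimal sketch under standard split-conformal assumptions, with the "challenging" flag as one illustrative use of the resulting prediction sets:

```python
import numpy as np

def mondrian_thresholds(cal_probs, cal_labels, alpha=0.05):
    """Per-class nonconformity thresholds from a calibration set.

    cal_probs: (n, k) softmax outputs; cal_labels: (n,) true classes.
    Nonconformity score = 1 - probability assigned to the true class.
    Assumes every class is represented in the calibration data.
    """
    k = cal_probs.shape[1]
    thresholds = np.zeros(k)
    for c in range(k):
        scores = 1.0 - cal_probs[cal_labels == c, c]
        n = len(scores)
        q = np.ceil((n + 1) * (1 - alpha)) / n  # finite-sample correction
        thresholds[c] = np.quantile(scores, min(q, 1.0))
    return thresholds

def prediction_set(probs, thresholds):
    """Classes whose nonconformity falls below their class threshold.
    Empty or multi-class sets can flag a case as 'challenging'."""
    return np.where(1.0 - probs <= thresholds)[0]
```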
Affiliation(s)
- Cooper Gamble
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
- Shahriar Faghani
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
- Bradley J Erickson
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
12
Alsharqi M, Edelman ER. Artificial Intelligence in Cardiovascular Imaging and Interventional Cardiology: Emerging Trends and Clinical Implications. J Soc Cardiovasc Angiogr Interv 2025; 4:102558. [PMID: 40230671; PMCID: PMC11993891; DOI: 10.1016/j.jscai.2024.102558]
Abstract
Artificial intelligence (AI) has revolutionized the field of cardiovascular imaging, serving as a unifying force that brings multiple modalities together under a single platform. The utility of noninvasive imaging ranges from diagnostic assessment and guiding interventions to prognostic stratification. Multimodality imaging has demonstrated important potential, particularly in patients with heterogeneous diseases, such as heart failure and atrial fibrillation. Facilitating complex interventional procedures requires accurate image acquisition and interpretation along with precise decision-making. Because interventional cardiology procedures benefit from several different imaging modalities, they are an ideal target for the development of AI-assisted decision-making tools to improve workflow in the catheterization laboratory and personalize the need for transcatheter interventions. This review explores the advancements of AI in noninvasive cardiovascular imaging and interventional cardiology, addressing the clinical use and challenges of current imaging modalities, emerging trends, and promising applications, as well as considerations for the safe implementation of AI tools in clinical practice. Current practice has moved well beyond the question of whether we should or should not use AI in clinical health care settings. AI, in all its forms, has become deeply embedded in clinical workflows, particularly in cardiovascular imaging and interventional cardiology. In the future, it can not only add precision and quantification but also serve as a means of fusing and linking multiple modalities together. Only by understanding how AI techniques work can the field be harnessed for the greater good while avoiding uninformed bias and misleading diagnoses.
Affiliation(s)
- Maryam Alsharqi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Elazer R. Edelman
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Cardiovascular Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
13
Pereira FV, Ferreira D, Garmes H, Zantut-Wittmann DE, Rogério F, Fabbro MD, Formentin C, Forster CHQ, Reis F. Machine Learning Prediction of Pituitary Macroadenoma Consistency: Utilizing Demographic Data and Brain MRI Parameters. J Imaging Inform Med 2025. [PMID: 39920537; DOI: 10.1007/s10278-025-01417-6]
Abstract
Consistency of pituitary macroadenomas is a key determinant of surgical outcomes, with non-soft consistency linked to more complications and incomplete resections. This study aimed to develop a machine learning model to predict the consistency of pituitary macroadenomas to improve surgical planning and outcomes. A retrospective study of patients with pituitary macroadenomas was conducted. Data included brain magnetic resonance imaging findings (diameter and apparent diffusion coefficient), patient demographics (age and sex), and tumor consistency. Seventy patients were evaluated, 59 with soft and 11 with non-soft consistency. The support vector machine (SVM) was the best model, with a ROC AUC of 83.3% [95% CI 65.8, 97.6], an average-precision AUC of 69.8% [95% CI 41.3, 91.1], sensitivity of 73.1% [95% CI 44.4, 100], specificity of 89.8% [95% CI 82, 96.7], an F1 score of 0.63 [95% CI 0.36, 0.83], and a Matthews correlation coefficient of 0.57 [95% CI 0.29, 0.79]. These findings indicate a significant improvement over random classification, as confirmed by a permutation test (p < 0.05). Additionally, the model had a 67.4% probability of outperforming the second-best model in cross-validation, as determined through Bayesian analysis, and demonstrated statistical significance (p < 0.05) compared with non-ensemble models. Using explainability heuristics, both 2D and 3D probability maps highlighted areas with a higher probability of non-soft consistency. The attributes most influential in correct classification by our best model were male sex and age ≤ 42.25 years. Despite some limitations, the SVM model showed promise in predicting tumor consistency, which could aid surgical planning. To address concerns about generalizability, we have created an open-access repository to promote future external validation studies and collaboration with other research centers, with the goal of enhancing model prediction through transfer learning.
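The permutation test used to confirm that the classifier beats chance is available directly in scikit-learn; a self-contained sketch on synthetic data with a class imbalance similar to the study's (59 soft vs 11 non-soft):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for age, sex, diameter, and ADC features.
X, y = make_classification(n_samples=70, n_features=4, n_informative=3,
                           n_redundant=0, weights=[0.84], random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Refits the model on label-shuffled data to estimate the chance distribution.
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, cv=cv, scoring="roc_auc", n_permutations=1000, random_state=0
)
print(f"ROC AUC = {score:.3f}, permutation p = {p_value:.4f}")
```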
Affiliation(s)
- Fernanda Veloso Pereira
- Department of Radiology, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil
- Davi Ferreira
- CEDS, Computer Science Department, Aeronautics Institute of Technology (ITA), São José dos Campos, São Paulo, Brazil
- Heraldo Garmes
- Division of Endocrinology, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil
- Fabio Rogério
- Department of Pathology, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil
- Mateus Dal Fabbro
- Department of Neurology, Neurosurgery Course, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil
- Cleiton Formentin
- Department of Neurology, Neurosurgery Course, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil
- Fabiano Reis
- Department of Radiology and Oncology, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil
14
Lekadir K, Frangi AF, Porras AR, Glocker B, Cintas C, Langlotz CP, Weicken E, Asselbergs FW, Prior F, Collins GS, Kaissis G, Tsakou G, Buvat I, Kalpathy-Cramer J, Mongan J, Schnabel JA, Kushibar K, Riklund K, Marias K, Amugongo LM, Fromont LA, Maier-Hein L, Cerdá-Alberich L, Martí-Bonmatí L, Cardoso MJ, Bobowicz M, Shabani M, Tsiknakis M, Zuluaga MA, Fritzsche MC, Camacho M, Linguraru MG, Wenzel M, De Bruijne M, Tolsgaard MG, Goisauf M, Cano Abadía M, Papanikolaou N, Lazrak N, Pujol O, Osuala R, Napel S, Colantonio S, Joshi S, Klein S, Aussó S, Rogers WA, Salahuddin Z, Starmans MPA. FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ 2025; 388:e081554. [PMID: 39909534; PMCID: PMC11795397; DOI: 10.1136/bmj-2024-081554]
Affiliation(s)
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Alejandro F Frangi
- Center for Computational Imaging & Simulation Technologies in Biomedicine, Schools of Computing and Medicine, University of Leeds, Leeds, UK
- Medical Imaging Research Centre (MIRC), Cardiovascular Science and Electronic Engineering Departments, KU Leuven, Leuven, Belgium
- Antonio R Porras
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Ben Glocker
- Department of Computing, Imperial College London, London, UK
- Curtis P Langlotz
- Departments of Radiology, Medicine, and Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA
- Eva Weicken
- Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Folkert W Asselbergs
- Amsterdam University Medical Centers, Department of Cardiology, University of Amsterdam, Amsterdam, Netherlands
- Health Data Research UK and Institute of Health Informatics, University College London, London, UK
- Fred Prior
- Department of Biomedical Informatics, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Gary S Collins
- Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- Georgios Kaissis
- Institute for AI and Informatics in Medicine, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Gianna Tsakou
- Gruppo Maggioli, Research and Development Lab, Athens, Greece
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Julia A Schnabel
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Center Munich, Munich, Germany
- Kaisar Kushibar
- Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Katrine Riklund
- Department of Radiation Sciences, Diagnostic Radiology, Umeå University, Umeå, Sweden
- Kostas Marias
- Foundation for Research and Technology-Hellas (FORTH), Crete, Greece
- Lameck M Amugongo
- Department of Software Engineering, Namibia University of Science & Technology, Windhoek, Namibia
- Lauren A Fromont
- Centre for Genomic Regulation, Barcelona Institute of Science and Technology, Barcelona, Spain
- Lena Maier-Hein
- Division of Intelligent Medical Systems, German Cancer Research Centre, Heidelberg, Germany
- Luis Martí-Bonmatí
- Medical Imaging Department, Hospital Universitario y Politécnico La Fe, Valencia, Spain
- M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Maciej Bobowicz
- 2nd Division of Radiology, Medical University of Gdansk, Gdansk, Poland
- Mahsa Shabani
- Faculty of Law and Criminology, Ghent University, Ghent, Belgium
- Manolis Tsiknakis
- Foundation for Research and Technology-Hellas (FORTH), Crete, Greece
- Marina Camacho
- Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington DC, USA
- Markus Wenzel
- Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Marleen De Bruijne
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Centre, Rotterdam, Netherlands
- Martin G Tolsgaard
- Copenhagen Academy for Medical Education and Simulation Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Noussair Lazrak
- Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Oriol Pujol
- Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Richard Osuala
- Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Sandy Napel
- Integrative Biomedical Imaging Informatics at Stanford (IBIIS), Department of Radiology, Stanford University, Stanford, CA, USA
- Sara Colantonio
- Institute of Information Science and Technologies of the National Research Council of Italy, Pisa, Italy
- Smriti Joshi
- Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Stefan Klein
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Centre, Rotterdam, Netherlands
- Susanna Aussó
- Artificial Intelligence in Healthcare Program, TIC Salut Social Foundation, Barcelona, Spain
- Wendy A Rogers
- Department of Philosophy, and School of Medicine, Macquarie University, Sydney, Australia
- Zohaib Salahuddin
- The D-lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Martijn P A Starmans
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Centre, Rotterdam, Netherlands
15
Faghani S, Moassefi M, Yadav U, Buadi FK, Kumar SK, Erickson BJ, Gonsalves WI, Baffour FI. Whole-body low-dose computed tomography in patients with newly diagnosed multiple myeloma predicts cytogenetic risk: a deep learning radiogenomics study. Skeletal Radiol 2025; 54:267-273. [PMID: 38937291; PMCID: PMC11652250; DOI: 10.1007/s00256-024-04733-0]
Abstract
OBJECTIVE To develop a whole-body low-dose CT (WBLDCT) deep learning model and determine its accuracy in predicting the presence of cytogenetic abnormalities in multiple myeloma (MM). MATERIALS AND METHODS WBLDCTs of MM patients performed within a year of diagnosis were included. Cytogenetic assessments of clonal plasma cells via fluorescent in situ hybridization (FISH) were used to risk-stratify patients as high-risk (HR) or standard-risk (SR). Presence of any of del(17p), t(14;16), t(4;14), and t(14;20) on FISH was defined as HR. The dataset was evenly divided into five groups (folds) at the individual patient level for model training. Mean and standard deviation (SD) of the area under the receiver operating characteristic curve (AUROC) across the folds were recorded. RESULTS One hundred fifty-one patients with MM were included in the study. The model performed best for t(4;14), with a mean (SD) AUROC of 0.874 (0.073); the lowest AUROC was observed for trisomies, at 0.717 (0.058). Two- and 5-year survival rates for HR cytogenetics were 87% and 71%, respectively, compared with 91% and 79% for SR cytogenetics; when patients were instead stratified by the WBLDCT deep learning model's predictions, the corresponding rates were 87% and 71% for HR versus 92% and 81% for SR. CONCLUSION A deep learning model trained on WBLDCT scans predicted the presence of cytogenetic abnormalities used for risk stratification in MM. Assessment of the model's performance revealed good to excellent classification of the various cytogenetic abnormalities.
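The reported mean (SD) AUROC across five patient-level folds can be computed with a sketch like the following; the model_factory callable and feature arrays are placeholders for the study's imaging pipeline:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def cross_validated_auroc(model_factory, X, y, n_folds=5, seed=0):
    """Mean and SD of AUROC across folds; each row of X is one patient,
    so the split is at the individual patient level as in the study."""
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    aurocs = []
    for train_idx, test_idx in cv.split(X, y):
        model = model_factory()  # fresh model per fold
        model.fit(X[train_idx], y[train_idx])
        scores = model.predict_proba(X[test_idx])[:, 1]
        aurocs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aurocs)), float(np.std(aurocs))
```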
Affiliation(s)
- Shahriar Faghani
- Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Mana Moassefi
- Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Udit Yadav
- Division of Hematology, Mayo Clinic, 13400 E. Shea Blvd, Scottsdale, AZ, 85259, USA
- Francis K Buadi
- Division of Hematology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Shaji K Kumar
- Division of Hematology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Bradley J Erickson
- Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Wilson I Gonsalves
- Division of Hematology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Francis I Baffour
- Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
16
Mercadal-Orfila G, Serrano López de las Hazas J, Riera-Jaume M, Herrera-Perez S. Developing a Prototype Machine Learning Model to Predict Quality of Life Measures in People Living With HIV. Integr Pharm Res Pract 2025; 14:1-16. [PMID: 39872224; PMCID: PMC11766232; DOI: 10.2147/iprp.s492422]
Abstract
Background In the realm of Evidence-Based Medicine, introduced by Gordon Guyatt in the early 1990s, the integration of machine learning technologies marks a significant advancement toward more objective, evidence-driven healthcare. Evidence-Based Medicine principles focus on using the best available scientific evidence for clinical decision-making, enhancing healthcare quality and consistency by integrating this evidence with clinician expertise and patient values. Patient-Reported Outcome Measures (PROMs) and Patient-Reported Experience Measures (PREMs) have become essential in evaluating the broader impacts of treatments, especially for chronic conditions like HIV, reflecting patient health and well-being comprehensively. Purpose The study aims to leverage machine learning (ML) technologies to predict health outcomes from PROMs/PREMs data, focusing on people living with HIV. Patients and Methods Our research utilizes an ML random forest regression model to analyze PROMs/PREMs data collected from over 1200 people living with HIV through the NAVETA telemedicine system. Results The findings demonstrate the potential of ML algorithms to provide precise and consistent predictions of health outcomes, indicating high reliability and effectiveness in clinical settings. Notably, our ALGOPROMIA ML model achieved the highest predictive accuracy for questionnaires such as MOS30 VIH (Adj. R² = 0.984), ESTAR (Adj. R² = 0.963), and BERGER (Adj. R² = 0.936). Moderate performance was observed for the P3CEQ (Adj. R² = 0.753) and TSQM (Adj. R² = 0.698), reflecting variability in model accuracy across instruments. Additionally, the model demonstrated strong reliability in maintaining standardized prediction errors below 0.2 for most instruments, with probabilities of achieving this threshold of 96.43% for WHOQoL HIV Bref and 88.44% for ESTAR, while lower probabilities were observed for TSQM (44%) and WRFQ (51%). Conclusion The results from our machine learning algorithms are promising for predicting PROMs and PREMs in AIDS settings. This work highlights how integrating ML technologies can enhance clinical pharmaceutical decision-making and support personalized treatment strategies within a multidisciplinary integration framework. Furthermore, leveraging platforms like NAVETA for deploying these models presents a scalable approach to implementation, fostering patient-centered, value-based care.
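The reported adjusted R² for a random forest regression can be reproduced with scikit-learn; the synthetic features below are stand-ins for the study's PROM/PREM predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def adjusted_r2(r2, n_samples, n_features):
    # Adjusted R^2 penalizes plain R^2 for the number of predictors used.
    return 1 - (1 - r2) * (n_samples - 1) / (n_samples - n_features - 1)

# Synthetic stand-in data; y mimics a questionnaire total score.
rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 20))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print("Adj. R^2:", round(adjusted_r2(r2, len(y_te), X.shape[1]), 3))
```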
Affiliation(s)
- Gabriel Mercadal-Orfila
- Pharmacy Department, Hospital Mateu Orfila, Maón, Spain
- Department of Biochemistry and Molecular Biology, Universitat de Les Illes Balears (UIB), Palma de Mallorca, Spain
- Melchor Riera-Jaume
- Unidad de Enfermedades Infecciosas, Servicio de Medicina Interna, Hospital Universitario Son Espases, Palma de Mallorca, Spain
- Salvador Herrera-Perez
- Facultad de Ciencias de la Salud, Universidad Internacional de Valencia, Valencia, Spain
17
Dibaji M, Ospel J, Souza R, Bento M. Sex differences in brain MRI using deep learning toward fairer healthcare outcomes. Front Comput Neurosci 2024; 18:1452457. [PMID: 39606583; PMCID: PMC11598355; DOI: 10.3389/fncom.2024.1452457]
Abstract
This study leverages deep learning to analyze sex differences in brain MRI data, aiming to further advance fairness in medical imaging. We employed 3D T1-weighted Magnetic Resonance images from four diverse datasets: Calgary-Campinas-359, OASIS-3, Alzheimer's Disease Neuroimaging Initiative, and Cambridge Center for Aging and Neuroscience, ensuring a balanced representation of sexes and a broad demographic scope. Our methodology focused on minimal preprocessing to preserve the integrity of brain structures, utilizing a Convolutional Neural Network model for sex classification. The model achieved an accuracy of 87% on the test set without employing total intracranial volume (TIV) adjustment techniques. We observed that while the model exhibited biases at extreme brain sizes, it performed with less bias when the TIV distributions overlapped more. Saliency maps were used to identify brain regions significant in sex differentiation, revealing that certain supratentorial and infratentorial regions were important for predictions. Furthermore, our interdisciplinary team, comprising machine learning specialists and a radiologist, ensured diverse perspectives in validating the results. The detailed investigation of sex differences in brain MRI in this study, highlighted by the sex differences map, offers valuable insights into sex-specific aspects of medical imaging and could aid in developing sex-based bias mitigation strategies, contributing to the future development of fair AI algorithms. Awareness of the brain's differences between sexes enables more equitable AI predictions, promoting fairness in healthcare outcomes. Our code and saliency maps are available at https://github.com/mahsadibaji/sex-differences-brain-dl.
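Saliency maps of the kind used here to localize sex-differentiating regions are commonly computed from input gradients; the study's own code is linked above, so the PyTorch sketch below is only a generic illustration:

```python
import torch

def saliency_map(model, image, target_class):
    """Gradient-based saliency: |d(score)/d(input)|, reduced over channels.
    image: (1, C, H, W) or (1, C, D, H, W) tensor for 2D or 3D inputs."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()  # gradients of the class score w.r.t. input voxels
    return image.grad.detach().abs().amax(dim=1).squeeze(0)
```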
Affiliation(s)
- Mahsa Dibaji
- Department of Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Johanna Ospel
- Department of Radiology, University of Calgary, Cumming School of Medicine, Calgary, AB, Canada
- Roberto Souza
- Department of Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Mariana Bento
- Department of Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Department of Biomedical Engineering, University of Calgary, Calgary, AB, Canada
18
Singh Y, Patel H, Vera-Garcia DV, Hathaway QA, Sarkar D, Quaia E. Beyond the hype: Navigating bias in AI-driven cancer detection. Oncotarget 2024; 15:764-766. [PMID: 39513852 PMCID: PMC11546210 DOI: 10.18632/oncotarget.28665] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2024] [Indexed: 11/16/2024] Open
Affiliation(s)
- Yashbir Singh
- Correspondence to: Yashbir Singh, Mayo Clinic, Rochester, MN 55905, USA
19
Khosravi B, Rouzrokh P, Erickson BJ, Garner HW, Wenger DE, Taunton MJ, Wyles CC. Analyzing Racial Differences in Imaging Joint Replacement Registries Using Generative Artificial Intelligence: Advancing Orthopaedic Data Equity. Arthroplast Today 2024; 29:101503. [PMID: 39376670 PMCID: PMC11456877 DOI: 10.1016/j.artd.2024.101503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/12/2024] [Revised: 06/17/2024] [Accepted: 08/09/2024] [Indexed: 10/09/2024] Open
Abstract
Background Discrepancies in medical data sets can perpetuate bias, especially when training deep learning models, potentially leading to biased outcomes in clinical applications. Understanding these biases is crucial for the development of equitable healthcare technologies. This study employs generative deep learning technology to explore and understand radiographic differences based on race among patients undergoing total hip arthroplasty. Methods Utilizing a large institutional registry, we retrospectively analyzed pelvic radiographs from total hip arthroplasty patients, characterized by demographics and image features. Denoising diffusion probabilistic models generated radiographs conditioned on demographic and imaging characteristics. Fréchet Inception Distance assessed the generated image quality, showing the diversity and realism of the generated images. Sixty transition videos were generated, showing White pelvises transforming into their closest African American counterparts and vice versa while controlling for patients' sex, age, and body mass index. Two expert surgeons and two radiologists carefully studied these videos to understand the systematic differences present in the 2 races' radiographs. Results Our data set included 480,407 pelvic radiographs, with a predominance of White patients over African Americans. The generative denoising diffusion probabilistic model created high-quality images and reached a Fréchet Inception Distance of 6.8. Experts identified 6 characteristics differentiating races, including interacetabular distance, osteoarthritis degree, obturator foramina shape, femoral neck-shaft angle, pelvic ring shape, and femoral cortical thickness. Conclusions This study demonstrates the potential of generative models for understanding disparities in medical imaging data sets. By visualizing race-based differences, this method aids in identifying bias in downstream tasks, fostering the development of fairer healthcare practices.
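The Fréchet Inception Distance (FID) cited above compares the feature statistics of real and generated images. A generic sketch of the computation follows, assuming feature embeddings have already been extracted (e.g., from an Inception network); it is not the study's evaluation code.

```python
# FID between two sets of feature embeddings:
# ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrtm(S1 @ S2))
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2).real   # matrix square root of the product
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(0)
print(fid(rng.normal(size=(256, 64)), rng.normal(size=(256, 64))))
```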
Affiliation(s)
- Bardia Khosravi
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Pouria Rouzrokh
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Cody C. Wyles
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Department of Clinical Anatomy, Mayo Clinic, Rochester, MN, USA
20
Singh Y, Faghani S, Eaton JE, Venkatesh SK, Erickson BJ. Deep Learning-Based Prediction of Hepatic Decompensation in Patients With Primary Sclerosing Cholangitis With Computed Tomography. MAYO CLINIC PROCEEDINGS. DIGITAL HEALTH 2024; 2:470-476. [PMID: 40206109 PMCID: PMC11975999 DOI: 10.1016/j.mcpdig.2024.07.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 04/11/2025]
Abstract
Objective To investigate a deep learning model for predicting hepatic decompensation using computed tomography (CT) imaging in patients with primary sclerosing cholangitis (PSC). Patients and Methods This retrospective cohort study involved 277 adult patients with large-duct PSC who underwent an abdominal CT scan. The portal venous phase CT images were used as input to a 3D-DenseNet121 model, which was trained using 5-fold cross-validation to classify hepatic decompensation. To further investigate the role of each anatomic region in the model's decision-making process, we trained the model on different sections of the 3-dimensional CT images. This included training on the right, left, anterior, posterior, inferior, and superior halves of the image data set. For each half, as well as for the entire scan, we performed area under the receiver operating characteristic curve (AUROC) analysis. Results Hepatic decompensation occurred in 128 individuals a median (interquartile range) of 1.5 years (142-1318 days) after the CT scan. The deep learning model exhibited promising results, with a mean ± SD AUROC of 0.89±0.04 for the baseline model. The mean ± SD AUROCs for the left, right, anterior, posterior, superior, and inferior halves were 0.83±0.03, 0.83±0.03, 0.82±0.09, 0.79±0.02, 0.78±0.02, and 0.76±0.04, respectively. Conclusion The study illustrates the potential of examining CT imaging using a 3D-DenseNet121 deep learning model to predict hepatic decompensation in patients with PSC.
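The half-volume analysis described above amounts to slicing each CT volume along one axis before training. A minimal sketch follows, assuming a (z, y, x) array layout; the actual orientation convention is not specified in the abstract, so the mapping of array axes to anatomical directions is illustrative.

```python
# Produce the six anatomical halves of a CT volume by slicing each axis.
import numpy as np

def volume_halves(vol):
    z, y, x = vol.shape
    return {
        "superior": vol[: z // 2], "inferior": vol[z // 2 :],
        "anterior": vol[:, : y // 2], "posterior": vol[:, y // 2 :],
        "right":    vol[:, :, : x // 2], "left": vol[:, :, x // 2 :],
    }

ct = np.zeros((128, 256, 256), dtype=np.float32)   # stand-in portal-venous CT
for name, half in volume_halves(ct).items():
    print(name, half.shape)
```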
Affiliation(s)
- John E Eaton
- Division of Gastroenterology & Hepatology, Mayo Clinic, Rochester, MN
21
Mäenpää SM, Korja M. Diagnostic test accuracy of externally validated convolutional neural network (CNN) artificial intelligence (AI) models for emergency head CT scans - A systematic review. Int J Med Inform 2024; 189:105523. [PMID: 38901270 DOI: 10.1016/j.ijmedinf.2024.105523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2024] [Revised: 05/29/2024] [Accepted: 06/10/2024] [Indexed: 06/22/2024]
Abstract
BACKGROUND The surge in emergency head CT imaging and artificial intelligence (AI) advancements, especially deep learning (DL) and convolutional neural networks (CNN), have accelerated the development of computer-aided diagnosis (CADx) for emergency imaging. External validation assesses model generalizability, providing preliminary evidence of clinical potential. OBJECTIVES This study systematically reviews externally validated CNN-CADx models for emergency head CT scans, critically appraises their diagnostic test accuracy (DTA), and assesses adherence to reporting guidelines. METHODS Studies comparing CNN-CADx model performance to a reference standard were eligible. The review was registered in PROSPERO (CRD42023411641) and conducted on Medline, Embase, EBM Reviews, and Web of Science following the PRISMA-DTA guideline. DTA reporting was systematically extracted and appraised using standardised checklists (STARD, CHARMS, CLAIM, TRIPOD, PROBAST, QUADAS-2). RESULTS Six of 5636 identified studies were eligible. The most common target condition was intracranial haemorrhage (ICH), and the intended workflow roles were auxiliary to experts. Owing to methodological and clinical between-study variation, meta-analysis was inappropriate. Scan-level sensitivity exceeded 90% in 5 of 6 studies, while specificities ranged from 58.0% to 97.7%. The SROC 95% predictive region was markedly broader than the confidence region, extending above 50% sensitivity and 20% specificity. All studies had an unclear or high risk of bias and concern for applicability (QUADAS-2, PROBAST), and reporting adherence was below 50% in 20 of 32 TRIPOD items. CONCLUSION Only 6 of the 5636 identified studies (approximately 0.1%) met the eligibility criteria. The evidence on the DTA of CNN-CADx models for emergency head CT scans remains limited within the scope of this review, as the eligible studies were scarce, unsuitable for meta-analysis, and undermined by inadequate methodological conduct and reporting. Even when properly conducted, external validation remains a preliminary step in evaluating the clinical potential of AI-CADx models; prospective and pragmatic clinical validation in comparative trials remains most crucial. In conclusion, future AI-CADx research should be methodologically standardized and reported in a clinically meaningful way to avoid research waste.
Affiliation(s)
- Saana M Mäenpää
- Department of Neurosurgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland.
- Miikka Korja
- Department of Neurosurgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland.
22
Mayfield JD, Ataya D, Abdalah M, Stringfield O, Bui MM, Raghunand N, Niell B, El Naqa I. Presurgical Upgrade Prediction of DCIS to Invasive Ductal Carcinoma Using Time-dependent Deep Learning Models with DCE MRI. Radiol Artif Intell 2024; 6:e230348. [PMID: 38900042 PMCID: PMC11427917 DOI: 10.1148/ryai.230348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/21/2024]
Abstract
Purpose To determine whether time-dependent deep learning models can outperform single time point models in predicting preoperative upgrade of ductal carcinoma in situ (DCIS) to invasive malignancy at dynamic contrast-enhanced (DCE) breast MRI without a lesion segmentation prerequisite. Materials and Methods In this exploratory study, 154 cases of biopsy-proven DCIS (25 upgraded at surgery and 129 not upgraded) were selected consecutively from a retrospective cohort of preoperative DCE MRI in women with a mean age of 59 years at time of diagnosis from 2012 to 2022. Binary classification was implemented with convolutional neural network (CNN)-long short-term memory (LSTM) architectures benchmarked against traditional CNNs without manual segmentation of the lesions. Combinatorial performance analysis of ResNet50 versus VGG16-based models was performed with each contrast phase. Binary classification area under the receiver operating characteristic curve (AUC) was reported. Results VGG16-based models consistently provided better holdout test AUCs than did ResNet50 in CNN and CNN-LSTM studies (multiphase test AUC, 0.67 vs 0.59, respectively, for CNN models [P = .04] and 0.73 vs 0.62 for CNN-LSTM models [P = .008]). The time-dependent model (CNN-LSTM) provided a better multiphase test AUC over single time point (CNN) models (0.73 vs 0.67; P = .04). Conclusion Compared with single time point architectures, sequential deep learning algorithms using preoperative DCE MRI improved prediction of DCIS lesions upgraded to invasive malignancy without the need for lesion segmentation. Keywords: MRI, Dynamic Contrast-enhanced, Breast, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2024.
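To make the CNN-LSTM idea concrete, the sketch below encodes each DCE contrast phase with a shared 2D CNN backbone and aggregates the phase sequence with an LSTM. The VGG16 backbone matches the family compared above, but the sizes, pooling, and classification head are illustrative assumptions, not the authors' exact architecture.

```python
# Schematic CNN-LSTM over DCE-MRI contrast phases: shared per-phase CNN
# encoder, LSTM over the phase dimension, single logit per case.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PhaseCNNLSTM(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = vgg16(weights=None).features   # per-phase feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # upgrade vs no upgrade

    def forward(self, x):                   # x: (batch, phases, 3, H, W)
        b, t = x.shape[:2]
        feats = self.pool(self.encoder(x.flatten(0, 1))).flatten(1)
        _, (h, _) = self.lstm(feats.view(b, t, -1))
        return self.head(h[-1])             # one logit per case

model = PhaseCNNLSTM()
print(model(torch.randn(2, 5, 3, 224, 224)).shape)   # torch.Size([2, 1])
```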
MESH Headings
- Humans
- Female
- Breast Neoplasms/diagnostic imaging
- Breast Neoplasms/pathology
- Breast Neoplasms/surgery
- Deep Learning
- Middle Aged
- Magnetic Resonance Imaging/methods
- Retrospective Studies
- Carcinoma, Intraductal, Noninfiltrating/diagnostic imaging
- Carcinoma, Intraductal, Noninfiltrating/pathology
- Carcinoma, Intraductal, Noninfiltrating/surgery
- Contrast Media
- Carcinoma, Ductal, Breast/diagnostic imaging
- Carcinoma, Ductal, Breast/pathology
- Carcinoma, Ductal, Breast/surgery
- Aged
- Adult
- Predictive Value of Tests
- Image Interpretation, Computer-Assisted/methods
- Breast/diagnostic imaging
- Breast/pathology
- Breast/surgery
Affiliation(s)
- John D Mayfield, Dana Ataya, Mahmoud Abdalah, Olya Stringfield, Marilyn M Bui, Natarajan Raghunand, Bethany Niell, Issam El Naqa
- From the Departments of Radiology (J.D.M.), Oncologic Sciences (D.A., M.M.B., N.R., B.N.), and Medical Engineering (J.D.M.), University of South Florida College of Medicine, 12901 Bruce B. Downs Blvd, Tampa, FL 33612; and Department of Diagnostic Imaging and Interventional Radiology (D.A., B.N.), Department of Pathology (M.M.B.), Department of Cancer Physiology (N.R.), Quantitative Imaging CORE (M.A., O.S., I.E.N.), and Department of Machine Learning (M.M.B., I.E.N.), H. Lee Moffitt Cancer Center and Research Institute, Tampa, Fla
23
Johnson PM. Advancing Equitable AI in Radiology through Contrastive Learning. Radiol Artif Intell 2024; 6:e240530. [PMID: 39320236 PMCID: PMC11427925 DOI: 10.1148/ryai.240530] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2024] [Revised: 08/28/2024] [Accepted: 09/05/2024] [Indexed: 09/26/2024]
Affiliation(s)
- Patricia M. Johnson
- From the Department of Radiology, New York University Grossman School of Medicine, 650 1st Ave, New York, NY 10016-6402
24
Budge J, Farrell-Dillon K, Azhar B, Roy I. Letter to the Editor: "Unenhanced computed tomography radiomics help detect endoleaks after endovascular repair of abdominal aortic aneurysm". Eur Radiol 2024; 34:4850-4851. [PMID: 38197917 DOI: 10.1007/s00330-023-10565-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2023] [Revised: 11/18/2023] [Accepted: 11/27/2023] [Indexed: 01/11/2024]
Affiliation(s)
- James Budge
- St George's Vascular Institute, St George's University of London, London, UK.
- Bilal Azhar
- St George's Vascular Institute, St George's University of London, London, UK
- Iain Roy
- St George's Vascular Institute, St George's University of London, London, UK
25
Rouzrokh P, Clarke JE, Hosseiny M, Nikpanah M, Mokkarala M. Preparing Radiologists for an Artificial Intelligence-enhanced Future: Tips for Trainees. Radiographics 2024; 44:e240042. [PMID: 39024174 PMCID: PMC11310759 DOI: 10.1148/rg.240042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Revised: 03/06/2024] [Accepted: 03/13/2024] [Indexed: 07/20/2024]
Affiliation(s)
- Pouria Rouzrokh, Jamie E. Clarke, Melina Hosseiny, Moozhan Nikpanah, Mahati Mokkarala
- From the Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, Rochester, Minn (P.R.); Department of Radiology, University of California Los Angeles, Los Angeles, Calif (J.E.C.); Department of Radiology, University of California San Diego, San Diego, Calif (M.H.); Department of Radiology, University of Alabama at Birmingham, Birmingham, Ala (M.N.); and Department of Radiology, Mallinckrodt Institute of Radiology, St. Louis, Mo (M.M.)
26
López-Úbeda P, Martín-Noguerol T, Díaz-Angulo C, Luna A. Evaluation of large language models performance against humans for summarizing MRI knee radiology reports: A feasibility study. Int J Med Inform 2024; 187:105443. [PMID: 38615509 DOI: 10.1016/j.ijmedinf.2024.105443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Revised: 03/20/2024] [Accepted: 03/29/2024] [Indexed: 04/16/2024]
Abstract
OBJECTIVES This study addresses the critical need for accurate summarization in radiology by comparing various Large Language Model (LLM)-based approaches for automatic summary generation. With the increasing volume of patient information, accurately and concisely conveying radiological findings becomes crucial for effective clinical decision-making. Minor inaccuracies in summaries can lead to significant consequences, highlighting the need for reliable automated summarization tools. METHODS We employed two language models - Text-to-Text Transfer Transformer (T5) and Bidirectional and Auto-Regressive Transformers (BART) - in both fine-tuned and zero-shot learning scenarios and compared them with a Recurrent Neural Network (RNN). Additionally, we conducted a comparative analysis of 100 MRI report summaries, using expert human judgment and criteria such as coherence, relevance, fluency, and consistency, to evaluate the models against the original radiologist summaries. To facilitate this, we compiled a dataset of 15,508 retrospective knee Magnetic Resonance Imaging (MRI) reports from our Radiology Information System (RIS), focusing on the findings section to predict the radiologist's summary. RESULTS The fine-tuned models outperformed the RNN and showed superior performance over the zero-shot variants. Specifically, the T5 model achieved a Rouge-L score of 0.638. In the radiologist readers' study, the summaries produced by this model were judged very similar to those produced by a radiologist, with about 70% similarity in fluency and consistency between the T5-generated summaries and the original ones. CONCLUSIONS Technological advances, especially in NLP and LLMs, hold great promise for improving and streamlining the summarization of radiological findings, thus providing valuable assistance to radiologists in their work.
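A minimal zero-shot setup of the kind compared above can be reproduced with the Hugging Face transformers library; the checkpoint name and the toy findings text below are placeholders, not the study's data or fine-tuned weights.

```python
# Zero-shot seq2seq summarization with a T5-style model; T5 expects a
# task prefix such as "summarize:".
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "t5-base"                     # assumed stand-in checkpoint
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

findings = ("summarize: Complex tear of the medial meniscus. "
            "ACL intact. Moderate joint effusion.")
inputs = tok(findings, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tok.decode(ids[0], skip_special_tokens=True))
```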
Affiliation(s)
- Antonio Luna
- MRI Unit, Radiology Department, Health Time, Jaén, Spain.
27
Linguraru MG, Bakas S, Aboian M, Chang PD, Flanders AE, Kalpathy-Cramer J, Kitamura FC, Lungren MP, Mongan J, Prevedello LM, Summers RM, Wu CC, Adewole M, Kahn CE. Clinical, Cultural, Computational, and Regulatory Considerations to Deploy AI in Radiology: Perspectives of RSNA and MICCAI Experts. Radiol Artif Intell 2024; 6:e240225. [PMID: 38984986 PMCID: PMC11294958 DOI: 10.1148/ryai.240225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2024] [Revised: 04/13/2024] [Accepted: 04/25/2024] [Indexed: 07/11/2024]
Abstract
The Radiological Society of North America (RSNA) and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have led a series of joint panels and seminars focused on the present impact and future directions of artificial intelligence (AI) in radiology. These conversations have collected viewpoints from multidisciplinary experts in radiology, medical imaging, and machine learning on the current clinical penetration of AI technology in radiology and how it is affected by trust, reproducibility, explainability, and accountability. The collective points, both practical and philosophical, define the cultural changes for radiologists and AI scientists working together and describe the challenges ahead for AI technologies to meet broad approval. This article presents the perspectives of experts from MICCAI and RSNA on the clinical, cultural, computational, and regulatory considerations, coupled with recommended reading materials, essential to adopting AI technology successfully in radiology and, more generally, in clinical practice. The report emphasizes the importance of collaboration to improve clinical deployment, highlights the need to integrate clinical and medical imaging data, and introduces strategies to ensure smooth and incentivized integration. Keywords: Adults and Pediatrics, Computer Applications-General (Informatics), Diagnosis, Prognosis © RSNA, 2024.
Affiliation(s)
- Marius George Linguraru, Spyridon Bakas, Mariam Aboian, Peter D. Chang, Adam E. Flanders, Jayashree Kalpathy-Cramer, Felipe C. Kitamura, Matthew P. Lungren, John Mongan, Luciano M. Prevedello, Ronald M. Summers, Carol C. Wu, Maruf Adewole, Charles E. Kahn
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC (M.G.L.); Divisions of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology, Department of Pathology & Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, Ind (S.B.); Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department of Radiological Sciences, University of California Irvine, Irvine, Calif (P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI, Diagnósticos da América SA (DasaInova), São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass (M.P.L.); Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California San Francisco, San Francisco, Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston, Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104-6243 (C.E.K.)
28
Wang Y, Guo Y, Wang Z, Yu L, Yan Y, Gu Z. Enhancing semantic segmentation in chest X-ray images through image preprocessing: ps-KDE for pixel-wise substitution by kernel density estimation. PLoS One 2024; 19:e0299623. [PMID: 38913621 PMCID: PMC11195943 DOI: 10.1371/journal.pone.0299623] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2024] [Accepted: 05/08/2024] [Indexed: 06/26/2024] Open
Abstract
BACKGROUND In medical imaging, the integration of deep-learning-based semantic segmentation algorithms with preprocessing techniques can reduce the need for human annotation and advance disease classification. Among established preprocessing techniques, Contrast Limited Adaptive Histogram Equalization (CLAHE) has demonstrated efficacy in improving segmentation algorithms across various modalities, such as X-rays and CT. However, there remains a demand for improved contrast enhancement methods that account for the heterogeneity of datasets and the varying contrast across different anatomic structures. METHOD This study proposes a novel preprocessing technique, ps-KDE, and investigates its impact on deep learning algorithms that segment major organs in posterior-anterior chest X-rays. ps-KDE augments image contrast by substituting pixel values based on their normalized frequency across all images. We evaluate our approach on a U-Net architecture with a ResNet34 backbone pre-trained on ImageNet. Five separate models are trained to segment the heart, left lung, right lung, left clavicle, and right clavicle. RESULTS The model trained to segment the left lung using ps-KDE achieved a Dice score of 0.780 (SD = 0.13), while the model trained with CLAHE achieved a Dice score of 0.717 (SD = 0.19), p<0.01. ps-KDE also appears to be more robust, as CLAHE-based models misclassified the right lung in select test images for the left-lung model. The algorithm for performing ps-KDE is available at https://github.com/wyc79/ps-KDE. DISCUSSION Our results suggest that ps-KDE offers advantages over current preprocessing techniques when segmenting certain lung regions. This could be beneficial in subsequent analyses such as disease classification and risk stratification.
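Based solely on the description above, a histogram-based approximation of the ps-KDE substitution might look like the sketch below, with a simple histogram standing in for kernel density estimation; the authors' released implementation (linked above) should be treated as authoritative.

```python
# Pixel-wise substitution: replace each pixel value by its normalized
# frequency estimated over the whole dataset (histogram as KDE stand-in).
import numpy as np

def ps_kde_like(images, bins=256):
    flat = np.concatenate([im.ravel() for im in images])
    hist, edges = np.histogram(flat, bins=bins, density=True)
    hist = hist / hist.max()                      # normalize frequencies to [0, 1]
    out = []
    for im in images:
        idx = np.clip(np.digitize(im, edges) - 1, 0, bins - 1)
        out.append(hist[idx].astype(np.float32))  # substitute value -> frequency
    return out

rng = np.random.default_rng(0)
cxr = [rng.integers(0, 256, size=(64, 64)).astype(float) for _ in range(4)]
print(ps_kde_like(cxr)[0].shape)                  # (64, 64)
```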
Affiliation(s)
- Yuanchen Wang, Yujie Guo, Ziqi Wang, Linzi Yu, Yujie Yan, Zifan Gu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
29
Codipilly DC, Faghani S, Hagan C, Lewis J, Erickson BJ, Iyer PG. The Evolving Role of Artificial Intelligence in Gastrointestinal Histopathology: An Update. Clin Gastroenterol Hepatol 2024; 22:1170-1180. [PMID: 38154727 DOI: 10.1016/j.cgh.2023.11.044] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 11/20/2023] [Accepted: 11/21/2023] [Indexed: 12/30/2023]
Abstract
Significant advances in artificial intelligence (AI) over the past decade may lead to dramatic changes in clinical practice. Digitized histology represents an area ripe for AI implementation. We describe several current needs within gastrointestinal histopathology and outline, using currently studied models, how AI can potentially address them. We also highlight pitfalls as AI makes inroads into clinical practice.
Affiliation(s)
- D Chamil Codipilly
- Barrett's Esophagus Unit, Division of Gastroenterology and Hepatology, Mayo Clinic Rochester, Rochester, Minnesota
- Shahriar Faghani
- Mayo Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Catherine Hagan
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Jason Lewis
- Department of Pathology, Mayo Clinic, Jacksonville, Florida
- Bradley J Erickson
- Mayo Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Prasad G Iyer
- Barrett's Esophagus Unit, Division of Gastroenterology and Hepatology, Mayo Clinic Rochester, Rochester, Minnesota
30
Moassefi M, Faghani S, Khanipour Roshan S, Conte GM, Rassoulinejad Mousavi SM, Kaufmann TJ, Erickson BJ. Exploring the Impact of 3D Fast Spin Echo and Inversion Recovery Gradient Echo Sequences Magnetic Resonance Imaging Acquisition on Automated Brain Tumor Segmentation. MAYO CLINIC PROCEEDINGS. DIGITAL HEALTH 2024; 2:231-240. [PMID: 40207177 PMCID: PMC11975840 DOI: 10.1016/j.mcpdig.2024.03.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 04/11/2025]
Abstract
Objective To conduct a study comparing the performance of automated segmentation techniques using 2 different contrast-enhanced T1-weighted (CET1) magnetic resonance imaging (MRI) acquisition protocols. Patients and Methods We collected 100 preoperative glioblastoma (GBM) MRIs consisting of 50 inversion recovery gradient echo (IR-GRE) and 50 3-dimensional fast spin echo (3D-FSE) image sets. The gold-standard tumor segmentation mask for each case was created based on the expert opinion of a neuroradiologist. Cases were randomly divided into training and test sets. We used the no new UNet (nnUNet) architecture pretrained on a 501-image public data set containing IR-GRE sequence image sets, followed by 2 training rounds with the IR-GRE and 3D-FSE images, respectively. For each patient in the IR-GRE and 3D-FSE test sets, we had 2 prediction masks: one from the model fine-tuned with the IR-GRE training set and one from the model fine-tuned with the 3D-FSE training set. The Dice similarity coefficients (DSCs) of the 2 sets of results for each case in the test sets were compared using Wilcoxon tests. Results Models trained on 3D-FSE images outperformed IR-GRE models in lesion segmentation, with mean DSC differences of 0.057 and 0.022 in the respective test sets. For the 3D-FSE and IR-GRE test sets, the P values comparing DSCs from the 2 models were .02 and .61, respectively. Conclusion Including 3D-FSE MRI in the training data set improves segmentation performance when segmenting 3D-FSE images.
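The Dice similarity coefficient used throughout the comparison above is straightforward to compute for binary masks; the following generic utility is a sketch, not the study's code.

```python
# Dice similarity coefficient for binary segmentation masks:
# DSC = 2 |A ∩ B| / (|A| + |B|)
import numpy as np

def dice(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

a = np.zeros((16, 16, 16)); a[4:12, 4:12, 4:12] = 1   # toy tumor masks
b = np.zeros((16, 16, 16)); b[5:12, 5:12, 5:12] = 1
print(round(dice(a, b), 3))                            # ~0.802
```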
Collapse
Affiliation(s)
- Mana Moassefi
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| | - Shahriar Faghani
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| | | | - Gian Marco Conte
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| | - Seyed Moein Rassoulinejad Mousavi
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| | | | - Bradley J. Erickson
- Mayo Clinic Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, Rochester, MN
- Department of Radiology, Mayo Clinic, Rochester, MN
| |
Collapse
|
31
|
Zhong Z, Li J, Kulkarni S, Zhang H, Fayad FH, Li Y, Collins S, Bai H, Ahn SH, Atalay MK, Gao X, Jiao Z. De-Biased Disentanglement Learning for Pulmonary Embolism Survival Prediction on Multimodal Data. IEEE J Biomed Health Inform 2024; 28:3732-3741. [PMID: 38568767 DOI: 10.1109/jbhi.2024.3384848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2024]
Abstract
Health disparities among marginalized populations with lower socioeconomic status significantly impact the fairness and effectiveness of healthcare delivery. The increasing integration of artificial intelligence (AI) into healthcare presents an opportunity to address these inequalities, provided that AI models are free from bias. This paper addresses the bias challenges posed by population disparities within healthcare systems, which arise in both data presentation and algorithm development and lead to inequitable medical implementation for conditions such as pulmonary embolism (PE) prognosis. In this study, we explore the diverse biases in healthcare systems, which highlight the demand for a holistic framework for reducing bias through complementary aggregation. By leveraging de-biasing deep survival prediction models, we propose a framework that disentangles identifiable information from images, text reports, and clinical variables to mitigate potential biases within multimodal datasets. Our approach offers several advantages over traditional clinical survival prediction methods, including richer survival-related characteristics and bias-complementary predicted results. By improving the robustness of survival analysis through this framework, we aim to benefit patients, clinicians, and researchers by enhancing fairness and accuracy in healthcare AI systems.
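The abstract gives no implementation details, but a common building block of deep survival prediction is the Cox partial likelihood loss. The following is a minimal, hypothetical PyTorch sketch of that component only, not the paper's full de-biased disentanglement framework; all tensor names and toy data are assumptions.

```python
import torch

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor) -> torch.Tensor:
    """Negative Cox partial log-likelihood (no tie correction).

    risk  : (N,) predicted log-risk scores from a network
    time  : (N,) follow-up times
    event : (N,) 1.0 if the event occurred, 0.0 if censored
    """
    order = torch.argsort(time, descending=True)    # latest follow-up first
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)  # log-sum-exp over subjects still at risk
    partial_ll = (risk - log_risk_set) * event      # only observed events contribute
    return -partial_ll.sum() / event.sum().clamp(min=1.0)

# Toy usage: 8 subjects with random scores, times, and censoring indicators.
risk = torch.randn(8, requires_grad=True)
time = torch.rand(8)
event = torch.randint(0, 2, (8,)).float()
cox_ph_loss(risk, time, event).backward()
```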
Collapse
|
32
|
Kim H, Kim K, Oh SJ, Lee S, Woo JH, Kim JH, Cha YK, Kim K, Chung MJ. AI-assisted Analysis to Facilitate Detection of Humeral Lesions on Chest Radiographs. Radiol Artif Intell 2024; 6:e230094. [PMID: 38446041 PMCID: PMC11140509 DOI: 10.1148/ryai.230094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2023] [Revised: 01/10/2024] [Accepted: 02/15/2024] [Indexed: 03/07/2024]
Abstract
Purpose To develop an artificial intelligence (AI) system for humeral tumor detection on chest radiographs (CRs) and evaluate its impact on reader performance. Materials and Methods In this retrospective study, 14 709 CRs (January 2000 to December 2021) were collected from 13 468 patients, including CT-proven normal (n = 13 116) and humeral tumor (n = 1593) cases. The data were divided into training and test groups. A novel training method called false-positive activation area reduction (FPAR) was introduced to enhance diagnostic performance by focusing on the humeral region. The AI program and 10 radiologists were assessed using holdout test set 1, wherein the radiologists were tested twice (with and without AI test results). The performance of the AI system was evaluated using holdout test set 2, comprising 10 497 normal images. Receiver operating characteristic analyses were conducted to evaluate model performance. Results FPAR application in the AI program improved its performance compared with a conventional model based on the area under the receiver operating characteristic curve (0.87 vs 0.82, P = .04). The proposed AI system also demonstrated improved tumor localization accuracy (80% vs 57%, P < .001). In holdout test set 2, the proposed AI system exhibited a false-positive rate of 2%. AI assistance improved the radiologists' sensitivity, specificity, and accuracy by 8.9%, 1.2%, and 3.5%, respectively (P < .05 for all). Conclusion The proposed AI tool incorporating FPAR improved humeral tumor detection on CRs and reduced false-positive results in tumor visualization. It may serve as a supportive diagnostic tool to alert radiologists about humeral abnormalities. Keywords: Artificial Intelligence, Conventional Radiography, Humerus, Machine Learning, Shoulder, Tumor Supplemental material is available for this article. © RSNA, 2024.
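The paper's FPAR method is not specified in the abstract. The sketch below illustrates one plausible interpretation, a loss term that penalizes class-activation energy outside an anatomic region of interest; the function name, tensor shapes, and values are all assumptions for illustration, not the authors' implementation.

```python
import torch

def activation_area_penalty(cam: torch.Tensor, roi_mask: torch.Tensor,
                            weight: float = 1.0) -> torch.Tensor:
    """Penalize class-activation energy outside a region-of-interest mask.

    cam      : (N, H, W) non-negative class activation map from the detector
    roi_mask : (N, H, W) binary mask of the target region (1 inside)
    """
    outside = cam * (1.0 - roi_mask)  # activation falling outside the ROI
    return weight * outside.mean()    # acts as a false-positive area penalty

# Toy usage: random activations with a rectangular ROI (all values illustrative).
cam = torch.rand(2, 64, 64)
mask = torch.zeros(2, 64, 64)
mask[:, 20:40, 10:30] = 1.0
penalty = activation_area_penalty(cam, mask)
# total_loss = classification_loss + penalty
```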
Collapse
Affiliation(s)
- Harim Kim
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| | - Kyungsu Kim
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| | - Seong Je Oh
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| | - Sungjoo Lee
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| | - Jung Han Woo
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| | - Jong Hee Kim
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| | - Yoon Ki Cha
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| | - Kyunga Kim
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| | - Myung Jin Chung
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
| |
Collapse
|
33
|
Tejani AS, Ng YS, Xi Y, Rayan JC. Understanding and Mitigating Bias in Imaging Artificial Intelligence. Radiographics 2024; 44:e230067. [PMID: 38635456 DOI: 10.1148/rg.230067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/20/2024]
Abstract
Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference to a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. However, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment such as automation bias, or a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI. Published under a CC BY 4.0 license. Test Your Knowledge questions for this article are available in the supplemental material. See the invited commentary by Rouzrokh and Erickson in this issue.
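To make the article's definition of statistical bias concrete, the short example below measures the bias of a classic biased estimator, the uncorrected sample variance; it is an illustrative aside with made-up parameters, not drawn from the article.

```python
import numpy as np

# Statistical bias = E[estimate] - true value. The uncorrected sample
# variance (dividing by n instead of n - 1) is a classic biased estimator.
rng = np.random.default_rng(0)
true_var = 4.0  # variance of N(0, 2^2)
estimates = [rng.normal(0.0, 2.0, size=10).var(ddof=0) for _ in range(100_000)]
print("mean estimate:", round(float(np.mean(estimates)), 3))    # ~3.6, below 4.0
print("bias:", round(float(np.mean(estimates)) - true_var, 3))  # systematic error ~ -0.4
```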
Collapse
Affiliation(s)
- Ali S Tejani
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
| | - Yee Seng Ng
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
| | - Yin Xi
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
| | - Jesse C Rayan
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
| |
Collapse
|
34
|
Mickley JP, Grove AF, Rouzrokh P, Yang L, Larson AN, Sanchez-Sotello J, Maradit Kremers H, Wyles CC. A Stepwise Approach to Analyzing Musculoskeletal Imaging Data With Artificial Intelligence. Arthritis Care Res (Hoboken) 2024; 76:590-599. [PMID: 37849415 DOI: 10.1002/acr.25260] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 08/27/2023] [Accepted: 10/13/2023] [Indexed: 10/19/2023]
Abstract
The digitization of medical records and the expansion of electronic health records have created an era of "Big Data" with an abundance of available information, ranging from clinical notes to imaging studies. In the field of rheumatology, medical imaging is used to guide both diagnosis and treatment of a wide variety of rheumatic conditions. Although there is an abundance of data to analyze, traditional methods of image analysis are human resource intensive. Fortunately, the growth of artificial intelligence (AI) may be a solution to handle large datasets. In particular, computer vision is a field within AI that analyzes images and extracts information. Computer vision has impressive capabilities and can be applied to rheumatologic conditions, necessitating an understanding of how computer vision works. In this article, we provide an overview of AI in rheumatology and conclude with a five-step process to plan and conduct research in the field of computer vision. The five steps include (1) project definition, (2) data handling, (3) model development, (4) performance evaluation, and (5) deployment into clinical care.
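As a concrete companion to the five steps, here is a minimal, hypothetical end-to-end sketch using scikit-learn on synthetic features standing in for imaging-derived measurements; it is a toy pipeline under assumed data, not the article's workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split  # (2) data handling
from sklearn.linear_model import LogisticRegression   # (3) model development
from sklearn.metrics import roc_auc_score             # (4) performance evaluation
import joblib                                         # (5) deployment artifact

# (1) Project definition: a binary imaging-classification task on
# synthetic features standing in for imaging-derived measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
joblib.dump(model, "model.joblib")  # persist the model for downstream deployment
```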
Collapse
|
35
|
Rouzrokh P, Erickson BJ. Invited Commentary: The Double-edged Sword of Bias in Medical Imaging Artificial Intelligence. Radiographics 2024; 44:e230243. [PMID: 38635455 DOI: 10.1148/rg.230243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/20/2024]
Affiliation(s)
- Pouria Rouzrokh
- From the Mayo Clinic Artificial Intelligence Laboratory and Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
| | - Bradley J Erickson
- From the Mayo Clinic Artificial Intelligence Laboratory and Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
| |
Collapse
|
36
|
Faghani S, Erickson BJ. Bone Age Prediction under Stress. Radiol Artif Intell 2024; 6:e240137. [PMID: 38629960 PMCID: PMC11140503 DOI: 10.1148/ryai.240137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2024] [Revised: 03/24/2024] [Accepted: 04/01/2024] [Indexed: 04/19/2024]
Affiliation(s)
- Shahriar Faghani
- From the Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Bradley J. Erickson
- From the Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| |
Collapse
|
37
|
Juwara L, El-Hussuna A, El Emam K. An evaluation of synthetic data augmentation for mitigating covariate bias in health data. PATTERNS (NEW YORK, N.Y.) 2024; 5:100946. [PMID: 38645766 PMCID: PMC11026977 DOI: 10.1016/j.patter.2024.100946] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/21/2023] [Revised: 10/23/2023] [Accepted: 02/08/2024] [Indexed: 04/23/2024]
Abstract
Data bias is a major concern in biomedical research, especially when evaluating large-scale observational datasets. It leads to imprecise predictions and inconsistent estimates in standard regression models. We compare the performance of commonly used bias-mitigating approaches (resampling, algorithmic, and post hoc approaches) against a synthetic data-augmentation method that utilizes sequential boosted decision trees to synthesize under-represented groups. The approach is called synthetic minority augmentation (SMA). Through simulations and analysis of real health datasets on a logistic regression workload, the approaches are evaluated across various bias scenarios (types and severity levels). Performance was assessed based on area under the curve, calibration (Brier score), precision of parameter estimates, confidence interval overlap, and fairness. Overall, SMA produces the closest results to the ground truth in low to medium bias (50% or less missing proportion). In high bias (80% or more missing proportion), the advantage of SMA is not obvious, with no specific method consistently outperforming others.
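The study's evaluation relies in part on area under the curve and the Brier score; the snippet below shows, on fabricated probabilities and outcomes, how both are computed with scikit-learn.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

# Fabricated predicted probabilities and outcomes from a logistic model.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(0.3 * y_true + rng.uniform(0.0, 0.7, size=200), 0.0, 1.0)

print("AUC        :", round(roc_auc_score(y_true, y_prob), 3))
print("Brier score:", round(brier_score_loss(y_true, y_prob), 3))  # lower = better calibrated
```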
Collapse
Affiliation(s)
- Lamin Juwara
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Research Institute, Children’s Hospital of Eastern Ontario, Ottawa, ON, Canada
| | | | - Khaled El Emam
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Research Institute, Children’s Hospital of Eastern Ontario, Ottawa, ON, Canada
- Data Science, Replica Analytics Ltd., Ottawa, ON, Canada
| |
Collapse
|
38
|
Davis MA, Wu O, Ikuta I, Jordan JE, Johnson MH, Quigley E. Understanding Bias in Artificial Intelligence: A Practice Perspective. AJNR Am J Neuroradiol 2024; 45:371-373. [PMID: 38123951 PMCID: PMC11288570 DOI: 10.3174/ajnr.a8070] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Accepted: 10/17/2023] [Indexed: 12/23/2023]
Abstract
In the fall of 2021, several experts in this space delivered a Webinar hosted by the American Society of Neuroradiology (ASNR) Diversity and Inclusion Committee, focused on expanding the understanding of bias in artificial intelligence, with a health equity lens, and provided key concepts for neuroradiologists to approach the evaluation of these tools. In this perspective, we distill key parts of this discussion, including understanding why this topic is important to neuroradiologists and lending insight on how neuroradiologists can develop a framework to assess health equity-related bias in artificial intelligence tools. In addition, we provide examples of clinical workflow implementation of these tools so that we can begin to see how artificial intelligence tools will impact discourse on equitable radiologic care. As continuous learners, we must be engaged in new and rapidly evolving technologies that emerge in our field. The Diversity and Inclusion Committee of the ASNR has addressed this subject matter through its programming content revolving around health equity in neuroradiologic advances.
Collapse
Affiliation(s)
- Melissa A Davis
- From Yale University (M.A.D., M.H.J.), New Haven, Connecticut
| | - Ona Wu
- Massachusetts General Hospital (O.W.), Charlestown, Massachusetts
| | - Ichiro Ikuta
- Mayo Clinic Arizona, Department of Radiology (I.I.), Phoenix, Arizona
| | - John E Jordan
- Stanford University School of Medicine (J.E.J.), Stanford, California
| | | | | |
Collapse
|
39
|
Faghani S, Moassefi M, Madhavan AA, Mark IT, Verdoorn JT, Erickson BJ, Benson JC. Identifying Patients with CSF-Venous Fistula Using Brain MRI: A Deep Learning Approach. AJNR Am J Neuroradiol 2024; 45:439-443. [PMID: 38423747 PMCID: PMC11288568 DOI: 10.3174/ajnr.a8173] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Accepted: 12/12/2023] [Indexed: 03/02/2024]
Abstract
BACKGROUND AND PURPOSE Spontaneous intracranial hypotension is an increasingly recognized condition. Spontaneous intracranial hypotension is caused by a CSF leak, which is commonly related to a CSF-venous fistula. In patients with spontaneous intracranial hypotension, multiple intracranial abnormalities can be observed on brain MR imaging, including dural enhancement, "brain sag," and pituitary engorgement. This study seeks to create a deep learning model for the accurate diagnosis of CSF-venous fistulas via brain MR imaging. MATERIALS AND METHODS A review of patients with clinically suspected spontaneous intracranial hypotension who underwent digital subtraction myelogram imaging preceded by brain MR imaging was performed. The patients were categorized as having a definite CSF-venous fistula, no fistula, or indeterminate findings on a digital subtraction myelogram. The data set was split into 5 folds at the patient level and stratified by label. A 5-fold cross-validation was then used to evaluate the reliability of the model. The predictive value of the model to identify patients with a CSF leak was assessed by using the area under the receiver operating characteristic curve for each validation fold. RESULTS A total of 129 patients were included in this study. The median age was 54 years, and 66 (51.2%) had a CSF-venous fistula. In discriminating between positive and negative cases for CSF-venous fistulas, the classifier demonstrated an average area under the receiver operating characteristic curve of 0.8668 with a standard deviation of 0.0254 across the folds. CONCLUSIONS This study developed a deep learning model that can predict the presence of a spinal CSF-venous fistula based on brain MR imaging in patients with suspected spontaneous intracranial hypotension. However, further model refinement and external validation are necessary before clinical adoption. This research highlights the substantial potential of deep learning in diagnosing CSF-venous fistulas by using brain MR imaging.
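The patient-level, label-stratified 5-fold cross-validation with per-fold AUROC described in the abstract can be reproduced in outline as follows; a logistic regression on random features stands in for the study's deep learning model, and all data here are synthetic.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# One synthetic feature vector and label per patient (129 patients, as in the
# study); splitting rows therefore splits at the patient level.
rng = np.random.default_rng(42)
X = rng.normal(size=(129, 32))
y = rng.integers(0, 2, size=129)  # 1 = CSF-venous fistula present

aucs = []
for tr, va in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[va], clf.predict_proba(X[va])[:, 1]))

print(f"AUROC: {np.mean(aucs):.4f} +/- {np.std(aucs):.4f} across 5 folds")
```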
Collapse
Affiliation(s)
- Shahriar Faghani
- From the Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | - Mana Moassefi
- From the Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | | | - Ian T. Mark
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | | | - Bradley J. Erickson
- From the Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, Minnesota
| | - John C. Benson
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
| |
Collapse
|
40
|
Flory MN, Napel S, Tsai EB. Artificial Intelligence in Radiology: Opportunities and Challenges. Semin Ultrasound CT MR 2024; 45:152-160. [PMID: 38403128 DOI: 10.1053/j.sult.2024.02.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/27/2024]
Abstract
Artificial intelligence's (AI) emergence in radiology elicits both excitement and uncertainty. AI holds promise for improving radiology with regard to clinical practice, education, and research opportunities. Yet AI systems are trained on select datasets that can contain bias and inaccuracies. Radiologists must understand these limitations and engage with AI developers at every step of the process - from algorithm initiation and design to development and implementation - to maximize the benefit and minimize the harm that this technology can enable.
Collapse
Affiliation(s)
- Marta N Flory
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
| | - Sandy Napel
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
| | - Emily B Tsai
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA.
| |
Collapse
|
41
|
van Assen M, Beecy A, Gershon G, Newsome J, Trivedi H, Gichoya J. Implications of Bias in Artificial Intelligence: Considerations for Cardiovascular Imaging. Curr Atheroscler Rep 2024; 26:91-102. [PMID: 38363525 DOI: 10.1007/s11883-024-01190-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/16/2024] [Indexed: 02/17/2024]
Abstract
PURPOSE OF REVIEW Bias in artificial intelligence (AI) models can result in unintended consequences. In cardiovascular imaging, biased AI models used in clinical practice can negatively affect patient outcomes. Biased AI models result from decisions made when training and evaluating a model. This paper is a comprehensive guide for AI development teams to understand assumptions in datasets and chosen metrics for outcome/ground truth, and how these translate to real-world performance for cardiovascular disease (CVD). RECENT FINDINGS CVDs are the number one cause of mortality worldwide; however, the prevalence, burden, and outcomes of CVD vary across gender and race. Several biomarkers are also shown to vary among different populations and ethnic/racial groups. Inequalities in clinical trial inclusion, clinical presentation, diagnosis, and treatment are preserved in the health data that is ultimately used to train AI algorithms, leading to potential biases in model performance. Although AI models themselves can be biased, AI can also help to mitigate bias (e.g., bias auditing tools). In this review paper, we describe in detail implicit and explicit biases in the care of cardiovascular disease that may be present in existing datasets but are not obvious to model developers. We review disparities in CVD outcomes across different genders and race groups, differences in treatment of historically marginalized groups, and disparities in clinical trials for various cardiovascular diseases and outcomes. Thereafter, we summarize some CVD AI literature that shows bias in CVD AI as well as approaches by which AI is being used to mitigate CVD bias.
Collapse
Affiliation(s)
- Marly van Assen
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA.
| | - Ashley Beecy
- Division of Cardiology, Department of Medicine, Weill Cornell Medicine, New York, NY, USA
- Information Technology, NewYork-Presbyterian, New York, NY, USA
| | - Gabrielle Gershon
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Janice Newsome
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Hari Trivedi
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Judy Gichoya
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| |
Collapse
|
42
|
Yang L, Oeding JF, de Marinis R, Marigi E, Sanchez-Sotelo J. Deep learning to automatically classify very large sets of preoperative and postoperative shoulder arthroplasty radiographs. J Shoulder Elbow Surg 2024; 33:773-780. [PMID: 37879598 DOI: 10.1016/j.jse.2023.09.021] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 09/06/2023] [Accepted: 09/10/2023] [Indexed: 10/27/2023]
Abstract
BACKGROUND Joint arthroplasty registries usually lack information on medical imaging owing to the laborious process of observing and recording, as well as the lack of standard methods to transfer the imaging information to the registries, which can limit the investigation of various research questions. Artificial intelligence (AI) algorithms can automate imaging-feature identification with high accuracy and efficiency. With the purpose of enriching shoulder arthroplasty registries with organized imaging information, it was hypothesized that an automated AI algorithm could be developed to classify and organize preoperative and postoperative radiographs from shoulder arthroplasty patients according to laterality, radiographic projection, and implant type. METHODS This study used a cohort of 2303 shoulder radiographs from 1724 shoulder arthroplasty patients. Two observers manually labeled all radiographs according to (1) laterality (left or right), (2) projection (anteroposterior, axillary, or lateral), and (3) whether the radiograph was a preoperative radiograph or showed an anatomic total shoulder arthroplasty or a reverse shoulder arthroplasty. All these labeled radiographs were randomly split into developmental and testing sets at the patient level and based on stratification. By use of 10-fold cross-validation, a 3-task deep-learning algorithm was trained on the developmental set to classify the 3 aforementioned characteristics. The trained algorithm was then evaluated on the testing set using quantitative metrics and visual evaluation techniques. RESULTS The trained algorithm perfectly classified laterality (F1 scores [harmonic mean values of precision and sensitivity] of 100% on the testing set). When classifying the imaging projection, the algorithm achieved F1 scores of 99.2%, 100%, and 100% on anteroposterior, axillary, and lateral views, respectively. When classifying the implant type, the model achieved F1 scores of 100%, 95.2%, and 100% on preoperative radiographs, anatomic total shoulder arthroplasty radiographs, and reverse shoulder arthroplasty radiographs, respectively. Visual evaluation using integrated maps showed that the algorithm focused on the relevant patient body and prosthesis parts for classification. It took the algorithm 20.3 seconds to analyze 502 images. CONCLUSIONS We developed an efficient, accurate, and reliable AI algorithm to automatically identify key imaging features of laterality, imaging view, and implant type in shoulder radiographs. This algorithm represents the first step to automatically classify and organize shoulder radiographs on a large scale in very little time, which will profoundly enrich shoulder arthroplasty registries.
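A 3-task classifier of the kind described can be organized as a shared backbone with three output heads, one per label. The following PyTorch sketch is a hypothetical, simplified stand-in for the study's model; the toy backbone, dimensions, and batch are all assumptions.

```python
import torch
import torch.nn as nn

class ThreeTaskClassifier(nn.Module):
    """Shared backbone with one classification head per labeling task."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(  # toy stand-in feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.laterality = nn.Linear(feat_dim, 2)  # left / right
        self.projection = nn.Linear(feat_dim, 3)  # AP / axillary / lateral
        self.implant = nn.Linear(feat_dim, 3)     # preop / anatomic TSA / reverse TSA

    def forward(self, x):
        f = self.backbone(x)
        return self.laterality(f), self.projection(f), self.implant(f)

# Joint training loss: sum of per-task cross-entropies on a toy batch.
model = ThreeTaskClassifier()
logits = model(torch.randn(4, 1, 224, 224))
targets = [torch.randint(0, head.out_features, (4,))
           for head in (model.laterality, model.projection, model.implant)]
loss = sum(nn.functional.cross_entropy(l, t) for l, t in zip(logits, targets))
```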
Collapse
Affiliation(s)
- Linjun Yang
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA; Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
| | - Jacob F Oeding
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
| | - Rodrigo de Marinis
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
| | - Erick Marigi
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
| | - Joaquin Sanchez-Sotelo
- Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA.
| |
Collapse
|
43
|
Al Mohammad B, Aldaradkeh A, Gharaibeh M, Reed W. Assessing radiologists' and radiographers' perceptions on artificial intelligence integration: opportunities and challenges. Br J Radiol 2024; 97:763-769. [PMID: 38273675 PMCID: PMC11027289 DOI: 10.1093/bjr/tqae022] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Revised: 09/30/2023] [Accepted: 01/21/2024] [Indexed: 01/27/2024] Open
Abstract
OBJECTIVES The objective of this study was to evaluate radiologists' and radiographers' opinions and perspectives on artificial intelligence (AI) and its integration into the radiology department. Additionally, we investigated the most common challenges and barriers that radiologists and radiographers face when learning about AI. METHODS A nationwide, online descriptive cross-sectional survey was distributed to radiologists and radiographers working in hospitals and medical centres from May 29, 2023 to July 30, 2023. The questionnaire examined the participants' opinions, feelings, and predictions regarding AI and its applications in the radiology department. Descriptive statistics were used to report the participants' demographics and responses. Five-point Likert-scale data were reported using divergent stacked bar graphs to highlight any central tendencies. RESULTS Responses were collected from 258 participants, revealing a positive attitude towards implementing AI. Both radiologists and radiographers predicted breast imaging would be the subspecialty most impacted by the AI revolution. MRI, mammography, and CT were identified as the primary modalities with significant importance in the field of AI application. The major barrier encountered by radiologists and radiographers when learning about AI was the lack of mentorship, guidance, and support from experts. CONCLUSION Participants demonstrated a positive attitude towards learning about AI and implementing it in radiology practice. However, radiologists and radiographers encounter several barriers when learning about AI, such as the absence of support and direction from experienced professionals. ADVANCES IN KNOWLEDGE Radiologists and radiographers reported several barriers to AI learning, with the most significant being the lack of mentorship and guidance from experts, followed by the lack of funding and investment in new technologies.
Collapse
Affiliation(s)
- Badera Al Mohammad
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
| | - Afnan Aldaradkeh
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
| | - Monther Gharaibeh
- Department of Special Surgery, Faculty of Medicine, The Hashemite University, Zarqa 13133, Jordan
| | - Warren Reed
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney 2006, Sydney, NSW, Australia
| |
Collapse
|
44
|
Faghani S, Nicholas RG, Patel S, Baffour FI, Moassefi M, Rouzrokh P, Khosravi B, Powell GM, Leng S, Glazebrook KN, Erickson BJ, Tiegs-Heiden CA. Development of a deep learning model for the automated detection of green pixels indicative of gout on dual energy CT scan. RESEARCH IN DIAGNOSTIC AND INTERVENTIONAL IMAGING 2024; 9:100044. [PMID: 39076582 PMCID: PMC11265492 DOI: 10.1016/j.redii.2024.100044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 02/24/2024] [Indexed: 07/31/2024]
Abstract
Background Dual-energy CT (DECT) is a non-invasive way to determine the presence of monosodium urate (MSU) crystals in the workup of gout. Color-coding distinguishes MSU from calcium following material decomposition and post-processing. Most software labels MSU as green and calcium as blue. There are limitations in the current image processing methods of segmenting green-encoded pixels. Additionally, identifying green foci is tedious, and automated detection would improve workflow. This study aimed to determine the optimal deep learning (DL) algorithm for segmenting green-encoded pixels of MSU crystals on DECT. Methods DECT images of positive and negative gout cases were retrospectively collected. The dataset was split into train (N = 28) and held-out test (N = 30) sets. To perform cross-validation, the train set was split into seven folds. The images were presented to two musculoskeletal radiologists, who independently identified green-encoded voxels. Two 3D U-Net-based DL models, SegResNet and SwinUNETR, were trained, and the Dice similarity coefficient (DSC), sensitivity, and specificity were reported as the segmentation metrics. Results SegResNet showed superior performance, achieving a DSC of 0.9999 for the background pixels, a DSC of 0.7868 for the green pixels, and an average DSC of 0.8934 across both classes. According to the post-processed results, SegResNet reached voxel-level sensitivity and specificity of 98.72% and 99.98%, respectively. Conclusion In this study, we compared two DL-based segmentation approaches for detecting MSU deposits in a DECT dataset. SegResNet yielded superior performance metrics. The developed algorithm provides a potentially fast, consistent, highly sensitive, and specific computer-aided diagnosis tool. Ultimately, such an algorithm could be used by radiologists to streamline DECT workflow and improve accuracy in the detection of gout.
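The voxel-level sensitivity and specificity reported above can be computed directly from binary masks, as in this short sketch; the volumes below are synthetic placeholders, not DECT data.

```python
import numpy as np

def voxel_sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Voxel-wise sensitivity and specificity for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic 3D volumes standing in for green-encoded MSU masks.
rng = np.random.default_rng(0)
truth = rng.random((32, 64, 64)) > 0.98
pred = truth.copy()
pred[0, :, :] = False  # a prediction that misses the first slice, for illustration
sens, spec = voxel_sensitivity_specificity(pred, truth)
print(f"sensitivity {sens:.4f}, specificity {spec:.4f}")
```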
Collapse
Affiliation(s)
- Shahriar Faghani
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Rhodes G Nicholas
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Soham Patel
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Francis I Baffour
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Mana Moassefi
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Pouria Rouzrokh
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Bardia Khosravi
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Garret M Powell
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Shuai Leng
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | - Katrina N Glazebrook
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | | | - Christin A Tiegs-Heiden
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| |
Collapse
|
45
|
Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. [PMID: 38193315 DOI: 10.1161/cir.0000000000001202] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/10/2024]
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the more appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
Collapse
|
46
|
Vrudhula A, Kwan AC, Ouyang D, Cheng S. Machine Learning and Bias in Medical Imaging: Opportunities and Challenges. Circ Cardiovasc Imaging 2024; 17:e015495. [PMID: 38377237 PMCID: PMC10883605 DOI: 10.1161/circimaging.123.015495] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/22/2024]
Abstract
Bias in health care has been well documented and results in disparate and worsened outcomes for at-risk groups. Medical imaging plays a critical role in facilitating patient diagnoses but involves multiple sources of bias including factors related to access to imaging modalities, acquisition of images, and assessment (ie, interpretation) of imaging data. Machine learning (ML) applied to diagnostic imaging has demonstrated the potential to improve the quality of imaging-based diagnosis and the precision of measuring imaging-based traits. Algorithms can leverage subtle information not visible to the human eye to detect underdiagnosed conditions or derive new disease phenotypes by linking imaging features with clinical outcomes, all while mitigating cognitive bias in interpretation. Importantly, however, the application of ML to diagnostic imaging has the potential to either reduce or propagate bias. Understanding the potential gain as well as the potential risks requires an understanding of how and what ML models learn. Common risks of propagating bias can arise from unbalanced training, suboptimal architecture design or selection, and uneven application of models. Notwithstanding these risks, ML may yet be applied to improve gain from imaging across all 3A's (access, acquisition, and assessment) for all patients. In this review, we present a framework for understanding the balance of opportunities and challenges for minimizing bias in medical imaging, how ML may improve current approaches to imaging, and what specific design considerations should be made as part of efforts to maximize the quality of health care for all.
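One simple form of the auditing this review discusses is comparing model performance across subgroups; the snippet below sketches such an audit on fabricated data, where a large AUC gap between groups would flag potential bias. The group labels and probabilities are invented for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc_audit(y_true, y_prob, group):
    """AUC per subgroup; a large gap between groups flags potential bias."""
    return {g: roc_auc_score(y_true[group == g], y_prob[group == g])
            for g in np.unique(group)}

# Fabricated labels, model probabilities, and a binary subgroup attribute.
rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=400)
p = np.clip(0.4 * y + rng.uniform(0.0, 0.6, size=400), 0.0, 1.0)
g = rng.choice(["group A", "group B"], size=400)

audit = subgroup_auc_audit(y, p, g)
print(audit, "gap:", round(max(audit.values()) - min(audit.values()), 3))
```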
Collapse
Affiliation(s)
- Amey Vrudhula
- Icahn School of Medicine at Mount Sinai, New York
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
| | - Alan C Kwan
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
| | - David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
- Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center
| | - Susan Cheng
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
| |
Collapse
|
47
|
Sumner C, Kietzman A, Kadom N, Frigini A, Makary MS, Martin A, McKnight C, Retrouvey M, Spieler B, Griffith B. Medical Malpractice and Diagnostic Radiology: Challenges and Opportunities. Acad Radiol 2024; 31:233-241. [PMID: 37741730 DOI: 10.1016/j.acra.2023.08.015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 08/10/2023] [Accepted: 08/14/2023] [Indexed: 09/25/2023]
Abstract
Medicolegal challenges in radiology are broad and impact both radiologists and patients. Radiologists may be affected directly by malpractice litigation or indirectly due to defensive imaging ordering practices. Patients also could be harmed physically, emotionally, or financially by unnecessary tests or procedures. As technology advances, the incorporation of artificial intelligence into medicine will bring with it new medicolegal challenges and opportunities. This article reviews the current and emerging direct and indirect effects of medical malpractice on radiologists and summarizes evidence-based solutions.
Collapse
Affiliation(s)
- Christina Sumner
- Department of Radiology and Imaging Sciences, Emory University (C.S., N.K.), Atlanta, GA
| | | | - Nadja Kadom
- Department of Radiology and Imaging Sciences, Emory University (C.S., N.K.), Atlanta, GA
| | - Alexandre Frigini
- Department of Radiology, Baylor College of Medicine (A.F.), Houston, TX
| | - Mina S Makary
- Department of Radiology, Ohio State University Wexner Medical Center (M.S.M.), Columbus, OH
| | - Ardenne Martin
- Louisiana State University Health Sciences Center (A.M.), New Orleans, LA
| | - Colin McKnight
- Department of Radiology, Vanderbilt University Medical Center (C.M.), Nashville, TN
| | - Michele Retrouvey
- Department of Radiology, Eastern Virginia Medical School/Medical Center Radiologists (M.R.), Norfolk, VA
| | - Bradley Spieler
- Department of Radiology, University Medical Center, Louisiana State University Health Sciences Center (B.S.), New Orleans, LA
| | - Brent Griffith
- Department of Radiology, Henry Ford Health (B.G.), Detroit, MI.
| |
Collapse
|
48
|
Cen HS, Dandamudi S, Lei X, Weight C, Desai M, Gill I, Duddalwar V. Diversity in Renal Mass Data Cohorts: Implications for Urology AI Researchers. Oncology 2023; 102:574-584. [PMID: 38104555 PMCID: PMC11178677 DOI: 10.1159/000535841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2023] [Accepted: 12/08/2023] [Indexed: 12/19/2023]
Abstract
INTRODUCTION We examine the heterogeneity and distribution of the cohort populations in two publicly used radiological image cohorts, the Cancer Genome Atlas Kidney Renal Clear Cell Carcinoma (TCIA TCGA KIRC) collection and the 2019 MICCAI Kidney Tumor Segmentation Challenge (KiTS19), and deviations in real-world population renal cancer data from the National Cancer Database (NCDB) Participant User Data File (PUF) and tertiary center data. PUF data are used as an anchor for prevalence rate bias assessment. Specific gene expression, and therefore the biology of RCC, differs by self-reported race, especially between the African American and Caucasian populations. AI algorithms learn from datasets, but if a dataset misrepresents the population, the algorithm may reinforce that bias. Ignoring these demographic features may lead to inaccurate downstream effects, thereby limiting the translation of these analyses to clinical practice. Awareness of model training biases is vital to patient care decisions when using models in clinical settings. METHODS Data elements evaluated included gender, demographics, reported pathologic grading, and cancer staging. American Urological Association risk levels were used. Poisson regression was performed to estimate the population-based and sample-specific prevalence rates and corresponding 95% confidence intervals. SAS 9.4 was used for data analysis. RESULTS Compared to PUF, KiTS19 and TCGA KIRC oversampled Caucasian patients by 9.5% (95% CI, -3.7 to 22.7%) and 15.1% (95% CI, 1.5 to 28.8%), respectively, and undersampled African American patients by -6.7% (95% CI, -10% to -3.3%) and -5.5% (95% CI, -9.3% to -1.8%). The tertiary cohort also undersampled African American patients by -6.6% (95% CI, -8.7% to -4.6%) and largely undersampled aggressive cancers by -14.7% (95% CI, -20.9% to -8.4%). No statistically significant difference was found among PUF, TCGA, and KiTS19 in the rate of aggressive cancers; however, heterogeneities in risk are notable. CONCLUSION Heterogeneities between cohorts need to be considered in future AI training and cross-validation for renal masses.
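The abstract's Poisson-based estimation of prevalence rates with 95% confidence intervals can be illustrated with the standard Wald interval on the log rate, which is equivalent to the intercept of an intercept-only Poisson regression with a log-exposure offset; the counts below are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import norm

def prevalence_rate_ci(count: int, total: int, alpha: float = 0.05):
    """Poisson-based prevalence rate with a Wald CI on the log scale.

    Equivalent to the intercept of an intercept-only Poisson regression
    with log(total) as the exposure offset.
    """
    rate = count / total
    se_log = 1.0 / np.sqrt(count)  # SE of log(rate) for a Poisson count
    z = norm.ppf(1.0 - alpha / 2.0)
    return rate, (np.exp(np.log(rate) - z * se_log), np.exp(np.log(rate) + z * se_log))

# Hypothetical: 40 patients of a given demographic group among 200 in a cohort.
rate, (lo, hi) = prevalence_rate_ci(40, 200)
print(f"prevalence {rate:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```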
Collapse
Affiliation(s)
- Harmony Selena Cen
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA,
| | - Siddhartha Dandamudi
- College of Human Medicine, Michigan State University, East Lansing, Michigan, USA
| | - Xiaomeng Lei
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
| | - Chris Weight
- Urologic Oncology, Cleveland Clinic, Cleveland, Ohio, USA
| | - Mihir Desai
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
| | - Inderbir Gill
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
| | - Vinay Duddalwar
- Keck School of Medicine, University of Southern California, Los Angeles, California, USA
| |
Collapse
|
49
|
Whitney HM, Baughan N, Myers KJ, Drukker K, Gichoya J, Bower B, Chen W, Gruszauskas N, Kalpathy-Cramer J, Koyejo S, Sá RC, Sahiner B, Zhang Z, Giger ML. Longitudinal assessment of demographic representativeness in the Medical Imaging and Data Resource Center open data commons. J Med Imaging (Bellingham) 2023; 10:61105. [PMID: 37469387 PMCID: PMC10353566 DOI: 10.1117/1.jmi.10.6.061105] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 06/21/2023] [Accepted: 06/23/2023] [Indexed: 07/21/2023] Open
Abstract
Purpose The Medical Imaging and Data Resource Center (MIDRC) open data commons was launched to accelerate the development of artificial intelligence (AI) algorithms to help address the COVID-19 pandemic. The purpose of this study was to quantify longitudinal representativeness of the demographic characteristics of the primary MIDRC dataset compared to the United States general population (US Census) and COVID-19 positive case counts from the Centers for Disease Control and Prevention (CDC). Approach The Jensen-Shannon distance (JSD), a measure of similarity of two distributions, was used to longitudinally measure the representativeness of the distribution of (1) all unique patients in the MIDRC data to the 2020 US Census and (2) all unique COVID-19 positive patients in the MIDRC data to the case counts reported by the CDC. The distributions were evaluated in the demographic categories of age at index, sex, race, ethnicity, and the combination of race and ethnicity. Results Representativeness of the MIDRC data by ethnicity and the combination of race and ethnicity was impacted by the percentage of CDC case counts for which this was not reported. The distributions by sex and race have retained their level of representativeness over time. Conclusion The representativeness of the open medical imaging datasets in the curated public data commons at MIDRC has evolved over time as the number of contributing institutions and overall number of subjects have grown. The use of metrics such as the JSD to support measurement of representativeness is one step needed for fair and generalizable AI algorithm development.
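The Jensen-Shannon distance used in this study is available directly in SciPy; the sketch below compares a hypothetical dataset's race distribution with assumed census proportions (both vectors are made-up illustrations, not MIDRC or Census figures).

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical demographic (race) proportions: a dataset vs. the 2020 US Census.
dataset = np.array([0.62, 0.18, 0.12, 0.08])
census = np.array([0.58, 0.14, 0.19, 0.09])

# SciPy returns the Jensen-Shannon *distance* (square root of the divergence);
# 0 means identical distributions, 1 is maximal dissimilarity with base-2 logs.
jsd = jensenshannon(dataset, census, base=2)
print(f"JSD = {jsd:.4f}")
```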
Collapse
Affiliation(s)
- Heather M. Whitney
- University of Chicago, Chicago, Illinois, United States
- The Medical Imaging and Data Resource Center (midrc.org)
| | - Natalie Baughan
- University of Chicago, Chicago, Illinois, United States
- The Medical Imaging and Data Resource Center (midrc.org)
| | - Kyle J. Myers
- The Medical Imaging and Data Resource Center (midrc.org)
- Puente Solutions LLC, Phoenix, Arizona, United States
| | - Karen Drukker
- University of Chicago, Chicago, Illinois, United States
- The Medical Imaging and Data Resource Center (midrc.org)
| | - Judy Gichoya
- The Medical Imaging and Data Resource Center (midrc.org)
- Emory University, Atlanta, Georgia, United States
| | - Brad Bower
- The Medical Imaging and Data Resource Center (midrc.org)
- National Institutes of Health, Bethesda, Maryland, United States
| | - Weijie Chen
- The Medical Imaging and Data Resource Center (midrc.org)
- United States Food and Drug Administration, Silver Spring, Maryland, United States
| | - Nicholas Gruszauskas
- University of Chicago, Chicago, Illinois, United States
- The Medical Imaging and Data Resource Center (midrc.org)
| | - Jayashree Kalpathy-Cramer
- The Medical Imaging and Data Resource Center (midrc.org)
- University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States
| | - Sanmi Koyejo
- The Medical Imaging and Data Resource Center (midrc.org)
- Stanford University, Stanford, California, United States
| | - Rui C. Sá
- The Medical Imaging and Data Resource Center (midrc.org)
- National Institutes of Health, Bethesda, Maryland, United States
- University of California, San Diego, La Jolla, California, United States
| | - Berkman Sahiner
- The Medical Imaging and Data Resource Center (midrc.org)
- United States Food and Drug Administration, Silver Spring, Maryland, United States
| | - Zi Zhang
- The Medical Imaging and Data Resource Center (midrc.org)
- Jefferson Health, Philadelphia, Pennsylvania, United States
| | - Maryellen L. Giger
- University of Chicago, Chicago, Illinois, United States
- The Medical Imaging and Data Resource Center (midrc.org)
| |
Collapse
|
50
|
Drukker K, Chen W, Gichoya J, Gruszauskas N, Kalpathy-Cramer J, Koyejo S, Myers K, Sá RC, Sahiner B, Whitney H, Zhang Z, Giger M. Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment. J Med Imaging (Bellingham) 2023; 10:061104. [PMID: 37125409 PMCID: PMC10129875 DOI: 10.1117/1.jmi.10.6.061104] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 04/03/2023] [Indexed: 05/02/2023] Open
Abstract
Purpose There is increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. Recognizing and addressing the various sources of bias in these tools is essential for algorithmic fairness and trustworthiness and contributes to a just and equitable deployment of AI in medical imaging. These tools are intended to help improve traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function, potentially exacerbating inequities. Specifically, medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in a systematic difference in the treatment of different groups. Approach Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML and mitigation strategies for these biases, and we developed recommendations for best practices in medical imaging AI/ML development. Results Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, or bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, as well as mitigation strategies. Conclusions Our findings provide a valuable resource to researchers, clinicians, and the public at large.
Collapse
Affiliation(s)
- Karen Drukker
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Weijie Chen
- US Food and Drug Administration, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
| | - Judy Gichoya
- Emory University, Department of Radiology, Atlanta, Georgia, United States
| | - Nicholas Gruszauskas
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | | | - Sanmi Koyejo
- Stanford University, Department of Computer Science, Stanford, California, United States
| | - Kyle Myers
- Puente Solutions LLC, Phoenix, Arizona, United States
| | - Rui C. Sá
- National Institutes of Health, Bethesda, Maryland, United States
- University of California, San Diego, La Jolla, California, United States
| | - Berkman Sahiner
- US Food and Drug Administration, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
| | - Heather Whitney
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Zi Zhang
- Jefferson Health, Philadelphia, Pennsylvania, United States
| | - Maryellen Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| |
Collapse
|