1
Rothenberg SA, Savage CH, Abou Elkassem A, Singh S, Abozeed M, Hamki O, Junck K, Tridandapani S, Li M, Li Y, Smith AD. Prospective Evaluation of AI Triage of Pulmonary Emboli on CT Pulmonary Angiograms. Radiology 2023; 309:e230702. PMID: 37787676. DOI: 10.1148/radiol.230702.
Abstract
Background Artificial intelligence (AI) algorithms have shown high accuracy for detection of pulmonary embolism (PE) on CT pulmonary angiography (CTPA) studies in academic studies. Purpose To determine whether use of an AI triage system to detect PE on CTPA studies improves radiologist performance or examination and report turnaround times in a clinical setting. Materials and Methods This prospective single-center study included adult participants who underwent CTPA for suspected PE in a clinical practice setting. Consecutive CTPA studies were evaluated in two phases, first by radiologists alone (n = 31) (May 2021 to June 2021) and then by radiologists aided by a commercially available AI triage system (n = 37) (September 2021 to December 2021). Sixty-two percent of radiologists (26 of 42) interpreted studies in both phases. The reference standard was determined by an independent re-review of studies by thoracic radiologists and was used to calculate performance metrics. Diagnostic accuracy and turnaround times were compared using Pearson χ2 and Wilcoxon rank sum tests. Results Phases 1 and 2 included 503 studies (participant mean age, 54.0 years ± 17.8 [SD]; 275 female, 228 male) and 1023 studies (participant mean age, 55.1 years ± 17.5; 583 female, 440 male), respectively. In phases 1 and 2, 14.5% (73 of 503) and 15.9% (163 of 1023) of CTPA studies were positive for PE (P = .47). Mean wait time for positive PE studies decreased from 21.5 minutes without AI to 11.3 minutes with AI (P < .001). The accuracy and miss rate, respectively, for radiologist detection of any PE on CTPA studies were 97.6% and 12.3% without AI and 98.6% and 6.1% with AI, which was not significantly different (P = .15 and P = .11, respectively). Conclusion The use of an AI triage system to detect any PE on CTPA studies improved wait times but did not improve radiologist accuracy, miss rate, or examination and report turnaround times.
© RSNA, 2023 Supplemental material is available for this article. See also the editorial by Murphy and Tee in this issue.
Affiliation(s)
- Steven A Rothenberg, Cody H Savage, Asser Abou Elkassem, Satinder Singh, Mostafa Abozeed, Omar Hamki, Kevin Junck, Srini Tridandapani, Mei Li, Yufeng Li, Andrew D Smith
- From the Department of Radiology, University of Alabama at Birmingham, 619 S 19th St, Birmingham, AL 35233
2
Abstract
Large language models (LLMs) such as ChatGPT are advanced artificial intelligence models that are designed to process and understand human language. LLMs have the potential to improve radiology reporting and patient engagement by automating generation of the clinical history and impression of a radiology report, creating layperson reports, and providing patients with pertinent questions and answers about findings in radiology reports. However, LLMs are error prone, and human oversight is needed to reduce the risk of patient harm.
Affiliation(s)
- Asser Abou Elkassem, Andrew D Smith
- Department of Radiology, University of Alabama at Birmingham, JTN 452, 619 19th St S, Birmingham, AL 35249-6830
3
Elkassem AA, Smith AD. Reply to "Addressing Concerns and Pitfalls in ChatGPT-Driven Radiology Reporting". AJR Am J Roentgenol 2023; 221:403-404. PMID: 37406204. DOI: 10.2214/ajr.23.29612.
4
Perchik JD, Smith AD, Elkassem AA, Park JM, Rothenberg SA, Tanwar M, Yi PH, Sturdivant A, Tridandapani S, Sotoudeh H. Artificial Intelligence Literacy: Developing a Multi-institutional Infrastructure for AI Education. Acad Radiol 2023; 30:1472-1480. PMID: 36323613. DOI: 10.1016/j.acra.2022.10.002.
Abstract
RATIONALE AND OBJECTIVES To evaluate the effectiveness of an artificial intelligence (AI) in radiology literacy course on participants from nine radiology residency programs in the Southeast and Mid-Atlantic United States. MATERIALS AND METHODS A week-long AI in radiology course was developed and included participants from nine radiology residency programs in the Southeast and Mid-Atlantic United States. Ten 30-minute lectures utilizing a remote learning format covered basic AI terms and methods, clinical applications of AI in radiology in four different subspecialties, and special topics lectures on the economics of AI, ethics of AI, algorithm bias, and medicolegal implications of AI in medicine. A proctored hands-on clinical AI session allowed participants to directly use an FDA-cleared AI-assisted viewer and reporting system for advanced cancer. Pre- and post-course electronic surveys were distributed to assess participants' knowledge of AI terminology and applications and interest in AI education. RESULTS There were an average of 75 participants each day of the course (range: 50-120). Nearly all participants reported a lack of sufficient exposure to AI in their radiology training (96.7%, 90/93). Mean participant score on the pre-course AI knowledge evaluation was 8.3/15, with a statistically significant increase to 10.1/15 on the post-course evaluation (p = 0.04). A majority of participants reported an interest in continued AI in radiology education in the future (78.6%, 22/28). CONCLUSION A multi-institutional AI in radiology literacy course successfully improved AI education of participants, with the majority of participants reporting a continued interest in AI in radiology education in the future.
Affiliation(s)
- J D Perchik, A D Smith, A A Elkassem, J M Park, S A Rothenberg, M Tanwar, S Tridandapani, H Sotoudeh
- Department of Diagnostic Radiology, University of Alabama at Birmingham, Birmingham, Alabama
- P H Yi
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland Medical Intelligent Imaging Center, University of Maryland School of Medicine, Baltimore, Maryland
- A Sturdivant
- University of Alabama at Birmingham Heersink School of Medicine
5
Elkassem AA, Allen BC, Lirette ST, Cox KL, Remer EM, Pickhardt PJ, Lubner MG, Sirlin CB, Dondlinger T, Schmainda M, Jacobus RB, Severino PE, Smith AD. Multiinstitutional Evaluation of the Liver Surface Nodularity Score on CT for Staging Liver Fibrosis and Predicting Liver-Related Events in Patients With Hepatitis C. AJR Am J Roentgenol 2022; 218:833-845. PMID: 34935403. DOI: 10.2214/ajr.21.27062.
Abstract
BACKGROUND. In single-institution multireader studies, the liver surface nodularity (LSN) score accurately detects advanced liver fibrosis and cirrhosis and predicts liver decompensation in patients with chronic liver disease (CLD) from hepatitis C virus (HCV). OBJECTIVE. The purpose of this study was to assess the diagnostic performance of the LSN score, alone and in combination with the fibrosis index based on four factors (FIB-4), in detecting advanced fibrosis and cirrhosis and predicting future liver-related events in a multiinstitutional cohort of patients with CLD from HCV. METHODS. This retrospective study included 40 consecutive patients, from each of five academic medical centers, with CLD from HCV who underwent nontargeted liver biopsy within 6 months before or after abdominal CT. Clinical data were recorded in a secure web-based database. A single central reader measured LSN scores using software. Diagnostic performance for detecting liver fibrosis stage was determined. Multivariable models were constructed to predict baseline liver decompensation and future liver-related events. RESULTS. After exclusions, the study included 191 patients (67 women, 124 men; mean age, 54 years) with fibrosis stages of F0-F1 (n = 37), F2 (n = 44), F3 (n = 46), and F4 (n = 64). Mean LSN score increased with higher stages (F0-F1, 2.26 ± 0.44; F2, 2.35 ± 0.37; F3, 2.42 ± 0.38; F4, 3.19 ± 0.89; p < .001). The AUC of LSN score alone was 0.87 for detecting advanced fibrosis (≥ F3) and 0.89 for detecting cirrhosis (F4), increasing to 0.92 and 0.94, respectively, when combined with FIB-4 scores (both p = .005). Combined scores at optimal cutoff points yielded sensitivity of 75% and specificity of 82% for advanced fibrosis, and sensitivity of 84% and specificity of 85% for cirrhosis.
In multivariable models, LSN score was the strongest predictor of baseline liver decompensation (odds ratio, 14.28 per 1-unit increase; p < .001) and future liver-related events (hazard ratio, 2.87 per 1-unit increase; p = .03). CONCLUSION. In a multiinstitutional cohort of patients with CLD from HCV, LSN score alone and in combination with FIB-4 score exhibited strong diagnostic performance in detecting advanced fibrosis and cirrhosis. LSN score also predicted future liver-related events. CLINICAL IMPACT. The LSN score warrants a role in clinical practice as a quantitative marker for detecting advanced liver fibrosis, compensated cirrhosis, and decompensated cirrhosis and for predicting future liver-related events in patients with CLD from HCV.
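For readers unfamiliar with it, the FIB-4 index combined with the LSN score above is a standard serum-based fibrosis marker computed from age, AST, ALT, and platelet count. A minimal sketch of the calculation (the study's optimal combined cutoff points are not given in the abstract):

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT)).

    Higher values indicate a greater likelihood of advanced fibrosis.
    Units: AST/ALT in U/L, platelets in 10^9/L.
    """
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# Example: a 60-year-old with AST 80 U/L, ALT 64 U/L, platelets 100 x 10^9/L
score = fib4(60, 80, 64, 100)  # = (60*80)/(100*8) = 6.0
```

Commonly cited FIB-4 cutoffs in the hepatology literature are roughly 1.45 (rule-out) and 3.25 (rule-in) for advanced fibrosis, though thresholds vary by population.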
Affiliation(s)
- Asser Abou Elkassem
- Department of Radiology, The University of Alabama at Birmingham, JTN 452, 619 19th St S, Birmingham, AL 35249
- Brian C Allen
- Department of Radiology, Duke University Medical Center, Durham, NC
- Seth T Lirette
- Department of Data Science, University of Mississippi Medical Center, Jackson, MS
- Kelly L Cox
- Department of Radiology, Mayo Clinic, Jacksonville, FL
- Erick M Remer
- Department of Radiology, Cleveland Clinic Foundation, Cleveland, OH
- Perry J Pickhardt, Meghan G Lubner
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI
- Claude B Sirlin
- Department of Radiology, Liver Imaging Group, University of California San Diego, San Diego, CA
- Andrew D Smith
- Department of Radiology, The University of Alabama at Birmingham, JTN 452, 619 19th St S, Birmingham, AL 35249; AI Metrics, Birmingham, AL
6
Varney E, Abou Elkassem A, Khan M, Parker E, Nichols T, Joyner D, Lirette ST, Howard-Claudio C, Smith AD. Prospective validation of a rapid CT-based bone mineral density screening method using colored spinal images. Abdom Radiol (NY) 2021; 46:1752-1760. PMID: 33044652. DOI: 10.1007/s00261-020-02791-1.
Abstract
PURPOSE To prospectively validate a method to accurately and rapidly differentiate normal from abnormal spinal bone mineral density (BMD) using colored abdominal CT images. METHODS For this prospective observational study, 196 asymptomatic women ≥ 50 years of age presenting for screening mammograms underwent routine nonenhanced CT imaging of the abdomen. The CT images were processed with software designed to generate sagittal colored images with green vertebral trabecular bone indicating normal BMD and red indicating abnormal BMD (low BMD or osteoporosis). Four radiologists evaluated L1/L2 BMD on sagittal images using visual assessment of grayscale images, quantitative measurements of mean vertebral attenuation, and visual assessment of colored images. Mean BMD values at L1/L2 using quantitative CT with a phantom served as the reference standard. The average accuracy and time of interpretation were calculated. Inter-observer agreement was assessed using the intraclass correlation coefficient (ICC). RESULTS Mean attenuation at L1/L2 was highly correlated with mean BMD (r = 0.96/0.91, p < 0.001 for both). The average accuracy and mean time to assess BMD among four readers for differentiating normal from abnormal BMD were 66% and 6.0 s using visual assessment of grayscale images, 88% and 15.2 s using quantitative measurements of mean vertebral attenuation, and 92% and 2.1 s using visual assessment of colored images (p < 0.001 and p < 0.001, respectively). Inter-observer agreement was poor using visual assessment of grayscale images (ICC: 0.31), good using quantitative measurements of mean vertebral attenuation (ICC: 0.73), and excellent using visual assessment of colored images (ICC: 0.90). CONCLUSION Detection of abnormal BMD using colored abdominal CT images was highly accurate, rapid, and had excellent inter-observer agreement.
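The color-coding logic described above amounts to thresholding mean trabecular attenuation and mapping the result to a green/red overlay. A sketch of that decision rule, with an illustrative cutoff (the study's actual calibration against phantom-based quantitative CT is not stated in the abstract, so the 135-HU value below is a hypothetical placeholder):

```python
# Illustrative only: the true normal/abnormal boundary used by the software
# is not given in the abstract.
ABNORMAL_BMD_HU_CUTOFF = 135  # hypothetical Hounsfield-unit threshold

def bmd_color(mean_attenuation_hu):
    """Map a vertebra's mean trabecular attenuation (HU) to an overlay color:
    green = normal BMD, red = abnormal BMD (low BMD or osteoporosis)."""
    return "green" if mean_attenuation_hu > ABNORMAL_BMD_HU_CUTOFF else "red"
```

Presenting the continuous attenuation value as a binary color is what makes the read fast (2.1 s on average): the reader classifies at a glance instead of measuring.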
Affiliation(s)
- Elliot Varney, Ellen Parker, Todd Nichols, David Joyner
- Department of Radiology, University of Mississippi Medical Center, Jackson, MS, USA
- Asser Abou Elkassem, Andrew D Smith
- Department of Radiology, University of Alabama at Birmingham, JTN 405, 619 19th Street South, Birmingham, AL, 35249-6830, USA
- Majid Khan
- Department of Radiology, Johns Hopkins Hospital, Baltimore, MD, USA
- Seth T Lirette
- Center for Data Science, University of Mississippi Medical Center, Jackson, MS, USA
7
Abstract
Most renal masses are benign cysts; a subset are malignant. Most renal masses are incidental findings. Evaluation of renal cysts has evolved with updates to the Bosniak classification system and other guidelines. The Bosniak classification provides detailed definitions and extends the system from computed tomography to MR imaging. This article provides a simple approach to the evaluation of cystic or potentially cystic renal masses. The radiologist is central to this process. Key elements include confirming that a renal lesion is cystic and not solid, determining the need for further characterization by imaging, and judicious application of the Bosniak classification system.
Affiliation(s)
- Andrew D Smith, Asser Abou Elkassem
- Department of Radiology, University of Alabama at Birmingham, JTN 452, 619 19th Street South, Birmingham, AL 35249-6830, USA
8
Smith AD, Allen BC, Abou Elkassem A, Mresh R, Lirette ST, Shrestha Y, Giese JD, Stevens R, Williams D, Farag A, Khalaf A. Multi-institutional comparative effectiveness of advanced cancer longitudinal imaging response evaluation methods: Current practice versus artificial intelligence-assisted. J Clin Oncol 2020. DOI: 10.1200/jco.2020.38.15_suppl.2010.
Abstract
Background: Current-practice methods to evaluate advanced cancer longitudinal tumor response include manual measurements on digital medical images and dictation of text-based reports that are prone to errors, inefficient, and associated with low inter-observer agreement. The purpose of this study is to compare the effectiveness of advanced cancer longitudinal imaging response evaluation using current-practice versus artificial intelligence (AI)-assisted methods. Methods: For this multi-institutional longitudinal retrospective study, body CT images from 120 consecutive patients with multiple serial imaging exams and advanced cancer treated with systemic therapy were independently evaluated by 24 radiologists using current-practice versus AI-assisted methods. For the current-practice method, radiologists dictated text-based reports and separately categorized response (CR, PR, SD, and PD). For the AI-assisted method, custom software included AI algorithms for tumor measurement, target and non-target location labeling, and tumor localization at follow-up. The AI-assisted software automatically categorized tumor response per RECIST 1.1 calculations and displayed longitudinal data in the form of a graph, table, and key images. All studies were read independently in triplicate for assessment of inter-observer agreement. Comparative effectiveness metrics included: major errors, time of image interpretation, and inter-observer agreement for final response category. Results: Major errors were found in 27.5% (99/360) for current-practice versus 0.3% (1/360) for AI-assisted methods (p < 0.001), corresponding to a 99% reduction in major errors. Average time of interpretation by radiologists was 18.7 min for current-practice versus 9.8 min for AI-assisted method (p < 0.001), with the AI-assisted method being nearly twice as fast.
Total inter-observer agreement on final response categorization for radiologists was 52% (62/120) for current-practice versus 75% (90/120) for AI-assisted method (p < 0.001), corresponding to a 45% increase in total inter-observer agreement. Conclusion: In a large multi-institutional study, AI-assisted advanced cancer longitudinal imaging response evaluation significantly reduced major errors, was nearly twice as fast, and increased inter-observer agreement relative to the current-practice method, thereby establishing a new and improved standard of care.
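The RECIST 1.1 calculations the software automates reduce, for target lesions, to arithmetic on sums of lesion diameters. A simplified sketch of that categorization (the full criteria also weigh non-target lesions and new lesions, which are omitted here):

```python
def recist_category(baseline_sum, nadir_sum, current_sum, target_lesions_gone=False):
    """Simplified RECIST 1.1 target-lesion response from sums of diameters (mm).

    baseline_sum: sum of target-lesion diameters at baseline
    nadir_sum:    smallest sum recorded on study (the nadir)
    current_sum:  sum at the current follow-up
    """
    if target_lesions_gone:
        return "CR"  # complete response: all target lesions disappeared
    # Progressive disease: >=20% relative AND >=5 mm absolute increase over the nadir
    growth = current_sum - nadir_sum
    if growth >= 5 and nadir_sum > 0 and growth / nadir_sum >= 0.20:
        return "PD"
    # Partial response: >=30% decrease from the baseline sum
    if (baseline_sum - current_sum) / baseline_sum >= 0.30:
        return "PR"
    return "SD"  # stable disease: neither PR nor PD criteria met
```

Automating this bookkeeping (tracking the nadir across serial exams, applying both the relative and absolute PD thresholds) is exactly where manual, dictation-based workflows tend to accumulate the major errors reported above.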
Affiliation(s)
- Rafah Mresh, Ahmed Farag, Ahmed Khalaf
- University of Alabama at Birmingham, Birmingham, AL
9
Smith A, Varney E, Zand K, Lewis T, Sirous R, York J, Florez E, Abou Elkassem A, Howard-Claudio CM, Roda M, Parker E, Scortegagna E, Joyner D, Sandlin D, Newsome A, Brewster P, Lirette ST, Griswold M. Precision analysis of a quantitative CT liver surface nodularity score. Abdom Radiol (NY) 2018; 43:3307-3316. PMID: 29700590. DOI: 10.1007/s00261-018-1617-x.
Abstract
PURPOSE To evaluate the precision of a software-based liver surface nodularity (LSN) score derived from CT images. METHODS An anthropomorphic CT phantom was constructed with simulated liver containing smooth and nodular segments at the surface and simulated visceral and subcutaneous fat components. The phantom was scanned multiple times on a single CT scanner with adjustment of image acquisition and reconstruction parameters (N = 34) and on 22 different CT scanners from 4 manufacturers at 12 imaging centers. LSN scores were obtained using a software-based method. Repeatability and reproducibility were evaluated by intraclass correlation (ICC) and coefficient of variation. Using abdominal CT images from 68 patients with various stages of chronic liver disease, inter-observer agreement and test-retest repeatability among 12 readers assessing LSN by software- vs. visual-based scoring methods were evaluated by ICC. RESULTS There was excellent repeatability of LSN scores (ICC: 0.79-0.99) using the CT phantom and routine image acquisition and reconstruction parameters (kVp 100-140, mA 200-400 and auto-mA, section thickness 1.25-5.0 mm, field of view 35-50 cm, and smooth or standard kernels). There was excellent reproducibility (smooth ICC: 0.97; 95% CI 0.95, 0.99; CV: 7%; nodular ICC: 0.94; 95% CI 0.89, 0.97; CV: 8%) for LSN scores derived from CT images from 22 different scanners. Inter-observer agreement for the software-based LSN scoring method was excellent (ICC: 0.84; 95% CI 0.79, 0.88; CV: 28%) vs. good for the visual-based method (ICC: 0.61; 95% CI 0.51, 0.69; CV: 43%). Test-retest repeatability for the software-based LSN scoring method was excellent (ICC: 0.82; 95% CI 0.79, 0.84; CV: 12%). CONCLUSION The software-based LSN score is a quantitative CT imaging biomarker with excellent repeatability, reproducibility, inter-observer agreement, and test-retest repeatability.
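The coefficient of variation (CV) quoted throughout the results above is a standard dispersion measure: the ratio of the standard deviation to the mean across repeated measurements, expressed as a percent. A minimal sketch:

```python
import statistics

def coefficient_of_variation(scores):
    """CV (%) = 100 * SD / mean over repeated measurements of the same quantity.

    Lower values indicate better reproducibility of the measurement.
    """
    return 100 * statistics.stdev(scores) / statistics.mean(scores)

# Example: three repeated LSN scores of the same phantom segment
cv = coefficient_of_variation([2.0, 2.2, 1.8])  # 10.0%
```

A CV of 7-8% across 22 scanners, as reported here, means repeated LSN scores typically scatter within about a tenth of their mean value.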
Affiliation(s)
- Andrew Smith
- Department of Radiology, University of Mississippi Medical Center, Jackson, MS, USA; Department of Radiology, University of Alabama at Birmingham, JTN 452, 619 19th Street South, Birmingham, AL, 35249-6830, USA
- Elliot Varney, Kevin Zand, Tara Lewis, Reza Sirous, James York, Edward Florez, Manohar Roda, Ellen Parker, Eduardo Scortegagna, David Joyner, David Sandlin, Ashley Newsome, Parker Brewster
- Department of Radiology, University of Mississippi Medical Center, Jackson, MS, USA
- Asser Abou Elkassem
- Department of Radiology, University of Mississippi Medical Center, Jackson, MS, USA; Department of Radiology, University of Alabama at Birmingham, Birmingham, AL, USA
- Seth T Lirette, Michael Griswold
- Center for Data Science, University of Mississippi Medical Center, Jackson, MS, USA