1. Tsui B, Calabrese E, Zaharchuk G, Rauschecker AM. Reducing Gadolinium Contrast With Artificial Intelligence. J Magn Reson Imaging 2024; 60:848-859. [PMID: 37905681] [DOI: 10.1002/jmri.29095]
Abstract
Gadolinium contrast is an important agent in magnetic resonance imaging (MRI), particularly in neuroimaging, where it can help identify blood-brain barrier breakdown from an inflammatory, infectious, or neoplastic process. However, gadolinium contrast has several drawbacks, including nephrogenic systemic fibrosis, gadolinium deposition in the brain and bones, and allergic-like reactions. As computer hardware and technology continue to evolve, machine learning has become a possible route to eliminating or reducing the dose of gadolinium contrast. This review summarizes the clinical uses of gadolinium contrast, the risks of gadolinium contrast, and state-of-the-art machine learning methods that have been applied to reduce or eliminate gadolinium contrast administration, as well as their current limitations, with a focus on neuroimaging applications. Evidence Level: 3. Technical Efficacy: Stage 1.
Affiliation(s)
- Brian Tsui
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Evan Calabrese
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, California, USA
- Andreas M Rauschecker
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
2. Nerella S, Bandyopadhyay S, Zhang J, Contreras M, Siegel S, Bumin A, Silva B, Sena J, Shickel B, Bihorac A, Khezeli K, Rashidi P. Transformers and large language models in healthcare: A review. Artif Intell Med 2024; 154:102900. [PMID: 38878555] [DOI: 10.1016/j.artmed.2024.102900]
Abstract
With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the transformer neural network architecture is rapidly changing many applications. The transformer is a deep learning architecture initially developed to solve general-purpose Natural Language Processing (NLP) tasks and subsequently adapted in many fields, including healthcare. In this survey paper, we provide an overview of how this architecture has been adopted to analyze various forms of healthcare data, including clinical NLP, medical imaging, structured Electronic Health Records (EHR), social media, bio-physiological signals, and biomolecular sequences. We also include articles that used the transformer architecture to generate surgical instructions and predict adverse outcomes after surgery, under the umbrella of critical care. Across these diverse settings, transformer models have been used for clinical diagnosis, report generation, data reconstruction, and drug/protein synthesis. Finally, we discuss the benefits and limitations of using transformers in healthcare and examine issues such as computational cost, model interpretability, fairness, alignment with human values, ethical implications, and environmental impact.
Affiliation(s)
- Subhash Nerella
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Jiaqing Zhang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, United States
- Miguel Contreras
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Scott Siegel
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Aysegul Bumin
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, United States
- Brandon Silva
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, United States
- Jessica Sena
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Benjamin Shickel
- Department of Medicine, University of Florida, Gainesville, United States
- Azra Bihorac
- Department of Medicine, University of Florida, Gainesville, United States
- Kia Khezeli
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Parisa Rashidi
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
3. Heckel R, Jacob M, Chaudhari A, Perlman O, Shimron E. Deep learning for accelerated and robust MRI reconstruction. MAGMA 2024; 37:335-368. [PMID: 39042206] [DOI: 10.1007/s10334-024-01173-8]
Abstract
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review paper provides a comprehensive overview of recent advances in DL for MRI reconstruction, and focuses on various DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. It explores end-to-end neural networks, pre-trained and generative models, and self-supervised methods, and highlights their contributions to overcoming traditional MRI limitations. It also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling biases. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.
Affiliation(s)
- Reinhard Heckel
- Department of Computer Engineering, Technical University of Munich, Munich, Germany
- Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Akshay Chaudhari
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
- Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Efrat Shimron
- Department of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200004, Israel
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa 3200004, Israel
4. Ren CX, Xu GX, Dai DQ, Lin L, Sun Y, Liu QS. Cross-site prognosis prediction for nasopharyngeal carcinoma from incomplete multi-modal data. Med Image Anal 2024; 93:103103. [PMID: 38368752] [DOI: 10.1016/j.media.2024.103103]
Abstract
Accurate prognosis prediction for nasopharyngeal carcinoma based on magnetic resonance (MR) images assists in the guidance of treatment intensity, thus reducing the risk of recurrence and death. To reduce repeated labor and sufficiently exploit domain knowledge, aggregating labeled/annotated data from external sites enables us to train an intelligent model for a clinical site with unlabeled data. However, this task suffers from the challenges of fusing incomplete multi-modal examination data and of image data heterogeneity among sites. This paper proposes a cross-site survival analysis method for prognosis prediction of nasopharyngeal carcinoma from a domain adaptation viewpoint. Our method uses a Cox model as its basic framework and equips it with a cross-attention-based multi-modal fusion regularization. This regularization effectively fuses the multi-modal information from multi-parametric MR images and clinical features onto a domain-adaptive space, despite the absence of some modalities. To enhance feature discrimination, we also extend the contrastive learning technique to censored-data cases. Compared with conventional approaches that directly deploy a trained survival model at a new site, our method achieves superior prognosis prediction performance in cross-site validation experiments. These results highlight the key role of the cross-site adaptability of our method and support its value in clinical practice.
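As background for the "Cox model as the basic framework" mentioned above: a Cox head is trained by maximizing a partial likelihood over risk sets. A minimal sketch of the negative partial log-likelihood follows; it is illustrative only and omits the paper's cross-attention fusion, domain adaptation, and any handling of tied event times.

```python
import numpy as np

def neg_cox_partial_log_likelihood(risk_scores, times, events):
    """Negative Cox partial log-likelihood (Breslow form, ties ignored).

    risk_scores: per-patient log-hazard predictions (a stand-in for the
        output of a learned fusion model in the paper's setting).
    times: follow-up times; events: 1 = event observed, 0 = censored.
    """
    order = np.argsort(-np.asarray(times))      # longest follow-up first
    scores = np.asarray(risk_scores, float)[order]
    ev = np.asarray(events)[order]
    # running log-sum-exp over each patient's risk set
    log_risk_set = np.logaddexp.accumulate(scores)
    # only observed (uncensored) events contribute terms
    return -np.sum((scores - log_risk_set)[ev == 1])
```

With three patients, equal risk scores, and no censoring, the loss reduces to log 3 + log 2 = log 6, since each event's risk set shrinks by one patient.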
Affiliation(s)
- Chuan-Xian Ren
- School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Geng-Xin Xu
- School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Dao-Qing Dai
- School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Li Lin
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Ying Sun
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Qing-Shan Liu
- School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
5. Jiang M, Wang S, Song Z, Song L, Wang Y, Zhu C, Zheng Q. Cross2SynNet: cross-device-cross-modal synthesis of routine brain MRI sequences from CT with brain lesion. MAGMA 2024; 37:241-256. [PMID: 38315352] [DOI: 10.1007/s10334-023-01145-4]
Abstract
OBJECTIVES CT and MR images are often both needed to determine the location and extent of brain lesions and thereby improve diagnosis. However, patients with acute brain diseases often cannot complete an MRI examination within a short time. The aim of this study was to devise a cross-device, cross-modal medical image synthesis (MIS) method, Cross2SynNet, for synthesizing the routine brain MRI sequences T1WI, T2WI, FLAIR, and DWI from CT in stroke and brain tumors. MATERIALS AND METHODS In this retrospective study, the participants covered four diseases: cerebral ischemic stroke (CIS-cohort), cerebral hemorrhage (CH-cohort), meningioma (M-cohort), and glioma (G-cohort). The MIS model Cross2SynNet was established on the basic architecture of a conditional generative adversarial network (CGAN), in which a fully convolutional transformer (FCT) module was adopted in the generator to capture the short- and long-range dependencies between healthy and pathological tissues, and an edge loss function was used to minimize the difference in gradient magnitude between the synthetic image and the ground truth. Three metrics were used for evaluation: mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). RESULTS A total of 230 participants (mean patient age, 59.77 years ± 13.63 [standard deviation]; 163 men [71%] and 67 women [29%]) were included: the CIS-cohort (95 participants between Dec 2019 and Feb 2022), CH-cohort (69 participants between Jan 2020 and Dec 2021), M-cohort (40 participants between Sep 2018 and Dec 2021), and G-cohort (26 participants between Sep 2019 and Dec 2021). Cross2SynNet achieved average values of MSE = 0.008, PSNR = 21.728, and SSIM = 0.758 when synthesizing MRI from CT, outperforming CycleGAN, pix2pix, RegGAN, Pix2PixHD, and ResViT. Cross2SynNet could synthesize the brain lesion on pseudo-DWI even when the CT image did not exhibit a clear signal in acute ischemic stroke patients.
CONCLUSIONS Cross2SynNet could achieve routine brain MRI synthesis of T1WI, T2WI, FLAIR, and DWI from CT with promising performance given the brain lesion of stroke and brain tumor.
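Of the three evaluation metrics used above, MSE and PSNR are simple to state precisely. A minimal numpy sketch (assuming images scaled to a known data range; SSIM, which involves local windowed statistics, is typically taken from a library such as scikit-image's `structural_similarity` rather than hand-written):

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB, given the image intensity range."""
    m = mse(pred, target)
    return float(10.0 * np.log10(data_range ** 2 / m))
```

For example, a uniform error of 0.1 on images in [0, 1] gives MSE = 0.01 and PSNR = 20 dB.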
Affiliation(s)
- Minbo Jiang
- School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
- Shuai Wang
- Department of Radiology, Binzhou Medical University Hospital, Binzhou, 256603, China
- Zhiwei Song
- School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
- Limei Song
- School of Medical Imaging, Weifang Medical University, Weifang, 261000, China
- Yi Wang
- School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
- Chuanzhen Zhu
- School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
- Qiang Zheng
- School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
6. Carass A, Greenman D, Dewey BE, Calabresi PA, Prince JL, Pham DL. Image harmonization improves consistency of intra-rater delineations of MS lesions in heterogeneous MRI. Neuroimage Rep 2024; 4:100195. [PMID: 38370461] [PMCID: PMC10871705] [DOI: 10.1016/j.ynirp.2024.100195]
Abstract
Clinical magnetic resonance images (MRIs) lack a standard intensity scale due to differences in scanner hardware and the pulse sequences used to acquire the images. When MRIs are used for quantification, as in the evaluation of white matter lesions (WMLs) in multiple sclerosis, this lack of intensity standardization becomes a critical problem affecting both the staging and tracking of the disease and its treatment. This paper presents a study of the effect of harmonization on WML segmentation consistency, evaluated using an object detection classification scheme that incorporates manual delineations from both the original and harmonized MRIs. A cohort of ten people scanned on two different imaging platforms was studied. An expert rater, blinded to the image source, manually delineated WMLs on images from both scanners before and after harmonization. There was closer agreement in both global and per-lesion WML volume and spatial distribution after harmonization, demonstrating the importance of image harmonization prior to the creation of manual delineations. These results could lead to better truth models in both the development and evaluation of automated lesion segmentation algorithms.
Affiliation(s)
- Aaron Carass
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Danielle Greenman
- Center for Neuroscience and Regenerative Medicine, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20817, USA
- Blake E. Dewey
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Peter A. Calabresi
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Jerry L. Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Dzung L. Pham
- Department of Radiology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
7. Raad R, Ray D, Varghese B, Hwang D, Gill I, Duddalwar V, Oberai AA. Conditional generative learning for medical image imputation. Sci Rep 2024; 14:171. [PMID: 38167932] [PMCID: PMC10762085] [DOI: 10.1038/s41598-023-50566-7]
Abstract
Image imputation refers to the task of generating one type of medical image given images of another type. This task becomes challenging when the difference between the available images and the image to be imputed is large. In this manuscript, one such application is considered. It is derived from dynamic contrast-enhanced computed tomography (CECT) imaging of the kidneys: given an incomplete sequence of three CECT images, we are required to impute the missing image. This task is posed as one of probabilistic inference, and a generative algorithm to generate samples of the imputed image, conditioned on the available images, is developed, trained, and tested. The output of this algorithm is the "best guess" of the imputed image and a pixel-wise image of variance in the imputation. It is demonstrated that this best guess is more accurate than those generated by other, deterministic deep-learning-based algorithms, including ones that utilize additional information and more complex loss terms. It is also shown that the pixel-wise variance image, which quantifies the confidence in the reconstruction, can be used to determine whether the result of the imputation meets a specified accuracy threshold and is therefore appropriate for a downstream task.
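The paper's central outputs, a conditional "best guess" plus a per-pixel variance map, can be summarized in a few lines once samples from a conditional generator are available. A minimal sketch (the function names and the all-pixels acceptance rule are illustrative, not the authors' implementation):

```python
import numpy as np

def summarize_samples(samples):
    """Reduce N conditional samples (N, H, W) to a best guess and a
    pixel-wise variance map quantifying imputation confidence."""
    samples = np.asarray(samples, float)
    best_guess = samples.mean(axis=0)
    variance = samples.var(axis=0)
    return best_guess, variance

def passes_threshold(variance, max_var):
    """Accept the imputation for downstream use only if every pixel's
    variance stays below a specified confidence threshold."""
    return bool(np.all(variance <= max_var))
```

Downstream, `passes_threshold` is the kind of gate the abstract describes: low-variance imputations are trusted, high-variance ones are flagged for reacquisition or review.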
Affiliation(s)
- Ragheb Raad
- Aerospace and Mechanical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, 90089, USA
- Deep Ray
- Department of Mathematics, University of Maryland, College Park, MD, 20742, USA
- Bino Varghese
- Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Darryl Hwang
- Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Inderbir Gill
- Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Vinay Duddalwar
- Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Assad A Oberai
- Aerospace and Mechanical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, 90089, USA
8. Tong MW, Tolpadi AA, Bhattacharjee R, Han M, Majumdar S, Pedoia V. Synthetic Knee MRI T1ρ Maps as an Avenue for Clinical Translation of Quantitative Osteoarthritis Biomarkers. Bioengineering (Basel) 2023; 11:17. [PMID: 38247894] [PMCID: PMC10812962] [DOI: 10.3390/bioengineering11010017]
Abstract
A 2D U-Net was trained to generate synthetic T1ρ maps from T2 maps for knee MRI to explore the feasibility of domain adaptation for enriching existing datasets and enabling rapid, reliable image reconstruction. The network was developed using 509 healthy contralateral and injured ipsilateral knee images from patients with ACL injuries and reconstruction surgeries acquired across three institutions. Network generalizability was evaluated on 343 knees acquired in a clinical setting and 46 knees from simultaneous bilateral acquisition in a research setting. The deep neural network synthesized high-fidelity reconstructions of T1ρ maps, preserving textures and local T1ρ elevation patterns in cartilage with a normalized mean square error of 2.4% and a Pearson's correlation coefficient of 0.93. Analysis of reconstructed T1ρ maps within cartilage compartments revealed minimal bias (-0.10 ms), tight limits of agreement, and quantification error (5.7%) below the threshold for clinically significant change (6.42%) associated with osteoarthritis. In an out-of-distribution external test set, synthetic maps preserved T1ρ textures but exhibited increased bias and wider limits of agreement. This study demonstrates the capability of image synthesis to reduce acquisition time, derive meaningful information from existing datasets, and suggest a pathway for standardizing T1ρ as a quantitative biomarker for osteoarthritis.
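The bias and limits-of-agreement figures quoted in this abstract come from a Bland-Altman style analysis of paired measurements (acquired vs. synthetic maps). A minimal sketch of that computation (not the authors' code; the 1.96 factor assumes approximately normally distributed differences):

```python
import numpy as np

def bland_altman(reference, synthetic):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(synthetic, float) - np.asarray(reference, float)
    bias = diff.mean()                 # systematic offset
    sd = diff.std(ddof=1)              # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

"Tight limits of agreement" means the interval `[bias - 1.96*sd, bias + 1.96*sd]` is narrow relative to the clinically significant change threshold.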
Affiliation(s)
- Michelle W. Tong
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
- Department of Bioengineering, University of California Berkeley, Berkeley, CA 94720, USA
- Aniket A. Tolpadi
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
- Department of Bioengineering, University of California Berkeley, Berkeley, CA 94720, USA
- Rupsa Bhattacharjee
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
- Misung Han
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94143, USA
9. Nazir A, Cheema MN, Wang Z. ChatGPT-based Biological and Psychological Data Imputation. Meta-Radiology 2023; 1:100034. [PMID: 38784385] [PMCID: PMC11115380] [DOI: 10.1016/j.metrad.2023.100034]
Abstract
Missing data are a common problem for large cohort or longitudinal research and have been handled through data imputation. Based on simplified models such as linear or nonlinear interpolations, current imputation methods may not be accurate for real-life data such as biological and behavioral data. The purpose of this work was to explore the capability of ChatGPT, a powerful Large Language Model (LLM) developed by OpenAI, for biological and psychological data imputation. We tested the feasibility using data from the Human Connectome Project. Performance was evaluated by comparing the imputed data against known ground truth (GT) and measured with metrics including the Pearson correlation coefficient (r), relative accuracy (MP), and mean absolute error (MAE). Comparative analyses with traditional imputation techniques were also conducted to demonstrate the efficacy of ChatGPT as a data imputer. In summary, through customized data-to-text prompt engineering, ChatGPT can successfully capture intricate patterns and dependencies within biological data, resulting in precise imputations. Fine-tuning ChatGPT with domain-specific biological vocabulary, with a human in the loop as an interpreter, enhances the accuracy and relevance of the imputations.
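The "data-to-text prompt engineering" step, serializing a tabular record into natural language so an LLM can fill in the blank, can be illustrated with a toy serializer. The field names and prompt wording below are hypothetical, not the paper's actual prompts:

```python
def row_to_prompt(row, target_field):
    """Serialize one record (dict) into a fill-in-the-blank prompt,
    withholding the field to be imputed. Illustrative sketch only."""
    known = "; ".join(f"{k} = {v}" for k, v in row.items()
                      if k != target_field)
    return (f"A study participant has the following measurements: {known}. "
            f"Based on these, the most likely value of {target_field} is:")
```

The LLM's completion is then parsed back into a numeric value and scored against ground truth with metrics such as r and MAE.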
Affiliation(s)
- Anam Nazir
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine
- Muhammad Nadeem Cheema
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine
- Ze Wang
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine
10. Tanenbaum LN, Bash SC, Zaharchuk G, Shankaranarayanan A, Chamberlain R, Wintermark M, Beaulieu C, Novick M, Wang L. Deep Learning-Generated Synthetic MR Imaging STIR Spine Images Are Superior in Image Quality and Diagnostically Equivalent to Conventional STIR: A Multicenter, Multireader Trial. AJNR Am J Neuroradiol 2023; 44:987-993. [PMID: 37414452] [PMCID: PMC10411840] [DOI: 10.3174/ajnr.a7920]
Abstract
BACKGROUND AND PURPOSE Deep learning image reconstruction allows faster MR imaging acquisitions while matching or exceeding the standard of care and can create synthetic images from existing data sets. This multicenter, multireader spine study evaluated the performance of synthetically created STIR compared with acquired STIR. MATERIALS AND METHODS From a multicenter, multiscanner database of 328 clinical cases, a nonreader neuroradiologist randomly selected 110 spine MR imaging studies in 93 patients (sagittal T1, T2, and STIR) and classified them into 5 categories of disease and healthy. A DICOM-based deep learning application generated a synthetically created STIR series from the sagittal T1 and T2 images. Five radiologists (3 neuroradiologists, 1 musculoskeletal radiologist, and 1 general radiologist) rated the STIR quality and classified disease pathology (study 1, n = 80). They then assessed the presence or absence of findings typically evaluated with STIR in patients with trauma (study 2, n = 30). The readers evaluated studies with either acquired STIR or synthetically created STIR in a blinded and randomized fashion with a 1-month washout period. The interchangeability of acquired STIR and synthetically created STIR was assessed using a noninferiority threshold of 10%. RESULTS For classification, randomly introducing synthetically created STIR produced an expected decrease in interreader agreement of 3.23%. For trauma, there was an overall increase in interreader agreement of +1.9%. The lower bound of confidence for both exceeded the noninferiority threshold, indicating interchangeability of synthetically created STIR with acquired STIR. Both the Wilcoxon signed-rank and t tests showed higher image-quality scores for synthetically created STIR over acquired STIR (P < .0001).
CONCLUSIONS Synthetically created STIR spine MR images were diagnostically interchangeable with acquired STIR, while providing significantly higher image quality, suggesting routine clinical practice potential.
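The interchangeability criterion above, that the lower confidence bound of the change in interreader agreement must not fall below the 10% noninferiority margin, can be sketched as a simple paired comparison. This is illustrative only and does not reproduce the study's exact statistical procedure:

```python
import numpy as np

def noninferior(agree_ref, agree_new, margin=0.10, z=1.645):
    """One-sided noninferiority check on the change in agreement rate.

    agree_ref / agree_new: per-case binary indicators (1 = readers
    agreed) under the reference and new reading conditions.
    Returns (passes, lower_bound): the new condition is noninferior if
    the one-sided lower confidence bound on the mean change exceeds
    -margin. z = 1.645 gives a one-sided 95% bound under a normal
    approximation.
    """
    diff = np.asarray(agree_new, float) - np.asarray(agree_ref, float)
    mean = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(diff.size)
    lower = mean - z * se
    return bool(lower > -margin), float(lower)
```

In the study's terms, both the classification change (-3.23%) and the trauma change (+1.9%) had lower bounds above the -10% margin, so synthetic STIR was declared interchangeable.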
Affiliation(s)
- L N Tanenbaum
- From RadNet (L.N.T., S.C.B.), New York, New York
- S C Bash
- From RadNet (L.N.T., S.C.B.), New York, New York
- G Zaharchuk
- Stanford University Medical Center (G.Z., C.B.), Stanford, California
- A Shankaranarayanan
- Subtle Medical (A.S., R.C., L.W.), Menlo Park, California
- R Chamberlain
- Subtle Medical (A.S., R.C., L.W.), Menlo Park, California
- M Wintermark
- MD Anderson Cancer Center (M.W.), University of Texas, Houston, Texas
- C Beaulieu
- Stanford University Medical Center (G.Z., C.B.), Stanford, California
- M Novick
- All-American Teleradiology (M.N.), Bay Village, Ohio
- L Wang
- Subtle Medical (A.S., R.C., L.W.), Menlo Park, California