1. Foti G, Spoto F, Mignolli T, Spezia A, Romano L, Manenti G, Cardobi N, Avanzi P. Deep Learning-Driven Abbreviated Shoulder MRI Protocols: Diagnostic Accuracy in Clinical Practice. Tomography 2025; 11:48. PMID: 40278715; PMCID: PMC12031227; DOI: 10.3390/tomography11040048.
Abstract
BACKGROUND Deep learning (DL) reconstruction techniques have shown promise in reducing MRI acquisition times while maintaining image quality. However, the impact of different acceleration factors on diagnostic accuracy in shoulder MRI remains unexplored in clinical practice. PURPOSE The purpose of this study was to evaluate the diagnostic accuracy of 2-fold and 4-fold DL-accelerated shoulder MRI protocols compared to standard protocols in clinical practice. MATERIALS AND METHODS In this prospective single-center study, 88 consecutive patients (49 males, 39 females; mean age, 51 years) underwent shoulder MRI examinations using standard, 2-fold (DL2), and 4-fold (DL4) accelerated protocols between June 2023 and January 2024. Four independent radiologists (experience range: 4-25 years) evaluated the presence of bone marrow edema (BME), rotator cuff tears, and labral lesions. The sensitivity, specificity, and interobserver agreement were calculated. Diagnostic confidence was assessed using a 4-point scale. The impact of reader experience was analyzed by stratifying the radiologists into ≤10 and >10 years of experience. RESULTS Both accelerated protocols demonstrated high diagnostic accuracy. For BME detection, DL2 and DL4 achieved 100% sensitivity and specificity. In rotator cuff evaluation, DL2 showed a sensitivity of 98-100% and specificity of 99-100%, while DL4 maintained a sensitivity of 95-98% and specificity of 99-100%. Labral tear detection showed perfect sensitivity (100%) with DL2 and slightly lower sensitivity (89-100%) with DL4. Interobserver agreement was excellent across the protocols (Kendall's W = 0.92-0.98). Reader experience did not significantly impact diagnostic performance. The area under the ROC curve was 0.94 for DL2 and 0.90 for DL4 (p = 0.32). CLINICAL IMPLICATIONS The implementation of DL-accelerated protocols, particularly DL2, could improve workflow efficiency by reducing acquisition times by 50% while maintaining diagnostic reliability. This could increase patient throughput and accessibility to MRI examinations without compromising diagnostic quality. CONCLUSIONS DL-accelerated shoulder MRI protocols demonstrate high diagnostic accuracy, with DL2 showing performance nearly identical to that of the standard protocol. While DL4 maintains acceptable diagnostic accuracy, it shows a slight sensitivity reduction for subtle pathologies, particularly among less experienced readers. The DL2 protocol represents an optimal balance between acquisition time reduction and diagnostic confidence.
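As a quick illustration of the accuracy and agreement statistics reported above, the sketch below computes per-finding sensitivity/specificity and Kendall's W from binary reader calls. The arrays are illustrative toy data, not the study's readings, and the simple ranking step breaks ties crudely, which is adequate only for illustration.

```python
import numpy as np

def sensitivity_specificity(truth, preds):
    """Sensitivity and specificity for a binary finding (1 = lesion present)."""
    truth, preds = np.asarray(truth, bool), np.asarray(preds, bool)
    tp = np.sum(truth & preds)
    tn = np.sum(~truth & ~preds)
    fn = np.sum(truth & ~preds)
    fp = np.sum(~truth & preds)
    return tp / (tp + fn), tn / (tn + fp)

def kendalls_w(ratings):
    """Kendall's W for an (n_readers, n_cases) array of ordinal scores.
    W = 12*S / (m^2 * (n^3 - n)); ties are broken by position for brevity."""
    ratings = np.asarray(ratings, float)
    m, n = ratings.shape                       # m readers, n cases
    ranks = np.apply_along_axis(lambda r: r.argsort().argsort() + 1, 1, ratings)
    rank_sums = ranks.sum(axis=0)
    s = np.sum((rank_sums - rank_sums.mean()) ** 2)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical reader calls for rotator cuff tears on an accelerated protocol.
truth = [1, 0, 1, 1, 0, 0, 1, 0]
reader = [1, 0, 1, 1, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, reader)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")

# Hypothetical 4-point confidence scores from three readers over eight cases.
confidence = np.array([[4, 3, 4, 2, 1, 3, 4, 2],
                       [4, 3, 4, 2, 1, 3, 4, 2],
                       [4, 2, 4, 3, 1, 3, 4, 2]])
print(f"Kendall's W = {kendalls_w(confidence):.2f}")
```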
Affiliation(s)
- Giovanni Foti: Department of Radiology, IRCCS Sacro Cuore Don Calabria Hospital, 37024 Negrar, Italy
- Flavio Spoto: Department of Radiology, IRCCS Sacro Cuore Don Calabria Hospital, 37024 Negrar, Italy
- Thomas Mignolli: Department of Radiology, IRCCS Sacro Cuore Don Calabria Hospital, 37024 Negrar, Italy
- Alessandro Spezia: Department of Radiology, Policlinico Universitario GB Rossi, 37134 Verona, Italy
- Luigi Romano: Department of Radiology, IRCCS Sacro Cuore Don Calabria Hospital, 37024 Negrar, Italy
- Guglielmo Manenti: Department of Diagnostic Imaging and Interventional Radiology, University Hospital, Policlinico Tor Vergata, 00133 Rome, Italy
- Nicolò Cardobi: Department of Radiology, Policlinico Universitario GB Rossi, 37134 Verona, Italy
- Paolo Avanzi: Department of Orthopaedic Surgery, IRCCS Sacro Cuore Don Calabria Hospital, 37024 Negrar, Italy
2. Raymond C, Yao J, Clifford B, Feiweier T, Oshima S, Telesca D, Zhong X, Meyer H, Everson RG, Salamon N, Cloughesy TF, Ellingson BM. Leveraging Physics-Based Synthetic MR Images and Deep Transfer Learning for Artifact Reduction in Echo-Planar Imaging. AJNR Am J Neuroradiol 2025; 46:733-741. PMID: 39947682; PMCID: PMC11979845; DOI: 10.3174/ajnr.a8566.
Abstract
BACKGROUND AND PURPOSE This study utilizes a physics-based approach to synthesize realistic MR artifacts and train a deep learning generative adversarial network (GAN) for use in artifact reduction on EPI, a crucial neuroimaging sequence with high acceleration that is notoriously susceptible to artifacts. MATERIALS AND METHODS A total of 4,573 anatomical MR sequences from 1,392 patients undergoing clinically indicated MRI of the brain were used to create a synthetic data set using physics-based, simulated artifacts commonly found in EPI. By using multiple MRI contrasts, we hypothesized the GAN would learn to correct common artifacts while preserving the inherent contrast information, even for contrasts the network had not been trained on. A modified Pix2PixGAN architecture with an Attention-R2UNet generator was used for the model. Three training strategies were employed: (1) an "all-in-one" model trained on all the artifacts at once; (2) a set of "single models," one for each artifact; and (3) a "stacked transfer learning" approach in which a model is first trained on one artifact set, this learning is then transferred to a new model, and the process is repeated for the next artifact set. Lastly, the "stacked transfer learning" model was tested on ADC maps from single-shot diffusion MRI data in N = 49 patients diagnosed with recurrent glioblastoma to compare visual quality and lesion measurements between the natively acquired images and AI-corrected images. RESULTS The "stacked transfer learning" approach had superior artifact reduction performance compared to the other approaches as measured by Mean Squared Error (MSE = 0.0016), Structural Similarity Index (SSIM = 0.92), multiscale SSIM (MS-SSIM = 0.92), peak signal-to-noise ratio (PSNR = 28.10), and Hausdorff distance (HAUS = 4.08 mm), suggesting that leveraging pre-trained knowledge and sequentially training on each artifact is the best approach for this application. In recurrent glioblastoma, significantly higher visual quality was observed in model-predicted images compared to native images, while quantitative measurements within the tumor regions remained consistent with non-corrected images. CONCLUSIONS The current study demonstrates the feasibility of using a physics-based method for synthesizing a large data set of images with realistic artifacts and the effectiveness of utilizing this synthetic data set in a "stacked transfer learning" approach to training a GAN for reduction of EPI-based artifacts.
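The "stacked transfer learning" strategy can be pictured as a loop that carries the previous stage's weights into training on the next artifact set. The sketch below is a minimal stand-in under stated assumptions: the generator is a toy convolutional network rather than the paper's Pix2PixGAN/Attention-R2UNet, the per-artifact datasets are random tensors, and the adversarial loss is replaced by a plain L1 loss for brevity.

```python
import copy
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_one_stage(model, loader, epochs=1, lr=1e-4):
    """Train the generator to map artifact-corrupted images to clean targets.
    The adversarial and attention components of the published model are omitted."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    model.train()
    for _ in range(epochs):
        for corrupted, clean in loader:
            opt.zero_grad()
            loss_fn(model(corrupted), clean).backward()
            opt.step()
    return model

# Tiny stand-in generator (placeholder for an Attention-R2UNet-style network).
generator = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))

# Synthetic stand-ins for the per-artifact training sets.
artifact_stages = {
    name: TensorDataset(torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32))
    for name in ["artifact_set_1", "artifact_set_2", "artifact_set_3"]
}

# "Stacked transfer learning": each stage starts from a copy of the previous stage's weights.
for name, dataset in artifact_stages.items():
    generator = train_one_stage(copy.deepcopy(generator), DataLoader(dataset, batch_size=4))
```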
Affiliation(s)
- Catalina Raymond
- From the UCLA Brain Tumor Imaging Laboratory (C.R., J.Y., S.O., B.M.E.), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
- Department of Radiological Sciences (C.R., J.Y., S.O., X.Z., N.S., B.M.E), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
| | - Jingwen Yao
- From the UCLA Brain Tumor Imaging Laboratory (C.R., J.Y., S.O., B.M.E.), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
- Department of Radiological Sciences (C.R., J.Y., S.O., X.Z., N.S., B.M.E), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
| | - Bryan Clifford
- Siemens Medical Solutions USA, Inc. (B.C.), Los Angeles, CA
| | | | - Sonoko Oshima
- From the UCLA Brain Tumor Imaging Laboratory (C.R., J.Y., S.O., B.M.E.), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
- Department of Radiological Sciences (C.R., J.Y., S.O., X.Z., N.S., B.M.E), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
| | - Donatello Telesca
- Department of Biostatistics (D.T.), University of California, Los Angeles, Los Angeles, CA, USA
| | - Xiaodong Zhong
- Department of Radiological Sciences (C.R., J.Y., S.O., X.Z., N.S., B.M.E), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
- Department of Bioengineering (X.Z., B.M.E.), Henry Samueli School of Engineering and Applied Science, University of California, Los Angeles, Los Angeles, CA, USA
| | - Heiko Meyer
- Siemens Healthineers AG (T.F., H.M.), Erlangen, Germany
| | - Richard G Everson
- Department of Neurosurgery (R.G.E.), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
| | - Noriko Salamon
- Department of Radiological Sciences (C.R., J.Y., S.O., X.Z., N.S., B.M.E), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
| | - Timothy F Cloughesy
- Department of Neurology (T.F.C.), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
| | - Benjamin M Ellingson
- From the UCLA Brain Tumor Imaging Laboratory (C.R., J.Y., S.O., B.M.E.), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
- Department of Radiological Sciences (C.R., J.Y., S.O., X.Z., N.S., B.M.E), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
- Department of Psychiatry and Biobehavioral Sciences (B.M.E.), David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA
- Department of Bioengineering (X.Z., B.M.E.), Henry Samueli School of Engineering and Applied Science, University of California, Los Angeles, Los Angeles, CA, USA
| |
3. Ma ZP, Zhu YM, Zhang XD, Zhao YX, Zheng W, Yuan SR, Li GY, Zhang TL. Investigating the Use of Generative Adversarial Networks-Based Deep Learning for Reducing Motion Artifacts in Cardiac Magnetic Resonance. J Multidiscip Healthc 2025; 18:787-799. PMID: 39963324; PMCID: PMC11830935; DOI: 10.2147/jmdh.s492163.
Abstract
Objective To evaluate the effectiveness of deep learning technology based on generative adversarial networks (GANs) in reducing motion artifacts in cardiac magnetic resonance (CMR) cine sequences. Methods The training and testing datasets consisted of 2000 and 200 pairs of clear and blurry images, respectively, acquired through simulated motion artifacts in CMR cine sequences. These datasets were used to establish and train a deep learning GAN model. To assess the efficacy of the deep learning network in mitigating motion artifacts, 100 images with simulated motion artifacts and 37 images with real-world motion artifacts encountered in clinical practice were selected. Image quality pre- and post-optimization was assessed using metrics including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), the Tenengrad focus measure, and a 5-point Likert scale. Results After GAN optimization, notable improvements were observed in the PSNR, SSIM, and focus measure metrics for the 100 images with simulated artifacts. These metrics increased from initial values of 23.85±2.85, 0.71±0.08, and 4.56±0.67, respectively, to 27.91±1.74, 0.83±0.05, and 7.74±0.39 post-optimization. Additionally, the subjective assessment scores significantly improved from 2.44±1.08 to 4.44±0.66 (P<0.001). For the 37 images with real-world artifacts, the Tenengrad focus measure showed a significant enhancement, rising from 6.06±0.91 to 10.13±0.48 after artifact removal. Subjective ratings also increased from 3.03±0.73 to 3.73±0.87 (P<0.001). Conclusion GAN-based deep learning technology effectively reduces motion artifacts present in CMR cine images, demonstrating significant potential for clinical application in optimizing CMR motion artifact management.
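The Tenengrad focus measure referenced above is a simple sharpness score based on Sobel gradient magnitudes. A minimal version is sketched below; this is one common formulation, not necessarily the exact implementation used in the study.

```python
import numpy as np
from scipy.ndimage import sobel

def tenengrad(image):
    """Tenengrad focus measure: mean Sobel gradient magnitude (higher = sharper).
    Implementations vary in normalization and thresholding; this is a basic variant."""
    img = np.asarray(image, dtype=float)
    return float(np.mean(np.hypot(sobel(img, axis=0), sobel(img, axis=1))))

# A motion-degraded frame should score lower than the original.
rng = np.random.default_rng(0)
frame = rng.random((128, 128))
ghosted = 0.5 * (frame + np.roll(frame, 4, axis=1))   # crude ghosting-like corruption
print(tenengrad(frame) > tenengrad(ghosted))           # expected: True
```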
Affiliation(s)
- Ze-Peng Ma: Department of Radiology, Affiliated Hospital of Hebei University/Clinical Medical College, Hebei University, Baoding 071000, People’s Republic of China; Hebei Key Laboratory of Precise Imaging of Inflammation Tumors, Baoding, Hebei Province 071000, People’s Republic of China
- Yue-Ming Zhu: College of Electronic and Information Engineering, Hebei University, Baoding, Hebei Province 071002, People’s Republic of China
- Xiao-Dan Zhang: Department of Ultrasound, Affiliated Hospital of Hebei University, Baoding, Hebei Province 071000, People’s Republic of China
- Yong-Xia Zhao: Department of Radiology, Affiliated Hospital of Hebei University/Clinical Medical College, Hebei University, Baoding 071000, People’s Republic of China
- Wei Zheng: College of Electronic and Information Engineering, Hebei University, Baoding, Hebei Province 071002, People’s Republic of China
- Shuang-Rui Yuan: Department of Radiology, Affiliated Hospital of Hebei University/Clinical Medical College, Hebei University, Baoding 071000, People’s Republic of China
- Gao-Yang Li: Department of Radiology, Affiliated Hospital of Hebei University/Clinical Medical College, Hebei University, Baoding 071000, People’s Republic of China
- Tian-Le Zhang: Department of Radiology, Affiliated Hospital of Hebei University/Clinical Medical College, Hebei University, Baoding 071000, People’s Republic of China
4. Foti G, Longo C. Deep learning and AI in reducing magnetic resonance imaging scanning time: advantages and pitfalls in clinical practice. Pol J Radiol 2024; 89:e443-e451. PMID: 39444654; PMCID: PMC11497590; DOI: 10.5114/pjr/192822.
Abstract
Magnetic resonance imaging (MRI) is a powerful imaging modality, but one of its drawbacks is the relatively long scanning time needed to acquire high-resolution images. Reducing scanning time has become a critical area of focus in MRI, aiming to enhance patient comfort, reduce motion artifacts, and increase MRI throughput. In the past 5 years, artificial intelligence (AI)-based algorithms, particularly deep learning models, have been developed to reconstruct high-resolution images from significantly fewer data points. These techniques substantially enhance MRI efficiency, improve patient comfort, and reduce motion artifacts. Higher throughput with shorter scanning times increases accessibility, potentially reducing the need for additional MRI machines and their associated costs. Several fields can benefit from shortened protocols, especially for routine exams. In oncologic imaging, faster MRI scans can facilitate more regular monitoring of cancer patients. In patients with neurological disorders, rapid brain imaging can aid in the quick assessment of conditions such as stroke, multiple sclerosis, and epilepsy, improving patient outcomes. In chronic inflammatory disease, faster imaging may allow shorter intervals between examinations to better assess therapy outcomes. Additionally, reduced scanning time could help MRI play a role in emergency medicine and acute conditions such as trauma or acute ischaemic stroke. The purpose of this paper is to describe and discuss the advantages and disadvantages of introducing deep learning reconstruction techniques to reduce MRI scanning times in clinical practice.
Affiliation(s)
- Giovanni Foti: Department of Radiology, IRCCS Sacro Cuore Don Calabria Hospital, Negrar, Italy
- Chiara Longo: Department of Radiology, IRCCS Sacro Cuore Don Calabria Hospital, Negrar, Italy
5. Patel N, Celaya A, Eltaher M, Glenn R, Savannah KB, Brock KK, Sanchez JI, Calderone TL, Cleere D, Elsaiey A, Cagley M, Gupta N, Victor D, Beretta L, Koay EJ, Netherton TJ, Fuentes DT. Training robust T1-weighted magnetic resonance imaging liver segmentation models using ensembles of datasets with different contrast protocols and liver disease etiologies. Sci Rep 2024; 14:20988. PMID: 39251664; PMCID: PMC11385384; DOI: 10.1038/s41598-024-71674-y.
Abstract
Image segmentation of the liver is an important step in treatment planning for liver cancer. However, manual segmentation at a large scale is not practical, leading to increasing reliance on deep learning models to automatically segment the liver. This manuscript develops a generalizable deep learning model to segment the liver on T1-weighted MR images. In particular, three distinct deep learning architectures (nnUNet, PocketNet, Swin UNETR) were considered using data gathered from six geographically different institutions. A total of 819 T1-weighted MR images were gathered from both public and internal sources. Our experiments compared each architecture's testing performance when trained both intra-institutionally and inter-institutionally. Models trained using nnUNet and its PocketNet variant achieved mean Dice-Sorensen similarity coefficients > 0.9 on both intra- and inter-institutional test set data. The performance of these models suggests that nnUNet and PocketNet liver segmentation models trained on a large and diverse collection of T1-weighted MR images would on average achieve good intra-institutional segmentation performance.
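For reference, the Dice-Sorensen similarity coefficient used to score these segmentations can be computed directly from binary masks, as in the short sketch below (toy masks, not study data).

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice-Sorensen similarity coefficient between two binary masks."""
    pred = np.asarray(pred_mask, bool)
    gt = np.asarray(gt_mask, bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Two overlapping toy "liver" masks.
pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), bool);   gt[15:45, 15:45] = True
print(f"Dice = {dice_coefficient(pred, gt):.3f}")
```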
Affiliation(s)
- Nihil Patel: Department of Imaging Physics and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Adrian Celaya: Department of Imaging Physics and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA; Department of Computational Applied Mathematics and Operations Research, Rice University, Houston, Texas, USA
- Mohamed Eltaher: Department of Imaging Physics and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Rachel Glenn: Department of Imaging Physics and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kari Brewer Savannah: Department of Imaging Physics and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kristy K Brock: Department of Imaging Physics and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jessica I Sanchez: Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Tiffany L Calderone: Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Darrel Cleere: Department of Gastroenterology, Houston Methodist Hospital, Houston, Texas, USA; Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ahmed Elsaiey: Department of Gastroenterology, Houston Methodist Hospital, Houston, Texas, USA; Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Matthew Cagley: Department of Radiation Oncology and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Nakul Gupta: Department of Radiology, Houston Methodist Hospital, Houston, Texas, USA; Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- David Victor: Department of Gastroenterology, Houston Methodist Hospital, Houston, Texas, USA; Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Laura Beretta: Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Eugene J Koay: Department of Radiation Oncology and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Tucker J Netherton: Department of Radiation Physics and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- David T Fuentes: Department of Imaging Physics and Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
6. Safari M, Yang X, Chang CW, Qiu RLJ, Fatemi A, Archambault L. Unsupervised MRI motion artifact disentanglement: introducing MAUDGAN. Phys Med Biol 2024; 69:115057. PMID: 38714192; DOI: 10.1088/1361-6560/ad4845.
Abstract
Objective. This study developed an unsupervised motion artifact reduction method for magnetic resonance imaging (MRI) images of patients with brain tumors. The proposed novel design uses multi-parametric multicenter contrast-enhanced T1W (ceT1W) and T2-FLAIR MRI images. Approach. The proposed framework included two generators, two discriminators, and two feature extractor networks. A 3-fold cross-validation was used to train and fine-tune the hyperparameters of the proposed model using 230 brain MRI images with tumors, which were then tested on 148 patients' in vivo datasets. An ablation was performed to evaluate the model's compartments. Our model was compared with Pix2pix and CycleGAN. Six evaluation metrics were reported, including normalized mean squared error (NMSE), structural similarity index (SSIM), multi-scale SSIM (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multi-scale gradient magnitude similarity deviation (MS-GMSD). Artifact reduction and consistency of tumor regions, image contrast, and sharpness were evaluated by three evaluators using Likert scales and compared with ANOVA and Tukey's HSD tests. Main results. On average, our method outperforms comparative models in removing heavy motion artifacts, with the lowest NMSE (18.34±5.07%) and MS-GMSD (0.07 ± 0.03) for the heavy motion artifact level. Additionally, our method creates motion-free images with the highest SSIM (0.93 ± 0.04), PSNR (30.63 ± 4.96), and VIF (0.45 ± 0.05) values, along with comparable MS-SSIM (0.96 ± 0.31). Similarly, our method outperformed comparative models in removing in vivo motion artifacts for different distortion levels, except for MS-SSIM and VIF, which were comparable with CycleGAN. Moreover, our method had consistent performance across artifact levels. For the heavy level of motion artifacts, Likert scores were 2.82 ± 0.52, 1.88 ± 0.71, and 1.02 ± 0.14 (p-values ≪ 0.0001) for our method, CycleGAN, and Pix2pix, respectively. Similar trends were found for the other motion artifact levels. Significance. Our proposed unsupervised method was demonstrated to reduce motion artifacts from ceT1W brain images within a multi-parametric framework.
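Two of the less familiar metrics above, NMSE and gradient magnitude similarity deviation, are straightforward to compute. The sketch below shows a single-scale version on synthetic arrays (the paper reports a multi-scale variant, and parameter choices such as the constant c vary between implementations).

```python
import numpy as np
from scipy.ndimage import prewitt

def nmse_percent(reference, estimate):
    """Normalized mean squared error in percent (lower is better)."""
    ref, est = np.asarray(reference, float), np.asarray(estimate, float)
    return 100.0 * np.sum((ref - est) ** 2) / np.sum(ref ** 2)

def gmsd(reference, estimate, c=0.0026):
    """Gradient magnitude similarity deviation (single-scale illustration)."""
    grad = lambda img: np.hypot(prewitt(img, axis=0), prewitt(img, axis=1))
    g_ref, g_est = grad(np.asarray(reference, float)), grad(np.asarray(estimate, float))
    gms = (2.0 * g_ref * g_est + c) / (g_ref ** 2 + g_est ** 2 + c)
    return float(np.std(gms))

rng = np.random.default_rng(1)
clean = rng.random((64, 64))
corrected = clean + 0.05 * rng.standard_normal((64, 64))
print(f"NMSE = {nmse_percent(clean, corrected):.2f}%, GMSD = {gmsd(clean, corrected):.3f}")
```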
Affiliation(s)
- Mojtaba Safari: Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada; Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Richard L J Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Ali Fatemi: Department of Physics, Jackson State University, Jackson, MS, United States of America; Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, MS, United States of America
- Louis Archambault: Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada; Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
7. Hossain MB, Shinde RK, Imtiaz SM, Hossain FMF, Jeon SH, Kwon KC, Kim N. Swin Transformer and the Unet Architecture to Correct Motion Artifacts in Magnetic Resonance Image Reconstruction. Int J Biomed Imaging 2024; 2024:8972980. PMID: 38725808; PMCID: PMC11081754; DOI: 10.1155/2024/8972980.
Abstract
We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique is employed to enhance the spatial resolutions of feature maps in the Swin transformer-based decoder layer. A raw magnetic resonance imaging dataset is used for network training and testing; the data contain various motion artifacts with ground truth images of the same subjects. The results were compared to six state-of-the-art MRI image motion correction methods using two types of motions. When motions were brief (within 5 s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57 dB. Similarly, when motions were extended from 5 to 10 s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99 dB. The anatomical structures of the corrected images and the motion-free brain data were similar.
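A minimal sketch of the NRMSE and PSNR metrics quoted above is given below. Note that NRMSE normalization conventions vary; the reference L2 norm is used here, while other implementations divide by the intensity range or mean.

```python
import numpy as np

def nrmse_percent(reference, estimate):
    """Normalized root mean square error in percent (lower is better)."""
    ref, est = np.asarray(reference, float), np.asarray(estimate, float)
    return 100.0 * np.linalg.norm(ref - est) / np.linalg.norm(ref)

def psnr_db(reference, estimate, data_range=None):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    ref, est = np.asarray(reference, float), np.asarray(estimate, float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(2)
ground_truth = rng.random((64, 64))
reconstruction = ground_truth + 0.1 * rng.standard_normal((64, 64))
print(f"NRMSE = {nrmse_percent(ground_truth, reconstruction):.1f}%, "
      f"PSNR = {psnr_db(ground_truth, reconstruction):.1f} dB")
```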
Affiliation(s)
- Md. Biddut Hossain: Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Rupali Kiran Shinde: Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Shariar Md Imtiaz: Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- F. M. Fahmid Hossain: Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Seok-Hee Jeon: Department of Electronics Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
- Ki-Chul Kwon: Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Nam Kim: Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
8. Patel N, Eltaher M, Glenn R, Savannah KB, Brock K, Sanchez J, Calderone T, Cleere D, Elsaiey A, Cagley M, Gupta N, Victor D, Beretta L, Celaya A, Koay E, Netherton T, Fuentes D. Training Robust T1-Weighted Magnetic Resonance Imaging Liver Segmentation Models Using Ensembles of Datasets with Different Contrast Protocols and Liver Disease Etiologies. Research Square 2024 [preprint]: rs.3.rs-4259791. PMID: 38746406; PMCID: PMC11092841; DOI: 10.21203/rs.3.rs-4259791/v1.
Abstract
Image segmentation of the liver is an important step in several treatments for liver cancer. However, manual segmentation at a large scale is not practical, leading to increasing reliance on deep learning models to automatically segment the liver. This manuscript develops a deep learning model to segment the liver on T1w MR images. We sought to determine the best architecture by training, validating, and testing three different deep learning architectures using a total of 819 T1w MR images gathered from six different datasets, both publicly and internally available. Our experiments compared each architecture's testing performance when trained on data from the same dataset via 5-fold cross validation to its testing performance when trained on all other datasets. Models trained using nnUNet achieved mean Dice-Sorensen similarity coefficients > 90% when tested on each of the six datasets individually. The performance of these models suggests that an nnUNet liver segmentation model trained on a large and diverse collection of T1w MR images would be robust to potential changes in contrast protocol and disease etiology.
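The two evaluation settings described above (5-fold cross-validation within a single dataset versus training on all other datasets and testing on the held-out one) can be organized as in the sketch below. The site names and case identifiers are hypothetical placeholders, and the actual training and evaluation calls are left as comments.

```python
from sklearn.model_selection import KFold

# Hypothetical mapping of site name -> list of case identifiers.
datasets = {
    "siteA": [f"A_{i:03d}" for i in range(20)],
    "siteB": [f"B_{i:03d}" for i in range(15)],
    "siteC": [f"C_{i:03d}" for i in range(25)],
}

# Inter-dataset evaluation: train on all other sites, test on the held-out site.
for held_out, test_cases in datasets.items():
    train_cases = [c for name, cases in datasets.items() if name != held_out for c in cases]
    print(f"hold out {held_out}: train n={len(train_cases)}, test n={len(test_cases)}")
    # model = train_liver_model(train_cases)   # hypothetical training call
    # report_dice(model, test_cases)           # hypothetical evaluation call

# Intra-dataset evaluation: 5-fold cross-validation within a single site.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
cases = datasets["siteA"]
for fold, (train_idx, test_idx) in enumerate(kf.split(cases)):
    print(f"fold {fold}: train n={len(train_idx)}, test n={len(test_idx)}")
```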
Affiliation(s)
- Nihil Patel: The University of Texas MD Anderson Cancer Center
- Rachel Glenn: The University of Texas MD Anderson Cancer Center
- Kristy Brock: The University of Texas MD Anderson Cancer Center
- Eugene Koay: The University of Texas MD Anderson Cancer Center
9. Safari M, Yang X, Fatemi A, Archambault L. MRI motion artifact reduction using a conditional diffusion probabilistic model (MAR-CDPM). Med Phys 2024; 51:2598-2610. PMID: 38009583; DOI: 10.1002/mp.16844.
Abstract
BACKGROUND High-resolution magnetic resonance imaging (MRI) with excellent soft-tissue contrast is a valuable tool utilized for diagnosis and prognosis. However, MRI sequences with long acquisition times are susceptible to motion artifacts, which can adversely affect the accuracy of post-processing algorithms. PURPOSE This study proposes a novel retrospective motion correction method named "motion artifact reduction using conditional diffusion probabilistic model" (MAR-CDPM). The MAR-CDPM aimed to remove motion artifacts from a multicenter three-dimensional contrast-enhanced T1 magnetization-prepared rapid acquisition gradient echo (3D ceT1 MPRAGE) brain dataset with different brain tumor types. MATERIALS AND METHODS This study employed two publicly accessible MRI datasets: one containing 3D ceT1 MPRAGE and 2D T2-fluid attenuated inversion recovery (FLAIR) images from 230 patients with diverse brain tumors, and the other comprising 3D T1-weighted (T1W) MRI images of 148 healthy volunteers, which included real motion artifacts. The former was used to train and evaluate the model using the in silico data, and the latter was used to evaluate the model performance in removing real motion artifacts. A motion simulation was performed in the k-space domain to generate an in silico dataset with minor, moderate, and heavy distortion levels. The diffusion process of the MAR-CDPM was then implemented in k-space to convert structured data into Gaussian noise by gradually increasing motion artifact levels. A conditional network with a Unet backbone was trained to reverse the diffusion process and convert the distorted images back to structured data. The MAR-CDPM was trained in two scenarios: one conditioned on the time step t of the diffusion process, and the other conditioned on both t and T2-FLAIR images. The MAR-CDPM was quantitatively and qualitatively compared with supervised Unet, Unet conditioned on T2-FLAIR, CycleGAN, Pix2pix, and Pix2pix conditioned on T2-FLAIR models. To quantify the spatial distortions and the level of remaining motion artifacts after applying the models, quantitative metrics were reported, including normalized mean squared error (NMSE), structural similarity index (SSIM), multiscale structural similarity index (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multiscale gradient magnitude similarity deviation (MS-GMSD). Tukey's Honestly Significant Difference multiple comparison test was employed to quantify the difference between the models, where a p-value < 0.05 was considered statistically significant. RESULTS Qualitatively, MAR-CDPM outperformed these methods in preserving soft-tissue contrast and different brain regions. It also successfully preserved tumor boundaries for heavy motion artifacts, like the supervised method. Our MAR-CDPM recovered motion-free in silico images with the highest PSNR and VIF for all distortion levels, where the differences were statistically significant (p-values < 0.05). In addition, our method conditioned on t and T2-FLAIR outperformed (p-values < 0.05) the other methods in removing motion artifacts from the in silico dataset in terms of NMSE, MS-SSIM, SSIM, and MS-GMSD. Moreover, our method conditioned on only t outperformed the generative models (p-values < 0.05) and had comparable performance to the supervised model (p-values > 0.05) in removing real motion artifacts. CONCLUSIONS The MAR-CDPM could successfully remove motion artifacts from 3D ceT1 MPRAGE images. It is particularly beneficial for elderly patients who may experience involuntary movements during high-resolution MRI examinations with long acquisition times.
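The in silico k-space motion simulation described in the methods can be approximated by replacing a subset of phase-encode lines with lines from a rigidly displaced copy of the image. The sketch below is a crude stand-in for that idea (not the authors' simulation code), with the corrupted-line fraction playing the role of the minor/moderate/heavy distortion levels.

```python
import numpy as np

def simulate_motion_artifact(image, corrupted_fraction=0.3, max_shift=4, seed=0):
    """Crude in silico motion: replace a fraction of phase-encode lines in k-space
    with lines taken from a randomly translated copy of the image."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n_pe = image.shape[0]
    for line in rng.choice(n_pe, size=int(corrupted_fraction * n_pe), replace=False):
        shift = rng.integers(-max_shift, max_shift + 1, size=2)
        moved = np.roll(image, shift, axis=(0, 1))                    # rigid translation
        kspace[line, :] = np.fft.fftshift(np.fft.fft2(moved))[line, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Corrupting more lines yields heavier artifacts, mimicking minor/moderate/heavy levels.
phantom = np.zeros((128, 128)); phantom[40:90, 30:100] = 1.0
heavy = simulate_motion_artifact(phantom, corrupted_fraction=0.5)
```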
Affiliation(s)
- Mojtaba Safari: Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada; Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ali Fatemi: Department of Physics, Jackson State University, Jackson, Mississippi, USA; Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, Mississippi, USA
- Louis Archambault: Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada; Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
10. Patel V, Wang A, Monk AP, Schneider MTY. Enhancing Knee MR Image Clarity through Image Domain Super-Resolution Reconstruction. Bioengineering (Basel) 2024; 11:186. PMID: 38391672; PMCID: PMC11154235; DOI: 10.3390/bioengineering11020186.
Abstract
This study introduces a hybrid analytical super-resolution (SR) pipeline aimed at enhancing the resolution of medical magnetic resonance imaging (MRI) scans. The primary objective is to overcome the limitations of clinical MRI resolution without the need for additional expensive hardware. The proposed pipeline involves three key steps: pre-processing to re-slice and register the image stacks; SR reconstruction to combine information from three orthogonal image stacks to generate a high-resolution image stack; and post-processing using an artefact reduction convolutional neural network (ARCNN) to reduce the block artefacts introduced during SR reconstruction. The workflow was validated on a dataset of six knee MRIs obtained at high resolution using various sequences. Quantitative analysis of the method revealed promising results, showing an average mean error of 1.40 ± 2.22% in voxel intensities between the SR denoised images and the original high-resolution images. Qualitatively, the method improved out-of-plane resolution while preserving in-plane image quality. The hybrid SR pipeline also displayed robustness across different MRI sequences, demonstrating potential for clinical application in orthopaedics and beyond. Although computationally intensive, this method offers a viable alternative to costly hardware upgrades and holds promise for improving diagnostic accuracy and generating more anatomically accurate models of the human body.
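A naive version of the orthogonal-stack fusion step is sketched below: each anisotropic stack is interpolated to an isotropic grid, reoriented under an assumed axis convention, and averaged. The published pipeline additionally registers the stacks and applies an artefact-reduction CNN, so this is only a rough illustration of the general idea.

```python
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(stack, spacing_mm, target_mm=1.0):
    """Linearly interpolate a slice-first volume onto an isotropic grid."""
    return zoom(np.asarray(stack, float), [s / target_mm for s in spacing_mm], order=1)

def fuse_orthogonal_stacks(axial, coronal, sagittal, spacings, target_mm=1.0):
    """Naive SR fusion: resample three orthogonal stacks, reorient to (z, y, x), and average.
    Assumed layouts (illustrative): axial (z, y, x), coronal (y, z, x), sagittal (x, z, y)."""
    ax = to_isotropic(axial, spacings[0], target_mm)
    co = np.transpose(to_isotropic(coronal, spacings[1], target_mm), (1, 0, 2))
    sa = np.transpose(to_isotropic(sagittal, spacings[2], target_mm), (1, 2, 0))
    shape = np.minimum(np.minimum(ax.shape, co.shape), sa.shape)
    crop = lambda v: v[: shape[0], : shape[1], : shape[2]]
    return (crop(ax) + crop(co) + crop(sa)) / 3.0

# Synthetic demo: 4 mm slice spacing, 1 mm in-plane, for each orthogonal acquisition.
vol = np.random.default_rng(3).random((64, 64, 64))   # reference (z, y, x) volume
axial = vol[::4]                                      # (z, y, x)
coronal = vol[:, ::4, :].transpose(1, 0, 2)           # (y, z, x)
sagittal = vol[:, :, ::4].transpose(2, 0, 1)          # (x, z, y)
fused = fuse_orthogonal_stacks(axial, coronal, sagittal, [(4, 1, 1)] * 3)
print(fused.shape)                                    # (64, 64, 64)
```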
Affiliation(s)
- Vishal Patel: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Alan Wang: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand
- Andrew Paul Monk: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Marco Tien-Yueh Schneider: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
11. Spieker V, Eichhorn H, Hammernik K, Rueckert D, Preibisch C, Karampinos DC, Schnabel JA. Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review. IEEE Trans Med Imaging 2024; 43:846-859. PMID: 37831582; DOI: 10.1109/tmi.2023.3323215.
Abstract
Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.
12. Brock KK, Chen SR, Sheth RA, Siewerdsen JH. Imaging in Interventional Radiology: 2043 and Beyond. Radiology 2023; 308:e230146. PMID: 37462500; PMCID: PMC10374939; DOI: 10.1148/radiol.230146.
Abstract
Since its inception in the early 20th century, interventional radiology (IR) has evolved tremendously and is now a distinct clinical discipline with its own training pathway. The arsenal of modalities at work in IR includes x-ray radiography and fluoroscopy, CT, MRI, US, and molecular and multimodality imaging within hybrid interventional environments. This article briefly reviews the major developments in imaging technology in IR over the past century, summarizes technologies now representative of the standard of care, and reflects on emerging advances in imaging technology that could shape the field in the century ahead. The role of emergent imaging technologies in enabling high-precision interventions is also briefly reviewed, including image-guided ablative therapies.
Affiliation(s)
- Kristy K. Brock: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
- Stephen R. Chen: Department of Interventional Radiology, The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
- Rahul A. Sheth: Department of Interventional Radiology, The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
- Jeffrey H. Siewerdsen: Departments of Imaging Physics, Neurosurgery, and Radiation Physics, The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000