1
Ying C, Chen Y, Yan Y, Flores S, Laforest R, Benzinger TLS, An H. Accuracy and Longitudinal Consistency of PET/MR Attenuation Correction in Amyloid PET Imaging amid Software and Hardware Upgrades. AJNR Am J Neuroradiol 2025;46:635-642. [PMID: 39251256] [PMCID: PMC11979810] [DOI: 10.3174/ajnr.a8490]
Abstract
BACKGROUND AND PURPOSE Integrated PET/MR allows the simultaneous acquisition of PET biomarkers and structural and functional MRI to study Alzheimer disease (AD). Attenuation correction (AC), crucial for PET quantification, can be performed by using a deep learning approach, DL-Dixon, based on standard Dixon images. Longitudinal amyloid PET imaging, which provides important information about disease progression or treatment responses in AD, is usually acquired over several years. Hardware and software upgrades often occur during a multiple-year study period, resulting in data variability. This study aims to harmonize PET/MR DL-Dixon AC amid software and head coil updates and evaluate its accuracy and longitudinal consistency. MATERIALS AND METHODS Tri-modality PET/MR and CT images were obtained from 329 participants, with a subset of 38 undergoing tri-modality scans twice within approximately 3 years. Transfer learning was used to fine-tune DL-Dixon models on images from 2 scanner software versions (VB20P and VE11P) and 2 head coils (16-channel and 32-channel coils). The accuracy and longitudinal consistency of the DL-Dixon AC were evaluated. Power analyses were performed to estimate the sample size needed to detect various levels of longitudinal changes in the PET standardized uptake value ratio (SUVR). RESULTS The DL-Dixon method demonstrated high accuracy across all data, irrespective of scanner software versions and head coils. More than 95.6% of brain voxels showed less than 10% PET relative absolute error in all participants. The median [interquartile range] PET mean relative absolute error was 1.10% [0.93%, 1.26%], 1.24% [1.03%, 1.54%], 0.99% [0.86%, 1.13%] in the cortical summary region, and 1.04% [0.83%, 1.36%], 1.08% [0.84%, 1.34%], 1.05% [0.72%, 1.32%] in the cerebellum by using the DL-Dixon models for the VB20P 16-channel coil, VE11P 16-channel coil, and VE11P 32-channel coil data, respectively. The within-subject coefficient of variation and intraclass correlation coefficient of PET SUVR in the cortical regions were comparable between the DL-Dixon and CT AC. Power analysis indicated that similar numbers of participants would be needed to detect the same level of PET changes by using DL-Dixon and CT AC. CONCLUSIONS DL-Dixon exhibited excellent accuracy and longitudinal consistency across the 2 software versions and head coils, demonstrating its robustness for longitudinal PET/MR neuroimaging studies in AD.
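The accuracy and consistency metrics reported above are straightforward to compute once the DL-Dixon-AC and CT-AC PET volumes are coregistered. The sketch below is not the authors' code; array names, shapes, and the wCV definition are illustrative assumptions. It shows voxelwise relative absolute error against a CT-AC reference and a simple within-subject coefficient of variation for paired longitudinal SUVR values.

```python
# Minimal sketch (not the authors' code) of two evaluation metrics described above:
# voxelwise relative absolute error of DL-Dixon-AC PET against CT-AC PET, and a
# within-subject coefficient of variation (wCV) for longitudinal SUVR.
import numpy as np

def relative_absolute_error(pet_dl, pet_ct, brain_mask):
    """Percent relative absolute error per voxel, with CT-based AC as the reference."""
    err = np.abs(pet_dl - pet_ct) / np.maximum(pet_ct, 1e-6) * 100.0
    return err[brain_mask]

def within_subject_cv(suvr_visit1, suvr_visit2):
    """One common wCV definition (%) for paired SUVR values, one pair per participant."""
    pairs = np.stack([suvr_visit1, suvr_visit2], axis=1)
    sd = pairs.std(axis=1, ddof=1)      # within-subject SD per participant
    mean = pairs.mean(axis=1)
    return np.mean(sd / mean) * 100.0

# Example with synthetic data standing in for coregistered PET volumes
rng = np.random.default_rng(0)
pet_ct = rng.uniform(0.5, 2.0, size=(64, 64, 64))
pet_dl = pet_ct * rng.normal(1.0, 0.01, size=pet_ct.shape)   # ~1% deviation
mask = np.ones_like(pet_ct, dtype=bool)
rae = relative_absolute_error(pet_dl, pet_ct, mask)
print(f"voxels with <10% error: {np.mean(rae < 10) * 100:.1f}%")
```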
Affiliation(s)
- Chunwei Ying
- From the Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Yasheng Chen
- Department of Neurology (Y.C., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Yan Yan
- Department of Surgery (Y.Y., T.L.S.B.), Washington University School of Medicine, St. Louis, Missouri
- Shaney Flores
- From the Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Richard Laforest
- From the Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Tammie L S Benzinger
- From the Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Knight Alzheimer Disease Research Center (T.L.S.B.), Washington University School of Medicine, St. Louis, Missouri
- Department of Neurosurgery (T.L.S.B.), Washington University School of Medicine, St. Louis, Missouri
- Hongyu An
- From the Mallinckrodt Institute of Radiology (C.Y., S.F., R.L., T.L.S.B., H.A.), Washington University School of Medicine, St. Louis, Missouri
- Department of Neurology (Y.C., H.A.), Washington University School of Medicine, St. Louis, Missouri
2
Yang J, Afaq A, Sibley R, McMillan A, Pirasteh A. Deep learning applications for quantitative and qualitative PET in PET/MR: technical and clinical unmet needs. MAGMA 2024. [PMID: 39167304] [DOI: 10.1007/s10334-024-01199-y]
Abstract
We aim to provide an overview of technical and clinical unmet needs in deep learning (DL) applications for quantitative and qualitative PET in PET/MR, with a focus on attenuation correction, image enhancement, motion correction, kinetic modeling, and simulated data generation. (1) DL-based attenuation correction (DLAC) remains underexplored for pediatric whole-body PET/MR and for lung-specific applications due to data shortages and technical limitations. (2) DL-based image enhancement approximating MR-guided regularized reconstruction with a high-resolution MR prior has shown promise in enhancing PET image quality. However, its clinical value has not been thoroughly evaluated across various radiotracers, and applications outside the head may pose challenges due to motion artifacts. (3) Robust training for DL-based motion correction requires pairs of motion-corrupted and motion-corrected PET/MR data; however, such pairs are rare. (4) DL-based approaches can address the limitations of dynamic PET, such as long scan durations that may cause patient discomfort and motion, providing new research opportunities. (5) Monte-Carlo simulations using anthropomorphic digital phantoms can provide extensive datasets to address the shortage of clinical data. This summary of technical and clinical challenges and potential solutions may guide the research community toward the clinical translation of DL solutions.
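As a toy illustration of point (2), the sketch below applies MR-guided smoothing to a noisy PET surrogate, weighting each neighbour by intensity similarity in a co-registered high-resolution MR prior. This is only a post-hoc filter standing in for the MR-guided regularized reconstruction the review refers to, and all names and parameters are assumptions.

```python
# Toy illustration (not from the review) of using a high-resolution MR prior to
# enhance PET: cross-bilateral-style smoothing where weights come from MR
# intensity similarity, so edges present in the MR are preserved in the PET.
import numpy as np

def mr_guided_smooth(pet, mr, radius=2, sigma_mr=0.1):
    """Average PET neighbours with weights based on MR similarity (2-D, co-registered)."""
    out = np.zeros_like(pet, dtype=float)
    wsum = np.zeros_like(pet, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            pet_shift = np.roll(pet, (dy, dx), axis=(0, 1))
            mr_shift = np.roll(mr, (dy, dx), axis=(0, 1))
            w = np.exp(-((mr - mr_shift) ** 2) / (2 * sigma_mr ** 2))
            out += w * pet_shift
            wsum += w
    return out / wsum

rng = np.random.default_rng(1)
mr = np.kron(rng.uniform(size=(8, 8)), np.ones((16, 16)))   # piecewise-constant "anatomy"
pet = mr + rng.normal(0, 0.2, mr.shape)                     # noisy PET surrogate
pet_smoothed = mr_guided_smooth(pet, mr)
print(pet.std(), pet_smoothed.std())                        # noise is reduced
```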
Affiliation(s)
- Jaewon Yang
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Asim Afaq
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Robert Sibley
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Alan McMillan
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
- Ali Pirasteh
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
3
Law MWK, Tse MY, Ho LCC, Lau KK, Wong OL, Yuan J, Cheung KY, Yu SK. A study of Bayesian deep network uncertainty and its application to synthetic CT generation for MR-only radiotherapy treatment planning. Med Phys 2024;51:1244-1262. [PMID: 37665783] [DOI: 10.1002/mp.16666]
Abstract
BACKGROUND The use of synthetic computed tomography (CT) for radiotherapy treatment planning has received considerable attention because of the absence of ionizing radiation and close spatial correspondence to source magnetic resonance (MR) images, which have excellent tissue contrast. However, in an MR-only environment, little effort has been made to examine the quality of synthetic CT images without using the original CT images. PURPOSE To estimate synthetic CT quality without referring to original CT images, this study established the relationship between synthetic CT uncertainty and Bayesian uncertainty, and proposed a new Bayesian deep network for generating synthetic CT images and estimating synthetic CT uncertainty for MR-only radiotherapy treatment planning. METHODS AND MATERIALS A novel deep Bayesian network was formulated using probabilistic network weights. Two mathematical expressions were proposed to quantify the Bayesian uncertainty of the network and the synthetic CT uncertainty, which was closely related to the mean absolute error (MAE) in Hounsfield units (HU) of the synthetic CT. These uncertainties were examined to demonstrate the accuracy of representing the synthetic CT uncertainty using a Bayesian counterpart. We developed a hybrid Bayesian architecture and a new data normalization scheme, enabling the Bayesian network to generate both accurate synthetic CT and reliable uncertainty information when probabilistic weights were applied. The proposed method was evaluated in 59 patients (13/12/32/2 for training/validation/testing/uncertainty visualization) diagnosed with prostate cancer, who underwent same-day pelvic CT and MR acquisitions. To assess the relationship between Bayesian and synthetic CT uncertainties, linear and non-linear correlation coefficients were calculated on per-voxel, per-tissue, and per-patient bases. To assess CT number and dosimetric accuracy, the proposed method was compared with a commercially available atlas-based method (MRCAT) and a U-Net conditional-generative adversarial network (UcGAN). RESULTS The proposed model exhibited an MAE of 44.33 HU, outperforming UcGAN (52.51 HU) and MRCAT (54.87 HU). The gamma rate (2%/2 mm dose difference/distance to agreement) of the proposed model was 98.68%, comparable to that of UcGAN (98.60%) and MRCAT (98.56%). The per-patient and per-tissue linear correlation coefficients between the Bayesian and synthetic CT uncertainties ranged from 0.53 to 0.83, implying a moderate to strong linear correlation. Per-voxel correlation coefficients varied from -0.13 to 0.67 depending on the regions of interest evaluated, indicating tissue-dependent correlation. The R2 value for estimating MAE solely using Bayesian uncertainty was 0.98, suggesting that the uncertainty of the proposed model was an ideal candidate for predicting synthetic CT error without referring to the original CT. CONCLUSION This study established a relationship between the Bayesian model uncertainty and synthetic CT uncertainty. A novel Bayesian deep network was proposed to generate a synthetic CT and estimate its uncertainty. Various metrics were used to thoroughly examine the relationship between the uncertainties of the proposed Bayesian model and the generated synthetic CT. Compared with existing approaches, the proposed model showed comparable CT number and dosimetric accuracies. The experiments showed that the proposed Bayesian model was capable of producing accurate synthetic CT and provided an effective indicator of the uncertainty and error associated with synthetic CT in MR-only workflows.
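A minimal sketch of the general idea behind the study: estimate voxelwise uncertainty of a synthetic-CT network from repeated stochastic forward passes and check how it correlates with the absolute error against a reference CT. MC dropout is used here as a stand-in for the paper's probabilistic-weight Bayesian network, and the toy model, data, and shapes are assumptions.

```python
# Sketch (assumption: MC dropout standing in for probabilistic network weights)
# of estimating voxelwise uncertainty from stochastic forward passes and
# comparing it with the synthetic-CT absolute error.
import numpy as np
import torch
import torch.nn as nn

class TinySynthCTNet(nn.Module):
    """Toy MR->CT regressor with dropout kept stochastic at inference time."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

model = TinySynthCTNet()
model.train()                       # keep dropout active for sampling
mr = torch.rand(1, 1, 64, 64)       # placeholder MR slice

with torch.no_grad():
    samples = torch.stack([model(mr) for _ in range(20)])   # 20 stochastic passes
pred_mean = samples.mean(0)         # synthetic CT estimate
pred_sd = samples.std(0)            # voxelwise uncertainty estimate

ct_ref = torch.rand_like(pred_mean)                 # placeholder reference CT
abs_err = (pred_mean - ct_ref).abs()
rho = np.corrcoef(pred_sd.flatten().numpy(), abs_err.flatten().numpy())[0, 1]
print(f"per-voxel correlation between uncertainty and error: {rho:.2f}")
```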
Affiliation(s)
- Max Wai-Kong Law
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Mei-Yan Tse
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Leon Chin-Chak Ho
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Ka-Ki Lau
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Oi Lei Wong
- Research Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Jing Yuan
- Research Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Kin Yin Cheung
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Siu Ki Yu
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
4
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023;10:52. [PMID: 37695384] [PMCID: PMC10495310] [DOI: 10.1186/s40658-023-00569-0]
Abstract
Although thirteen years have passed since the installation of the first PET-MR system, these scanners constitute a very small proportion of the total number of hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have prompted a continuous effort by the scientific community to develop a robust and accurate alternative. These alternatives can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based, and (iv) machine learning-based attenuation correction, which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given an MR image of a new patient by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that predicts the required image from the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to the more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features. This up-to-date review goes through the literature on attenuation correction approaches in PET-MR after categorising them. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches along with a comparison of the four outlined categories.
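As an illustration of the first (segmentation-based) category, the sketch below assigns predefined 511 keV linear attenuation coefficients to tissue classes in a labelled MR-derived volume; the label convention and mu values are commonly used approximations, not taken from the review.

```python
# Minimal illustration of segmentation-based MR attenuation correction: each
# labelled tissue class is assigned a predefined linear attenuation
# coefficient at 511 keV (values are common approximations).
import numpy as np

# Assumed label convention: 0=air, 1=lung, 2=soft tissue, 3=fat, 4=bone
MU_511KEV_PER_CM = {0: 0.0, 1: 0.018, 2: 0.096, 3: 0.086, 4: 0.151}

def segmentation_to_mu_map(labels):
    """Map an integer tissue-label volume to a mu-map (cm^-1)."""
    mu = np.zeros(labels.shape, dtype=np.float32)
    for label, mu_value in MU_511KEV_PER_CM.items():
        mu[labels == label] = mu_value
    return mu

labels = np.random.default_rng(2).integers(0, 5, size=(32, 32, 32))
mu_map = segmentation_to_mu_map(labels)
print(mu_map.min(), mu_map.max())   # 0.0 ... 0.151 cm^-1
```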
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
5
Singh D, Monga A, de Moura HL, Zhang X, Zibetti MVW, Regatte RR. Emerging Trends in Fast MRI Using Deep-Learning Reconstruction on Undersampled k-Space Data: A Systematic Review. Bioengineering (Basel) 2023;10:1012. [PMID: 37760114] [PMCID: PMC10525988] [DOI: 10.3390/bioengineering10091012]
Abstract
Magnetic Resonance Imaging (MRI) is an essential medical imaging modality that provides excellent soft-tissue contrast and high-resolution images of the human body, providing detailed information on morphology, structural integrity, and physiologic processes. However, MRI exams usually require lengthy acquisition times. Methods such as parallel MRI and Compressive Sensing (CS) have significantly reduced the MRI acquisition time by acquiring less data through undersampling k-space. The state-of-the-art of fast MRI has recently been redefined by integrating Deep Learning (DL) models with these undersampled approaches. This Systematic Literature Review (SLR) comprehensively analyzes deep MRI reconstruction models, emphasizing the key elements of recently proposed methods and highlighting their strengths and weaknesses. This SLR involves searching and selecting relevant studies from various databases, including Web of Science and Scopus, followed by a rigorous screening and data extraction process using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It focuses on various techniques, such as residual learning, image representation using encoders and decoders, data-consistency layers, unrolled networks, learned activations, attention modules, plug-and-play priors, diffusion models, and Bayesian methods. This SLR also discusses the use of loss functions and training with adversarial networks to enhance deep MRI reconstruction methods. Moreover, we explore various MRI reconstruction applications, including non-Cartesian reconstruction, super-resolution, dynamic MRI, joint learning of reconstruction with coil sensitivity and sampling, quantitative mapping, and MR fingerprinting. This paper also addresses research questions, provides insights for future directions, and emphasizes robust generalization and artifact handling. Therefore, this SLR serves as a valuable resource for advancing fast MRI, guiding research and development efforts of MRI reconstruction for better image quality and faster data acquisition.
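Many of the unrolled networks covered by this review interleave a learned refinement step with a data-consistency layer. The sketch below (single-coil Cartesian case; all names are assumptions) shows the data-consistency operation: measured k-space samples are re-imposed on the current image estimate at the sampled locations.

```python
# Minimal sketch (assumed single-coil, Cartesian sampling) of a data-consistency
# step used between learned refinement stages in unrolled reconstruction networks.
import numpy as np

def data_consistency(image_est, kspace_meas, mask):
    """Replace k-space of the current estimate with measured data where sampled."""
    k_est = np.fft.fft2(image_est)
    k_dc = np.where(mask, kspace_meas, k_est)   # keep measured samples, fill the rest
    return np.fft.ifft2(k_dc)

rng = np.random.default_rng(3)
image_true = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.3               # 30% random undersampling
kspace_meas = np.fft.fft2(image_true) * mask

image_est = np.real(np.fft.ifft2(kspace_meas))  # zero-filled starting point
# ... a CNN would refine image_est here; we only re-apply data consistency
image_dc = data_consistency(image_est, kspace_meas, mask)
print(np.allclose(np.fft.fft2(image_dc)[mask], kspace_meas[mask]))  # True
```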
Affiliation(s)
- Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA; (A.M.); (H.L.d.M.); (X.Z.); (M.V.W.Z.)
- Ravinder R. Regatte
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA; (A.M.); (H.L.d.M.); (X.Z.); (M.V.W.Z.)
6
Xu J, Noo F. Convex optimization algorithms in medical image reconstruction-in the age of AI. Phys Med Biol 2022;67. [PMID: 34757943] [PMCID: PMC10405576] [DOI: 10.1088/1361-6560/ac3842]
Abstract
The past decade has seen the rapid growth of model-based image reconstruction (MBIR) algorithms, which are often applications or adaptations of convex optimization algorithms from the optimization community. We review some state-of-the-art algorithms that have enjoyed wide popularity in medical image reconstruction, emphasize known connections between different algorithms, and discuss practical issues such as computation and memory cost. More recently, deep learning (DL) has forayed into medical imaging, where the latest developments try to exploit the synergy between DL and MBIR to elevate MBIR's performance. We present existing approaches and emerging trends in DL-enhanced MBIR methods, with particular attention to the underlying role of convexity and convex algorithms in network architecture. We also discuss how convexity can be employed to improve the generalizability and representation power of DL networks in general.
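As a small worked example of the kind of convex algorithm the review covers, the sketch below runs proximal gradient descent (ISTA) on an L1-regularized least-squares problem, with a random system matrix standing in for a tomographic forward model; it is illustrative only and not taken from the paper.

```python
# Toy example of a convex algorithm widely used in MBIR: proximal gradient
# descent (ISTA) for min_x 0.5*||A x - y||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(4)
n, m = 200, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in system matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = 1.0   # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                     # gradient of the smooth term
    x = soft_threshold(x - step * grad, step * lam)

print(f"recovered support size: {(np.abs(x) > 1e-3).sum()}")
```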
Affiliation(s)
- Jingyan Xu
- Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Frédéric Noo
- Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, United States of America