1. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. [PMID: 38601888] [PMCID: PMC11004271] [DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
∙ MR-based treatment planning and synthetic CT generation techniques.
∙ Generation of synthetic CT images based on Cone Beam CT images.
∙ Low-dose CT to high-dose CT generation.
∙ Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCTs have achieved considerable popularity and showing the potential of this technology. In order to assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.

2. Attenuation correction and truncation completion for breast PET/MR imaging using deep learning. Phys Med Biol 2024; 69:045031. [PMID: 38252969] [DOI: 10.1088/1361-6560/ad2126]
Abstract
Objective. Simultaneous PET/MR scanners combine the high sensitivity of MR imaging with the functional imaging of PET. However, attenuation correction of breast PET/MR imaging is technically challenging. The purpose of this study is to establish a robust attenuation correction algorithm for breast PET/MR images that relies on deep learning (DL) to recreate the missing portions of the patient's anatomy (truncation completion), as well as to provide bone information for attenuation correction from only the PET data. Approach. Data acquired from 23 female subjects with invasive breast cancer scanned with 18F-fluorodeoxyglucose PET/CT and PET/MR localized to the breast region were used for this study. Three DL models, a U-Net with mean absolute error loss (DLMAE), a U-Net with mean squared error loss (DLMSE), and a U-Net with perceptual loss (DLPerceptual), were trained to predict synthetic CT images (sCT) for PET attenuation correction (AC) given non-attenuation-corrected (NAC) PET/MR images as inputs. The DL and Dixon-based sCT reconstructed PET images were compared against those reconstructed from CT images by calculating the percent error of the standardized uptake value (SUV) and conducting Wilcoxon signed rank statistical tests. Main results. sCT images from the DLMAE, DLMSE, and DLPerceptual models were similar in mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation. No significant difference in SUV was found between the PET images reconstructed using the DLMSE and DLPerceptual sCTs compared to the reference CT for AC in all tissue regions. All DL methods performed better than the Dixon-based method according to SUV analysis. Significance. A 3D U-Net with an MSE or perceptual loss model can be implemented into a reconstruction workflow, and the derived sCT images allow successful truncation completion and attenuation correction for breast PET/MR images.
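As a rough illustration of how such sCT images are compared against a reference CT, the three image-quality metrics named in this abstract (MAE, peak signal-to-noise ratio, and normalized cross-correlation) can be sketched in NumPy. The toy images and noise level below are illustrative placeholders, not data from the study:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images (e.g. sCT vs. reference CT)."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(a, b):
    """Normalized cross-correlation (Pearson correlation of voxel values)."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: a "synthetic CT" that is the reference plus small noise.
rng = np.random.default_rng(0)
ref = rng.uniform(-1000, 1000, size=(32, 32))     # HU-like reference values
syn = ref + rng.normal(0, 10, size=ref.shape)     # synthetic image with noise

print(mae(ref, syn), psnr(ref, syn, data_range=2000), ncc(ref, syn))
```

All three metrics are computed over the full image; in practice they would be restricted to a body mask.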

3. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145] [DOI: 10.1016/j.media.2023.103046]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.

4. Evaluating a radiotherapy deep learning synthetic CT algorithm for PET-MR attenuation correction in the pelvis. EJNMMI Phys 2024; 11:10. [PMID: 38282050] [DOI: 10.1186/s40658-024-00617-3]
Abstract
BACKGROUND Positron emission tomography-magnetic resonance (PET-MR) attenuation correction is challenging because the MR signal does not represent tissue density and conventional MR sequences cannot image bone. A novel zero echo time (ZTE) MR sequence has previously been developed which generates signal from cortical bone, with images acquired in 65 s. This has been combined with a deep learning model to generate a synthetic computed tomography (sCT) for MR-only radiotherapy. This study aimed to evaluate this algorithm for PET-MR attenuation correction in the pelvis. METHODS Ten patients being treated with ano-rectal radiotherapy received an [18F]FDG PET-MR in the radiotherapy position. Attenuation maps were generated from ZTE-based sCT (sCTAC) and the standard vendor-supplied MRAC. The radiotherapy planning CT scan was rigidly registered and cropped to generate a gold standard attenuation map (CTAC). PET images were reconstructed using each attenuation map and compared for standardized uptake value (SUV) measurement, automatic thresholded gross tumour volume (GTV) delineation and GTV metabolic parameter measurement. The last of these was assessed for clinical equivalence to CTAC using two one-sided paired t tests with a significance level corrected for multiple testing of [Formula: see text]. Equivalence margins of [Formula: see text] were used. RESULTS Mean whole-image SUV differences were -0.02% (sCTAC) compared to -3.0% (MRAC), with larger differences in the bone regions (-0.5% to -16.3%). There was no difference in thresholded GTVs, with Dice similarity coefficients [Formula: see text]. However, there were larger differences in GTV metabolic parameters. Mean differences to CTAC in [Formula: see text] were [Formula: see text] (± standard error, sCTAC) and [Formula: see text] (MRAC), and [Formula: see text] (sCTAC) and [Formula: see text] (MRAC) in [Formula: see text].
The sCTAC was statistically equivalent to CTAC within a [Formula: see text] equivalence margin for [Formula: see text] and [Formula: see text] ([Formula: see text] and [Formula: see text]), whereas the MRAC was not ([Formula: see text] and [Formula: see text]). CONCLUSION Attenuation correction using this radiotherapy ZTE-based sCT algorithm was substantially more accurate than current MRAC methods with only a 40 s increase in MR acquisition time. This did not impact tumour delineation but did significantly improve the accuracy of whole-image and tumour SUV measurements, which were clinically equivalent to CTAC. This suggests PET images reconstructed with sCTAC would enable accurate quantitative PET images to be acquired on a PET-MR scanner.
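The equivalence analysis described above (two one-sided paired t tests against a symmetric margin, i.e. TOST) can be sketched as follows. The margin, sample values, and critical value below are illustrative placeholders, not the study's numbers; the one-sided critical t value would be looked up in a t table for the chosen (multiplicity-corrected) alpha:

```python
import numpy as np

def paired_tost_tstats(x, y, margin):
    """Two one-sided paired t statistics for equivalence within +/- margin.

    Returns (t_lower, t_upper, dof). Equivalence is claimed when BOTH
    statistics exceed the one-sided critical t value for the chosen alpha.
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + margin) / se   # H0: mean difference <= -margin
    t_upper = (margin - d.mean()) / se   # H0: mean difference >= +margin
    return float(t_lower), float(t_upper), n - 1

# Toy paired SUV-like measurements that differ only by small noise.
rng = np.random.default_rng(1)
a = rng.uniform(2.0, 8.0, size=10)
b = a + rng.normal(0.0, 0.05, size=10)
tl, tu, dof = paired_tost_tstats(a, b, margin=0.5)
print(tl, tu, dof)
```

With both t statistics far above the one-sided critical value (1.833 for alpha = 0.05, 9 degrees of freedom), the toy data would be declared equivalent within the chosen margin.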

5. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384] [PMCID: PMC10495310] [DOI: 10.1186/s40658-023-00569-0]
Abstract
Although thirteen years have passed since the installation of the first PET-MR system, these scanners still constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community into a continuous effort to develop a robust and accurate alternative. These alternatives can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, with the last rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given the MR image of a new patient, by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that can predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features.
This up-to-date review surveys the literature on attenuation correction approaches in PET-MR after categorising them. The various approaches in each category are described and discussed. After exploring each category separately, a general overview of the current status and potential future approaches is given, along with a comparison of the four outlined categories.
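The first category (segmentation-based MRAC) reduces, at its core, to assigning a predefined linear attenuation coefficient to each segmented tissue class. A minimal sketch follows; the µ values are approximate textbook figures at 511 keV, included for illustration only and not taken from the review:

```python
import numpy as np

# Illustrative linear attenuation coefficients at 511 keV (cm^-1);
# approximate textbook values, not from the cited review.
MU_511 = {
    0: 0.0,      # air
    1: 0.018,    # lung
    2: 0.086,    # fat
    3: 0.096,    # soft tissue / water
    4: 0.13,     # cortical bone
}

def segmentation_mu_map(tissue_labels):
    """Build a mu-map by assigning a predefined coefficient per tissue class."""
    mu = np.zeros(tissue_labels.shape, dtype=float)
    for label, coeff in MU_511.items():
        mu[tissue_labels == label] = coeff
    return mu

labels = np.array([[0, 1, 3],
                   [3, 2, 4]])
print(segmentation_mu_map(labels))
```

The quantitative bias of this approach comes precisely from the fixed per-class values: any voxel whose true coefficient deviates from its class average (and any bone missed by segmentation, since standard MR sequences cannot image it) is corrected with the wrong µ.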

6. Pelvic PET/MR attenuation correction in the image space using deep learning. Front Oncol 2023; 13:1220009. [PMID: 37692851] [PMCID: PMC10484800] [DOI: 10.3389/fonc.2023.1220009]
Abstract
Introduction The five-class Dixon-based PET/MR attenuation correction (AC) model, which adds bone information to the four-class model by registering major bones from a bone atlas, has been shown to be error-prone. In this study, we introduce a novel method of accounting for bone in pelvic PET/MR AC by directly predicting the errors in the PET image space caused by the lack of bone in four-class Dixon-based attenuation correction. Methods A convolutional neural network was trained to predict the four-class AC error map relative to CT-based attenuation correction. Dixon MR images and the four-class attenuation correction µ-map were used as input to the models. CT and PET/MR examinations for 22 patients ([18F]FDG) were used for training and validation, and 17 patients were used for testing (6 [18F]PSMA-1007 and 11 [68Ga]Ga-PSMA-11). A quantitative analysis of PSMA uptake using voxel- and lesion-based error metrics was used to assess performance. Results In the voxel-based analysis, the proposed model reduced the median root mean squared percentage error from 12.1% and 8.6% for the four- and five-class Dixon-based AC methods, respectively, to 6.2%. The median absolute percentage error in the maximum standardized uptake value (SUVmax) in bone lesions improved from 20.0% and 7.0% for four- and five-class Dixon-based AC methods to 3.8%. Conclusion The proposed method reduces the voxel-based error and SUVmax errors in bone lesions when compared to the four- and five-class Dixon-based AC models.
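The voxel- and lesion-based error metrics quoted above (root-mean-squared percentage error over voxels, and the absolute percentage error of SUVmax within a lesion) can be sketched in NumPy. The toy volumes and the ~5% multiplicative error are invented for illustration:

```python
import numpy as np

def rmspe(pet_ac, pet_ref, mask=None, eps=1e-6):
    """Voxel-wise root-mean-squared percentage error vs. a reference PET."""
    if mask is None:
        mask = pet_ref > eps          # avoid dividing by near-zero voxels
    rel = (pet_ac[mask] - pet_ref[mask]) / pet_ref[mask] * 100.0
    return float(np.sqrt(np.mean(rel ** 2)))

def suvmax_abs_pct_error(lesion_ac, lesion_ref):
    """Absolute percentage error of SUVmax within a lesion mask."""
    return float(abs(lesion_ac.max() - lesion_ref.max()) / lesion_ref.max() * 100.0)

rng = np.random.default_rng(2)
ref = rng.uniform(0.5, 10.0, size=(16, 16))             # reference (CTAC) PET
ac = ref * rng.normal(1.0, 0.05, size=ref.shape)        # ~5% multiplicative error
print(rmspe(ac, ref), suvmax_abs_pct_error(ac, ref))
```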

7. Novel Multiparametric Magnetic Resonance Imaging-Based Deep Learning and Clinical Parameter Integration for the Prediction of Long-Term Biochemical Recurrence-Free Survival in Prostate Cancer after Radical Prostatectomy. Cancers (Basel) 2023; 15:3416. [PMID: 37444526] [DOI: 10.3390/cancers15133416]
Abstract
Radical prostatectomy (RP) is the main treatment of prostate cancer (PCa). Biochemical recurrence (BCR) following RP remains the first sign of aggressive disease; hence, better assessment of potential long-term post-RP BCR-free survival is crucial. Our study aimed to evaluate a combined clinical-deep learning (DL) model using multiparametric magnetic resonance imaging (mpMRI) for predicting long-term post-RP BCR-free survival in PCa. A total of 437 patients with PCa who underwent mpMRI followed by RP between 2008 and 2009 were enrolled; radiomics features were extracted from T2-weighted imaging, apparent diffusion coefficient maps, and contrast-enhanced sequences by manually delineating the index tumors. Deep features from the same set of imaging were extracted using a deep neural network based on a pretrained EfficientNet-B0. Here, we present a clinical model (six clinical variables), radiomics model, DL model (DLM-Deep feature), combined clinical-radiomics model (CRM-Multi), and combined clinical-DL model (CDLM-Deep feature) that were built using Cox models regularized with the least absolute shrinkage and selection operator. We compared their prognostic performances using stratified fivefold cross-validation. In a median follow-up of 61 months, 110/437 patients experienced BCR. CDLM-Deep feature achieved the best performance (hazard ratio [HR] = 7.72), followed by DLM-Deep feature (HR = 4.37) and RM-Multi (HR = 2.67). CRM-Multi performed moderately. Our results confirm the superior performance of our mpMRI-derived DL algorithm over conventional radiomics.

8. A survey on deep learning applied to medical images: from simple artificial neural networks to generative models. Neural Comput Appl 2023; 35:2291-2323. [PMID: 36373133] [PMCID: PMC9638354] [DOI: 10.1007/s00521-022-07953-4]
Abstract
Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of recent studies that apply some of the latest state-of-the-art models to medical images of different injured or diseased body areas and organs (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not yet familiar with deep learning consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we feature a summary of the current state of generative models in medical imaging, including key features, current challenges, and future research paths.

9. Deep learning and radiomics framework for PSMA-RADS classification of prostate cancer on PSMA PET. EJNMMI Res 2022; 12:76. [PMID: 36580220] [PMCID: PMC9800682] [DOI: 10.1186/s13550-022-00948-1]
Abstract
BACKGROUND Accurate classification of sites of interest on prostate-specific membrane antigen (PSMA) positron emission tomography (PET) images is an important diagnostic requirement for the differentiation of prostate cancer (PCa) from foci of physiologic uptake. We developed a deep learning and radiomics framework to perform lesion-level and patient-level classification on PSMA PET images of patients with PCa. METHODS This was an IRB-approved, HIPAA-compliant, retrospective study. Lesions on [18F]DCFPyL PET/CT scans were assigned to PSMA reporting and data system (PSMA-RADS) categories and randomly partitioned into training, validation, and test sets. The framework extracted image features, radiomic features, and tissue type information from a cropped PET image slice containing a lesion and performed PSMA-RADS and PCa classification. Performance was evaluated by assessing the area under the receiver operating characteristic curve (AUROC). A t-distributed stochastic neighbor embedding (t-SNE) analysis was performed. Confidence and probability scores were measured. Statistical significance was determined using a two-tailed t test. RESULTS PSMA PET scans from 267 men with PCa had 3794 lesions assigned to PSMA-RADS categories. The framework yielded AUROC values of 0.87 and 0.90 for lesion-level and patient-level PSMA-RADS classification, respectively, on the test set. The framework yielded AUROC values of 0.92 and 0.85 for lesion-level and patient-level PCa classification, respectively, on the test set. A t-SNE analysis revealed learned relationships between the PSMA-RADS categories and disease findings. Mean confidence scores reflected the expected accuracy and were significantly higher for correct predictions than for incorrect predictions (P < 0.05). Measured probability scores reflected the likelihood of PCa consistent with the PSMA-RADS framework. CONCLUSION The framework provided lesion-level and patient-level PSMA-RADS and PCa classification on PSMA PET images. 
The framework was interpretable and provided confidence and probability scores that may assist physicians in making more informed clinical decisions.
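AUROC, the headline metric of this study, equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal rank-based (Mann-Whitney) implementation, with made-up scores for illustration:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a random
    positive outscores a random negative. Ties are handled with midranks."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    order = scores.argsort()
    ranks = np.empty(len(scores), float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for v in np.unique(scores):          # assign midranks to tied scores
        tie = scores == v
        ranks[tie] = ranks[tie].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return float(u / (n_pos * n_neg))

# Perfectly separated toy scores -> AUROC of 1.0
print(auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))
```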

10. Recent topics of the clinical utility of PET/MRI in oncology and neuroscience. Ann Nucl Med 2022; 36:798-803. [PMID: 35896912] [DOI: 10.1007/s12149-022-01780-2]
Abstract
More than a decade has passed since the inline positron emission tomography (PET)/magnetic resonance imaging (MRI) system appeared in clinical practice. In this article, we review recently published articles about PET/MRI. In oncology, PET/MRI using fluorodeoxyglucose (FDG) has shown high diagnostic performance for staging rectal and breast cancers. Assessment of possible metastatic bone lesions is considered a particularly suitable application of FDG PET/MRI. Beyond FDG, PET/MRI with prostate-specific membrane antigen (PSMA)-targeted tracers or fibroblast activation protein inhibitors has been reported. In particular, PSMA PET/MRI has been reported to be a promising tool for determining appropriate biopsy sites. Independent of the tracer, the clinical application of artificial intelligence (AI) to images obtained by PET/MRI is one of the current topics in this field, with suggested clinical usefulness for differentiating breast lesions or grading prostate cancer. In addition, AI has been reported to be helpful for noise reduction in image reconstruction, which is promising for reducing radiation exposure. Furthermore, PET/MRI has a clinical role in neuroscience, including localization of the epileptogenic zone. PET/MRI with new PET tracers could be useful for differentiation among neurological disorders. Clinical applications of integrated PET/MRI in various fields are expected to be reported in the future.

11. Quantitative evaluation of a deep learning-based framework to generate whole-body attenuation maps using LSO background radiation in long axial FOV PET scanners. Eur J Nucl Med Mol Imaging 2022; 49:4490-4502. [PMID: 35852557] [PMCID: PMC9606046] [DOI: 10.1007/s00259-022-05909-3]
Abstract
Purpose Attenuation correction is a critically important step in data correction in positron emission tomography (PET) image formation. The current standard method involves conversion of Hounsfield units from a computed tomography (CT) image to construct attenuation maps (µ-maps) at 511 keV. In this work, the increased sensitivity of long axial field-of-view (LAFOV) PET scanners was exploited to develop and evaluate a deep learning (DL) and joint reconstruction-based method to generate µ-maps utilizing background radiation from lutetium-based (LSO) scintillators. Methods Data from 18 subjects were used to train convolutional neural networks to enhance initial µ-maps generated using a joint activity and attenuation reconstruction algorithm (MLACF) with transmission data from LSO background radiation acquired before and after the administration of 18F-fluorodeoxyglucose (18F-FDG) (µ-mapMLACF-PRE and µ-mapMLACF-POST, respectively). The deep learning-enhanced µ-maps (µ-mapDL-MLACF-PRE and µ-mapDL-MLACF-POST) were compared against MLACF-derived and CT-based maps (µ-mapCT). The performance of the method was also evaluated by assessing PET images reconstructed using each µ-map and computing volume-of-interest-based standard uptake value measurements and percentage relative mean error (rME) and relative mean absolute error (rMAE) relative to the CT-based method. Results No statistically significant difference was observed in rME values for µ-mapDL-MLACF-PRE and µ-mapDL-MLACF-POST in fat-based and water-based soft tissue as well as bone, suggesting that the presence of radiopharmaceutical activity in the body had negligible effects on the resulting µ-maps. The rMAE values of µ-mapDL-MLACF-POST were reduced by a factor of 3.3 on average compared to the rMAE of µ-mapMLACF-POST.
Similarly, the average rMAE values of PET images reconstructed using µ-mapDL-MLACF-POST (PETDL-MLACF-POST) were 2.6 times smaller than the average rMAE values of PET images reconstructed using µ-mapMLACF-POST. The mean absolute errors in SUV values of PETDL-MLACF-POST compared to PETCT were less than 5% in healthy organs, less than 7% in brain grey matter and 4.3% for all tumours combined. Conclusion We describe a deep learning-based method to accurately generate µ-maps from PET emission data and LSO background radiation, enabling CT-free attenuation and scatter correction in LAFOV PET scanners. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05909-3.
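The rME and rMAE metrics used here to compare each µ-map (or the PET image reconstructed from it) against the CT-based reference can be sketched as follows; the toy volumes and ~3% error level are invented for illustration:

```python
import numpy as np

def rme(img, ref, mask):
    """Percentage relative mean error within a volume of interest."""
    return float(np.mean((img[mask] - ref[mask]) / ref[mask]) * 100.0)

def rmae(img, ref, mask):
    """Percentage relative mean absolute error within a volume of interest."""
    return float(np.mean(np.abs(img[mask] - ref[mask]) / ref[mask]) * 100.0)

rng = np.random.default_rng(3)
ref = rng.uniform(1.0, 5.0, size=(8, 8, 8))            # CT-based reference
img = ref * (1.0 + rng.normal(0.0, 0.03, ref.shape))   # ~3% relative error
voi = ref > 2.0                                        # volume of interest
print(rme(img, ref, voi), rmae(img, ref, voi))
```

Note the distinction the abstract relies on: rME measures signed bias (positive and negative errors cancel), whereas rMAE measures error magnitude, so rMAE is always at least as large as |rME|.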

12. Radiomics and artificial intelligence in prostate cancer: new tools for molecular hybrid imaging and theragnostics. Eur Radiol Exp 2022; 6:27. [PMID: 35701671] [PMCID: PMC9198151] [DOI: 10.1186/s41747-022-00282-0]
Abstract
In prostate cancer (PCa), the use of new radiopharmaceuticals has improved the accuracy of diagnosis and staging, refined surveillance strategies, and introduced specific and personalized radioreceptor therapies. Nuclear medicine, therefore, holds great promise for improving the quality of life of PCa patients, through managing and processing a vast amount of molecular imaging data and beyond, using a multi-omics approach and improving patients’ risk-stratification for tailored medicine. Artificial intelligence (AI) and radiomics may allow clinicians to improve the overall efficiency and accuracy of using these “big data” in both the diagnostic and theragnostic field: from technical aspects (such as semi-automatization of tumor segmentation, image reconstruction, and interpretation) to clinical outcomes, improving a deeper understanding of the molecular environment of PCa, refining personalized treatment strategies, and increasing the ability to predict the outcome. This systematic review aims to describe the current literature on AI and radiomics applied to molecular imaging of prostate cancer.

13. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611] [DOI: 10.1007/s00259-022-05805-w]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs catalysed the research of their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in an experimental environment over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. The identification of relevant publications was performed via approved publication indexing websites and repositories. Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The research identified a hundred articles that address PET imaging applications such as attenuation correction, de-noising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented and accompanied by the corresponding research works. CONCLUSION GANs are rapidly employed in PET imaging tasks. However, specific limitations must be eliminated to reach their full potential and gain the medical community's trust in everyday clinical practice.
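At the core of every GAN application surveyed here is the adversarial objective: a discriminator trained to separate real from generated images, and a generator trained to fool it. A minimal sketch of the two (non-saturating) losses, with made-up discriminator outputs rather than a real network:

```python
import numpy as np

def bce(p, target, eps=1e-12):
    """Binary cross-entropy for discriminator probabilities in (0, 1)."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def gan_losses(d_real, d_fake):
    """Adversarial objectives given discriminator outputs:
    D is trained to score real images 1 and generated images 0;
    G is trained (non-saturating form) to make D score generated images 1."""
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    g_loss = bce(d_fake, 1.0)
    return d_loss, g_loss

# A discriminator that already separates real from fake well: D's loss is
# small, while the generator's loss is large (it is currently being caught).
d_loss, g_loss = gan_losses(np.array([0.9, 0.95]), np.array([0.1, 0.05]))
print(d_loss, g_loss)
```

In image-to-image settings such as sCT synthesis, this adversarial term is typically combined with a voxel-wise loss (e.g. L1) between the generated and reference images.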

14. Low-Dose 68Ga-PSMA Prostate PET/MRI Imaging Using Deep Learning Based on MRI Priors. Front Oncol 2022; 11:818329. [PMID: 35155207] [PMCID: PMC8825350] [DOI: 10.3389/fonc.2021.818329]
Abstract
Background 68Ga-prostate-specific membrane antigen (PSMA) PET/MRI has become an effective imaging method for prostate cancer. The purpose of this study was to use deep learning methods to perform low-dose image restoration on PSMA PET/MRI and to evaluate the effect of synthesis on the images and the medical diagnosis of patients at risk of prostate cancer. Methods We reviewed the 68Ga-PSMA PET/MRI data of 41 patients. The low-dose PET (LDPET) images of these patients were restored to full-dose PET (FDPET) images through a deep learning method based on MRI priors. The synthesized images were evaluated according to quantitative scores from nuclear medicine doctors and multiple imaging indicators, such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized mean square error (NMSE), and relative contrast-to-noise ratio (RCNR). Results The clinical quantitative scores of the FDPET images synthesized from 25%- and 50%-dose images based on MRI priors were 3.84±0.36 and 4.03±0.17, respectively, which were higher than the scores of the target images. Correspondingly, the PSNR, SSIM, NMSE, and RCNR values of the FDPET images synthesized from 50%-dose PET images based on MRI priors were 39.88±3.83, 0.896±0.092, 0.012±0.007, and 0.996±0.080, respectively. Conclusion According to a combination of quantitative scores from nuclear medicine doctors and evaluations with multiple image indicators, the synthesis of FDPET images based on MRI priors using 50%-dose PET images did not affect the clinical diagnosis of prostate cancer. Prostate cancer patients can therefore undergo 68Ga-PSMA prostate PET/MRI scans with radiation doses reduced by up to 50% through the use of deep learning methods to synthesize FDPET images.
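Two of the image-quality indicators reported above can be sketched in NumPy: NMSE, and RCNR under the common definition of RCNR as the CNR of the restored image relative to that of the full-dose reference (an assumption here, since the abstract does not spell out the formula). The lesion geometry and noise levels are invented for illustration:

```python
import numpy as np

def nmse(img, ref):
    """Normalized mean square error relative to the reference image."""
    return float(np.sum((img - ref) ** 2) / np.sum(ref ** 2))

def cnr(img, roi, bg):
    """Contrast-to-noise ratio between a lesion ROI and background."""
    return float((img[roi].mean() - img[bg].mean()) / img[bg].std())

def rcnr(img, ref, roi, bg):
    """Assumed definition: CNR of the restored image over CNR of the reference."""
    return cnr(img, roi, bg) / cnr(ref, roi, bg)

rng = np.random.default_rng(4)
ref = rng.normal(1.0, 0.1, size=(32, 32))          # full-dose reference
roi = np.zeros_like(ref, dtype=bool)
roi[12:20, 12:20] = True                           # lesion region
ref[roi] += 4.0                                    # hot lesion
syn = ref + rng.normal(0.0, 0.02, ref.shape)       # restored (synthesized) image
bg = ~roi
print(nmse(syn, ref), rcnr(syn, ref, roi, bg))
```

An RCNR near 1 (as reported, 0.996) indicates the restored image preserves the lesion contrast of the full-dose scan.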

15. Clinical Application of Artificial Intelligence in Positron Emission Tomography: Imaging of Prostate Cancer. PET Clin 2021; 17:137-143. [PMID: 34809863] [DOI: 10.1016/j.cpet.2021.09.002]
Abstract
PET imaging with targeted novel tracers has become common in the clinical management of prostate cancer, while the use of artificial intelligence (AI) in PET imaging is a relatively new approach. In this review article, we survey the current trends and categorize the available research into quantification of tumor burden within the organ, evaluation of metastatic disease, and translational/supplemental research that aims to improve other AI research efforts.
|
16
|
Artificial Intelligence-Based Data Corrections for Attenuation and Scatter in Positron Emission Tomography and Single-Photon Emission Computed Tomography. PET Clin 2021; 16:543-552. [PMID: 34364816 PMCID: PMC10562009 DOI: 10.1016/j.cpet.2021.06.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Recent developments in artificial intelligence (AI) technology have enabled new techniques that can improve attenuation and scatter correction in PET and single-photon emission computed tomography (SPECT). These technologies will enable accurate, quantitative imaging without the need to acquire a computed tomography image, greatly expanding the capability of PET/MR imaging, PET-only, and SPECT-only scanners. The use of AI to aid scatter correction will speed up image reconstruction and improve patient throughput. This article outlines these new tools, surveys contemporary implementations, and discusses their limitations.
|
17
|
Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209 DOI: 10.1002/mp.15150] [Citation(s) in RCA: 76] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 06/06/2021] [Accepted: 07/13/2021] [Indexed: 01/22/2023] Open
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical methods. We present here a systematic review of these methods, grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
|
18
|
Feasibility of Deep Learning-Guided Attenuation and Scatter Correction of Whole-Body 68Ga-PSMA PET Studies in the Image Domain. Clin Nucl Med 2021; 46:609-615. [PMID: 33661195 DOI: 10.1097/rlu.0000000000003585] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE This study evaluates the feasibility of direct scatter and attenuation correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning. METHODS Whole-body 68Ga-PSMA PET images of 399 subjects were used to train a residual deep learning model, taking PET non-attenuation-corrected images (PET-nonAC) as input and CT-based attenuation-corrected PET images (PET-CTAC) as target (reference). Forty-six whole-body 68Ga-PSMA PET images were used as an independent validation dataset. For validation, synthetic deep learning-based attenuation-corrected PET images were assessed against the corresponding PET-CTAC images as reference. The evaluation metrics included the mean absolute error (MAE) of the SUV, peak signal-to-noise ratio, and structural similarity index (SSIM) in the whole body, as well as in different regions of the body, namely, head and neck, chest, and abdomen and pelvis. RESULTS The deep learning-guided direct attenuation and scatter correction produced images of comparable visual quality to PET-CTAC images. It achieved an MAE, relative error (RE%), SSIM, and peak signal-to-noise ratio of 0.91 ± 0.29 (SUV), -2.46% ± 10.10%, 0.973 ± 0.034, and 48.171 ± 2.964, respectively, within whole-body images of the independent external validation dataset. The largest RE% was observed in the head and neck region (-5.62% ± 11.73%), although this region exhibited the highest SSIM (0.982 ± 0.024). The MAE (SUV) and RE% within the different regions of the body were less than 2.0% and 6%, respectively, indicating acceptable performance of the deep learning model. CONCLUSIONS This work demonstrated the feasibility of direct attenuation and scatter correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning with clinically tolerable errors. The technique has the potential to perform attenuation correction on stand-alone PET or PET/MRI systems.
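The MAE and RE% figures in this abstract follow the usual voxel-wise definitions over a body region; a minimal NumPy sketch under that assumption (the `mask` argument is an assumed convention for restricting to a region such as head and neck, and `eps` is an illustrative guard, not from the paper):

```python
import numpy as np

def mae_suv(pred, ref, mask=None):
    """Mean absolute error of SUV values, optionally inside a boolean region mask."""
    if mask is not None:
        pred, ref = pred[mask], ref[mask]
    return np.mean(np.abs(pred - ref))

def relative_error_pct(pred, ref, mask=None, eps=1e-8):
    """Mean signed relative error in percent; eps guards against zero-uptake voxels."""
    if mask is not None:
        pred, ref = pred[mask], ref[mask]
    return 100.0 * np.mean((pred - ref) / (ref + eps))
```

A signed RE% (as reported here) can be negative, indicating systematic underestimation, while the MAE is always non-negative.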
|
19
|
Abstract
BACKGROUND Radiotherapy (RT) planning for cervical cancer patients entails the acquisition of both Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Further, molecular imaging by Positron Emission Tomography (PET) could contribute to target volume delineation as well as treatment response monitoring. The objective of this study was to investigate the feasibility of a PET/MRI-only RT planning workflow of patients with cervical cancer. This includes attenuation correction (AC) of MRI hardware and dedicated positioning equipment as well as evaluating MRI-derived synthetic CT (sCT) of the pelvic region for positioning verification and dose calculation to enable a PET/MRI-only setup. MATERIAL AND METHODS 16 patients underwent PET/MRI using a dedicated RT setup after the routine CT (or PET/CT), including eight pilot patients and eight cervical cancer patients who were subsequently referred for RT. Data from 18 patients with gynecological cancer were added for training a deep convolutional neural network to generate sCT from Dixon MRI. The mean absolute difference between the dose distributions calculated on sCT and a reference CT was measured in the RT target volume and organs at risk. PET AC by sCT and a reference CT were compared in the tumor volume. RESULTS All patients completed the examination. sCT was inferred for each patient in less than 5 s. The dosimetric analysis of the sCT-based dose planning showed a mean absolute error (MAE) of 0.17 ± 0.12 Gy inside the planning target volumes (PTV). PET images reconstructed with sCT and CT had no significant difference in quantification for all patients. CONCLUSIONS These results suggest that multiparametric PET/MRI can be successfully integrated as a one-stop-shop in the RT workflow of patients with cervical cancer.
|
20
|
Evaluation of Deep Learning-based Approaches to Segment Bowel Air Pockets and Generate Pelvis Attenuation Maps from CAIPIRINHA-accelerated Dixon MR Images. J Nucl Med 2021; 63:468-475. [PMID: 34301782 DOI: 10.2967/jnumed.120.261032] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Revised: 06/06/2021] [Indexed: 11/16/2022] Open
Abstract
Attenuation correction (AC) remains a challenge in pelvis PET/MR imaging. In addition to segmentation/model-based approaches, deep learning methods have shown promise in synthesizing accurate pelvis attenuation maps (μ-maps). However, these methods often misclassify air pockets in the digestive tract, which can introduce bias in the reconstructed PET images. The aims of this work were to develop deep learning-based methods to automatically segment air pockets and generate pseudo-CT images from CAIPIRINHA-accelerated MR Dixon images. Methods: A convolutional neural network (CNN) was trained to segment air pockets using 3D CAIPIRINHA-accelerated MR Dixon datasets from 35 subjects and was evaluated against semi-automated segmentations. A separate CNN was trained to synthesize pseudo-CT μ-maps from the Dixon images. Its accuracy was evaluated by comparing the deep learning-, model-, and CT-based μ-maps using data from 30 of the subjects. Finally, the impact of the different μ-maps and air pocket segmentation methods on PET quantification was investigated. Results: Air pockets segmented using the CNN agreed well with the semi-automated segmentations, with a mean Dice similarity coefficient of 0.75. The volumetric similarity score between the two segmentations was 0.85 ± 0.14. The mean absolute relative changes (RCs) with respect to the CT-based μ-maps were 2.6% and 5.1% in the whole pelvis for the deep learning- and model-based μ-maps, respectively. The average RC between PET images reconstructed with the deep learning- and CT-based μ-maps was 2.6%. Conclusion: We presented a deep learning-based method to automatically segment air pockets from CAIPIRINHA-accelerated Dixon images with accuracy comparable to semi-automated segmentations. We also showed that μ-maps synthesized from CAIPIRINHA-accelerated Dixon images using a deep learning-based method are more accurate than those generated with the model-based approach available on the integrated PET/MRI scanner.
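The overlap measures reported here (Dice similarity coefficient, volumetric similarity) are standard for comparing binary segmentations; the NumPy sketch below uses their common definitions and is illustrative, not the paper's own evaluation code.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volumetric_similarity(a, b):
    """1 - ||A| - |B|| / (|A| + |B|): agreement in segmented volume only."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 - abs(int(a.sum()) - int(b.sum())) / denom if denom else 1.0
```

Note that volumetric similarity ignores spatial overlap entirely: two equal-volume masks score 1.0 even if they are disjoint, which is why it is reported alongside Dice rather than instead of it.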
|
21
|
Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021; 23:249-276. [PMID: 33797938 DOI: 10.1146/annurev-bioeng-082420-020343] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI), with machine learning and deep learning (ML/DL) algorithms at the helm, have stimulated the development of many applications involving AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade. It covers the basic principles of ML/DL techniques and the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. The review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
|
22
|
The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
|
23
|
Current Status and Future Perspectives of Artificial Intelligence in Magnetic Resonance Breast Imaging. CONTRAST MEDIA & MOLECULAR IMAGING 2020; 2020:6805710. [PMID: 32934610 PMCID: PMC7474774 DOI: 10.1155/2020/6805710] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 04/17/2020] [Accepted: 05/28/2020] [Indexed: 12/12/2022]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) have impacted many scientific fields, including biomedical imaging. Magnetic resonance imaging (MRI) is a well-established method in breast imaging with several indications, including screening, staging, and therapy monitoring. The rapid development and subsequent implementation of AI into clinical breast MRI has the potential to affect clinical decision-making, guide treatment selection, and improve patient outcomes. The goal of this review is to provide a comprehensive picture of the current status and future perspectives of AI in breast MRI. We will review DL applications and compare them to standard data-driven techniques. We will emphasize the importance of developing quantitative imaging biomarkers for precision medicine and the potential of breast MRI and DL in this context. Finally, we will discuss future challenges of DL applications for breast MRI and an AI-augmented clinical decision strategy.
|