1. Ma KC, Mena E, Lindenberg L, Lay NS, Eclarinal P, Citrin DE, Pinto PA, Wood BJ, Dahut WL, Gulley JL, Madan RA, Choyke PL, Turkbey IB, Harmon SA. Deep learning-based whole-body PSMA PET/CT attenuation correction utilizing Pix-2-Pix GAN. Oncotarget 2024; 15:288-300. [PMID: 38712741] [DOI: 10.18632/oncotarget.28583]
Abstract
PURPOSE The number of sequential PET/CT studies oncology patients can undergo during their treatment follow-up course is limited by radiation dose. We propose an artificial intelligence (AI) tool to produce attenuation-corrected PET (AC-PET) images from non-attenuation-corrected PET (NAC-PET) images, reducing the need for low-dose CT scans. METHODS A deep learning algorithm based on the 2D Pix-2-Pix generative adversarial network (GAN) architecture was developed from paired AC-PET and NAC-PET images. 18F-DCFPyL PSMA PET/CT studies from 302 prostate cancer patients were split into training, validation, and testing cohorts (n = 183, 60, and 59, respectively). Models were trained with two normalization strategies: Standard Uptake Value (SUV)-based and SUV-Nyul-based. Scan-level performance was evaluated by normalized mean square error (NMSE), mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Lesion-level analysis was performed in regions of interest prospectively identified by nuclear medicine physicians. SUV metrics were evaluated using the intraclass correlation coefficient (ICC), repeatability coefficient (RC), and linear mixed-effects modeling. RESULTS Median NMSE, MAE, SSIM, and PSNR were 13.26%, 3.59%, 0.891, and 26.82, respectively, in the independent test cohort. ICCs for SUVmax and SUVmean were 0.88 and 0.89, indicating high correlation between original and AI-generated quantitative imaging markers. Lesion location, density (Hounsfield units), and lesion uptake were all shown to impact the relative error in generated SUV metrics (all p < 0.05). CONCLUSION The Pix-2-Pix GAN model generates AC-PET images whose SUV metrics correlate highly with those of the original images. AI-generated PET images show clinical potential for reducing the need for CT scans for attenuation correction while preserving quantitative markers and image quality.
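For orientation, the scan-level agreement metrics this abstract reports (NMSE, MAE, PSNR) can be sketched in a few lines of NumPy. These are the standard definitions; the exact normalizations the paper uses (e.g. whether MAE is expressed relative to the image dynamic range) are assumptions here, and SSIM is omitted for brevity.

```python
import numpy as np

def scan_metrics(reference, generated, data_range=None):
    """Agreement metrics between a reference AC-PET volume and a
    generated one. Percentage normalizations are illustrative."""
    reference = np.asarray(reference, dtype=np.float64)
    generated = np.asarray(generated, dtype=np.float64)
    err = generated - reference
    mse = np.mean(err ** 2)
    # NMSE: squared error normalized by the reference signal power, in %
    nmse = 100.0 * np.sum(err ** 2) / np.sum(reference ** 2)
    # MAE expressed as a percentage of the reference dynamic range
    rng = data_range if data_range is not None else reference.max() - reference.min()
    mae = 100.0 * np.mean(np.abs(err)) / rng
    # PSNR in dB, relative to the same dynamic range
    psnr = 10.0 * np.log10(rng ** 2 / mse)
    return {"NMSE_pct": nmse, "MAE_pct": mae, "PSNR_dB": psnr}
```

Applied voxelwise over a whole scan, these reduce each AC-PET/NAC-PET-derived image pair to the single summary numbers quoted in the results.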
Affiliation(s)
- Kevin C Ma
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Esther Mena
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Liza Lindenberg
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Nathan S Lay
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Phillip Eclarinal
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Deborah E Citrin
  - Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Peter A Pinto
  - Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Bradford J Wood
  - Center for Interventional Oncology, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- William L Dahut
  - Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- James L Gulley
  - Center for Immuno-Oncology, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Ravi A Madan
  - Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Peter L Choyke
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Ismail Baris Turkbey
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Stephanie A Harmon
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
2. Joseph J, Biji I, Babu N, Pournami PN, Jayaraj PB, Puzhakkal N, Sabu C, Patel V. Fan beam CT image synthesis from cone beam CT image using nested residual UNet based conditional generative adversarial network. Phys Eng Sci Med 2023; 46:703-717. [PMID: 36943626] [DOI: 10.1007/s13246-023-01244-5]
Abstract
A radiotherapy technique called Image-Guided Radiation Therapy adopts frequent imaging throughout a treatment session. Fan Beam Computed Tomography (FBCT) based planning followed by Cone Beam Computed Tomography (CBCT) based radiation delivery drastically improved treatment accuracy. Further gains in radiation exposure and cost could be achieved if FBCT were replaced with CBCT. This paper proposes a Conditional Generative Adversarial Network (CGAN) for CBCT-to-FBCT synthesis. Specifically, a new architecture called Nested Residual UNet (NR-UNet) is introduced as the generator of the CGAN. A composite loss function, comprising adversarial loss, Mean Squared Error (MSE), and Gradient Difference Loss (GDL), is used with the generator. The CGAN exploits the inter-slice dependency in the input by taking three consecutive CBCT slices to generate one FBCT slice. The model is trained using Head-and-Neck (H&N) FBCT-CBCT images of 53 cancer patients. The synthetic images exhibited a Peak Signal-to-Noise Ratio of 34.04±0.93 dB, a Structural Similarity Index Measure of 0.9751±0.001, and a Mean Absolute Error of 14.81±4.70 HU. On average, the proposed model improves the Contrast-to-Noise Ratio fourfold over the input CBCT images. The model also minimised the MSE and alleviated blurriness. Compared to the CBCT-based plan, the synthetic image yields a treatment plan closer to the FBCT-based plan. The three-slice to single-slice translation captures three-dimensional contextual information in the input while avoiding the computational complexity of a fully three-dimensional image synthesis model. Furthermore, the results demonstrate that the proposed model is superior to the state-of-the-art methods.
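The composite generator loss described above (adversarial + MSE + GDL) can be sketched in NumPy. The GDL term follows its common definition (squared difference of absolute finite-difference gradients); the adversarial term's exact form and the relative weights are not given in the abstract, so the non-saturating form and unit weights below are assumptions.

```python
import numpy as np

def gradient_difference_loss(pred, target):
    """GDL: penalizes mismatch between the spatial gradients of the
    predicted and target slices, countering the blur a pure MSE loss
    tends to produce."""
    dyp, dxp = np.abs(np.diff(pred, axis=0)), np.abs(np.diff(pred, axis=1))
    dyt, dxt = np.abs(np.diff(target, axis=0)), np.abs(np.diff(target, axis=1))
    return np.mean((dyp - dyt) ** 2) + np.mean((dxp - dxt) ** 2)

def composite_loss(pred, target, d_fake_score, w_adv=1.0, w_mse=1.0, w_gdl=1.0):
    """Composite generator objective: adversarial + MSE + GDL.
    `d_fake_score` is the discriminator's probability that the
    synthesised slice is real; weights are illustrative."""
    adv = -np.log(d_fake_score + 1e-8)  # non-saturating adversarial term (assumed form)
    mse = np.mean((pred - target) ** 2)
    return w_adv * adv + w_mse * mse + w_gdl * gradient_difference_loss(pred, target)
```

When prediction and target match exactly, only the adversarial term survives; in training, the MSE and GDL terms anchor the generator to the paired FBCT slice while the adversarial term sharpens texture.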
Affiliation(s)
- Jiffy Joseph
  - Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Ivan Biji
  - Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Naveen Babu
  - Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- P N Pournami
  - Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- P B Jayaraj
  - Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Niyas Puzhakkal
  - Department of Medical Physics, MVR Cancer Centre & Research Institute, Poolacode, Calicut, Kerala, 673601, India
- Christy Sabu
  - Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Vedkumar Patel
  - Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
3. Leynes AP, Ahn S, Wangerin KA, Kaushik SS, Wiesinger F, Hope TA, Larson PEZ. Attenuation Coefficient Estimation for PET/MRI With Bayesian Deep Learning Pseudo-CT and Maximum-Likelihood Estimation of Activity and Attenuation. IEEE Trans Radiat Plasma Med Sci 2022; 6:678-689. [PMID: 38223528] [PMCID: PMC10785227] [DOI: 10.1109/trpms.2021.3118325]
Abstract
A major remaining challenge for magnetic resonance-based attenuation correction methods (MRAC) is their susceptibility to sources of magnetic resonance imaging (MRI) artifacts (e.g., implants and motion) and uncertainties due to the limitations of MRI contrast (e.g., accurate bone delineation and density, and separation of air and bone). We propose using a Bayesian deep convolutional neural network that, in addition to generating an initial pseudo-CT from MR data, also produces uncertainty estimates of the pseudo-CT to quantify the limitations of the MR data. These outputs are combined with the maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction, which uses the PET emission data to improve the attenuation maps. With the proposed approach, uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA), we demonstrate accurate estimation of PET uptake in pelvic lesions and show recovery of metal implants. In patients without implants, UpCT-MLAA had acceptable but slightly higher root-mean-squared error (RMSE) than zero-echo-time and Dixon deep pseudo-CT when compared to CTAC. In patients with metal implants, MLAA recovered the metal implant; however, anatomy outside the implant region was obscured by noise and crosstalk artifacts. Attenuation coefficients from the pseudo-CT derived from Dixon MRI were accurate in normal anatomy; however, the metal implant region was assigned the attenuation coefficients of air. UpCT-MLAA estimated attenuation coefficients of metal implants alongside accurate anatomic depiction outside the implant regions.
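A common way to obtain the kind of pseudo-CT uncertainty map this abstract describes is Monte Carlo sampling of a stochastic network (e.g. dropout kept active at inference). The sketch below assumes that scheme purely for illustration; the paper's actual Bayesian architecture and sampling procedure are not specified in the abstract, and `stochastic_forward` is a hypothetical stand-in for one stochastic forward pass.

```python
import numpy as np

def mc_pseudo_ct(stochastic_forward, mr_volume, n_samples=20):
    """Monte Carlo estimate of a pseudo-CT and a voxelwise uncertainty
    map from repeated stochastic forward passes over the same MR input.
    The per-voxel mean is the pseudo-CT estimate; the per-voxel standard
    deviation quantifies where the MR data underdetermine the CT value
    (e.g. near implants or air/bone interfaces)."""
    samples = np.stack([stochastic_forward(mr_volume) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

Downstream, such an uncertainty map can weight how strongly the pseudo-CT prior constrains the MLAA reconstruction in each region.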
Affiliation(s)
- Andrew P Leynes
  - Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158, USA
  - UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720, USA
- Sangtae Ahn
  - Biology and Physics Department, GE Research, Niskayuna, NY 12309, USA
- Sandeep S Kaushik
  - MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
  - Department of Computer Science, Technical University of Munich, 80333 Munich, Germany
  - Department of Quantitative Biomedicine, University of Zurich, 8057 Zurich, Switzerland
- Florian Wiesinger
  - MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Thomas A Hope
  - Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA, USA
  - Department of Radiology, San Francisco VA Medical Center, San Francisco, CA 94121, USA
- Peder E Z Larson
  - Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158, USA
  - UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720, USA
4. Ali H, Biswas MR, Mohsen F, Shah U, Alamgir A, Mousa O, Shah Z. The role of generative adversarial networks in brain MRI: a scoping review. Insights Imaging 2022; 13:98. [PMID: 35662369] [PMCID: PMC9167371] [DOI: 10.1186/s13244-022-01237-0]
Abstract
The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown considerable potential to generate synthetic MRI data that capture the distribution of real MRI. GANs are also widely used for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review aims to explore how GAN methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing the research and development of GAN-based approaches. This review followed the PRISMA-ScR guidelines for the study search and selection. The search was conducted on five popular scientific databases. The screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. This review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and to translate healthy images to diseased images, or CT to MRI and vice versa. The included studies showed that GANs can enhance the performance of AI methods applied to brain MRI data. However, more effort is needed to translate GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali
  - College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Md Rafiul Biswas
  - College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Farida Mohsen
  - College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Uzair Shah
  - College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Asma Alamgir
  - College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Osama Mousa
  - College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Zubair Shah
  - College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
5. Mecheter I, Abbod M, Zaidi H, Amira A. Brain MR images segmentation using 3D CNN with features recalibration mechanism for segmented CT generation. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.039]
6. Presotto L, Bettinardi V, Bagnalasta M, Scifo P, Savi A, Vanoli EG, Fallanca F, Picchio M, Perani D, Gianolli L, De Bernardi E. Evaluation of a 2D UNet-Based Attenuation Correction Methodology for PET/MR Brain Studies. J Digit Imaging 2022; 35:432-445. [PMID: 35091873] [PMCID: PMC9156597] [DOI: 10.1007/s10278-021-00551-1]
Abstract
Deep learning (DL) strategies applied to magnetic resonance (MR) images in positron emission tomography (PET)/MR can provide synthetic attenuation correction (AC) maps, and consequently PET images, that are more accurate than those from segmentation or atlas-registration strategies. As a first objective, we aim to investigate the best MR image to be used and the best point of the AC pipeline at which to insert the synthetic map. Sixteen patients underwent an 18F-fluorodeoxyglucose (FDG) PET/computed tomography (CT) and a PET/MR brain study on the same day. PET/CT images were reconstructed with attenuation maps obtained: (1) from CT (reference), (2) from MR with an atlas-based and a segmentation-based method, and (3) with a 2D UNet trained on MR image/attenuation map pairs. As for MR, T1-weighted and Zero Echo Time (ZTE) images were considered; as for attenuation maps, CTs and 511 keV low-resolution attenuation maps were assessed. As a second objective, we assessed the ability of DL strategies to provide proper AC maps in the presence of cranial anatomy alterations due to surgery. Three 11C-methionine (METH) PET/MR studies were considered. PET images were reconstructed with attenuation maps obtained: (1) from diagnostic coregistered CT (reference), (2) from MR with an atlas-based and a segmentation-based method, and (3) with 2D UNets trained on the sixteen anatomically normal FDG patients. Only UNets taking ZTE images as input were considered. FDG and METH PET images were quantitatively evaluated. For anatomically normal FDG patients, UNet AC models generally provide an uptake estimate with lower bias than atlas-based or segmentation-based methods. The intersubject average bias on images corrected with UNet AC maps is always smaller than 1.5%, except for AC maps generated on too coarse a grid. The intersubject bias variability is lowest (always below 2%) for UNet AC maps derived from ZTE images, and larger for the other methods. UNet models working on MR ZTE images and generating synthetic CT or 511 keV low-resolution attenuation maps therefore provide the best results in terms of both accuracy and variability. For the anatomically altered METH patients, DL properly reconstructs the anatomical alterations. Quantitative results on PET images confirm those found in anatomically normal FDG patients.
Affiliation(s)
- Luca Presotto
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Valentino Bettinardi
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Matteo Bagnalasta
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Paola Scifo
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Annarita Savi
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Federico Fallanca
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Maria Picchio
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
  - Vita-Salute San Raffaele University, Milan, Italy
- Daniela Perani
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
  - Vita-Salute San Raffaele University, Milan, Italy
- Luigi Gianolli
  - Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Elisabetta De Bernardi
  - School of Medicine and Surgery, University of Milano-Bicocca, via Cadore 48, 20900 Monza, Italy
  - Bicocca Bioinformatics Biostatistics and Bioimaging Centre - B4, University of Milano-Bicocca, Monza, Italy
7. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611] [DOI: 10.1007/s00259-022-05805-w]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in experimental settings over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. Relevant publications were identified via approved publication indexing websites and repositories; Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The search identified one hundred articles addressing PET imaging applications such as attenuation correction, de-noising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented alongside the corresponding research works. CONCLUSION GANs are rapidly being adopted for PET imaging tasks. However, specific limitations must be eliminated for them to reach their full potential and to gain the medical community's trust in everyday clinical practice.
8. Kläser K, Varsavsky T, Markiewicz P, Vercauteren T, Hammers A, Atkinson D, Thielemans K, Hutton B, Cardoso MJ, Ourselin S. Imitation learning for improved 3D PET/MR attenuation correction. Med Image Anal 2021; 71:102079. [PMID: 33951598] [PMCID: PMC7611431] [DOI: 10.1016/j.media.2021.102079]
Abstract
The quality of synthesised/pseudo Computed Tomography (pCT) images is commonly measured by intensity-wise similarity between the ground-truth CT and the pCT. However, when the pCT is used as an attenuation map (μ-map) for PET reconstruction in Positron Emission Tomography Magnetic Resonance Imaging (PET/MRI), minimising the error between pCT and CT neglects the main objective: predicting a pCT that, when used as a μ-map, reconstructs a pseudo-PET (pPET) as similar as possible to the gold-standard CT-derived PET reconstruction. This observation motivated us to propose a novel multi-hypothesis deep learning framework explicitly aimed at the PET reconstruction application. A convolutional neural network (CNN) synthesises pCTs by minimising a combination of the pixel-wise error between pCT and CT and a novel metric-loss that is itself defined by a CNN and aims to minimise the consequent PET residuals. Training is performed on a database of twenty 3D MR/CT/PET brain image pairs. Quantitative results on a fully independent dataset of twenty-three 3D MR/CT/PET image pairs show that the network synthesises more accurate pCTs, with a Mean Absolute Error of 110.98 ± 19.22 HU compared to a baseline CNN (172.12 ± 19.61 HU) and a multi-atlas propagation approach (153.40 ± 18.68 HU), and subsequently leads to a significant improvement in the PET reconstruction error (4.74% ± 1.52%, versus 13.72% ± 2.48% for the baseline and 6.68% ± 2.06% for multi-atlas propagation).
Affiliation(s)
- Kerstin Kläser
  - Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK
  - School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Thomas Varsavsky
  - Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK
  - School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Pawel Markiewicz
  - Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK
  - School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Tom Vercauteren
  - School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Alexander Hammers
  - School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
  - King's College London & GSTT PET Centre, St. Thomas Hospital, London, UK
- David Atkinson
  - Centre for Medical Imaging, University College London, London W1W 7TS, UK
- Kris Thielemans
  - Institute of Nuclear Medicine, University College London, London NW1 2BU, UK
- Brian Hutton
  - Institute of Nuclear Medicine, University College London, London NW1 2BU, UK
- M J Cardoso
  - School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Sébastien Ourselin
  - School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
9. Lee JS. A Review of Deep-Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3009269]
10. Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]