1
Zhang Q, Zhou C, Zhang X, Fan W, Zheng H, Liang D, Hu Z. Realization of high-end PET devices that assist conventional PET devices in improving image quality via diffusion modeling. EJNMMI Phys 2024; 11:103. [PMID: 39692956 DOI: 10.1186/s40658-024-00706-3]
Abstract
PURPOSE This study aimed to implement high-end positron emission tomography (PET) equipment to assist conventional PET equipment in improving image quality via a distribution learning-based diffusion model. METHODS A diffusion model was first trained on a dataset of high-quality (HQ) images acquired by a high-end PET device (uEXPLORER scanner), and the quality of the conventional PET images was later improved on the basis of this trained model built on null-space constraints. Data from 180 patients were used in this study. Among them, 137 patients who underwent total-body PET/computed tomography scans via a uEXPLORER scanner at the Sun Yat-sen University Cancer Center were retrospectively enrolled. The datasets of 50 of these patients were used to train the diffusion model. The remaining 87 cases and 43 PET images acquired from The Cancer Imaging Archive were used to quantitatively and qualitatively evaluate the proposed method. The nonlocal means (NLM) method, UNet and a generative adversarial network (GAN) were used as reference methods. RESULTS The incorporation of HQ imaging priors derived from high-end devices into the diffusion model through network training can enable the sharing of information between scanners, thereby pushing the limits of conventional scanners and improving their imaging quality. The quantitative results showed that the diffusion model based on null-space constraints produced better and more stable results than those of the methods based on NLM, UNet and the GAN and is well suited for cross-center and cross-device imaging. CONCLUSION A diffusion model based on null-space constraints is a flexible framework that can effectively utilize the prior information provided by high-end scanners to improve the image quality of conventional scanners in cross-center and cross-device scenarios.
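For orientation, the sketch below illustrates the null-space-constraint idea this abstract relies on: with a degradation model y = A x, the measured conventional-scanner image pins down the range-space component of the estimate, and the diffusion prior trained on uEXPLORER images supplies only the null-space component. The pooling operator, its pseudo-inverse, and the sampling loop mentioned in the comments are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code) of the null-space constraint: any estimate is
# split into a range-space part A_pinv(y) fixed by the measurement and a null-space
# part (I - A_pinv A) x_gen supplied by the generative prior. A here is a toy
# 2x2 mean-pooling operator with an exact right inverse.
import numpy as np

def A(x):                         # toy degradation: 2x2 mean pooling
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def A_pinv(y):                    # right inverse of A: replicate each value 2x2
    return np.kron(y, np.ones((2, 2)))

def apply_null_space_constraint(x_gen, y):
    """Pin the range-space component of the generated image to the measurement y and
    keep only the null-space component of x_gen (the detail the prior adds)."""
    return A_pinv(y) + x_gen - A_pinv(A(x_gen))

# Inside a reverse-diffusion loop this projection would be applied to the network's
# predicted clean image at every step, before the next posterior sampling step.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_gen = rng.random((8, 8))            # stand-in for a diffusion-model sample
    y = rng.random((4, 4))                # stand-in for the conventional PET image
    x_hat = apply_null_space_constraint(x_gen, y)
    assert np.allclose(A(x_hat), y)       # data consistency: A(x_hat) == y exactly
```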
Affiliation(s)
- Qiyang Zhang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Xu Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Hairong Zheng
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
2
Cui J, Luo Y, Chen D, Shi K, Su X, Liu H. IE-CycleGAN: improved cycle consistent adversarial network for unpaired PET image enhancement. Eur J Nucl Med Mol Imaging 2024; 51:3874-3887. [PMID: 39042332 DOI: 10.1007/s00259-024-06823-6]
Abstract
PURPOSE Technological advances in instruments have greatly promoted the development of positron emission tomography (PET) scanners. State-of-the-art PET scanners such as uEXPLORER can collect PET images of significantly higher quality. However, these scanners are not currently available in most local hospitals due to the high cost of manufacturing and maintenance. Our study aims to convert low-quality PET images acquired by common PET scanners into images of comparable quality to those obtained by state-of-the-art scanners, without the need for paired low- and high-quality PET images. METHODS In this paper, we proposed an improved CycleGAN (IE-CycleGAN) model for unpaired PET image enhancement. The proposed method is based on CycleGAN, with a correlation coefficient loss and a patient-specific prior loss added to constrain the structure of the generated images. Furthermore, we defined a normalX-to-advanced training strategy to enhance the generalization ability of the network. The proposed method was validated on unpaired uEXPLORER datasets and Biograph Vision local hospital datasets. RESULTS For the uEXPLORER dataset, the proposed method achieved better results than non-local means filtering (NLM), block-matching and 3D filtering (BM3D), and deep image prior (DIP), and results comparable to those of Unet (supervised) and CycleGAN (supervised). For the Biograph Vision local hospital datasets, the proposed method achieved higher contrast-to-noise ratios (CNR) and tumor-to-background SUVmax ratios (TBR) than NLM, BM3D, and DIP. In addition, the proposed method showed higher contrast, SUVmax, and TBR than Unet (supervised) and CycleGAN (supervised) when applied to images from different scanners. CONCLUSION The proposed unpaired PET image enhancement method outperforms NLM, BM3D, and DIP. Moreover, it performs better than Unet (supervised) and CycleGAN (supervised) when implemented on local hospital datasets, which demonstrates its excellent generalization ability.
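As a point of reference, one plausible form of the correlation coefficient loss named above is sketched below; the exact formulation, its weighting, and the patient-specific prior loss used in IE-CycleGAN are not reproduced here.

```python
# Hedged sketch: a Pearson-correlation-based structural loss between the generator
# input (low-quality PET) and output, minimized when the two are linearly correlated.
import numpy as np

def correlation_coefficient_loss(x, g):
    """1 - Pearson correlation between generator input x and output g."""
    xc, gc = x - x.mean(), g - g.mean()
    rho = (xc * gc).sum() / (np.sqrt((xc ** 2).sum() * (gc ** 2).sum()) + 1e-12)
    return 1.0 - rho

# A total generator objective would then look roughly like (weights are assumptions):
#   L = L_adversarial + lam_cyc * L_cycle
#       + lam_cc * correlation_coefficient_loss(x, G(x)) + lam_prior * L_patient_prior
```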
Affiliation(s)
- Jianan Cui
- The Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Yi Luo
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Donghe Chen
- The PET Center, Department of Nuclear Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, Zhejiang, China
- Kuangyu Shi
- The Department of Nuclear Medicine, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
- Xinhui Su
- The PET Center, Department of Nuclear Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, Zhejiang, China
- Huafeng Liu
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
3
Zhang Q, Hu Y, Zhou C, Zhao Y, Zhang N, Zhou Y, Yang Y, Zheng H, Fan W, Liang D, Hu Z. Reducing pediatric total-body PET/CT imaging scan time with multimodal artificial intelligence technology. EJNMMI Phys 2024; 11:1. [PMID: 38165551 PMCID: PMC10761657 DOI: 10.1186/s40658-023-00605-z]
Abstract
OBJECTIVES This study aims to decrease the scan time and enhance image quality in pediatric total-body PET imaging by utilizing multimodal artificial intelligence techniques. METHODS A total of 270 pediatric patients who underwent total-body PET/CT scans with a uEXPLORER at the Sun Yat-sen University Cancer Center were retrospectively enrolled. 18F-fluorodeoxyglucose (18F-FDG) was administered at a dose of 3.7 MBq/kg with an acquisition time of 600 s. Short-term scan PET images (acquired within 6, 15, 30, 60 and 150 s) were obtained by truncating the list-mode data. A three-dimensional (3D) neural network was developed with a residual network as the basic structure, fusing low-dose CT images as prior information, which were fed to the network at different scales. The short-term PET images and low-dose CT images were processed by the multimodal 3D network to generate full-length, high-dose PET images. The nonlocal means method and the same 3D network without the fused CT information were used as reference methods. The performance of the network model was evaluated by quantitative and qualitative analyses. RESULTS Multimodal artificial intelligence techniques can significantly improve PET image quality. When fused with prior CT information, the anatomical information of the images was enhanced, and 60 s of scan data produced images of quality comparable to that of the full-time data. CONCLUSION Multimodal artificial intelligence techniques can effectively improve the quality of pediatric total-body PET/CT images acquired using ultrashort scan times. This has the potential to decrease the use of sedation, enhance guardian confidence, and reduce the probability of motion artifacts.
Affiliation(s)
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yingying Hu
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Yumo Zhao
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yun Zhou
- United Imaging Healthcare Group, Central Research Institute, Shanghai, 201807, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
4
Lim H, Dewaraja YK, Fessler JA. SPECT reconstruction with a trained regularizer using CT-side information: Application to 177Lu SPECT imaging. IEEE Trans Comput Imaging 2023; 9:846-856. [PMID: 38516350 PMCID: PMC10956080 DOI: 10.1109/tci.2023.3318993]
Abstract
Improving low-count SPECT can shorten scans and support pre-therapy theranostic imaging for dosimetry-based treatment planning, especially with radionuclides like 177Lu known for low photon yields. Conventional methods often underperform in low-count settings, highlighting the need for trained regularization in model-based image reconstruction. This paper introduces a trained regularizer for SPECT reconstruction that leverages segmentation based on CT imaging. The regularizer incorporates CT-side information via a segmentation mask from a pre-trained network (nnUNet). In this proof-of-concept study, we used patient studies with 177Lu DOTATATE to train and tested with phantom and patient datasets, simulating pre-therapy imaging conditions. Our results show that the proposed method outperforms both standard unregularized EM algorithms and conventional regularization with CT-side information. Specifically, our method achieved marked improvements in activity quantification, noise reduction, and root mean square error. The enhanced low-count SPECT approach has promising implications for theranostic imaging, post-therapy imaging, whole body SPECT, and reducing SPECT acquisition times.
Affiliation(s)
- Hongki Lim
- Department of Electronic Engineering, Inha University, Incheon, 22212, South Korea
- Yuni K Dewaraja
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
- Jeffrey A Fessler
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
5
Li S, Gong K, Badawi RD, Kim EJ, Qi J, Wang G. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction. IEEE Trans Med Imaging 2023; 42:785-796. [PMID: 36288234 PMCID: PMC10081957 DOI: 10.1109/tmi.2022.3217543]
Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information in the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach for further improvement of the kernel method would be adding an explicit regularization, which however leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network. To solve the maximum-likelihood neural network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations and real patient data have demonstrated that the neural KEM can outperform existing KEM and deep image prior methods.
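The alternating structure described in the abstract (a KEM update from the projection data, then an image-domain network-fitting step) can be sketched as follows; P, K, y, and fit_network are placeholders, and this is a simplified illustration rather than the authors' code.

```python
# Hedged sketch of one outer iteration of a neural-KEM-style scheme.
# P: system matrix, K: kernel matrix, y: measured sinogram (all dense numpy arrays).
import numpy as np

def kem_step(alpha, K, P, y, n_sub_iters=1):
    """Kernelized EM: standard MLEM applied to the coefficient image alpha with the
    effective forward model y ~ Poisson(P K alpha)."""
    PK = P @ K
    sens = PK.T @ np.ones(P.shape[0]) + 1e-12     # sensitivity image
    for _ in range(n_sub_iters):
        ybar = PK @ alpha + 1e-12
        alpha = alpha / sens * (PK.T @ (y / ybar))
    return alpha

def deep_learning_step(alpha_em, fit_network):
    """Image-domain step: fit a CNN so its output matches the EM-updated coefficient
    image, then use the fitted network output as the new alpha (callable assumed)."""
    return fit_network(alpha_em)

# One outer iteration:
#   alpha = kem_step(alpha, K, P, y)
#   alpha = deep_learning_step(alpha, fit_network)
#   x = K @ alpha                                  # reconstructed PET image
```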
6
Dynamic PET image reconstruction incorporating a median nonlocal means kernel method. Comput Biol Med 2021; 139:104713. [PMID: 34768034 DOI: 10.1016/j.compbiomed.2021.104713]
Abstract
In dynamic positron emission tomography (PET) imaging, the reconstructed image of a single frame often exhibits high noise due to the limited counting statistics of the projection data. This study proposed a median nonlocal means (MNLM)-based kernel method for dynamic PET image reconstruction. The kernel matrix is derived from the median nonlocal means of pre-reconstructed composite images. The PET image intensities in all voxels are then modeled as the kernel matrix multiplied by coefficients and incorporated into the forward model of the PET projection data, and the kernel coefficients are estimated by the maximum likelihood method. Using simulated low-count dynamic data of the Zubal head phantom, the quantitative performance of the proposed MNLM kernel method was investigated and compared with the maximum-likelihood method, the conventional kernel method with and without a median filter, and the nonlocal means (NLM) kernel method. Simulation results showed that the MNLM kernel method achieved visual and quantitative accuracy improvements (in terms of the ensemble mean squared error, bias versus variance, and contrast versus noise performance). Especially for frame 2, which had the lowest count level of any single frame, the MNLM kernel method achieved a lower ensemble mean squared error (10.43%) than the NLM kernel method (13.68%), the conventional kernel method with and without a median filter (11.88% and 23.50%), and the MLEM algorithm (24.77%). A study on real low-dose 18F-FDG rat data also showed that the MNLM kernel method outperformed the other methods in visual and quantitative accuracy (in terms of regional noise versus intensity mean performance).
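A minimal sketch of the kernelized forward model x = K alpha that this method builds on follows; in the MNLM variant the voxel features would come from median-nonlocal-means-processed composite frames, a detail only indicated in the comments, and the neighbor count and bandwidth are illustrative assumptions.

```python
# Hedged sketch of building a row-normalized kernel matrix from composite-frame
# features (generic kernel method; not the authors' exact MNLM construction).
import numpy as np
from scipy.spatial import cKDTree

def build_kernel_matrix(features, n_neighbors=20, sigma=1.0):
    """features: (n_voxels, n_composite_frames), e.g. each voxel's intensities in a few
    pre-reconstructed composite images (median-NLM-smoothed in the MNLM variant).
    Returns a dense kernel matrix for illustration only."""
    n = features.shape[0]
    tree = cKDTree(features)
    _, idx = tree.query(features, k=n_neighbors)
    K = np.zeros((n, n))
    for i in range(n):
        d2 = ((features[i] - features[idx[i]]) ** 2).sum(axis=1)
        K[i, idx[i]] = np.exp(-d2 / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)        # rows sum to one

# Reconstruction then estimates alpha by maximum likelihood with forward model
# y ~ Poisson(P @ K @ alpha) and reports x = K @ alpha as the image.
```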
7
Xie Z, Li T, Zhang X, Qi W, Asma E, Qi J. Anatomically aided PET image reconstruction using deep neural networks. Med Phys 2021; 48:5244-5258. [PMID: 34129690 PMCID: PMC8510002 DOI: 10.1002/mp.15051]
Abstract
PURPOSE The developments of PET/CT and PET/MR scanners provide opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate complementary features into an iterative reconstruction framework to improve PET image reconstruction. METHODS We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for the integration of anatomical information from CT images. One was a multichannel CNN, which treated PET and CT volumes as separate channels of the input. The other was a multibranch CNN, which implemented separate encoders for PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method was compared with existing methods, including maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction and a CNN-based deep penalty method with and without anatomical guidance. RESULTS Reconstructed images showed that the proposed constrained ML reconstruction approach produced higher quality images than the competing methods. The tumors in the lung region have higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. The image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than in all the competing methods at a matched lesion contrast. CONCLUSIONS The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
Affiliation(s)
- Zhaoheng Xie
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Tiantian Li
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Xuezhu Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Wenyuan Qi
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
- Evren Asma
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA, USA
8
Gao Y, Zhu Y, Bilgel M, Ashrafinia S, Lu L, Rahmim A. Voxel-based partial volume correction of PET images via subtle MRI guided non-local means regularization. Phys Med 2021; 89:129-139. [PMID: 34365117 DOI: 10.1016/j.ejmp.2021.07.028]
Abstract
PURPOSE Positron emission tomography (PET) images tend to be significantly degraded by the partial volume effect (PVE) resulting from the limited spatial resolution of the reconstructed images. Our purpose is to propose a partial volume correction (PVC) method to tackle this issue. METHODS In the present work, we explore a voxel-based PVC method under the least squares (LS) framework employing anatomical non-local means (NLMA) regularization. The well-known non-local means (NLM) filter exploits the high degree of information redundancy that typically exists in images and is usually used to reduce image noise directly by replacing each voxel intensity with a weighted average of its non-local neighbors. Here we explore NLM as a regularization term within an iterative-deconvolution model to perform PVC. Further, an anatomically guided version of NLM was proposed that incorporates MRI information into NLM to improve resolution and suppress image noise. The proposed approach makes subtle use of the accompanying MRI information to define a more appropriate search space within the prior model. To optimize the regularized LS objective function, we used the Gauss-Seidel (GS) algorithm with the one-step-late (OSL) technique. RESULTS With the introduction of NLMA, both the visual and the quantitative results improved. On visual inspection, NLMA reduced noise compared with the other PVC methods. This was also confirmed by the bias-noise curves, where NLMA gave a better bias-noise trade-off than the non-MRI-guided PVC framework and the other PVC methods. CONCLUSIONS Our method was evaluated on amyloid brain PET imaging using the BrainWeb phantom and in vivo human data, and was compared with other PVC methods. Overall, we demonstrated the value of introducing subtle MRI guidance into the regularization process, with the proposed NLMA method yielding promising visual as well as quantitative performance improvements.
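The sketch below illustrates, under simplified assumptions, how an MRI-guided NLM term can regularize iterative deconvolution for PVC; it uses a plain gradient scheme with one-step-late weights rather than the exact Gauss-Seidel/OSL solver of the paper, and the PSF model, the crude NLM weighting, and all parameter values are placeholders.

```python
# Hedged sketch: deconvolution-based PVC with an anatomical-NLM penalty.
import numpy as np
from scipy.ndimage import gaussian_filter

def psf(x, fwhm_vox=2.5):                       # assumed scanner blur model H
    return gaussian_filter(x, sigma=fwhm_vox / 2.355)

def nlm_smooth(x, mri, h_pet=0.05, h_mri=0.05, radius=2):
    """Crude anatomical NLM: weights combine PET and MRI intensity similarity over a
    small neighborhood (patch similarity and restricted search windows are omitted)."""
    out = np.zeros_like(x); wsum = np.zeros_like(x)
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                xs = np.roll(x, (dz, dy, dx), axis=(0, 1, 2))
                ms = np.roll(mri, (dz, dy, dx), axis=(0, 1, 2))
                w = np.exp(-((x - xs) ** 2) / h_pet ** 2 - ((mri - ms) ** 2) / h_mri ** 2)
                out += w * xs; wsum += w
    return out / np.maximum(wsum, 1e-12)

def pvc_nlma(pet, mri, beta=0.3, step=0.5, n_iter=30):
    x = pet.copy()
    for _ in range(n_iter):
        grad_fid = psf(psf(x) - pet)            # gradient of 0.5*||H x - pet||^2 (H symmetric)
        grad_reg = x - nlm_smooth(x, mri)       # one-step-late NLM penalty gradient
        x = np.maximum(x - step * (grad_fid + beta * grad_reg), 0.0)
    return x
```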
Affiliation(s)
- Yuanyuan Gao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China; Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA
- Yansong Zhu
- Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA; Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Departments of Radiology and Physics, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
- Murat Bilgel
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD 20892, USA
- Saeed Ashrafinia
- Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA; Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Lijun Lu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Arman Rahmim
- Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA; Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Departments of Radiology and Physics, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
9
Wang X, Zhou L, Wang Y, Jiang H, Ye H. Improved low-dose positron emission tomography image reconstruction using deep learned prior. Phys Med Biol 2021; 66. [PMID: 33882466 DOI: 10.1088/1361-6560/abfa36]
Abstract
Positron emission tomography (PET) is a promising medical imaging technology that provides non-invasive and quantitative measurement of biochemical processes in the human body. PET image reconstruction is challenging due to the ill-posedness of the inverse problem. With the lower statistics caused by the limited number of detected photons, low-dose PET imaging leads to noisy reconstructed images with considerable quality degradation. Recently, deep neural networks (DNN) have been widely used in computer vision tasks and have attracted growing interest in medical imaging. In this paper, we proposed a maximum a posteriori (MAP) reconstruction algorithm incorporating a convolutional neural network (CNN) representation in the formation of the prior. Rather than using the CNN in post-processing, we embedded the neural network in the reconstruction framework for image representation. Using simulated data, we first quantitatively evaluated the proposed method in terms of the noise-bias tradeoff and compared it with the filtered maximum likelihood (ML), the conventional MAP, and the CNN post-processing methods. In addition to the simulation experiments, the proposed method was further quantitatively validated on acquired patient brain and body data in terms of the tradeoff between noise and contrast. The results demonstrated that the proposed CNN-MAP method improved the noise-bias tradeoff compared with the filtered ML, the conventional MAP, and the CNN post-processing methods in the simulation study. For the patient study, the CNN-MAP method achieved a better noise-contrast tradeoff than the other three methods. These quantitative enhancements indicate the potential value of the proposed CNN-MAP method in low-dose PET imaging.
Affiliation(s)
- Xinhui Wang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China; Zhejiang MinFound Intelligent Healthcare Technology Co. Ltd., Hangzhou, People's Republic of China
- Long Zhou
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China; Zhejiang MinFound Intelligent Healthcare Technology Co. Ltd., Hangzhou, People's Republic of China
- Yaofa Wang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China; College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
- Haochuan Jiang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China
- Hongwei Ye
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China; Zhejiang MinFound Intelligent Healthcare Technology Co. Ltd., Hangzhou, People's Republic of China
10
Bland J, Mehranian A, Belzunce MA, Ellis S, da Costa-Luis C, McGinnity CJ, Hammers A, Reader AJ. Intercomparison of MR-informed PET image reconstruction methods. Med Phys 2019; 46:5055-5074. [PMID: 31494961 PMCID: PMC6899618 DOI: 10.1002/mp.13812]
Abstract
PURPOSE Numerous image reconstruction methodologies for positron emission tomography (PET) have been developed that incorporate magnetic resonance (MR) imaging structural information, producing reconstructed images with improved suppression of noise and reduced partial volume effects. However, the influence of MR structural information also increases the possibility of suppression or bias of structures present only in the PET data (PET-unique regions). To address this, further developments for MR-informed methods have been proposed, for example, through inclusion of the current reconstructed PET image, alongside the MR image, in the iterative reconstruction process. In the present work, a number of kernel and maximum a posteriori (MAP) methodologies are compared, with the aim of identifying methods that enable a favorable trade-off between the suppression of noise and the retention of unique features present in the PET data. METHODS The reconstruction methods investigated were: the MR-informed conventional and spatially compact kernel methods, referred to as KEM and KEM largest value sparsification (LVS) respectively; the MR-informed Bowsher and Gaussian MR-guided MAP methods; and the PET-MR-informed hybrid kernel and anato-functional MAP methods. The trade-off between improving the reconstruction of the whole brain region and the PET-unique regions was investigated for all methods in comparison with postsmoothed maximum likelihood expectation maximization (MLEM), evaluated in terms of structural similarity index (SSIM), normalized root mean square error (NRMSE), bias, and standard deviation. Both simulated BrainWeb (10 noise realizations) and real [18F]fluorodeoxyglucose (FDG) three-dimensional datasets were used. The real [18F]FDG dataset was augmented with simulated tumors to allow comparison of the reconstruction methodologies for the case of known regions of PET-MR discrepancy and evaluated at full counts (100%) and at a reduced (10%) count level. RESULTS For the high-count simulated and real data studies, the anato-functional MAP method performed better than the other methods under investigation (MR-informed, PET-MR-informed and postsmoothed MLEM) in terms of achieving the best trade-off for the reconstruction of the whole brain and PET-unique regions, assessed in terms of the SSIM, NRMSE, and bias vs standard deviation. The inclusion of PET information in the anato-functional MAP method enables the reconstruction of PET-unique regions to attain similarly low levels of bias as unsmoothed MLEM, while moderately improving the whole brain image quality for low levels of regularization. However, for low-count simulated datasets the anato-functional MAP method performs poorly, due to the inclusion of noisy PET information in the regularization term. For the low-count simulated dataset, KEM LVS and, to a lesser extent, HKEM performed better than the other methods under investigation in terms of achieving the best trade-off for the reconstruction of the whole brain and PET-unique regions, assessed in terms of the SSIM, NRMSE, and bias vs standard deviation. CONCLUSION For the reconstruction of noisy data, multiple MR-informed methods produce a favorable whole brain vs PET-unique region trade-off in terms of the image quality metrics of SSIM and NRMSE, comfortably outperforming the whole-image denoising of postsmoothed MLEM.
Affiliation(s)
- James Bland
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK
- Abolfazl Mehranian
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK
- Martin A. Belzunce
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK
- Sam Ellis
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK
- Casper da Costa-Luis
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK
- Colm J. McGinnity
- King's College London & Guy's and St Thomas' PET Centre, St Thomas' Hospital, London SE1 7EH, UK
- Alexander Hammers
- King's College London & Guy's and St Thomas' PET Centre, St Thomas' Hospital, London SE1 7EH, UK
- Andrew J. Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK
11
Zhang W, Gao J, Yang Y, Liang D, Liu X, Zheng H, Hu Z. Image reconstruction for positron emission tomography based on patch-based regularization and dictionary learning. Med Phys 2019; 46:5014-5026. [PMID: 31494950 PMCID: PMC6899708 DOI: 10.1002/mp.13804]
Abstract
PURPOSE Positron emission tomography (PET) is an important tool for nuclear medical imaging. It has been widely used in clinical diagnosis, scientific research, and drug testing. PET is a kind of emission computed tomography, and its basic imaging principle is to use the positron annihilation radiation generated by radionuclide decay to generate gamma photon images. However, in practical applications, due to the low gamma photon counting rate, limited acquisition time, inconsistent detector characteristics, and electronic noise, measured PET projection data often contain considerable noise, which results in ill-conditioned PET images. Therefore, determining how to obtain high-quality reconstructed PET images suitable for clinical applications is a valuable research topic. In this context, this paper presents an image reconstruction algorithm based on patch-based regularization and dictionary learning (DL) called the patch-DL algorithm. Compared to other algorithms, the proposed algorithm can retain more image details while suppressing noise. METHODS Expectation-maximization (EM)-like image updating, image smoothing, pixel-by-pixel image fusion, and DL are the four steps of the proposed reconstruction algorithm. We used a two-dimensional (2D) brain phantom to evaluate the proposed algorithm by simulating sinograms that contained random Poisson noise. We also quantitatively compared the patch-DL algorithm with a pixel-based algorithm, a patch-based algorithm, and an adaptive dictionary learning (AD) algorithm. RESULTS Through computer simulations, we demonstrated the advantages of the patch-DL method over the pixel-, patch-, and AD-based methods in terms of the tradeoff between noise suppression and detail retention in reconstructed images. Quantitative analysis shows that the proposed method performs better statistically [according to the mean absolute error (MAE), correlation coefficient (CORR), and root mean square error (RMSE)] in the considered regions of interest (ROIs) at two simulated count levels. Additionally, to analyze whether the results of these methods differ significantly, we used one-way analysis of variance (ANOVA) to calculate the corresponding P values. Most P values were less than 0.01, and the remainder were between 0.01 and 0.05. Therefore, our method achieves a better quantitative performance than traditional methods. CONCLUSIONS The results show that the proposed algorithm has the potential to improve the quality of PET image reconstruction. Since the proposed algorithm was validated only with simulated 2D data, it still needs to be further validated with real three-dimensional data. In the future, we intend to explore GPU parallelization technology to further improve the computational efficiency and shorten the computation time.
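A compact sketch of the four-step loop named in METHODS (EM-like update, smoothing, pixel-by-pixel fusion, and dictionary learning) is shown below; the fusion weight, patch size, and dictionary settings are illustrative assumptions rather than the authors' choices, and the smoothing and DL steps are folded into one sparse-coding denoiser.

```python
# Hedged sketch of a patch-DL-style reconstruction loop (not the authors' code).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def em_update(x, P, y):
    sens = P.T @ np.ones(P.shape[0]) + 1e-12
    return x / sens * (P.T @ (y / (P @ x + 1e-12)))

def dictionary_denoise(img, patch=(6, 6), n_atoms=128):
    patches = extract_patches_2d(img, patch)
    flat = patches.reshape(patches.shape[0], -1)
    mean = flat.mean(axis=1, keepdims=True)
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                                     transform_n_nonzero_coefs=4, random_state=0)
    code = dl.fit(flat - mean).transform(flat - mean)
    recon = (code @ dl.components_ + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(recon, img.shape)

def patch_dl_reconstruct(P, y, shape, n_iter=20, w=0.7):
    x = np.ones(np.prod(shape))
    for _ in range(n_iter):
        x_em = em_update(x, P, y)                                # 1) EM-like update
        x_dl = dictionary_denoise(x_em.reshape(shape)).ravel()   # 2, 4) smoothing + DL
        x = w * x_dl + (1.0 - w) * x_em                          # 3) pixel-by-pixel fusion
    return np.maximum(x, 0).reshape(shape)
```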
Affiliation(s)
- Wanhong Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; College of Electrical and Information Engineering, Hunan University, Changsha, 410082, China
- Juan Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
12
Gong K, Catana C, Qi J, Li Q. PET Image Reconstruction Using Deep Image Prior. IEEE Trans Med Imaging 2019; 38:1655-1665. [PMID: 30575530 PMCID: PMC6584077 DOI: 10.1109/tmi.2018.2888491]
Abstract
Recently, deep neural networks have been widely and successfully applied in computer vision tasks and have attracted growing interest in medical imaging. One barrier for the application of deep neural networks to medical imaging is the need for large amounts of prior training pairs, which is not always feasible in clinical practice. This is especially true for medical image reconstruction problems, where raw data are needed. Inspired by the deep image prior framework, in this paper, we proposed a personalized network training method where no prior training pairs are needed, but only the patient's own prior information. The network is updated during the iterative reconstruction process using the patient-specific prior information and measured data. We formulated the maximum-likelihood estimation as a constrained optimization problem and solved it using the alternating direction method of multipliers algorithm. Magnetic resonance imaging guided positron emission tomography reconstruction was employed as an example to demonstrate the effectiveness of the proposed framework. Quantification results based on simulation and real data show that the proposed reconstruction framework can outperform Gaussian post-smoothing and anatomically guided reconstructions using the kernel method or the neural-network penalty.
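The constrained formulation described here (maximum likelihood subject to x = f_theta(z), solved with the alternating direction method of multipliers) can be sketched as follows; the network-fitting step is left as a callable, and the penalty parameter and sub-iteration counts are assumptions rather than the authors' settings.

```python
# Hedged sketch of an ADMM splitting for deep-image-prior PET reconstruction:
# maximize the Poisson likelihood subject to x = f_theta(z), with z the patient's own
# prior image (e.g. MRI). P: system matrix, y: sinogram.
import numpy as np

def x_update(x, P, y, net_out, mu, rho, n_em=2):
    """Penalized ML update of x using an EM surrogate for the Poisson likelihood plus
    the quadratic ADMM penalty 0.5*rho*||x - net_out + mu||^2."""
    sens = P.T @ np.ones(P.shape[0]) + 1e-12
    for _ in range(n_em):
        x_em = x / sens * (P.T @ (y / (P @ x + 1e-12)))
        b = net_out - mu - sens / rho
        # closed-form positive root of the per-voxel quadratic from the surrogate
        x = 0.5 * (b + np.sqrt(b ** 2 + 4.0 * sens * x_em / rho))
    return x

def dip_admm(P, y, z, fit_network, n_outer=30, rho=1.0):
    n_vox = P.shape[1]
    x = np.ones(n_vox); mu = np.zeros(n_vox); net_out = np.ones(n_vox)
    for _ in range(n_outer):
        x = x_update(x, P, y, net_out, mu, rho)
        net_out = fit_network(z, target=x + mu)   # theta-update: fit f_theta(z) to x + mu
        mu = mu + x - net_out                     # dual update
    return net_out
```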
13
Abstract
PET images often suffer poor signal-to-noise ratio (SNR). Our objective is to improve the SNR of PET images using a deep neural network (DNN) model and MRI images without requiring any higher SNR PET images in training. Our proposed DNN model consists of three modified U-Nets (3U-net). The PET training input data and targets were reconstructed using filtered-backprojection (FBP) and maximum likelihood expectation maximization (MLEM), respectively. FBP reconstruction was used because of its computational efficiency so that the trained network not only removes noise, but also accelerates image reconstruction. Digital brain phantoms downloaded from BrainWeb were used to evaluate the proposed method. Poisson noise was added into sinogram data to simulate a 6 min brain PET scan. Attenuation effect was included and corrected before the image reconstruction. Extra Poisson noise was introduced to the training inputs to improve the network denoising capability. Three independent experiments were conducted to examine the reproducibility. A lesion was inserted into testing data to evaluate the impact of mismatched MRI information using the contrast-to-noise ratio (CNR). The negative impact on noise reduction was also studied when miscoregistration between PET and MRI images occurs. Compared with 1U-net trained with only PET images, training with PET/MRI decreased the mean squared error (MSE) by 31.3% and 34.0% for 1U-net and 3U-net, respectively. The MSE reduction is equivalent to an increase in the count level by 2.5 folds and 2.9 folds for 1U-net and 3U-net, respectively. Compared with the MLEM images, the lesion CNR was improved 2.7 folds and 1.4 folds for 1U-net and 3U-net, respectively. The results show that the proposed method could improve the PET SNR without having higher SNR PET images.
Affiliation(s)
- Chih-Chieh Liu
- Department of Biomedical Engineering, University of California, Davis, CA, United States of America
14
High Temporal-Resolution Dynamic PET Image Reconstruction Using a New Spatiotemporal Kernel Method. IEEE Trans Med Imaging 2019; 38:664-674. [PMID: 30222553 PMCID: PMC6422751 DOI: 10.1109/tmi.2018.2869868]
Abstract
Current clinical dynamic PET has an effective temporal resolution of 5-10 seconds, which can be adequate for traditional compartmental modeling but is inadequate for exploiting the benefit of more advanced tracer kinetic modeling for the characterization of diseases (e.g., cancer and heart disease). There is a need to improve dynamic PET to allow fine temporal sampling of 1-2 seconds. However, the reconstruction of these short time frames from tomographic data is extremely challenging, as the count level of each frame is very low and high noise is present in both the spatial and temporal domains. Previously, the kernel framework was developed and demonstrated as a statistically efficient approach to utilizing image priors for low-count PET image reconstruction. Nevertheless, the existing kernel methods mainly explore spatial correlations in the data and have only a limited ability to suppress temporal noise. In this paper, we propose a new kernel method that extends the previous spatial kernel method to the general spatiotemporal domain. The new kernelized model encodes both spatial and temporal correlations obtained from image prior information and is incorporated into the PET forward projection model to improve the maximum likelihood (ML) image reconstruction. Computer simulations and an application to a real patient scan have shown that the proposed approach can achieve effective noise reduction in both the spatial and temporal domains and outperform the spatial kernel method and the conventional ML reconstruction method for high temporal-resolution dynamic PET imaging.
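A minimal sketch of a separable spatiotemporal kernel model, one way to realize the idea described above, follows; the Gaussian feature kernels and the frame-by-frame evaluation are illustrative, not the authors' exact construction.

```python
# Hedged sketch: a separable spatiotemporal kernel, viewed as K = K_t (x) K_s acting on
# a space-time coefficient array, so that each dynamic frame is a spatially and
# temporally weighted combination of coefficients.
import numpy as np

def gaussian_kernel_matrix(features, sigma=1.0):
    """Dense kernel K[i, j] = exp(-||f_i - f_j||^2 / (2 sigma^2)), row-normalized;
    features would come from prior (composite) images or temporal basis functions."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def apply_spatiotemporal_kernel(alpha, K_s, K_t):
    """alpha: (n_frames, n_voxels) coefficients; returns x = K_t @ alpha @ K_s.T, the
    frame-by-frame evaluation of the Kronecker-structured kernel (row-major convention)."""
    return K_t @ (alpha @ K_s.T)

# The dynamic forward model then becomes y_m ~ Poisson(P @ x[m]) for each frame m,
# and ML estimation is carried out for alpha rather than for x directly.
```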
15
Yang B, Ying L, Tang J. Artificial Neural Network Enhanced Bayesian PET Image Reconstruction. IEEE Trans Med Imaging 2018; 37:1297-1309. [PMID: 29870360 PMCID: PMC6132251 DOI: 10.1109/tmi.2018.2803681]
Abstract
In positron emission tomography (PET) image reconstruction, the Bayesian framework with various regularization terms has been implemented to constrain the radiotracer distribution. Varying the regularizing weight of a maximum a posteriori (MAP) algorithm specifies a lower bound of the tradeoff between variance and spatial resolution measured from the reconstructed images. The purpose of this paper is to build a patch-based image enhancement scheme to reduce the size of the unachievable region below this bound and thus to quantitatively improve Bayesian PET imaging. We cast the proposed enhancement as a regression problem that models a highly nonlinear and spatially varying mapping between the reconstructed image patches and an enhanced image patch. An artificial neural network model, a multilayer perceptron (MLP) with backpropagation, was used to solve this regression problem by learning from examples. Using the BrainWeb phantoms, we simulated brain PET data at different count levels for different subjects with and without lesions. The MLP was trained using the image patches reconstructed with a MAP algorithm with different regularization parameters for one normal subject at a certain count level. To evaluate the performance of the trained MLP, reconstructed images from other simulations and two patient brain PET imaging datasets were processed. In every test case, we demonstrate that the MLP enhancement technique improves the noise and bias tradeoff compared with MAP reconstruction using different regularizing weights, thus decreasing the size of the unachievable region defined by the MAP algorithm in the variance/resolution plane.
Affiliation(s)
- Bao Yang
- Department of Electrical and Computer Engineering, Oakland University, Rochester, MI, USA
- Leslie Ying
- Departments of Biomedical Engineering and Electrical Engineering, The State University of New York at Buffalo, Buffalo, NY, USA
16
Xu Z, Gao M, Papadakis GZ, Luna B, Jain S, Mollura DJ, Bagci U. Joint solution for PET image segmentation, denoising, and partial volume correction. Med Image Anal 2018; 46:229-243. [PMID: 29627687 PMCID: PMC6080255 DOI: 10.1016/j.media.2018.03.007]
Abstract
Segmentation, denoising, and partial volume correction (PVC) are three major processes in the quantification of uptake regions in post-reconstruction PET images. These problems are conventionally addressed by independent steps. In this study, we hypothesize that these three processes are dependent; therefore, jointly solving them can provide optimal support for quantification of the PET images. To achieve this, we utilize interactions among these processes when designing solutions for each challenge. We also demonstrate that segmentation can help in denoising and PVC by locally constraining the smoothness and correction criteria. For denoising, we adapt generalized Anscombe transformation to Gaussianize the multiplicative noise followed by a new adaptive smoothing algorithm called regional mean denoising. For PVC, we propose a volume consistency-based iterative voxel-based correction algorithm in which denoised and delineated PET images guide the correction process during each iteration precisely. For PET image segmentation, we use affinity propagation (AP)-based iterative clustering method that helps the integration of PVC and denoising algorithms into the delineation process. Qualitative and quantitative results, obtained from phantoms, clinical, and pre-clinical data, show that the proposed framework provides an improved and joint solution for segmentation, denoising, and partial volume correction.
Affiliation(s)
- Ziyue Xu
- Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA
- Mingchen Gao
- Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA
- Georgios Z Papadakis
- Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA
- Brian Luna
- University of California at Irvine, Irvine, CA, USA
- Sanjay Jain
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Daniel J Mollura
- Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA
- Ulas Bagci
- University of Central Florida, Orlando, FL, USA
17
Gong K, Cheng-Liao J, Wang G, Chen KT, Catana C, Qi J. Direct Patlak Reconstruction From Dynamic PET Data Using the Kernel Method With MRI Information Based on Structural Similarity. IEEE Trans Med Imaging 2018; 37:955-965. [PMID: 29610074 PMCID: PMC5933939 DOI: 10.1109/tmi.2017.2776324]
Abstract
Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combine the PET and MRI information in kernel learning to address the issue of potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method that combines PET temporal information and MRI spatial information adaptively based on the structure similarity index has the best performance in terms of noise reduction and resolution improvement.
18
Boudjelal A, Messali Z, Elmoataz A, Attallah B. Improved Simultaneous Algebraic Reconstruction Technique Algorithm for Positron-Emission Tomography Image Reconstruction via Minimizing the Fast Total Variation. J Med Imaging Radiat Sci 2017; 48:385-393. [PMID: 31047474 DOI: 10.1016/j.jmir.2017.09.005]
Abstract
CONTEXT There has been considerable progress in the instrumentation for data measurement and in computer methods for generating images from measured PET data. These computer methods have been developed to solve the inverse problem, also known as the "image reconstruction from projections" problem. AIM In this paper, we propose a modified Simultaneous Algebraic Reconstruction Technique (SART) algorithm to improve the quality of image reconstruction by incorporating total variation (TV) minimization into the iterative SART algorithm. METHODOLOGY The SART updates the estimated image by forward projecting the initial image onto the sinogram space. Then, the difference between the estimated sinogram and the given sinogram is back-projected onto the image domain, and this difference is subtracted from the initial image to obtain a corrected image. Fast total variation (FTV) minimization is then applied to the image obtained in the SART step, and each subsequent SART step starts from the result of the previous FTV update. The SART and FTV minimization steps run iteratively in an alternating manner. Fifty iterations were applied to the SART algorithm used in each of the regularization-based methods. In addition to the conventional SART algorithm, spatial smoothing was used to enhance the quality of the image. All images were sized at 128 × 128 pixels. RESULTS The proposed algorithm successfully accomplished edge preservation. Detailed scrutiny revealed differences among the reconstruction algorithms; for example, the SART and the proposed FTV-SART algorithms effectively preserved the hot-lesion edges, whereas artifacts and deviations were more likely to occur with the ART algorithm than with the other algorithms. CONCLUSIONS Compared to the standard SART, the proposed algorithm is more robust in removing background noise while preserving edges and suppressing image artifacts. The quality measurements and visual inspections show a significant improvement in image quality compared to the conventional SART and Algebraic Reconstruction Technique (ART) algorithms.
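The alternating SART/TV scheme spelled out in METHODOLOGY can be sketched as below; the paper's fast TV solver is replaced here by plain gradient descent on a smoothed TV norm, and the relaxation and step-size values are assumptions.

```python
# Hedged sketch: one SART pass (forward-project, compare with the measured sinogram,
# back-project the normalized residual) alternated with TV-reducing steps.
import numpy as np

def sart_pass(x, P, y, lam=1.0):
    row_sum = P.sum(axis=1) + 1e-12          # normalization over each ray
    col_sum = P.sum(axis=0) + 1e-12          # normalization over each pixel
    resid = (y - P @ x) / row_sum
    return x + lam * (P.T @ resid) / col_sum

def tv_descent(img, n_steps=10, step=0.02, eps=1e-8):
    x = img.copy()
    for _ in range(n_steps):
        gx = np.diff(x, axis=0, append=x[-1:, :])
        gy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        div = (gx / mag - np.roll(gx / mag, 1, axis=0)
               + gy / mag - np.roll(gy / mag, 1, axis=1))
        x = x + step * div                   # descend the (smoothed) TV gradient
    return x

def sart_tv(P, y, shape, n_iter=50):
    x = np.zeros(np.prod(shape))
    for _ in range(n_iter):
        x = np.maximum(sart_pass(x, P, y), 0.0)    # SART step
        x = tv_descent(x.reshape(shape)).ravel()   # FTV-like step on the SART result
    return x.reshape(shape)
```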
Affiliation(s)
- Abdelwahhab Boudjelal
- Electronics Department, University of Mohammed Boudiaf-M'sila, M'sila, Algeria; Image Team, GREYC Laboratory, University of Caen Normandy, Caen Cedex, France
- Zoubeida Messali
- Electronics Department, University of Mohamed El Bachir El Ibrahimi-Bordj Bou Arréridj, Bordj Bou Arréridj, Algeria
- Abderrahim Elmoataz
- Image Team, GREYC Laboratory, University of Caen Normandy, Caen Cedex, France
- Bilal Attallah
- Electronics Department, University of Mohammed Boudiaf-M'sila, M'sila, Algeria; Image Team, GREYC Laboratory, University of Caen Normandy, Caen Cedex, France
19
Beijst C, Kunnen B, Lam MGEH, de Jong HWAM. Technical Advances in Image Guidance of Radionuclide Therapy. J Nucl Med Technol 2017; 45:272-279. [PMID: 29042472 DOI: 10.2967/jnmt.117.190991]
Abstract
Internal radiation therapy with radionuclides (i.e., radionuclide therapy) owes its success to the many advantages over other, more conventional, treatment options. One distinct advantage of radionuclide therapies is the potential to use (part of) the emitted radiation for imaging of the radionuclide distribution. The combination of diagnostic and therapeutic properties in a set of matched radiopharmaceuticals (sometimes combined in a single radiopharmaceutical) is often referred to as theranostics and allows accurate diagnostic imaging before therapy. The use of imaging benefits treatment planning, dosimetry, and assessment of treatment response. This paper focuses on a selection of advances in imaging technology relevant for image guidance of radionuclide therapy. This involves developments in nuclear imaging modalities, as well as other anatomic and functional imaging modalities. The quality and quantitative accuracy of images used for guidance of radionuclide therapy is continuously being improved, which in turn may improve the therapeutic outcome and efficiency of radionuclide therapies.
Affiliation(s)
- Casper Beijst
- Department of Radiology and Nuclear Medicine, UMC Utrecht, Utrecht, The Netherlands; Image Sciences Institute, UMC Utrecht, Utrecht, The Netherlands
- Britt Kunnen
- Department of Radiology and Nuclear Medicine, UMC Utrecht, Utrecht, The Netherlands; Image Sciences Institute, UMC Utrecht, Utrecht, The Netherlands
- Marnix G E H Lam
- Department of Radiology and Nuclear Medicine, UMC Utrecht, Utrecht, The Netherlands
- Hugo W A M de Jong
- Department of Radiology and Nuclear Medicine, UMC Utrecht, Utrecht, The Netherlands
20
Rausch I, Quick HH, Cal-Gonzalez J, Sattler B, Boellaard R, Beyer T. Technical and instrumentational foundations of PET/MRI. Eur J Radiol 2017; 94:A3-A13. [PMID: 28431784 DOI: 10.1016/j.ejrad.2017.04.004]
Abstract
This paper highlights the origins of combined positron emission tomography (PET) and magnetic resonance imaging (MRI) whole-body systems, which were first introduced for applications in humans in 2010. The text first covers basic aspects of each imaging modality before describing the technical and methodological challenges of combining PET and MRI within a single system. After several years of development, combined and even fully integrated PET/MRI systems have become available and made their way into the clinic. This multi-modality imaging system lends itself to the advanced exploration of diseases to support personalized medicine in the long run. To that end, this paper provides an introduction to PET/MRI methodology and important technical solutions.
Affiliation(s)
- Ivo Rausch
- Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
- Harald H Quick
- High Field and Hybrid MR Imaging, University Hospital Essen, Essen, Germany; Erwin L. Hahn Institute for MR Imaging, University of Duisburg-Essen, Essen, Germany
- Jacobo Cal-Gonzalez
- Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
- Bernhard Sattler
- Department of Nuclear Medicine, University Hospital Leipzig, Leipzig, Germany
- Ronald Boellaard
- Department of Nuclear Medicine and Molecular Imaging, Academisch Ziekenhuis Groningen, Groningen, The Netherlands
- Thomas Beyer
- Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
21
|
Yu X, Wang C, Hu H, Liu H. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method. PLoS One 2016; 11:e0166871. [PMID: 28005929 PMCID: PMC5179096 DOI: 10.1371/journal.pone.0166871] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2016] [Accepted: 11/04/2016] [Indexed: 11/21/2022] Open
Abstract
In this paper, a total variation (TV) minimization strategy is proposed to overcome the problems of limited spatial resolution and large amounts of noise in low-dose positron emission tomography (PET) image reconstruction. Two types of objective function were established on the basis of two statistical models of the measured PET data: least-squares (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high-quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. Compared with iterative shrinkage/thresholding (IST)-based algorithms, the proposed ADM makes full use of the TV constraint and converges faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are considered to determine which model is more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed the bias, variance, and contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction, with the lowest bias and variance relative to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of contrast, achieving the highest CRC.
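To make the ADM idea concrete, the following is a minimal sketch (not the authors' implementation) of the LS-TV formulation solved with an alternating-direction splitting on a toy 1D problem. The forward model A, the signal, the penalty weight and the iteration counts are illustrative assumptions; a Poisson-TV variant would replace the quadratic data term with the Poisson log-likelihood.

```python
import numpy as np

def finite_diff_matrix(n):
    """1D forward-difference operator of size (n-1) x n, used for anisotropic TV."""
    D = np.zeros((n - 1, n))
    D[np.arange(n - 1), np.arange(n - 1)] = -1.0
    D[np.arange(n - 1), np.arange(1, n)] = 1.0
    return D

def ls_tv_admm(A, y, D, lam=0.05, rho=1.0, n_iter=100):
    """Minimize 0.5*||Ax - y||^2 + lam*||Dx||_1 with ADMM (splitting z = Dx)."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])           # scaled dual variable
    lhs = A.T @ A + rho * (D.T @ D)    # normal-equation matrix (fine at toy sizes)
    rhs0 = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, rhs0 + rho * D.T @ (z - u))               # x-update (quadratic)
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)  # soft threshold
        u = u + Dx - z                                                     # dual update
    return x

# Toy example: piecewise-constant signal, underdetermined Gaussian forward model.
rng = np.random.default_rng(0)
n, m = 128, 64
x_true = np.zeros(n); x_true[30:60] = 1.0; x_true[80:100] = 0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_rec = ls_tv_admm(A, y, finite_diff_matrix(n), lam=0.02)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

The x-update here solves a small normal-equation system directly; for a realistic PET system matrix this step would instead use a few conjugate-gradient iterations, which is where the efficiency of ADM over IST-type schemes comes from.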
Collapse
Affiliation(s)
- Xingjian Yu
- State Key Laboratory of Modern Optical Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou, China
| | - Chenye Wang
- State Key Laboratory of Modern Optical Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou, China
| | - Hongjie Hu
- Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, China
| | - Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou, China
| |
Collapse
|
22
|
Xie T, Zaidi H. Development of computational small animal models and their applications in preclinical imaging and therapy research. Med Phys 2016; 43:111. [PMID: 26745904 DOI: 10.1118/1.4937598] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models, with a particular focus on their application in preclinical imaging as well as in nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of the three categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as future research needs.
Collapse
Affiliation(s)
- Tianwu Xie
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211, Switzerland; Geneva Neuroscience Center, Geneva University, Geneva CH-1205, Switzerland; and Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen 9700 RB, The Netherlands
| |
Collapse
|
23
|
Hutchcroft W, Wang G, Chen KT, Catana C, Qi J. Anatomically-aided PET reconstruction using the kernel method. Phys Med Biol 2016; 61:6668-6683. [PMID: 27541810 PMCID: PMC5095621 DOI: 10.1088/0031-9155/61/18/6668] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
This paper extends the kernel method that was previously proposed for dynamic PET reconstruction to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information through a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction using a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region-of-interest quantification. Additionally, the kernel method is applied to a 3D patient data set; it results in reduced noise at a matched contrast level compared with the conventional ML expectation-maximization algorithm.
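As a rough illustration of how anatomical side information can enter the kernel method without any segmentation, the sketch below builds a row-normalized k-nearest-neighbour Gaussian kernel matrix K from per-voxel anatomical feature vectors, so that the PET image is represented as x = Kα. The feature choice, neighbour count and kernel width are assumptions made for the toy example, not the authors' settings.

```python
import numpy as np

def knn_gaussian_kernel(features, k=8, sigma=1.0):
    """Row-normalized kNN Gaussian kernel matrix from per-voxel anatomical features.

    features : (n_voxels, n_features) array, e.g. MR patch intensities per voxel.
    Returns a dense (n_voxels, n_voxels) matrix K so that K @ alpha gives the image.
    (Real implementations keep K sparse; dense is fine at this toy size.)
    """
    n = features.shape[0]
    # pairwise squared distances between anatomical feature vectors
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=2)
    K = np.zeros((n, n))
    for j in range(n):
        nbrs = np.argsort(d2[j])[:k]                   # k nearest neighbours (incl. self)
        w = np.exp(-d2[j, nbrs] / (2.0 * sigma ** 2))  # Gaussian weights
        K[j, nbrs] = w / w.sum()                       # row-normalize
    return K

# Toy example: a 1D "anatomy" with two regions provides the features.
mr = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.default_rng(1).standard_normal(100)
K = knn_gaussian_kernel(mr[:, None], k=8, sigma=0.2)
alpha = np.random.default_rng(2).random(100)
x = K @ alpha   # kernelized image representation x = K @ alpha
print(K.shape, x.shape)
```

Once K is available, the coefficients α can be estimated with an ordinary (or ordered-subsets) EM update, as sketched under entry 29 below.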
Collapse
Affiliation(s)
- Will Hutchcroft
- Department of Biomedical Engineering, University of California-Davis, Davis, CA, USA
| | - Guobao Wang
- Department of Biomedical Engineering, University of California-Davis, Davis, CA, USA
| | - Kevin T. Chen
- Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
| | - Ciprian Catana
- Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
| | - Jinyi Qi
- Department of Biomedical Engineering, University of California-Davis, Davis, CA, USA
| |
Collapse
|
24
|
Tahaei MS, Reader AJ. Patch-based image reconstruction for PET using prior-image derived dictionaries. Phys Med Biol 2016; 61:6833-6855. [DOI: 10.1088/0031-9155/61/18/6833] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
25
|
Ehrhardt MJ, Markiewicz P, Liljeroth M, Barnes A, Kolehmainen V, Duncan JS, Pizarro L, Atkinson D, Hutton BF, Ourselin S, Thielemans K, Arridge SR. PET Reconstruction With an Anatomical MRI Prior Using Parallel Level Sets. IEEE Trans Med Imaging 2016; 35:2189-2199. [PMID: 27101601 DOI: 10.1109/tmi.2016.2549601] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) offers unique possibilities. In this paper we aim to exploit the high spatial resolution of MRI to enhance the reconstruction of simultaneously acquired PET data. We propose a new prior to incorporate structural side information into a maximum a posteriori reconstruction. The new prior combines the strengths of previously proposed priors for the same problem: it is very efficient in guiding the reconstruction at edges available from the side information and it reduces locally to edge-preserving total variation in the degenerate case when no structural information is available. In addition, this prior is segmentation-free, convex and no a priori assumptions are made on the correlation of edge directions of the PET and MRI images. We present results for a simulated brain phantom and for real data acquired by the Siemens Biograph mMR for a hardware phantom and a clinical scan. The results from simulations show that the new prior has a better trade-off between enhancing common anatomical boundaries and preserving unique features than several other priors. Moreover, it has a better mean absolute bias-to-mean standard deviation trade-off and yields reconstructions with superior relative l2-error and structural similarity index. These findings are underpinned by the real data results from a hardware phantom and a clinical patient confirming that the new prior is capable of promoting well-defined anatomical boundaries.
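The prior described above penalizes PET gradients that are not parallel to MRI gradients while reducing to total variation where the MRI carries no structure. The sketch below implements a closely related directional-TV-style penalty rather than the paper's exact parallel-level-set functional; the smoothing constants and the toy images are assumptions made for illustration.

```python
import numpy as np

def grad2d(img):
    """Forward differences, replicating the last row/column: returns (gy, gx)."""
    gy = np.diff(img, axis=0, append=img[-1:, :])
    gx = np.diff(img, axis=1, append=img[:, -1:])
    return gy, gx

def directional_tv(pet, mr, eta=1e-2, eps=1e-6):
    """Penalty that is small where PET gradients are parallel to MR gradients.

    At each pixel the PET gradient is projected off the (normalized) MR gradient
    direction xi; only the component not explained by the anatomy is penalized.
    Where the MR image is flat, xi ~ 0 and the penalty reduces to ordinary TV.
    """
    py, px = grad2d(pet)
    my, mx = grad2d(mr)
    norm = np.sqrt(my**2 + mx**2 + eta**2)
    xy, xx = my / norm, mx / norm            # xi: smoothed unit MR gradient field
    dot = py * xy + px * xx                  # component of grad(PET) along xi
    ry, rx = py - dot * xy, px - dot * xx    # residual (orthogonal) component
    return np.sum(np.sqrt(ry**2 + rx**2 + eps))

rng = np.random.default_rng(0)
mr = np.zeros((64, 64)); mr[:, 32:] = 1.0                   # shared anatomical edge
pet_aligned = 2.0 * mr + 0.01 * rng.standard_normal((64, 64))
pet_rotated = pet_aligned.T                                  # edge in the "wrong" direction
print("aligned :", directional_tv(pet_aligned, mr))
print("rotated :", directional_tv(pet_rotated, mr))
```

With a shared edge the penalty stays small; rotating the PET edge against the anatomy makes it large, which is the behaviour such a prior exploits during MAP reconstruction.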
Collapse
|
26
|
|
27
|
Chun SY. The Use of Anatomical Information for Molecular Image Reconstruction Algorithms: Attenuation/Scatter Correction, Motion Compensation, and Noise Reduction. Nucl Med Mol Imaging 2016; 50:13-23. [PMID: 26941855 DOI: 10.1007/s13139-016-0399-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2015] [Revised: 01/11/2016] [Accepted: 01/13/2016] [Indexed: 01/05/2023] Open
Abstract
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images, such as CT and MRI from hybrid scanners, are also important ingredients for further improving the image quality of PET or SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review work that uses anatomical information in molecular image reconstruction algorithms to improve image quality, describing the mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
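One of the noise-reduction strategies surveyed above is post-reconstruction filtering steered by a co-registered anatomical image. The sketch below is a simple joint (cross) bilateral filter in that spirit, with the range weights computed from the anatomical image so that anatomical edges are preserved in the smoothed PET image; the window radius and bandwidths are illustrative assumptions, not a method from the review.

```python
import numpy as np

def joint_bilateral(pet, anat, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Smooth `pet` with spatial weights and range weights taken from `anat`.

    Edges present in the anatomical image are preserved in the filtered PET image;
    within anatomically uniform regions the filter simply averages (denoises).
    """
    H, W = pet.shape
    pet_p = np.pad(pet, radius, mode="reflect")
    anat_p = np.pad(anat, radius, mode="reflect")
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(yy**2 + xx**2) / (2 * sigma_s**2))      # spatial kernel
    out = np.zeros_like(pet)
    for i in range(H):
        for j in range(W):
            p = pet_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            a = anat_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_r = np.exp(-((a - anat[i, j]) ** 2) / (2 * sigma_r**2))  # anatomical range kernel
            w = w_s * w_r
            out[i, j] = (w * p).sum() / w.sum()
    return out

rng = np.random.default_rng(0)
anat = np.zeros((64, 64)); anat[20:44, 20:44] = 1.0        # anatomical boundary
pet = 5.0 * anat + 2.0 + rng.poisson(3.0, (64, 64))        # noisy PET-like image
pet_f = joint_bilateral(pet, anat)
print("noise std before/after:", pet[:10, :10].std(), pet_f[:10, :10].std())
```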
Collapse
Affiliation(s)
- Se Young Chun
- School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, Republic of Korea
| |
Collapse
|
28
|
Cheng L, Hobbs RF, Sgouros G, Frey EC. Development and evaluation of convergent and accelerated penalized SPECT image reconstruction methods for improved dose-volume histogram estimation in radiopharmaceutical therapy. Med Phys 2015; 41:112507. [PMID: 25370666 DOI: 10.1118/1.4897613] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Three-dimensional (3D) dosimetry has the potential to provide better prediction of response of normal tissues and tumors and is based on 3D estimates of the activity distribution in the patient obtained from emission tomography. Dose-volume histograms (DVHs) are an important summary measure of 3D dosimetry and a widely used tool for treatment planning in radiation therapy. Accurate estimates of the radioactivity distribution in space and time are desirable for accurate 3D dosimetry. The purpose of this work was to develop and demonstrate the potential of penalized SPECT image reconstruction methods to improve DVHs estimates obtained from 3D dosimetry methods. METHODS The authors developed penalized image reconstruction methods, using maximum a posteriori (MAP) formalism, which intrinsically incorporate regularization in order to control noise and, unlike linear filters, are designed to retain sharp edges. Two priors were studied: one is a 3D hyperbolic prior, termed single-time MAP (STMAP), and the second is a 4D hyperbolic prior, termed cross-time MAP (CTMAP), using both the spatial and temporal information to control noise. The CTMAP method assumed perfect registration between the estimated activity distributions and projection datasets from the different time points. Accelerated and convergent algorithms were derived and implemented. A modified NURBS-based cardiac-torso phantom with a multicompartment kidney model and organ activities and parameters derived from clinical studies were used in a Monte Carlo simulation study to evaluate the methods. Cumulative dose-rate volume histograms (CDRVHs) and cumulative DVHs (CDVHs) obtained from the phantom and from SPECT images reconstructed with both the penalized algorithms and OS-EM were calculated and compared both qualitatively and quantitatively. The STMAP method was applied to patient data and CDRVHs obtained with STMAP and OS-EM were compared qualitatively. RESULTS The results showed that the penalized algorithms substantially improved the CDRVH and CDVH estimates for large organs such as the liver compared to optimally postfiltered OS-EM. For example, the mean squared errors (MSEs) of the CDRVHs for the liver at 5 h postinjection obtained with CTMAP and STMAP were about 15% and 17%, respectively, of the MSEs obtained with optimally filtered OS-EM. For the CDVH estimates, the MSEs obtained with CTMAP and STMAP were about 16% and 19%, respectively, of the MSEs from OS-EM. For the kidneys and renal cortices, larger residual errors were observed for all algorithms, likely due to partial volume effects. The STMAP method showed promising qualitative results when applied to patient data. CONCLUSIONS Penalized image reconstruction methods were developed and evaluated through a simulation study. The study showed that the MAP algorithms substantially improved CDVH estimates for large organs such as the liver compared to optimally postfiltered OS-EM reconstructions. For small organs with fine structural detail such as the kidneys, a large residual error was observed for both MAP algorithms and OS-EM. While CTMAP provided marginally better MSEs than STMAP, given the extra effort needed to handle misregistration of images at different time points in the algorithm and the potential impact of residual misregistration, 3D regularization methods, such as that used in STMAP, appear to be a more practical choice.
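Two of the ingredients described above can be sketched compactly: an edge-preserving hyperbolic potential of the kind used in penalties such as STMAP/CTMAP, and the cumulative dose-volume histogram used to summarize 3D dosimetry. The exact potential form, its parameters and the surrogate dose map below are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def hyperbolic_potential(t, delta=1.0):
    """Edge-preserving hyperbolic potential: quadratic for |t| << delta, ~linear for |t| >> delta."""
    return delta**2 * (np.sqrt(1.0 + (t / delta)**2) - 1.0)

def cumulative_dvh(dose, mask, edges):
    """Cumulative dose-volume histogram for the voxels selected by `mask`.

    Returns, for each dose level in `edges`, the fraction of the organ volume
    receiving at least that dose.
    """
    d = dose[mask]
    return np.array([(d >= e).mean() for e in edges])

rng = np.random.default_rng(0)
dose = rng.gamma(shape=2.0, scale=1.5, size=(32, 32, 32))      # surrogate 3D dose map (Gy)
organ = np.zeros_like(dose, dtype=bool); organ[8:24, 8:24, 8:24] = True
levels = np.linspace(0.0, 10.0, 21)
print("potential at t = 0.1, 1, 10:", np.round(hyperbolic_potential(np.array([0.1, 1.0, 10.0])), 3))
print("CDVH:", np.round(cumulative_dvh(dose, organ, levels), 2))
```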
Collapse
Affiliation(s)
- Lishui Cheng
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21287 and Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
| | - Robert F Hobbs
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
| | - George Sgouros
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
| | - Eric C Frey
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
| |
Collapse
|
29
|
Abstract
Image reconstruction from low-count positron emission tomography (PET) projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4-D dynamic PET patient dataset showed promising results.
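The kernelized EM update described above keeps the multiplicative form of MLEM but iterates on the coefficient image α, with the reconstructed image given by x = Kα. A minimal sketch on a toy system follows; the random forward model and phantom are assumptions, and K is left as the identity (plain MLEM) so that substituting an anatomy-derived kernel matrix, such as the one sketched under entry 23, turns it into the kernel method.

```python
import numpy as np

def kem(P, K, y, n_iter=50, eps=1e-12):
    """Kernelized EM: MLEM run on the coefficient image alpha, with x = K @ alpha."""
    n = K.shape[1]
    alpha = np.ones(n)
    PK = P @ K                                   # effective system matrix
    sens = PK.sum(axis=0) + eps                  # sensitivity image (PK)^T 1
    for _ in range(n_iter):
        ybar = PK @ alpha + eps                  # expected counts
        alpha *= (PK.T @ (y / ybar)) / sens      # multiplicative EM update
    return K @ alpha                             # reconstructed image

# Toy problem: random nonnegative system matrix, piecewise-constant truth, Poisson data.
rng = np.random.default_rng(0)
n_pix, n_bins = 64, 256
P = rng.random((n_bins, n_pix))
x_true = np.concatenate([np.full(32, 4.0), np.full(32, 1.0)])
y = rng.poisson(P @ x_true)
K = np.eye(n_pix)    # identity kernel -> plain MLEM; an anatomy-derived kernel matrix gives KEM
x_kem = kem(P, K, y)
print("mean of region 1 / region 2:", x_kem[:32].mean(), x_kem[32:].mean())
```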
Collapse
|
30
|
Tang J, Rahmim A. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy. Phys Med Biol 2014; 60:31-48. [PMID: 25479422 DOI: 10.1088/0031-9155/60/1/31] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
A promising approach in PET image reconstruction is to incorporate high-resolution anatomical information (measured with MR or CT), taking anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures classify voxels based only on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied with spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom, creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at the low noise level in the gray matter (GM) and white matter (WM) regions in terms of the noise-versus-bias tradeoff. When noise increased to the medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform, with smaller isolated structures, than the WM region. In the high-noise-level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-713 PET data and florbetapir PET data with corresponding T1-MPRAGE MR images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in regional mean values comparable to those from the maximum likelihood algorithm while reducing noise. Having achieved robust performance in simulations at various noise levels and in patient studies, the WJE-MAP algorithm demonstrates its potential for clinical quantitative PET imaging.
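The JE measure at the core of the algorithm above can be estimated from the joint histogram of the two co-registered images; the wavelet-based variant applies the same estimator to pairs of wavelet subbands rather than to the raw intensities. The sketch below shows the plain intensity-based estimate with illustrative bin counts and toy images.

```python
import numpy as np

def joint_entropy(a, b, bins=64):
    """Joint entropy (in nats) of two co-registered images from their 2D histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
mr = np.zeros((64, 64)); mr[:, 32:] = 1.0
pet_matched = 3.0 * mr + 0.1 * rng.standard_normal((64, 64))    # boundaries coincide
pet_mismatch = rng.random((64, 64))                             # no shared structure
print("JE matched  :", joint_entropy(pet_matched, mr))
print("JE mismatch :", joint_entropy(pet_mismatch, mr))
```

Matched anatomical and functional boundaries concentrate the joint histogram and lower the joint entropy, which is why using (a function of) JE as the prior encourages anato-functional agreement.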
Collapse
Affiliation(s)
- Jing Tang
- Department of Electrical and Computer Engineering, Oakland University, 2200 N Squirrel Rd, Rochester, MI 48309, USA
| | | |
Collapse
|
31
|
Nguyen VG, Lee SJ. Incorporating anatomical side information into PET reconstruction using nonlocal regularization. IEEE Trans Image Process 2013; 22:3961-3973. [PMID: 23744678 DOI: 10.1109/tip.2013.2265881] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
With the introduction of combined positron emission tomography (PET)/computed tomography (CT) and PET/magnetic resonance imaging (MRI) scanners, there is an increasing emphasis on reconstructing PET images with the aid of anatomical side information obtained from X-ray CT or MRI scanners. In this paper, we propose a new approach to incorporating prior anatomical information into PET reconstruction using nonlocal regularization. The nonlocal regularizer developed for this application is designed to selectively consider the anatomical information only when it is reliable. Because our nonlocal regularization method does not directly use anatomical edges or boundaries, which are often used in conventional methods, it is not only free from additional processing to extract anatomical boundaries or segmented regions but also more robust to the signal mismatch problem caused by the indirect relationship between the PET image and the anatomical image. We perform simulations with digital phantoms. According to our experimental results, compared to a conventional method based on local regularization, our nonlocal regularization method performs well even with imperfect prior anatomical information or in the presence of signal mismatch between the PET image and the anatomical image.
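A rough sketch of the nonlocal idea follows: weights are derived from patch similarity in the anatomical image, and a quadratic penalty then couples a PET pixel only to anatomically similar locations. The search window, patch size and bandwidth are illustrative assumptions, and the paper's specific rule for deciding when the anatomical information is reliable is not reproduced here.

```python
import numpy as np

def nonlocal_weights(anat, i, j, search=5, patch=1, h=0.2):
    """Nonlocal weights for pixel (i, j) from patch similarity in the anatomical image.

    Weights are large for pixels whose anatomical neighbourhood resembles that of
    (i, j); a nonlocal penalty sum_k w_k (x_ij - x_k)^2 then smooths the PET image
    only across anatomically similar locations.
    """
    H, W = anat.shape
    ref = anat[i - patch:i + patch + 1, j - patch:j + patch + 1]
    ws, coords = [], []
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            k, l = i + di, j + dj
            if patch <= k < H - patch and patch <= l < W - patch and (di, dj) != (0, 0):
                cand = anat[k - patch:k + patch + 1, l - patch:l + patch + 1]
                d2 = np.mean((ref - cand) ** 2)              # anatomical patch distance
                ws.append(np.exp(-d2 / h**2))
                coords.append((k, l))
    ws = np.asarray(ws)
    return ws / ws.sum(), coords

def nonlocal_penalty(pet, anat, i, j, **kw):
    """Value of the nonlocal quadratic penalty at pixel (i, j)."""
    w, coords = nonlocal_weights(anat, i, j, **kw)
    diffs = np.array([pet[i, j] - pet[k, l] for k, l in coords])
    return np.sum(w * diffs**2)

rng = np.random.default_rng(0)
anat = np.zeros((32, 32)); anat[:, 16:] = 1.0
pet = 2.0 * anat + 0.05 * rng.standard_normal((32, 32))
print(nonlocal_penalty(pet, anat, 10, 10, search=4, patch=1, h=0.3))
```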
Collapse
Affiliation(s)
- Van-Giang Nguyen
- Department of Electronic Engineering, Paichai University, Daejeon, Korea.
| | | |
Collapse
|
32
|
Abstract
The resolution of positron emission tomography (PET) images is limited by the physics of positron-electron annihilation and instrumentation for photon coincidence detection. Model-based methods that incorporate accurate physical and statistical models have produced significant improvements in reconstructed image quality when compared with filtered backprojection reconstruction methods. However, it has often been suggested that by incorporating anatomical information, the resolution and noise properties of PET images could be further improved, leading to better quantitation or lesion detection. With the recent development of combined MR-PET scanners, we can now collect intrinsically coregistered magnetic resonance images. It is therefore possible to routinely make use of anatomical information in PET reconstruction, provided appropriate methods are available. In this article, we review research efforts over the past 20 years to develop these methods. We discuss approaches based on the use of both Markov random field priors and joint information or entropy measures. The general framework for these methods is described, and their performance and longer-term potential and limitations are discussed.
Collapse
Affiliation(s)
- Bing Bai
- Department of Radiology, University of Southern California, Los Angeles, CA, USA.
| | | | | |
Collapse
|
33
|
Mehranian A, Rahmim A, Ay MR, Kotasidis F, Zaidi H. An ordered-subsets proximal preconditioned gradient algorithm for edge-preserving PET image reconstruction. Med Phys 2013; 40:052503. [DOI: 10.1118/1.4801898] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
|