1. Lopes L, Lopez-Montes A, Chen Y, Koller P, Rathod N, Blomgren A, Caobelli F, Rominger A, Shi K, Seifert R. The Evolution of Artificial Intelligence in Nuclear Medicine. Semin Nucl Med 2025;55:313-327. PMID: 39934005. DOI: 10.1053/j.semnuclmed.2025.01.006.
Abstract
Nuclear medicine has continuously evolved since its beginnings, steadily improving the diagnosis and treatment of various diseases. The integration of artificial intelligence (AI) is one of the latest revolutionary chapters, promising significant advances in diagnosis, prognosis, segmentation, image quality enhancement, and theranostics. Early AI applications in nuclear medicine focused on improving diagnostic accuracy, leveraging machine learning algorithms for disease classification and outcome prediction. Advances in deep learning, including convolutional and, more recently, transformer-based neural networks, have enabled more precise diagnosis and image segmentation, as well as low-dose imaging and patient-specific dosimetry for personalized treatment. Generative AI, driven by large language models and diffusion techniques, now allows the processing, interpretation, and generation of complex medical language and images. Despite these achievements, challenges such as data scarcity, heterogeneity, and ethical concerns remain barriers to clinical translation. Addressing these issues through interdisciplinary collaboration will pave the way for broader adoption of AI in nuclear medicine, potentially enhancing patient care and optimizing diagnostic and therapeutic outcomes.
Affiliation(s)
- Leonor Lopes
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Alejandro Lopez-Montes
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Yizhou Chen
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Pia Koller
- Department of Computer Science, Ludwig-Maximilians-University of Munich, Munich, Germany
- Narendra Rathod
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- August Blomgren
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Federico Caobelli
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
- Robert Seifert
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
2. Ai X, Huang B, Chen F, Shi L, Li B, Wang S, Liu Q. RED: Residual estimation diffusion for low-dose PET sinogram reconstruction. Med Image Anal 2025;102:103558. PMID: 40121810. DOI: 10.1016/j.media.2025.103558.
Abstract
Recent advances in diffusion models have demonstrated exceptional performance in generative tasks across various fields. In positron emission tomography (PET), reducing the tracer dose leads to information loss in sinograms, and diffusion models can be used to reconstruct the missing information and improve imaging quality. Traditional diffusion models rely on Gaussian noise for image reconstruction; in low-dose PET reconstruction, however, Gaussian noise can worsen the already sparse data by introducing artifacts and inconsistencies. To address this issue, we propose a diffusion model named residual estimation diffusion (RED). From the perspective of the diffusion mechanism, RED replaces the Gaussian noise in the diffusion process with the residual between sinograms, setting the low-dose sinogram as the starting point and the full-dose sinogram as the endpoint of reconstruction. This mechanism helps preserve the original information in the low-dose sinogram, thereby enhancing reconstruction reliability. From the perspective of data consistency, RED introduces a drift correction strategy to reduce accumulated prediction errors during the reverse process; calibrating the intermediate results of the reverse iterations maintains data consistency and enhances the stability of the reconstruction process. In the experiments, RED achieved the best performance across all metrics. Specifically, PSNR improved by 2.75, 5.45, and 8.08 dB at dose reduction factors (DRF) of 4, 20, and 100, respectively, compared to traditional methods. The code is available at: https://github.com/yqx7150/RED.
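The residual-bridge idea described in this abstract, replacing the Gaussian noise of an ordinary diffusion process with the residual between the low-dose and full-dose sinograms, can be illustrated with a toy forward/reverse process. This is a sketch of the general idea only, not the authors' implementation: the linear interpolation schedule, array shapes, and the oracle residual predictor are all assumptions.

```python
import numpy as np

def forward_residual_step(x_full, x_low, t, T):
    """Toy 'residual diffusion' forward process: instead of adding
    Gaussian noise, interpolate along the residual between the
    full-dose and low-dose sinograms. At t=0 we are at the full-dose
    endpoint; at t=T we are at the low-dose starting point."""
    alpha = t / T
    return x_full + alpha * (x_low - x_full)

def reverse_steps(x_low, predict_residual, T):
    """Toy reverse process: walk back from the low-dose sinogram by
    repeatedly subtracting a predicted fraction of the residual.
    `predict_residual` stands in for the trained network."""
    x = x_low.astype(float).copy()
    for t in range(T, 0, -1):
        x = x - predict_residual(x, t) / T
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_full = rng.random((8, 8))     # stand-in full-dose sinogram
    x_low = x_full * 0.25           # stand-in low-dose sinogram
    true_residual = x_low - x_full
    # An "oracle" predictor recovers x_full exactly; a real network
    # would be trained to approximate this residual.
    restored = reverse_steps(x_low, lambda x, t: true_residual, T=10)
    print(np.allclose(restored, x_full))
```

With the oracle predictor the reverse walk lands exactly on the full-dose sinogram, which is the point of anchoring the two endpoints of the process at the two dose levels rather than at pure noise.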
Affiliation(s)
- Xingyu Ai
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Bin Huang
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China
- Fang Chen
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Liu Shi
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Binxuan Li
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230000, China
- Shaoyu Wang
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
3. Braune A, Hosch R, Kersting D, Müller J, Hofheinz F, Herrmann K, Nensa F, Kotzerke J, Seifert R. External phantom-based validation of a deep-learning network trained for upscaling of digital low count PET data. EJNMMI Phys 2025;12:38. PMID: 40237913. PMCID: PMC12003253. DOI: 10.1186/s40658-025-00745-4.
Abstract
BACKGROUND A reduction of the dose and/or acquisition duration of PET examinations is desirable in terms of radiation protection, patient comfort, and throughput, but leads to decreased image quality due to poorer image statistics. Recently, various deep-learning-based methods have been proposed to improve the quality of low-count PET images; one such approach generates AI-enhanced PET images (AI-PET) from ultra-low-count PET/CT scans. The performance of this algorithm has so far only been evaluated clinically on patient data with limited scan statistics and unknown actual activity concentrations. This study therefore investigates the performance of this deep-learning algorithm using PET measurements of a phantom with different lesion sizes and count statistics (from ultra-low to high) to understand the capabilities and limitations of AI-based post-processing for improved image quality in ultra-low-count PET imaging. METHODS A previously trained pix2pixHD Generative Adversarial Network was evaluated. To this end, a NEMA PET body phantom, filled with two sphere-to-background activity concentration ratios (4:1 and 10:1) and measured under two attenuation scenarios to investigate the effect of obese patients, was scanned in list mode. Images were reconstructed with 13 acquisition durations ranging from 5 s to 900 s. Image noise, recovery coefficients, SUV differences, image quality metrics such as the Structural Similarity Index Measure, and the contrast-to-noise ratio were assessed. In addition, the benefits of the deep-learning network over Gaussian smoothing were investigated. RESULTS The presented AI algorithm is well suited to denoising ultra-low-count PET images and restoring structural information, but increases image noise in ultra-high-count PET scans. The generated AI-PET scans strongly underestimate SUV, especially in small lesions ≤ 17 mm in diameter, while quantitative measures of large lesions ≥ 37 mm in diameter were accurately recovered. In ultra-low-count or low-contrast images, the AI algorithm may fail to recognize small lesions ≤ 13 mm in diameter. Compared with standard post-processing using a Gaussian filter, the deep-learning network is better suited to improving image quality, but it degrades SUV accuracy to a greater extent than post-filtering, and quantitative SUV accuracy varies with lesion size. CONCLUSIONS Phantom-based validation of AI-based algorithms allows a detailed assessment of the performance, limitations, and generalizability of deep-learning-based algorithms for PET image enhancement. Here it was confirmed that the AI-based approach performs very well in denoising ultra-low-count PET images and outperforms traditional Gaussian post-filtering. However, there are strong limitations in terms of quantitative accuracy and the detectability of small lesions.
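The phantom figures of merit named in this abstract (recovery coefficient and contrast-to-noise ratio) reduce to simple ROI statistics. A minimal sketch with simplified definitions; the actual NEMA analysis prescribes specific sphere VOIs and background ROIs, and the numeric inputs below are hypothetical:

```python
def recovery_coefficient(measured_mean, true_activity):
    """RC: mean measured activity in a sphere ROI divided by the
    known filled activity concentration (1.0 = perfect recovery)."""
    return measured_mean / true_activity

def contrast_to_noise_ratio(roi_mean, bg_mean, bg_std):
    """CNR: lesion-to-background contrast normalized by background noise."""
    return (roi_mean - bg_mean) / bg_std

if __name__ == "__main__":
    # Hypothetical readings from a 10:1 sphere-to-background phantom.
    rc = recovery_coefficient(measured_mean=8.2, true_activity=10.0)
    cnr = contrast_to_noise_ratio(roi_mean=8.2, bg_mean=1.0, bg_std=0.15)
    print(round(rc, 2), round(cnr, 1))
```

Partial-volume effects push the RC of small spheres below 1.0, which is exactly the small-lesion SUV underestimation the study quantifies.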
Affiliation(s)
- Anja Braune
- Department of Nuclear Medicine, University Hospital Carl Gustav Carus at the Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Department of Positron-Emission-Tomography, Helmholtz-Zentrum Dresden-Rossendorf e.V., Institute of Radiopharmaceutical Cancer Research, Bautzner Landstr. 400, 01328, Dresden, Germany
- Carl Gustav Carus Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- René Hosch
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- David Kersting
- Department of Nuclear Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Juliane Müller
- Department of Nuclear Medicine, University Hospital Carl Gustav Carus at the Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Frank Hofheinz
- Department of Positron-Emission-Tomography, Helmholtz-Zentrum Dresden-Rossendorf e.V., Institute of Radiopharmaceutical Cancer Research, Bautzner Landstr. 400, 01328, Dresden, Germany
- Ken Herrmann
- Department of Nuclear Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Felix Nensa
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Jörg Kotzerke
- Department of Nuclear Medicine, University Hospital Carl Gustav Carus at the Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Carl Gustav Carus Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Robert Seifert
- Department of Nuclear Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010, Bern, Switzerland
4. Kusakari S, Sato K, Tsushima Y, Matsuoka M, Sasaya T, Sunaguchi N, Matsubara K, Kawashima H, Hyodo K, Yuasa T, Zeniya T. Fundamental study on improving the quality of X-ray fluorescence computed tomography images by applying deep image prior to projection images as a pre-denoising method. Int J Comput Assist Radiol Surg 2025;20:665-676. PMID: 39739291. DOI: 10.1007/s11548-024-03307-8.
Abstract
PURPOSE We are developing a three-dimensional X-ray fluorescence computed tomography (3D XFCT) system using non-radioactive-labeled compounds for preclinical studies, as a new modality that provides images of biological function. Improvements in image quality and detection limits are required for in vivo imaging. The aim of this study was to improve the quality of XFCT images by applying a deep image prior (DIP), a type of convolutional neural network, to projection images as a pre-denoising method, and to compare this with DIP post-denoising. METHODS DIP can restore images using only the projection images acquired by XFCT. The projection images were denoised with DIP, and three-dimensional images were then reconstructed using the ordered subsets expectation maximization method for XFCT systems with multi-pinhole collimators. To evaluate the effectiveness of DIP pre-denoising, we constructed an XFCT system using synchrotron radiation and performed imaging experiments on a physical phantom and a mouse brain sample. The proposed method was compared with DIP post-denoising and other denoising methods. RESULTS The proposed DIP pre-denoising method reduced noise, significantly improved image quality, and outperformed DIP post-denoising and the other methods. The contrast-to-noise ratio improved by a factor of 3.7 to 4.6 with almost no deterioration in spatial resolution, and the detection limit improved from 0.069 to 0.035 mg/mL. There was a strong linear relationship between iodine concentration and pixel values. Image quality of the mouse brain also improved. CONCLUSIONS Through experiments using phantoms and mouse brains, this study demonstrated that applying DIP to projection images as a pre-denoising method can significantly improve the image quality and detection limit of 3D XFCT without degrading spatial resolution. DIP was more effective when applied as pre-denoising than as post-denoising and can contribute to in vivo 3D imaging in the future.
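The deep image prior idea this and several neighboring entries build on: fit an untrained network from a fixed random input to the noisy image and stop early, relying on the network's bias toward fitting structure before noise. A toy NumPy sketch in which a tiny two-layer network stands in for the CNN (the paper applies a proper convolutional DIP to projection images; the network size, learning rate, and iteration count here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64                                   # flattened toy "projection image"
clean = np.sin(np.linspace(0, 4 * np.pi, n))
noisy = clean + 0.3 * rng.standard_normal(n)

# Untrained two-layer network: fixed random input z -> image.
k, h = 16, 32
z = rng.standard_normal(k)
W1 = 0.1 * rng.standard_normal((h, k)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((n, h))

def forward():
    a = np.tanh(W1 @ z + b1)
    return W2 @ a, a

lr, losses = 0.01, []
for step in range(500):                  # early stopping = few iterations
    out, a = forward()
    err = out - noisy                    # fit the noisy target
    losses.append(float(err @ err) / n)
    # Manual backprop through the two layers.
    gW2 = np.outer(err, a)
    gpre = (W2.T @ err) * (1 - a ** 2)
    W2 -= lr * gW2
    W1 -= lr * np.outer(gpre, z)
    b1 -= lr * gpre

denoised, _ = forward()
print(losses[0] > losses[-1])            # the fit improves over iterations
```

Stopping the optimization early is what gives the prior its denoising effect; run long enough, the network would eventually fit the noise as well.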
Affiliation(s)
- Sota Kusakari
- Graduate School of Science and Technology, Hirosaki University, Hirosaki, Japan
- Kazuki Sato
- Graduate School of Science and Technology, Hirosaki University, Hirosaki, Japan
- Yuta Tsushima
- Graduate School of Science and Technology, Hirosaki University, Hirosaki, Japan
- Masahiro Matsuoka
- Graduate School of Science and Engineering, Yamagata University, Yonezawa, Japan
- Tenta Sasaya
- Graduate School of Science and Engineering, Yamagata University, Yonezawa, Japan
- Naoki Sunaguchi
- Department of Radiological and Medical Laboratory Science, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Keisuke Matsubara
- Department of Management Science and Engineering, Faculty of Systems Science and Technology, Akita Prefectural University, Yurihonjo, Japan
- Hidekazu Kawashima
- Radioisotope Research Center, Kyoto Pharmaceutical University, Kyoto, Japan
- Kazuyuki Hyodo
- High Energy Accelerator Research Organization, Tsukuba, Japan
- Tetsuya Yuasa
- Graduate School of Science and Engineering, Yamagata University, Yonezawa, Japan
- Tsutomu Zeniya
- Graduate School of Science and Technology, Hirosaki University, Hirosaki, Japan
5. Pan B, Marsden PK, Reader AJ. Self-supervised parametric map estimation for multiplexed PET with a deep image prior. Phys Med Biol 2025;70:045002. PMID: 39774095. DOI: 10.1088/1361-6560/ada717.
Abstract
Multiplexed positron emission tomography (mPET) imaging allows simultaneous observation of physiological and pathological information from multiple tracers in a single PET scan. Although supervised deep learning has demonstrated superior performance in mPET image separation compared to purely model-based methods, acquiring large amounts of paired single-tracer and multi-tracer data for training poses a practical challenge and requires extended scan durations for patients. In addition, the generalisation ability of the supervised learning framework is a concern, as the patient being scanned and their tracer kinetics may fall outside the training distribution. In this work, we propose a self-supervised learning framework based on the deep image prior (DIP) for mPET image separation using just one dataset. In particular, we integrate the multi-tracer compartmental model into the DIP framework to estimate the parametric maps of each tracer from the measured dynamic dual-tracer activity images; the separated dynamic single-tracer activity images can then be recovered from the estimated tracer-specific parametric maps. In the proposed method, the dynamic dual-tracer activity images are used as the training label, and the static dual-tracer image (reconstructed from the same patient data from the start to the end of acquisition) is used as the network input. The performance of the proposed method was evaluated on a simulated brain phantom for dynamic dual-tracer [18F]FDG+[11C]MET activity image separation and parametric map estimation. The results demonstrate that the proposed method outperforms the conventional voxel-wise multi-tracer compartmental modelling method (vMTCM) and the two-step DIP-Dn+vMTCM method (in which the dynamic dual-tracer activity images are first denoised with a U-net within the DIP framework, followed by vMTCM separation), yielding lower bias and standard deviation both in the separated single-tracer images and in the estimated parametric maps of each tracer, at voxel and ROI levels.
Affiliation(s)
- Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EU, United Kingdom
- Paul K Marsden
- School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EU, United Kingdom
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EU, United Kingdom
6. Huang J, Yang L, Wang F, Wu Y, Nan Y, Wu W, Wang C, Shi K, Aviles-Rivero AI, Schönlieb CB, Zhang D, Yang G. Enhancing global sensitivity and uncertainty quantification in medical image reconstruction with Monte Carlo arbitrary-masked mamba. Med Image Anal 2025;99:103334. PMID: 39255733. DOI: 10.1016/j.media.2024.103334.
Abstract
Deep learning has been extensively applied in medical image reconstruction, where Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) represent the predominant paradigms, each with distinct advantages and inherent limitations: CNNs exhibit linear complexity but only local sensitivity, whereas ViTs offer global sensitivity at quadratic complexity. The emerging Mamba architecture has shown superiority in learning visual representations, combining linear scalability with global sensitivity. In this study, we introduce MambaMIR, an Arbitrary-Masked Mamba-based model with wavelet decomposition for joint medical image reconstruction and uncertainty estimation. A novel Arbitrary Scan Masking (ASM) mechanism "masks out" redundant information to introduce randomness for subsequent uncertainty estimation. Compared to the commonly used Monte Carlo (MC) dropout, our proposed MC-ASM provides an uncertainty map without the need for hyperparameter tuning and mitigates the performance drop typically observed when applying dropout to low-level tasks. For texture preservation and better perceptual quality, we incorporate the wavelet transform into MambaMIR and explore a variant based on the Generative Adversarial Network, namely MambaMIR-GAN. Comprehensive experiments on multiple representative medical image reconstruction tasks demonstrate that the proposed MambaMIR and MambaMIR-GAN outperform baseline and state-of-the-art methods, with MambaMIR achieving the best reconstruction fidelity and MambaMIR-GAN the best perceptual quality. In addition, our MC-ASM provides uncertainty maps as an additional tool for clinicians, while mitigating the typical performance drop caused by the commonly used dropout.
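The general recipe behind MC-ASM-style uncertainty estimation, repeating inference under random input masking instead of MC dropout and taking the per-pixel spread, can be illustrated generically. This is a toy sketch: the smoothing filter stands in for the trained network, and the mask ratio and sample count are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

def reconstruct(x):
    """Stand-in reconstructor: a simple 3-tap smoothing filter, in
    place of the trained Mamba-based reconstruction network."""
    return (np.roll(x, 1) + x + np.roll(x, -1)) / 3.0

def mc_masked_inference(x, n_samples=20, keep_prob=0.8, seed=0):
    """Monte Carlo masked inference: randomly mask the input on each
    pass, then report the mean prediction and a per-pixel uncertainty
    map (standard deviation across passes)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) < keep_prob
        preds.append(reconstruct(x * mask))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

if __name__ == "__main__":
    mean_pred, uncertainty = mc_masked_inference(np.ones(32))
    print(uncertainty.shape, float(uncertainty.min()) >= 0.0)
```

Because the randomness lives in the input masking rather than in dropped weights, the network itself needs no dropout layers or dropout-rate tuning, which is the practical advantage the abstract claims for MC-ASM over MC dropout.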
Affiliation(s)
- Jiahao Huang
- Bioengineering Department and Imperial-X, Imperial College London, London W12 7SL, United Kingdom; National Heart and Lung Institute, Imperial College London, London SW7 2AZ, United Kingdom; Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, United Kingdom
- Liutao Yang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Fanwen Wang
- Bioengineering Department and Imperial-X, Imperial College London, London W12 7SL, United Kingdom; National Heart and Lung Institute, Imperial College London, London SW7 2AZ, United Kingdom; Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, United Kingdom
- Yinzhe Wu
- Bioengineering Department and Imperial-X, Imperial College London, London W12 7SL, United Kingdom; National Heart and Lung Institute, Imperial College London, London SW7 2AZ, United Kingdom; Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, United Kingdom
- Yang Nan
- Bioengineering Department and Imperial-X, Imperial College London, London W12 7SL, United Kingdom; National Heart and Lung Institute, Imperial College London, London SW7 2AZ, United Kingdom
- Weiwen Wu
- School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Guangdong, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
- Angelica I Aviles-Rivero
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom
- Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, London W12 7SL, United Kingdom; National Heart and Lung Institute, Imperial College London, London SW7 2AZ, United Kingdom; Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King's College London, London WC2R 2LS, United Kingdom
7. Shu Z, Entezari A. RBP-DIP: Residual back projection with deep image prior for ill-posed CT reconstruction. Neural Netw 2024;180:106740. PMID: 39305785. DOI: 10.1016/j.neunet.2024.106740.
Abstract
The success of the deep image prior (DIP) in a number of image processing tasks has motivated its application to image reconstruction problems in computed tomography (CT). In this paper, we introduce a residual back projection (RBP) technique that improves the performance of the deep image prior framework in iterative CT reconstruction, especially when the reconstruction problem is highly ill-posed. The RBP-DIP framework uses an untrained U-net in conjunction with a novel residual back projection connection to minimize the objective function while improving reconstruction accuracy. In each iteration, the weights of the untrained U-net are optimized, and the output of the U-net in the current iteration is used to update the input of the U-net in the next iteration through the proposed RBP connection. The RBP connection strengthens the regularization effects of the DIP framework in iterative CT reconstruction, leading to improvements in accuracy. Our experiments demonstrate that the RBP-DIP framework outperforms state-of-the-art conventional iterative reconstruction (IR) methods, as well as pre-trained and untrained models with similar network structures, under multiple conditions. These improvements are particularly significant in few-view and limited-angle CT reconstruction, where the corresponding inverse problems are highly ill-posed and training data are limited. Furthermore, RBP-DIP has the potential for further improvement: most existing IR algorithms, pre-trained models, and enhancements applicable to the original DIP algorithm can also be integrated into the RBP-DIP framework.
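Feeding the back-projected residual A^T(y − Ax_k) into the next iteration, as the RBP connection does, has the same flavor as a classical Landweber update. A toy sketch on a small dense linear system (illustrative only: RBP-DIP interleaves such updates with re-optimizing an untrained U-net, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.random((12, 8))          # toy system (projection) matrix
x_true = rng.random(8)
y = A @ x_true                   # simulated consistent measurements

# Step size below 1 / sigma_max(A)^2 guarantees a stable iteration.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(8)
res_norms = []
for k in range(200):
    residual = y - A @ x
    res_norms.append(float(np.linalg.norm(residual)))
    # Residual back projected into image space drives the update.
    x = x + step * (A.T @ residual)

print(res_norms[0], "->", res_norms[-1])
```

The residual norm shrinks monotonically for a consistent system; in RBP-DIP this data-consistency signal is what keeps the untrained network anchored to the measurements instead of drifting.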
Affiliation(s)
- Ziyu Shu
- CISE Department, University of Florida, 32603, USA
8. Kim S, Park H, Park SH. A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett 2024;14:1221-1242. PMID: 39465106. PMCID: PMC11502678. DOI: 10.1007/s13534-024-00425-9.
Abstract
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time. Acceleration is achieved by acquiring fewer data points in k-space, which results in various artifacts in the image domain. Conventional reconstruction methods resolve these artifacts by utilizing multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher accelerations, driven by advances in hardware and the development of specialized network architectures. Moreover, MRI signals contain various forms of redundant information, including multi-coil, multi-contrast, and spatiotemporal redundancy. Exploiting this redundant information in combination with deep learning allows not only higher acceleration but also well-preserved details in the reconstructed images. This review therefore introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit various redundancies. Lastly, it concludes by discussing the challenges, limitations, and potential directions of future developments.
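Acceleration by acquiring fewer k-space points, and the aliasing artifacts that result, can be demonstrated in a few lines: mask k-space, zero-fill, inverse FFT. A minimal single-coil sketch on a synthetic image (real pipelines use multi-coil data; the sampling pattern and acceleration factor here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                    # synthetic "anatomy"

kspace = np.fft.fftshift(np.fft.fft2(img))  # fully sampled k-space

# Keep every 4th phase-encode line plus a fully sampled center block,
# mimicking a Cartesian undersampling pattern with calibration lines.
mask = np.zeros(64, dtype=bool)
mask[::4] = True
mask[28:36] = True                          # center (calibration) lines
undersampled = kspace * mask[:, None]

# Zero-filled reconstruction: missing lines stay zero before the iFFT.
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
err = np.abs(zero_filled - img).mean()
print(int(mask.sum()), "of 64 lines kept; zero-filled MAE:", err)
```

The nonzero error is the aliasing that deep learning-based reconstruction methods are trained to remove, using exactly the coil, contrast, and temporal redundancies the review catalogs.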
Affiliation(s)
- Seonghyuk Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sung-Hong Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
9. Kuang X, Li B, Lyu T, Xue Y, Huang H, Xie Q, Zhu W. PET image reconstruction using weighted nuclear norm maximization and deep learning prior. Phys Med Biol 2024;69:215023. PMID: 39374634. DOI: 10.1088/1361-6560/ad841d.
Abstract
The ill-posed positron emission tomography (PET) reconstruction problem usually results in limited resolution and significant noise. Recently, deep neural networks have been incorporated into the PET iterative reconstruction framework to improve image quality. In this paper, we propose a new neural network-based iterative reconstruction method using weighted nuclear norm (WNN) maximization, which aims to recover image details in the reconstruction process. The novelty of our method is the application of WNN maximization, rather than WNN minimization, in PET image reconstruction, with a neural network used to control the noise originating from WNN maximization. Our method is evaluated on simulated and clinical datasets. The simulation results show that the proposed approach outperforms state-of-the-art neural network-based iterative methods by achieving the best contrast/noise tradeoff, with a marked improvement in lesion contrast recovery. The study on clinical datasets also demonstrates that our method can recover lesions of different sizes while suppressing noise in various low-dose PET image reconstruction tasks. Our code is available at https://github.com/Kuangxd/PETReconstruction.
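Weighted nuclear norm operations act on the singular values of a (patch) matrix. The sketch below shows only the underlying SVD machinery, a WNN evaluation and the weighted singular-value shrinkage that is the proximal step of WNN *minimization*; the paper's novelty is maximizing the WNN instead, and the uniform weight scheme here is an assumption:

```python
import numpy as np

def weighted_nuclear_norm(X, weights):
    """WNN: weighted sum of the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(weights[: len(s)] * s))

def weighted_svt(X, weights):
    """Weighted singular value thresholding: shrink each singular
    value by its weight and rebuild the matrix (the proximal operator
    of WNN minimization)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - weights[: len(s)], 0.0)
    return U @ np.diag(s_shrunk) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((6, 4))
    w = np.full(4, 0.5)          # assumed uniform weights
    print(weighted_nuclear_norm(X, w),
          weighted_nuclear_norm(weighted_svt(X, w), w))
```

Shrinking singular values smooths a patch matrix toward low rank; maximizing the WNN pushes the other way, toward amplified detail, which is why the method pairs it with a learned denoiser to keep the noise in check.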
Affiliation(s)
- Xiaodong Kuang
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Bingxuan Li
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Tianling Lyu
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Yitian Xue
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Hailiang Huang
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Qingguo Xie
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Wentao Zhu
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
10. Jafaritadi M, Teuho J, Lehtonen E, Klén R, Saraste A, Levin CS. Deep generative denoising networks enhance quality and accuracy of gated cardiac PET data. Ann Nucl Med 2024;38:775-788. PMID: 38842629. DOI: 10.1007/s12149-024-01945-1.
Abstract
BACKGROUND Cardiac positron emission tomography (PET) can visualize and quantify the molecular and physiological pathways of cardiac function. However, cardiac and respiratory motion can introduce blurring that reduces PET image quality and quantitative accuracy. Dual cardiac- and respiratory-gated PET reconstruction can mitigate motion artifacts but increases noise as only a subset of data are used for each time frame of the cardiac cycle. AIM The objective of this study is to create a zero-shot image denoising framework using a conditional generative adversarial networks (cGANs) for improving image quality and quantitative accuracy in non-gated and dual-gated cardiac PET images. METHODS Our study included retrospective list-mode data from 40 patients who underwent an 18F-fluorodeoxyglucose (18F-FDG) cardiac PET study. We initially trained and evaluated a 3D cGAN-known as Pix2Pix-on simulated non-gated low-count PET data paired with corresponding full-count target data, and then deployed the model on an unseen test set acquired on the same PET/CT system including both non-gated and dual-gated PET data. RESULTS Quantitative analysis demonstrated that the 3D Pix2Pix network architecture achieved significantly (p value<0.05) enhanced image quality and accuracy in both non-gated and gated cardiac PET images. At 5%, 10%, and 15% preserved count statistics, the model increased peak signal-to-noise ratio (PSNR) by 33.7%, 21.2%, and 15.5%, structural similarity index (SSIM) by 7.1%, 3.3%, and 2.2%, and reduced mean absolute error (MAE) by 61.4%, 54.3%, and 49.7%, respectively. When tested on dual-gated PET data, the model consistently reduced noise, irrespective of cardiac/respiratory motion phases, while maintaining image resolution and accuracy. Significant improvements were observed across all gates, including a 34.7% increase in PSNR, a 7.8% improvement in SSIM, and a 60.3% reduction in MAE. 
CONCLUSION The findings of this study indicate that dual-gated cardiac PET images, which often have post-reconstruction artifacts potentially affecting diagnostic performance, can be effectively improved using a generative pre-trained denoising network.
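The gains above are reported in PSNR, SSIM, and MAE. As a quick, self-contained illustration of two of these metrics (not the authors' evaluation code; the array shape and data range below are arbitrary):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    # peak signal-to-noise ratio in dB for images scaled to [0, data_range]
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mae(reference, test):
    # mean absolute error between two images
    return float(np.mean(np.abs(reference - test)))

# a uniform error of 0.1 on a [0, 1]-range image gives PSNR = 20 dB and MAE = 0.1
ref = np.zeros((4, 4))
print(psnr(ref, ref + 0.1), mae(ref, ref + 0.1))
```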
Affiliation(s)
- Jarmo Teuho
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
- Eero Lehtonen
- Turku PET Center, University of Turku, Turku, Finland
- Riku Klén
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
- Antti Saraste
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
- Heart Center, Turku University Hospital, Turku, Finland
- Craig S Levin
- Department of Radiology, Stanford University, Stanford, CA, USA.
- Department of Physics, Stanford University, Stanford, CA, USA.
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
- Department of Bioengineering, Stanford University, Stanford, CA, USA.
11
Hashimoto F, Onishi Y, Ote K, Tashima H, Yamaya T. Two-step optimization for accelerating deep image prior-based PET image reconstruction. Radiol Phys Technol 2024; 17:776-781. [PMID: 39096446 DOI: 10.1007/s12194-024-00831-9] [Received: 06/15/2024] [Revised: 07/25/2024] [Accepted: 07/27/2024] [Indexed: 08/05/2024]
Abstract
Deep learning, particularly convolutional neural networks (CNNs), has advanced positron emission tomography (PET) image reconstruction. However, it requires extensive, high-quality training datasets. Unsupervised learning methods, such as deep image prior (DIP), have shown promise for PET image reconstruction. Although DIP-based PET image reconstruction methods demonstrate superior performance, they involve highly time-consuming calculations. This study proposed a two-step optimization method to accelerate end-to-end DIP-based PET image reconstruction and improve PET image quality. The proposed two-step method comprised a pre-training step using conditional DIP denoising, followed by an end-to-end reconstruction step with fine-tuning. Evaluations using Monte Carlo simulation data demonstrated that the proposed two-step method significantly reduced the computation time and improved the image quality, thereby rendering it a practical and efficient approach for end-to-end DIP-based PET image reconstruction.
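As a rough, hedged sketch of the two-step idea (pre-train against a noisy image, then fine-tune end-to-end against the measured data), with the paper's CNN replaced by a toy linear "network" and an invented random system matrix `A` standing in for the PET forward model:

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.random(16)                           # toy 1-D "activity image"
A = rng.standard_normal((24, 16)) / np.sqrt(24)   # stand-in projection operator
y = A @ x_true                                    # noiseless "sinogram" for simplicity
x_noisy = x_true + 0.1 * rng.standard_normal(16)  # a quick noisy reconstruction

z = rng.standard_normal(8)                        # fixed random DIP input code
z /= np.linalg.norm(z)
W = np.zeros((16, 8))                             # "network" weights: output image = W @ z

# Step 1: pre-train as a conditional denoiser (fit the noisy image)
for _ in range(500):
    r = W @ z - x_noisy
    W -= 0.05 * np.outer(r, z)

# Step 2: fine-tune end-to-end against the measured data
for _ in range(500):
    r = A @ (W @ z) - y
    W -= 0.3 * np.outer(A.T @ r, z)
```

The warm start from step 1 is what lets the end-to-end step converge in far fewer iterations than starting from scratch, which is the acceleration the abstract describes.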
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan.
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan.
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan.
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
12
Vashistha R, Vegh V, Moradi H, Hammond A, O’Brien K, Reutens D. Modular GAN: positron emission tomography image reconstruction using two generative adversarial networks. Front Radiol 2024; 4:1466498. [PMID: 39328298 PMCID: PMC11425657 DOI: 10.3389/fradi.2024.1466498] [Received: 07/18/2024] [Accepted: 08/08/2024] [Indexed: 09/28/2024]
Abstract
Introduction The reconstruction of PET images involves converting sinograms, which represent the measured counts of radioactive emissions using detector rings encircling the patient, into meaningful images. However, the quality of PET data acquisition is impacted by physical factors, photon count statistics and detector characteristics, which affect the signal-to-noise ratio, resolution and quantitative accuracy of the resulting images. To address these influences, correction methods have been developed to mitigate each of these issues separately. Recently, generative adversarial networks (GANs) based on machine learning have shown promise in learning the complex mapping between acquired PET data and reconstructed tomographic images. This study aims to investigate the properties of training images that contribute to GAN performance when non-clinical images are used for training. Additionally, we describe a method to correct common PET imaging artefacts without relying on patient-specific anatomical images. Methods The modular GAN framework includes two GANs. Module 1, resembling the Pix2pix architecture, is trained on non-clinical sinogram-image pairs. Training data are optimised by considering image properties defined by metrics. The second module utilises adaptive instance normalisation and style embedding to enhance the quality of images from Module 1. Additional perceptual and patch-based loss functions are employed in training both modules. The performance of the new framework was compared with that of existing methods (filtered backprojection (FBP) and ordered subset expectation maximisation (OSEM), without and with point spread function modelling (OSEM-PSF)) with respect to correction for attenuation, patient motion and noise in simulated, NEMA phantom and human imaging data.
Evaluation metrics included structural similarity (SSIM), peak signal-to-noise ratio (PSNR), relative root mean squared error (rRMSE) for simulated data, and contrast-to-noise ratio (CNR) for NEMA phantom and human data. Results For simulated test data, the performance of the proposed framework was both qualitatively and quantitatively superior to that of FBP and OSEM. In the presence of noise, Module 1 generated images with an SSIM of 0.48 or higher. These images exhibited coarse structures that were subsequently refined by Module 2, yielding images with an SSIM higher than 0.71 (at least 22% higher than OSEM). The proposed method was robust against noise and motion. For NEMA phantoms, it achieved higher CNR values than OSEM. For human images, the CNR in brain regions was significantly higher than that of FBP and OSEM (p < 0.05, paired t-test). The CNR of images reconstructed with OSEM-PSF was similar to that of images reconstructed using the proposed method. Conclusion The proposed image reconstruction method can produce PET images with artefact correction.
Affiliation(s)
- Rajat Vashistha
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
- Viktor Vegh
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
- Hamed Moradi
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
- Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, QLD, Australia
- Amanda Hammond
- Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, QLD, Australia
- Kieran O’Brien
- Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, QLD, Australia
- David Reutens
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
13
Yang J, Afaq A, Sibley R, McMilan A, Pirasteh A. Deep learning applications for quantitative and qualitative PET in PET/MR: technical and clinical unmet needs. MAGMA 2024:10.1007/s10334-024-01199-y. [PMID: 39167304 DOI: 10.1007/s10334-024-01199-y] [Received: 03/12/2024] [Revised: 08/06/2024] [Accepted: 08/08/2024] [Indexed: 08/23/2024]
Abstract
We aim to provide an overview of technical and clinical unmet needs in deep learning (DL) applications for quantitative and qualitative PET in PET/MR, with a focus on attenuation correction, image enhancement, motion correction, kinetic modeling, and simulated data generation. (1) DL-based attenuation correction (DLAC) remains an area of limited exploration for pediatric whole-body PET/MR and lung-specific DLAC due to data shortages and technical limitations. (2) DL-based image enhancement approximating MR-guided regularized reconstruction with a high-resolution MR prior has shown promise in enhancing PET image quality. However, its clinical value has not been thoroughly evaluated across various radiotracers, and applications outside the head may pose challenges due to motion artifacts. (3) Robust training for DL-based motion correction requires pairs of motion-corrupted and motion-corrected PET/MR data. However, these pairs are rare. (4) DL-based approaches can address the limitations of dynamic PET, such as long scan durations that may cause patient discomfort and motion, providing new research opportunities. (5) Monte-Carlo simulations using anthropomorphic digital phantoms can provide extensive datasets to address the shortage of clinical data. This summary of technical/clinical challenges and potential solutions may provide research opportunities for the research community towards the clinical translation of DL solutions.
Affiliation(s)
- Jaewon Yang
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA.
- Asim Afaq
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Robert Sibley
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Alan McMilan
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
- Ali Pirasteh
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
14
Wu J, Jiang X, Zhong L, Zheng W, Li X, Lin J, Li Z. Linear diffusion noise boosted deep image prior for unsupervised sparse-view CT reconstruction. Phys Med Biol 2024; 69:165029. [PMID: 39119998 DOI: 10.1088/1361-6560/ad69f7] [Received: 04/14/2024] [Accepted: 07/31/2024] [Indexed: 08/10/2024]
Abstract
Objective. Deep learning has markedly enhanced the performance of sparse-view computed tomography reconstruction. However, the dependence of these methods on supervised training using high-quality paired datasets, and the necessity for retraining under varied physical acquisition conditions, constrain their generalizability across new imaging contexts and settings. Approach. To overcome these limitations, we propose an unsupervised approach grounded in the deep image prior framework. Our approach advances beyond the conventional single noise level input by incorporating multi-level linear diffusion noise, significantly mitigating the risk of overfitting. Furthermore, we embed non-local self-similarity as a deep implicit prior within a self-attention network structure, improving the model's capability to identify and utilize repetitive patterns throughout the image. Additionally, leveraging imaging physics, gradient backpropagation is performed between the image domain and projection data space to optimize network weights. Main Results. Evaluations with both simulated and clinical cases demonstrate our method's effective zero-shot adaptability across various projection views, highlighting its robustness and flexibility. Additionally, our approach effectively eliminates noise and streak artifacts while significantly restoring intricate image details. Significance. Our method aims to overcome the limitations in current supervised deep learning-based sparse-view CT reconstruction, offering improved generalizability and adaptability without the need for extensive paired training data.
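One plausible reading of the multi-level linear diffusion noise input, sketched under stated assumptions (the linear schedule, code length, and level count below are invented for illustration, not taken from the paper): instead of feeding the network one fixed random code, the input is blended with fresh noise at several levels, so the network never overfits a single input.

```python
import numpy as np

rng = np.random.default_rng(0)
z0 = rng.standard_normal(64)          # base input code for the network

def diffused_input(z, t, num_levels=10, rng=rng):
    # linear diffusion schedule: at level t the input is a blend of the
    # original code and fresh noise; alpha = 1 at t = 0 (clean input),
    # alpha = 0 at t = num_levels (pure noise)
    alpha = 1.0 - t / num_levels
    return alpha * z + (1.0 - alpha) * rng.standard_normal(z.shape)

# one noisy input per level, e.g. to cycle through during DIP optimization
inputs = [diffused_input(z0, t) for t in range(10)]
```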
Affiliation(s)
- Jia Wu
- School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- School of Medical Information and Engineering, Southwest Medical University, Luzhou 646000, People's Republic of China
- Xiaoming Jiang
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Lisha Zhong
- School of Medical Information and Engineering, Southwest Medical University, Luzhou 646000, People's Republic of China
- Wei Zheng
- Key Laboratory of Big Data Intelligent Computing, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Xinwei Li
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Jinzhao Lin
- School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Zhangyong Li
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
15
Dong S, Shewarega A, Chapiro J, Cai Z, Hyder F, Coman D, Duncan JS. High-resolution extracellular pH imaging of liver cancer with multiparametric MR using Deep Image Prior. NMR Biomed 2024; 37:e5145. [PMID: 38488205 DOI: 10.1002/nbm.5145] [Received: 11/06/2023] [Revised: 02/19/2024] [Accepted: 02/20/2024] [Indexed: 07/11/2024]
Abstract
Noninvasive extracellular pH (pHe) mapping with Biosensor Imaging of Redundant Deviation in Shifts (BIRDS) using MR spectroscopic imaging (MRSI) has been demonstrated on 3T clinical MR scanners at 8 × 8 × 10 mm3 spatial resolution and applied to study various liver cancer treatments. Although pHe imaging at higher resolution can be achieved by extending the acquisition time, a postprocessing method to increase the resolution is preferable, to minimize the duration spent by the subject in the MR scanner. In this work, we propose to improve the spatial resolution of pHe mapping with BIRDS by incorporating anatomical information in the form of multiparametric MRI and using an unsupervised deep-learning technique, Deep Image Prior (DIP). Specifically, we used high-resolution T1, T2, and diffusion-weighted imaging (DWI) MR images of rabbits with VX2 liver tumors as inputs to a U-Net architecture to provide anatomical information. U-Net parameters were optimized to minimize the difference between the output super-resolution image and the experimentally acquired low-resolution pHe image using the mean-absolute error. In this way, the super-resolution pHe image would be consistent with both anatomical MR images and the low-resolution pHe measurement from the scanner. The method was developed based on data from 49 rabbits implanted with VX2 liver tumors. For evaluation, we also acquired high-resolution pHe images from two rabbits, which were used as ground truth. The results indicate a good match between the spatial characteristics of the super-resolution images and the high-resolution ground truth, supported by the low pixelwise absolute error.
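A toy sketch of the data-consistency part of this scheme, with the U-Net and the T1/T2/DWI inputs replaced by a random stand-in "anatomical" image and plain subgradient descent on the mean-absolute error; all names and sizes below are invented for illustration:

```python
import numpy as np

def down(x, f=2):
    # average-pooling "acquisition" linking the high-res and low-res grids
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

rng = np.random.default_rng(1)
anatomy = rng.random((8, 8))              # stand-in high-resolution anatomical image
ph_lowres = 3.0 + 0.5 * down(anatomy)     # toy low-resolution pHe measurement

x = anatomy.copy()                        # initialize the estimate from anatomy
for _ in range(400):
    g = np.sign(down(x) - ph_lowres)      # subgradient of the mean-absolute error
    x -= 0.05 * np.kron(g, np.ones((2, 2))) / 4.0
```

Because each update is constant within a low-resolution cell, the within-cell detail inherited from the anatomical initialization is preserved while the cell means are pulled toward the measured low-resolution values, which is the spirit of anatomically guided super-resolution.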
Affiliation(s)
- Siyuan Dong
- Department of Electrical Engineering, Yale University, New Haven, Connecticut, USA
- Annabella Shewarega
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Zhuotong Cai
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, USA
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Fahmeed Hyder
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, USA
- Daniel Coman
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, USA
- James S Duncan
- Department of Electrical Engineering, Yale University, New Haven, Connecticut, USA
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, USA
16
Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Trans Image Process 2024; 33:4075-4089. [PMID: 38941203 DOI: 10.1109/tip.2024.3418347] [Indexed: 06/30/2024]
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks may harness the power of deep learning in this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method by using neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be efficiently solved by utilizing the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes: PET activity image update; gCT image update; and least-square neural-network learning in the gCT image domain. This algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data and real patient data have demonstrated that the proposed method can significantly improve gCT image quality and consequent multi-material decomposition as compared to other methods.
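The monotonic-likelihood guarantee follows the standard optimization-transfer (majorize-minimize) argument; a generic sketch of that argument, not the paper's specific quadratic surrogate:

```latex
Given the current iterate $x^n$, construct a surrogate $Q(x; x^n)$ satisfying
\[
  L(x) - L(x^n) \;\ge\; Q(x; x^n) - Q(x^n; x^n) \qquad \text{for all } x,
\]
with equality at $x = x^n$. Choosing
\[
  x^{n+1} = \arg\max_x \, Q(x; x^n)
\]
yields
\[
  L(x^{n+1}) \;\ge\; L(x^n) + Q(x^{n+1}; x^n) - Q(x^n; x^n) \;\ge\; L(x^n),
\]
so each iteration cannot decrease the data likelihood $L$.
```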
17
Liu Q, Tsai YJ, Gallezot JD, Guo X, Chen MK, Pucar D, Young C, Panin V, Casey M, Miao T, Xie H, Chen X, Zhou B, Carson R, Liu C. Population-based deep image prior for dynamic PET denoising: A data-driven approach to improve parametric quantification. Med Image Anal 2024; 95:103180. [PMID: 38657423 DOI: 10.1016/j.media.2024.103180] [Received: 09/11/2023] [Revised: 04/02/2024] [Accepted: 04/12/2024] [Indexed: 04/26/2024]
Abstract
The high noise level of dynamic Positron Emission Tomography (PET) images degrades the quality of parametric images. In this study, we aim to improve the quality and quantitative accuracy of Ki images by utilizing deep learning techniques to reduce the noise in dynamic PET images. We propose a novel denoising technique, Population-based Deep Image Prior (PDIP), which integrates population-based prior information into the optimization process of Deep Image Prior (DIP). Specifically, the population-based prior image is generated from a supervised denoising model that is trained on a prompts-matched static PET dataset comprising 100 clinical studies. The 3D U-Net architecture is employed for both the supervised model and the following DIP optimization process. We evaluated the efficacy of PDIP for noise reduction in 25%-count and 100%-count dynamic PET images from 23 patients by comparing with two other baseline techniques: the Prompts-matched Supervised model (PS) and a conditional DIP (CDIP) model that employs the mean static PET image as the prior. Both the PS and CDIP models show effective noise reduction but result in smoothing and removal of small lesions. In addition, the utilization of a single static image as the prior in the CDIP model also introduces a similar tracer distribution to the denoised dynamic frames, leading to lower Ki in general as well as incorrect Ki in the descending aorta. By contrast, as the proposed PDIP model utilizes intrinsic image features from the dynamic dataset and a large clinical static dataset, it not only achieves comparable noise reduction as the supervised and CDIP models but also improves lesion Ki predictions.
Affiliation(s)
- Qiong Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Yu-Jung Tsai
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Darko Pucar
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Colin Young
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Michael Casey
- Siemens Medical Solutions USA, Inc., Knoxville, TN, USA
- Tianshun Miao
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Richard Carson
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA.
18
Lee J, Seo H, Lee W, Park H. Unsupervised motion artifact correction of turbo spin-echo MRI using deep image prior. Magn Reson Med 2024; 92:28-42. [PMID: 38282279 DOI: 10.1002/mrm.30026] [Received: 06/20/2023] [Revised: 12/13/2023] [Accepted: 01/09/2024] [Indexed: 01/30/2024]
Abstract
PURPOSE In MRI, motion artifacts can significantly degrade image quality. Motion artifact correction methods using deep neural networks usually require extensive training on large datasets, making them time-consuming and resource-intensive. In this paper, an unsupervised deep learning-based motion artifact correction method for turbo-spin echo MRI is proposed using the deep image prior framework. THEORY AND METHODS The proposed approach takes advantage of the high impedance to motion artifacts offered by the neural network parameterization to remove motion artifacts in MR images. The framework consists of parameterization of the MR image, automatic spatial transformation, and a motion simulation model. The proposed method synthesizes motion-corrupted images from the motion-corrected images generated by the convolutional neural network, where an optimization process minimizes the objective function between the synthesized images and the acquired images. RESULTS In the simulation study of 280 slices from 14 subjects, the proposed method showed a significant increase in the averaged structural similarity index measure by 0.2737 in individual coil images and by 0.4550 in the root-sum-of-square images. In addition, the ablation study demonstrated the effectiveness of each proposed component in correcting motion artifacts compared to the corrected images produced by the baseline method. Experiments on a real motion dataset have shown its clinical potential. CONCLUSION The proposed method exhibited significant quantitative and qualitative improvements in correcting rigid and in-plane motion artifacts in MR images acquired using a turbo spin-echo sequence.
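A minimal 1-D sketch of the simulate-and-match loop at the core of this framework: the convolutional network, spatial transformer, and k-space model are replaced by a direct image estimate and an invented two-point motion kernel, so this is only the skeleton of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
x_true = rng.random(n)               # motion-free "image" (1-D for simplicity)

def corrupt(x):
    # toy motion model: mixture of the image and a shifted copy of itself
    return 0.7 * x + 0.3 * np.roll(x, 1)

acquired = corrupt(x_true)           # what the scanner actually recorded

x = np.zeros(n)                      # motion-corrected estimate (the "network output")
for _ in range(2000):
    r = corrupt(x) - acquired        # synthesize a corrupted image, compare to acquired
    x -= 0.5 * (0.7 * r + 0.3 * np.roll(r, -1))   # adjoint of the motion model
```

Because the forward motion model here is known and invertible, gradient descent recovers the motion-free signal; in the paper the motion parameters are additionally estimated via the spatial transformation module.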
Affiliation(s)
- Jongyeon Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Hyunseok Seo
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Wonil Lee
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
19
Wang F, Wang R, Qiu H. Low-dose CT reconstruction using dataset-free learning. PLoS One 2024; 19:e0304738. [PMID: 38875181 PMCID: PMC11178168 DOI: 10.1371/journal.pone.0304738] [Received: 03/12/2024] [Accepted: 05/16/2024] [Indexed: 06/16/2024]
Abstract
Low-dose computed tomography (LDCT) is an ideal alternative to reduce radiation risk in clinical applications. Although supervised-deep-learning-based reconstruction methods have demonstrated superior performance compared to conventional model-driven reconstruction algorithms, they require collecting massive pairs of low-dose and normal-dose CT images for neural network training, which limits their practical application in LDCT imaging. In this paper, we propose an unsupervised, training-data-free learning reconstruction method for LDCT imaging. The proposed method is a post-processing technique that aims to enhance the initial low-quality reconstruction results, and it reconstructs the high-quality images by neural network training that minimizes the ℓ1-norm distance between the CT measurements and their corresponding simulated sinogram data, as well as the total variation (TV) value of the reconstructed image. Moreover, the proposed method does not require setting the weights for the data fidelity term and the penalty term. Experimental results on the AAPM challenge data and LoDoPab-CT data demonstrate that the proposed method is able to effectively suppress the noise and preserve the tiny structures. These results also demonstrate the rapid convergence and low computational cost of the proposed method. The source code is available at https://github.com/linfengyu77/IRLDCT.
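The objective described above combines an ℓ1 sinogram-fidelity term with total variation. A hedged sketch of that objective, with a crude two-view projector standing in for the real forward model (the paper's weighting-free scheme is not reproduced; all names are illustrative):

```python
import numpy as np

def project(img):
    # crude two-view stand-in for the sinogram simulation: column and row sums
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def total_variation(img):
    # anisotropic total variation of a 2-D image
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def objective(img, measured):
    # l1-norm data fidelity plus the TV penalty, the form minimized in the paper
    return np.abs(project(img) - measured).sum() + total_variation(img)
```

For a ground-truth image evaluated against its own projections the data term vanishes, leaving only the TV penalty, which is the sanity check one would run first.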
Affiliation(s)
- Feng Wang
- College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
- Renfang Wang
- College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
- Hong Qiu
- College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
20
Jang SI, Pan T, Li Y, Heidari P, Chen J, Li Q, Gong K. Spach Transformer: Spatial and Channel-Wise Transformer Based on Local and Global Self-Attentions for PET Image Denoising. IEEE Trans Med Imaging 2024; 43:2036-2049. [PMID: 37995174 PMCID: PMC11111593 DOI: 10.1109/tmi.2023.3336237] [Indexed: 11/25/2023]
Abstract
Positron emission tomography (PET) is widely used in clinics and research due to its quantitative merits and high sensitivity, but suffers from a low signal-to-noise ratio (SNR). Recently, convolutional neural networks (CNNs) have been widely used to improve PET image quality. Though successful and efficient in local feature extraction, CNNs cannot capture long-range dependencies well due to their limited receptive field. Global multi-head self-attention (MSA) is a popular approach to capture long-range information. However, the calculation of global MSA for 3D images has high computational costs. In this work, we proposed an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs. Experiments based on datasets of different PET tracers, i.e., 18F-FDG, 18F-ACBC, 18F-DCFPyL, and 68Ga-DOTATATE, were conducted to evaluate the proposed framework. Quantitative results show that the proposed Spach Transformer framework outperforms state-of-the-art deep learning architectures.
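Global MSA scores every token against every other token, which is why it is costly for 3D volumes (the score matrix is quadratic in the token count). A minimal single-head sketch of scaled dot-product self-attention; the shapes and weights are illustrative, not the Spach Transformer itself:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    # single-head scaled dot-product self-attention over a token sequence x: (n, d)
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])    # (n, n): quadratic in token count
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
n, d = 6, 4                                    # 6 tokens of dimension 4
x = rng.standard_normal((n, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
```

Window-limited (local) MSA restricts the score matrix to small neighborhoods, trading global context for cost, which is the tradeoff the Spach design balances.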
21
Yang B, Gong K, Liu H, Li Q, Zhu W. Anatomically Guided PET Image Reconstruction Using Conditional Weakly-Supervised Multi-Task Learning Integrating Self-Attention. IEEE Trans Med Imaging 2024; 43:2098-2112. [PMID: 38241121 DOI: 10.1109/tmi.2024.3356189] [Indexed: 01/21/2024]
Abstract
To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy, in which an auxiliary task is introduced serving as an anatomical regularizer for the PET reconstruction main task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to the maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.
22. Wang L, Wang Q, Wang X, Ma Y, Zhang L, Liu M. Triplet-constrained deep hashing for chest X-ray image retrieval in COVID-19 assessment. Neural Netw 2024; 173:106182. PMID: 38387203; DOI: 10.1016/j.neunet.2024.106182
Abstract
Radiology images of the chest, such as computer tomography scans and X-rays, have been prominently used in computer-aided COVID-19 analysis. Learning-based radiology image retrieval has attracted increasing attention recently, which generally involves image feature extraction and finding matches in extensive image databases based on query images. Many deep hashing methods have been developed for chest radiology image search due to the high efficiency of retrieval using hash codes. However, they often overlook the triplet associations among images; that is, images belonging to the same category tend to share similar characteristics and vice versa. To address this, we develop a triplet-constrained deep hashing (TCDH) framework for chest radiology image retrieval to facilitate automated analysis of COVID-19. The TCDH consists of two phases, including (a) feature extraction and (b) image retrieval. For feature extraction, we have introduced a triplet constraint and an image reconstruction task to enhance the discriminative ability of learned features, and these features are then converted into binary hash codes to capture semantic information. Specifically, the triplet constraint is designed to pull closer samples within the same category and push apart samples from different categories. Additionally, an auxiliary image reconstruction task is employed during feature extraction to help effectively capture anatomical structures of images. For image retrieval, we utilize learned hash codes to conduct searches for medical images. Extensive experiments on 30,386 chest X-ray images demonstrate the superiority of the proposed method over several state-of-the-art approaches in automated image search. The code is now available online.
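The triplet constraint summarized in this abstract, pulling same-category samples together and pushing different-category samples apart by at least a margin, is a standard construction. A minimal numerical sketch follows; the toy feature vectors and margin are illustrative assumptions, not the TCDH implementation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: encourages d(anchor, positive) + margin <= d(anchor, negative)."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to a same-category sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to a different-category sample
    return max(d_pos - d_neg + margin, 0.0)

# Toy 4-D feature vectors (hypothetical, for illustration only)
a = np.array([1.0, 0.0, 0.0, 0.0])
p = np.array([0.9, 0.1, 0.0, 0.0])  # close to the anchor -> small d_pos
n = np.array([0.0, 0.0, 1.0, 0.0])  # far from the anchor -> large d_neg

loss = triplet_loss(a, p, n)
```

The loss is zero once the negative is at least `margin` farther from the anchor than the positive; in TCDH this constraint shapes the deep features before they are binarized into hash codes.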
Affiliation(s)
- Linmin Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Qianqian Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Xiaochuan Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Yunling Ma
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Limei Zhang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, Shandong, 250101, China
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
23. Cheng L, Lyu Z, Liu H, Wu J, Jia C, Wu Y, Ji Y, Jiang N, Ma T, Liu Y. Efficient image reconstruction for a small animal PET system with dual-layer-offset detector design. Med Phys 2024; 51:2772-2787. PMID: 37921396; DOI: 10.1002/mp.16814
Abstract
BACKGROUND A compact PET/SPECT/CT system, Inliview-3000B, has been developed to provide multi-modality information on small animals for biomedical research. Its PET subsystem employs a dual-layer-offset detector design for depth-of-interaction capability and higher detection efficiency, but the irregular geometry complicates the calculation of normalization factors and the sensitivity map. In addition, the relatively large (2 mm) crystal cross-section also poses a challenge to high-resolution image reconstruction. PURPOSE We present an efficient image reconstruction method to achieve high imaging performance for the PET subsystem of Inliview-3000B. METHODS List-mode reconstruction with efficient system modeling was used for PET imaging. We adopt an on-the-fly multi-ray tracing method with random crystal sampling to model solid angle, crystal penetration, and object attenuation effects, and modify the system response model during each iteration to improve reconstruction performance and computational efficiency. For normalization correction before reconstruction, we estimate crystal efficiency with a novel iterative approach that combines measured cylinder phantom data with simulated line-of-response (LOR)-based factors. Because normalization factors and the sensitivity map must be calculated, we stack the two crystal layers together and extend the conventional data organization method to index all useful LORs. Simulations and experiments were performed to demonstrate the feasibility and advantage of the proposed method. RESULTS Simulation results showed that the iterative algorithm for crystal efficiency estimation achieves good accuracy. NEMA image quality phantom studies demonstrated the superiority of random sampling, which achieves good imaging performance with much less computation than traditional uniform sampling. In the spatial resolution evaluation based on the mini-Derenzo phantom, 1.1 mm hot rods could be identified with the proposed reconstruction method. Reconstructions of two mice and a rat showed good spatial resolution and a high signal-to-noise ratio, and organs with higher uptake were recognized well. CONCLUSION The results validated the benefit of introducing randomness into reconstruction and demonstrated its reliability for high-performance imaging. The Inliview-3000B PET subsystem with the proposed image reconstruction can provide rich and detailed information on small animals for preclinical research.
Affiliation(s)
- Li Cheng
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Zhenlei Lyu
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Hui Liu
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Jing Wu
- Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing, China
- Chao Jia
- Beijing Novel Medical Equipment Ltd, Beijing, China
- Yuanguang Wu
- Beijing Novel Medical Equipment Ltd, Beijing, China
- Yingcai Ji
- Beijing Novel Medical Equipment Ltd, Beijing, China
- Tianyu Ma
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Yaqiang Liu
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
24. Li Y, Feng J, Xiang J, Li Z, Liang D. AIRPORT: A Data Consistency Constrained Deep Temporal Extrapolation Method To Improve Temporal Resolution In Contrast Enhanced CT Imaging. IEEE Trans Med Imaging 2024; 43:1605-1618. PMID: 38133967; DOI: 10.1109/tmi.2023.3344712
Abstract
Typical tomographic image reconstruction methods require that the imaged object is static and stationary during the time window needed to acquire a minimally complete data set. The violation of this requirement leads to temporal-averaging errors in the reconstructed images. For a fixed gantry rotation speed, to reduce the errors, it is desired to reconstruct images using data acquired over a narrower angular range, i.e., with a higher temporal resolution. However, image reconstruction with a narrower angular range violates the data sufficiency condition, resulting in severe data-insufficiency-induced errors. The purpose of this work is to decouple the trade-off between these two types of errors in contrast-enhanced computed tomography (CT) imaging. We demonstrated that using the developed data consistency constrained deep temporal extrapolation method (AIRPORT), the entire time-varying imaged object can be accurately reconstructed with a temporal resolution of 40 frames per second, the time window needed to acquire a single projection view with a typical C-arm cone-beam CT system. AIRPORT is applicable to general non-sparse imaging tasks using a single short-scan data acquisition.
25. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. PMID: 38319563; PMCID: PMC10902118; DOI: 10.1007/s12194-024-00780-3
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
26. Wang S, Liu B, Xie F, Chai L. An iterative reconstruction algorithm for unsupervised PET image. Phys Med Biol 2024; 69:055025. PMID: 38346340; DOI: 10.1088/1361-6560/ad2882
Abstract
Objective. In recent years, convolutional neural networks (CNNs) have shown great potential in positron emission tomography (PET) image reconstruction. However, most of them rely on many low-quality and high-quality reference PET image pairs for training, which are not always feasible in clinical practice. On the other hand, many works improve the quality of PET image reconstruction by adding explicit regularization or optimizing the network structure, which may lead to complex optimization problems. Approach. In this paper, we develop a novel iterative reconstruction algorithm by integrating the deep image prior (DIP) framework, which only needs the prior information (e.g. MRI) and sinogram data of patients. Specifically, we construct the objective function as a constrained optimization problem and utilize existing PET image reconstruction packages to streamline calculations. Moreover, to further improve both reconstruction quality and speed, we introduce Nesterov acceleration and a restart mechanism in each iteration. Main results. 2D experiments on PET data sets based on computer simulations and real patients demonstrate that our proposed algorithm can outperform the existing MLEM-GF, KEM and DIPRecon methods. Significance. Unlike traditional CNN methods, the proposed algorithm does not rely on large data sets, but only leverages inter-patient information. Furthermore, we enhance reconstruction performance by optimizing the iterative algorithm. Notably, the proposed method does not require much modification of the basic algorithm, allowing for easy integration into standard implementations.
Affiliation(s)
- Siqi Wang
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Bing Liu
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Furan Xie
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Li Chai
- College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, People's Republic of China
27. Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024; 69:025001. PMID: 38086073; DOI: 10.1088/1361-6560/ad14c5
Abstract
Objective. PET (positron emission tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about radiation exposure and patient comfort. Reductions in radiotracer dosage and acquisition time can lower the potential risk and improve patient comfort, respectively, but both also reduce photon counts and hence degrade image quality. Therefore, it is of interest to improve the quality of low-dose PET images. Approach. A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracts features from PET and CT images in two separate branches and then fuses the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in CT images. Main results. We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and reconstructed tumors more faithfully, but also performed better in preserving edges and suppressing noise and artifacts. Significance. The quantitative metrics and qualitative displays demonstrate that the proposed M3S-Net can generate high-quality PET images from low-dose ones, which are comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
Affiliation(s)
- Dong Wang
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Chong Jiang
- Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
- Jian He
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Yue Teng
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Hourong Qin
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
- Jijun Liu
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
28. Hirata K, Watanabe S, Kitagawa Y, Kudo K. A Review of Hypoxia Imaging Using 18F-Fluoromisonidazole Positron Emission Tomography. Methods Mol Biol 2024; 2755:133-140. PMID: 38319574; DOI: 10.1007/978-1-0716-3633-6_9
Abstract
Tumor hypoxia is an essential factor related to malignancy, prognosis, and resistance to treatment. Positron emission tomography (PET) is a modality that visualizes the distribution of radiopharmaceuticals administered into the body. PET imaging with [18F]fluoromisonidazole ([18F]FMISO) identifies hypoxic tissues. Unlike [18F]fluorodeoxyglucose ([18F]FDG)-PET, [18F]FMISO-PET does not require fasting, but the waiting time from injection to image acquisition needs to be relatively long (e.g., 2-4 h). [18F]FMISO-PET images can be displayed on an ordinary commercial viewer on a personal computer (PC). While visual assessment is fundamental, various quantitative indices such as the tumor-to-muscle ratio have also been proposed. Several novel hypoxia tracers have been invented to compensate for the limitations of [18F]FMISO.
Affiliation(s)
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Shiro Watanabe
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Yoshimasa Kitagawa
- Oral Diagnosis and Medicine, Department of Oral Pathobiological Science, Graduate School of Dental Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Kohsuke Kudo
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
29. Galve P, Rodriguez-Vila B, Herraiz J, García-Vázquez V, Malpica N, Udias J, Torrado-Carvajal A. Recent advances in combined Positron Emission Tomography and Magnetic Resonance Imaging. J Instrum 2024; 19:C01001. DOI: 10.1088/1748-0221/19/01/c01001
Abstract
Hybrid imaging modalities combine two or more medical imaging techniques, offering exciting new possibilities to image the structure, function and biochemistry of the human body in far greater detail than previously possible and thereby improve patient diagnosis. In this context, simultaneous Positron Emission Tomography and Magnetic Resonance (PET/MR) imaging offers highly complementary information, but it also poses challenges in hardware and software compatibility. The PET signal may interfere with the MR magnetic field and vice versa, imposing several constraints on the PET instrumentation in PET/MR systems. Additionally, anatomical maps are needed to properly apply attenuation and scatter corrections to the reconstructed PET images, as well as motion estimates to minimize the effects of movement throughout the acquisition. In this review, we summarize the instrumentation implemented in modern PET scanners to overcome these limitations, describing the historical development of hybrid PET/MR scanners. We pay special attention to the methods used in PET to achieve attenuation, scatter and motion correction when it is combined with MR, and how both imaging modalities may be combined in PET image reconstruction algorithms.
30. Zhang Q, Hu Y, Zhao Y, Cheng J, Fan W, Hu D, Shi F, Cao S, Zhou Y, Yang Y, Liu X, Zheng H, Liang D, Hu Z. Deep Generalized Learning Model for PET Image Reconstruction. IEEE Trans Med Imaging 2024; 43:122-134. PMID: 37428658; DOI: 10.1109/tmi.2023.3293836
Abstract
Low-count positron emission tomography (PET) imaging is challenging because of the ill-posedness of this inverse problem. Previous studies have demonstrated that deep learning (DL) holds promise for achieving improved low-count PET image quality. However, almost all data-driven DL methods suffer from fine structure degradation and blurring effects after denoising. Incorporating DL into the traditional iterative optimization model can effectively improve its image quality and recover fine structures, but little research has considered the full relaxation of the model, leaving the performance of this hybrid model insufficiently exploited. In this paper, we propose a learning framework that deeply integrates DL and an alternating direction method of multipliers (ADMM)-based iterative optimization model. The innovative feature of this method is that we break the inherent forms of the fidelity operators and use neural networks to process them. The regularization term is deeply generalized. The proposed method is evaluated on simulated data and real data. Both the qualitative and quantitative results show that our proposed neural network method can outperform partial operator expansion-based neural network methods, neural network denoising methods and traditional methods.
31. Zheng Y, Frame E, Caravaca J, Gullberg GT, Vetter K, Seo Y. A generalization of the maximum likelihood expectation maximization (MLEM) method: Masked-MLEM. Phys Med Biol 2023; 68. PMID: 37918026; PMCID: PMC10819675; DOI: 10.1088/1361-6560/ad0900
Abstract
Objective. In our previous work on image reconstruction for single-layer collimatorless scintigraphy, we developed the min-min weighted robust least squares (WRLS) optimization algorithm to address the challenge of reconstructing images when both the system matrix and the projection data are uncertain. Whereas the WRLS algorithm has been successful in two-dimensional (2D) reconstruction, expanding it to three-dimensional (3D) reconstruction is difficult since the WRLS optimization problem is neither smooth nor strongly convex. To overcome these difficulties and achieve robust image reconstruction in the presence of system uncertainties and projection noise, we propose a generalized iterative method based on the maximum likelihood expectation maximization (MLEM) algorithm, hereinafter referred to as the Masked-MLEM algorithm. Approach. In the Masked-MLEM algorithm, only selected subsets ('masks') from the system matrix and the projection contribute to the image update to satisfy the constraints imposed by the system uncertainties. We validate the Masked-MLEM algorithm and compare it to the standard MLEM algorithm using experimental data obtained from both collimated and uncollimated imaging instruments, including parallel-hole collimated SPECT, 2D collimatorless scintigraphy, and 3D collimatorless tomography. Additionally, we conduct comprehensive Monte Carlo simulations for 3D collimatorless tomography to further validate the effectiveness of the Masked-MLEM algorithm in handling different levels of system uncertainties. Main results. The Masked-MLEM and standard MLEM reconstructions are similar in cases with negligible system uncertainties, whereas the Masked-MLEM algorithm outperforms the standard MLEM algorithm when the system matrix is an approximation. Importantly, the Masked-MLEM algorithm ensures reliable image reconstruction across varying levels of system uncertainties. Significance. With a good choice of system uncertainty and without requiring accurate knowledge of the actual system matrix, the Masked-MLEM algorithm yields more robust image reconstruction than the standard MLEM algorithm, effectively reducing the likelihood of erroneously reconstructing higher activities in regions without radioactive sources.
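The core idea, an MLEM-style multiplicative update in which only trusted subsets of the system matrix and projection contribute, can be illustrated with a toy example. This is not the authors' implementation: the 2-voxel system, the mask choice, and the `masked_mlem` helper below are all hypothetical.

```python
import numpy as np

def masked_mlem(A, y, mask, n_iter=50, eps=1e-12):
    """MLEM update restricted to a trusted subset of projection bins.

    A    : (n_bins, n_voxels) system matrix (possibly an approximation)
    y    : (n_bins,) measured projection
    mask : (n_bins,) boolean, True where the model/data pair is trusted
    """
    Am = A[mask]           # masked rows of the system matrix
    ym = y[mask]           # masked projection data
    sens = Am.sum(axis=0)  # sensitivity image of the masked system
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        proj = Am @ x                          # forward project current estimate
        ratio = ym / np.maximum(proj, eps)     # measured / estimated counts
        x = x / np.maximum(sens, eps) * (Am.T @ ratio)  # MLEM multiplicative update
    return x

# Toy 2-voxel, 3-bin problem; pretend the 3rd bin suffers gross model error
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([2.0, 4.0])
y = A @ x_true
y[2] += 5.0                          # corrupt the untrusted bin
mask = np.array([True, True, False])
x_hat = masked_mlem(A, y, mask)      # ~[2, 4]
```

Excluding the corrupted bin lets the update recover the true activity, whereas running the same update with all bins trusted inflates the total reconstructed activity, the failure mode the Masked-MLEM paper aims to suppress.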
Affiliation(s)
- Yifan Zheng
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA
- Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA
- Emily Frame
- Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA
- Javier Caravaca
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA
- Grant T. Gullberg
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA
- Kai Vetter
- Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA
- Applied Nuclear Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94502, USA
- Youngho Seo
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA
- Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA
32. Hellwig D, Hellwig NC, Boehner S, Fuchs T, Fischer R, Schmidt D. Artificial Intelligence and Deep Learning for Advancing PET Image Reconstruction: State-of-the-Art and Future Directions. Nuklearmedizin 2023; 62:334-342. PMID: 37995706; PMCID: PMC10689088; DOI: 10.1055/a-2198-0358
Abstract
Positron emission tomography (PET) is vital for diagnosing diseases and monitoring treatments. Conventional image reconstruction (IR) techniques like filtered backprojection and iterative algorithms are powerful but face limitations. PET IR can be seen as an image-to-image translation. Artificial intelligence (AI) and deep learning (DL) using multilayer neural networks enable a new approach to this computer vision task. This review aims to provide mutual understanding for nuclear medicine professionals and AI researchers. We outline fundamentals of PET imaging as well as the state-of-the-art in AI-based PET IR with its typical algorithms and DL architectures. Advances improve resolution and contrast recovery, reduce noise, and remove artifacts via inferred attenuation and scatter correction, sinogram inpainting, denoising, and super-resolution refinement. Kernel priors support list-mode reconstruction, motion correction, and parametric imaging. Hybrid approaches combine AI with conventional IR. Challenges of AI-assisted PET IR include the availability of training data, cross-scanner compatibility, and the risk of hallucinated lesions. The need for rigorous evaluations, including quantitative phantom validation and visual comparison of diagnostic accuracy against conventional IR, is highlighted along with regulatory issues. The first approved AI-based applications are clinically available, and their impact is foreseeable. Emerging trends, such as the integration of multimodal imaging and the use of data from previous imaging visits, highlight future potentials. Continued collaborative research promises significant improvements in image quality, quantitative accuracy, and diagnostic performance, ultimately leading to the integration of AI-based IR into routine PET imaging protocols.
Affiliation(s)
- Dirk Hellwig
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Nils Constantin Hellwig
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Steven Boehner
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Timo Fuchs
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Regina Fischer
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Daniel Schmidt
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
33. Hellström M, Löfstedt T, Garpebring A. Denoising and uncertainty estimation in parameter mapping with approximate Bayesian deep image priors. Magn Reson Med 2023; 90:2557-2571. PMID: 37582257; DOI: 10.1002/mrm.29823
Abstract
PURPOSE To mitigate the problem of noisy parameter maps with high uncertainties by casting parameter mapping as a denoising task based on Deep Image Priors. METHODS We extend the concept of denoising with Deep Image Prior (DIP) into parameter mapping by treating the output of an image-generating network as a parametrization of tissue parameter maps. The method implicitly denoises the parameter mapping process by filtering low-level image features with an untrained convolutional neural network (CNN). Our implementation includes uncertainty estimation from Bernoulli approximate variational inference, implemented with MC dropout, which provides model uncertainty in each voxel of the denoised parameter maps. The method is modular, so the specifics of different applications (e.g., T1 mapping) separate into application-specific signal equation blocks. We evaluate the method on variable flip angle T1 mapping, multi-echo T2 mapping, and apparent diffusion coefficient mapping. RESULTS We found that the deep image prior adapts successfully to several applications in parameter mapping. In all evaluations, the method produces noise-reduced parameter maps with decreased uncertainty compared to conventional methods. The downsides of the proposed method are the long computational time and the introduction of some bias from the denoising prior. CONCLUSION DIP successfully denoises the parameter mapping process and applies to several applications with limited hyperparameter tuning. Further, it is easy to implement, since DIP methods do not require network training data. Although time-consuming, the uncertainty information from MC dropout makes the method more robust and provides useful information when properly calibrated.
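The voxel-wise uncertainty above comes from repeated stochastic forward passes. As a minimal, hypothetical NumPy sketch (a toy linear "network" standing in for the image-generating CNN; all sizes and names are illustrative, not the authors' implementation), Bernoulli dropout masks are sampled many times, and the per-voxel mean and standard deviation over the stochastic predictions give the denoised map and its uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(weights, x, p_drop=0.2, n_samples=100):
    """Monte Carlo dropout for a toy linear 'network': each sample randomly
    zeroes weights (Bernoulli mask), mimicking Bernoulli approximate
    variational inference."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) >= p_drop   # keep with prob 1 - p_drop
        w = weights * mask / (1.0 - p_drop)          # inverted-dropout scaling
        preds.append(x @ w)
    preds = np.stack(preds)
    # mean = denoised parameter map, std = per-voxel model uncertainty
    return preds.mean(axis=0), preds.std(axis=0)

# toy 'parameter map': 4 voxels, 3 input features each
x = rng.random((4, 3))
w = rng.random((3, 1))
mean_map, unc_map = mc_dropout_predict(w, x)
```

With inverted-dropout scaling, the sample mean stays close to the deterministic prediction while the spread quantifies the dropout-induced model uncertainty.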
Affiliation(s)
- Max Hellström
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Tommy Löfstedt
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Department of Computing Science, Umeå University, Umeå, Sweden
34
Kaviani S, Sanaat A, Mokri M, Cohalan C, Carrier JF. Image reconstruction using UNET-transformer network for fast and low-dose PET scans. Comput Med Imaging Graph 2023; 110:102315. [PMID: 38006648 DOI: 10.1016/j.compmedimag.2023.102315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 09/26/2023] [Accepted: 11/15/2023] [Indexed: 11/27/2023]
Abstract
INTRODUCTION Low-dose and fast PET imaging (low-count PET) play a significant role in enhancing patient safety, healthcare efficiency, and patient comfort during medical imaging procedures. To achieve high-quality images with low-count PET scans, effective reconstruction models are crucial for denoising and enhancing image quality. The main goal of this paper is to develop an effective and accurate deep learning-based method for reconstructing low-count PET images, which is a challenging problem due to the limited amount of available data and the high level of noise in the acquired images. The proposed method aims to improve the quality of reconstructed PET images while preserving important features, such as edges and small details, by combining the strengths of UNET and Transformer networks. MATERIAL AND METHODS The proposed TrUNET-MAPEM model integrates a residual UNET-transformer regularizer into the unrolled maximum a posteriori expectation maximization (MAPEM) algorithm for PET image reconstruction. A loss function based on a combination of structural similarity index (SSIM) and mean squared error (MSE) is utilized to evaluate the accuracy of the reconstructed images. The simulated dataset was generated using the Brainweb phantom, while the real patient dataset was acquired using a Siemens Biograph mMR PET scanner. We also implemented state-of-the-art methods for comparison purposes: OSEM, MAPOSEM, and supervised learning using 3D-UNET network. The reconstructed images are compared to ground truth images using metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and relative root mean square error (rRMSE) to quantitatively evaluate the accuracy of the reconstructed images. RESULTS Our proposed TrUNET-MAPEM approach was evaluated using both simulated and real patient data. For the patient data, our model achieved an average PSNR of 33.72 dB, an average SSIM of 0.955, and an average rRMSE of 0.39. 
These results outperformed other methods, which had average PSNRs of 36.89 dB, 34.12 dB, and 33.52 dB, average SSIMs of 0.944, 0.947, and 0.951, and average rRMSEs of 0.59, 0.49, and 0.42. For the simulated data, our model achieved an average PSNR of 31.23 dB, an average SSIM of 0.95, and an average rRMSE of 0.55. These results also outperformed other state-of-the-art methods, such as OSEM, MAPOSEM, and 3DUNET-MAPEM. The model demonstrates the potential for clinical use by successfully reconstructing smooth images while preserving edges. The comparison with other methods demonstrates the superiority of our approach, as it outperforms all other methods for all three metrics. CONCLUSION The proposed TrUNET-MAPEM model presents a significant advancement in the field of low-count PET image reconstruction. The results demonstrate the potential for clinical use, as the model can produce images with reduced noise levels and better edge preservation compared to other reconstruction and post-processing algorithms. The proposed approach may have important clinical applications in the early detection and diagnosis of various diseases.
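The quantitative comparison rests on standard image-quality metrics. A minimal NumPy sketch of two of them, PSNR and relative RMSE, in their common textbook forms (assumed here, not taken from the paper's code):

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB against a ground-truth image."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def rrmse(ref, img):
    """Root mean square error relative to the RMS of the reference."""
    return np.sqrt(np.mean((ref - img) ** 2)) / np.sqrt(np.mean(ref ** 2))

ref = np.ones((8, 8))
noisy = ref + 0.1   # uniform +0.1 offset on a unit-intensity reference
# mse = 0.01, peak = 1, so PSNR = 10*log10(1/0.01) = 20 dB and rRMSE = 0.1
```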
Affiliation(s)
- Sanaz Kaviani
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada.
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mersede Mokri
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada
- Claire Cohalan
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics and Biomedical Engineering, University of Montreal Hospital Centre, Montreal, Canada
- Jean-Francois Carrier
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics, University of Montreal, Montreal, QC, Canada; Department of Radiation Oncology, University of Montreal Hospital Centre (CHUM), Montreal, Canada
35
Bollack A, Pemberton HG, Collij LE, Markiewicz P, Cash DM, Farrar G, Barkhof F. Longitudinal amyloid and tau PET imaging in Alzheimer's disease: A systematic review of methodologies and factors affecting quantification. Alzheimers Dement 2023; 19:5232-5252. [PMID: 37303269 DOI: 10.1002/alz.13158] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 04/21/2023] [Accepted: 04/25/2023] [Indexed: 06/13/2023]
Abstract
Deposition of amyloid and tau pathology can be quantified in vivo using positron emission tomography (PET). Accurate longitudinal measurements of accumulation from these images are critical for characterizing the start and spread of the disease. However, these measurements are challenging; precision and accuracy can be affected substantially by various sources of errors and variability. This review, supported by a systematic search of the literature, summarizes the current design and methodologies of longitudinal PET studies. Intrinsic, biological causes of variability of the Alzheimer's disease (AD) protein load over time are then detailed. Technical factors contributing to longitudinal PET measurement uncertainty are highlighted, followed by suggestions for mitigating these factors, including possible techniques that leverage shared information between serial scans. Controlling for intrinsic variability and reducing measurement uncertainty in longitudinal PET pipelines will provide more accurate and precise markers of disease evolution, improve clinical trial design, and aid therapy response monitoring.
Affiliation(s)
- Ariane Bollack
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- Hugh G Pemberton
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- GE Healthcare, Amersham, UK
- UCL Queen Square Institute of Neurology, London, UK
- Lyduine E Collij
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, The Netherlands
- Clinical Memory Research Unit, Department of Clinical Sciences, Lund University, Malmö, Sweden
- Pawel Markiewicz
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- David M Cash
- UCL Queen Square Institute of Neurology, London, UK
- UK Dementia Research Institute at University College London, London, UK
- Frederik Barkhof
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- UCL Queen Square Institute of Neurology, London, UK
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, The Netherlands
36
Shinohara H, Hori K, Hashimoto T. Deep learning study on the mechanism of edge artifacts in point spread function reconstruction for numerical brain images. Ann Nucl Med 2023; 37:596-604. [PMID: 37610591 DOI: 10.1007/s12149-023-01862-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Accepted: 08/07/2023] [Indexed: 08/24/2023]
Abstract
OBJECTIVE Non-blind image deblurring with deep learning was performed on blurred numerical brain images without point spread function (PSF) reconstruction to obtain edge artifact (EA)-free images. This study uses numerical simulation to investigate the mechanism of EA in PSF reconstruction based on the spatial frequency characteristics of EA-free images. METHODS In 256 × 256 matrix brain images, the signal values of gray matter (GM), white matter, and cerebrospinal fluid were set to 1, 0.25, and 0.05, respectively. We assumed ideal projection data of a two-dimensional (2D) parallel beam with no degradation factors other than detector response blur, to precisely capture the EA produced by the PSF reconstruction algorithm from blurred projection data. The detector response was assumed to be a shift-invariant and one-dimensional (1D) Gaussian function with 2-5 mm full width at half maximum (FWHM). Images without PSF reconstruction (non-PSF), PSF reconstruction without regularization (PSF), and PSF reconstruction with relative difference regularization (PSF-RD) were generated by ordered subset expectation maximization (OSEM). For non-PSF, image deblurring with a deep image prior (DIP) was applied using a 2D Gaussian function with 2-5 mm FWHM. The 1D object-specific modulation transfer function (1D-OMTF), which is the ratio of the 1D amplitude spectra of the original and reconstructed images, was used as the index of spatial frequency characteristics. RESULTS When the detector response was greater than 3 mm FWHM, EA in PSF was observed at GM borders and in narrow GM. No remarkable EA was observed with DIP, and the FWHM estimated from the recovery coefficient for the deblurred non-PSF image at 5 mm FWHM was reduced to 3 mm or less. PSF at 5 mm FWHM showed higher spatial frequency characteristics than DIP up to around 2.2 cycles/cm but lower beyond 3 cycles/cm. PSF-RD showed almost the same spatial frequency characteristics as DIP above 3 cycles/cm but was inferior below 3 cycles/cm; that is, PSF-RD had a lower spatial resolution than DIP. CONCLUSIONS Unlike DIP, PSF lacks high-frequency components around the Nyquist frequency, generating EA. PSF-RD mitigates EA while simultaneously suppressing the signal, diminishing spatial resolution.
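The 1D object-specific MTF used as the spatial-frequency index above is the ratio of the amplitude spectra of the reconstructed and original images. A hedged NumPy sketch of this index for a 1D profile, with a normalized Gaussian convolution standing in for the detector response blur (profile and kernel sizes are illustrative):

```python
import numpy as np

def omtf_1d(original, reconstructed):
    """Object-specific MTF: ratio of 1D amplitude spectra
    (reconstructed / original), evaluated where the original has signal."""
    f_orig = np.abs(np.fft.rfft(original))
    f_rec = np.abs(np.fft.rfft(reconstructed))
    valid = f_orig > 1e-6
    ratio = np.zeros_like(f_orig)
    ratio[valid] = f_rec[valid] / f_orig[valid]
    return ratio

# rectangular 'gray matter' profile blurred by a normalized Gaussian kernel
x = np.zeros(64)
x[24:40] = 1.0
k = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
k /= k.sum()
blurred = np.convolve(x, k, mode="same")
mtf = omtf_1d(x, blurred)   # = 1 at DC, falls off toward high frequencies
```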
Affiliation(s)
- Hiroyuki Shinohara
- Faculty of Health Sciences, Tokyo Metropolitan University, 7-2-10, Higasi-ogu, Arakawa-ku, Tokyo, 116-8551, Japan.
- Department of Radiology, Showa University Fujigaoka Hospital, 1-30, Fujigaoka, Yokohama-shi, 227-8501, Japan.
- Kensuke Hori
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 1-5-32, Yushima, Bunkyo-ku, Tokyo, 113-0034, Japan
- Takeyuki Hashimoto
- Department of Radiological Technology, Faculty of Health Science, Kyorin University, 5-4-1 Shimorenjaku, Mitaka-shi, Tokyo, 181-8612, Japan
37
Ye S, Shen L, Islam MT, Xing L. Super-resolution biomedical imaging via reference-free statistical implicit neural representation. Phys Med Biol 2023; 68:10.1088/1361-6560/acfdf1. [PMID: 37757838 PMCID: PMC10615136 DOI: 10.1088/1361-6560/acfdf1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Accepted: 09/27/2023] [Indexed: 09/29/2023]
Abstract
Objective. Supervised deep learning for image super-resolution (SR) has limitations in biomedical imaging due to the lack of large amounts of low- and high-resolution image pairs for model training. In this work, we propose a reference-free statistical implicit neural representation (INR) framework, which needs only a single or a few observed low-resolution (LR) image(s), to generate high-quality SR images. Approach. The framework models the statistics of the observed LR images via maximum likelihood estimation and trains the INR network to represent the latent high-resolution (HR) image as a continuous function in the spatial domain. The INR network is constructed as a coordinate-based multi-layer perceptron, whose inputs are image spatial coordinates and outputs are corresponding pixel intensities. The trained INR not only constrains functional smoothness but also allows an arbitrary scale in SR imaging. Main results. We demonstrate the efficacy of the proposed framework on various biomedical images, including computed tomography (CT), magnetic resonance imaging (MRI), fluorescence microscopy, and ultrasound images, across different SR magnification scales of 2×, 4×, and 8×. A limited number of LR images were used for each of the SR imaging tasks to show the potential of the proposed statistical INR framework. Significance. The proposed method provides an urgently needed unsupervised deep learning framework for numerous biomedical SR applications that lack HR reference images.
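The key property of a coordinate-based representation, a continuous coordinate-to-intensity function fitted to the LR pixels and then sampled on an arbitrarily fine grid, can be sketched with random Fourier features and a closed-form linear readout. This is a simple stand-in for the authors' coordinate MLP, under entirely illustrative assumptions (toy image, arbitrary frequency scale):

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(coords, B):
    """Map 2D coordinates to sin/cos features of random linear projections."""
    proj = 2 * np.pi * coords @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

# low-resolution 8x8 'image' sampled from a smooth function
n = 8
ys, xs = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
coords_lr = np.stack([ys.ravel(), xs.ravel()], axis=1)
img_lr = np.sin(2 * np.pi * coords_lr[:, 0]) * np.cos(2 * np.pi * coords_lr[:, 1])

B = rng.normal(0, 3.0, size=(64, 2))                # random frequency matrix
Phi = fourier_features(coords_lr, B)
w, *_ = np.linalg.lstsq(Phi, img_lr, rcond=None)    # linear readout, closed form

# the fitted representation is continuous: sample it at a 2x finer grid
ys2, xs2 = np.meshgrid(np.linspace(0, 1, 2 * n), np.linspace(0, 1, 2 * n),
                       indexing="ij")
coords_hr = np.stack([ys2.ravel(), xs2.ravel()], axis=1)
img_hr = fourier_features(coords_hr, B) @ w
```

Because the representation is a function of continuous coordinates, the magnification scale is chosen only at sampling time, mirroring the arbitrary-scale property noted in the abstract.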
Affiliation(s)
- Siqi Ye
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
- Liyue Shen
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, United States of America
- Md Tauhidul Islam
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
38
Reader AJ, Pan B. AI for PET image reconstruction. Br J Radiol 2023; 96:20230292. [PMID: 37486607 PMCID: PMC10546435 DOI: 10.1259/bjr.20230292] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 06/06/2023] [Accepted: 06/20/2023] [Indexed: 07/25/2023] Open
Abstract
Image reconstruction for positron emission tomography (PET) has been developed over many decades, with advances coming from improved modelling of the data statistics and improved modelling of the imaging physics. However, high noise and limited spatial resolution have remained issues in PET imaging, and state-of-the-art PET reconstruction has started to exploit other medical imaging modalities (such as MRI) to assist in noise reduction and enhancement of PET's spatial resolution. Nonetheless, there is an ongoing drive towards not only improving image quality, but also reducing the injected radiation dose and reducing scanning times. While the arrival of new PET scanners (such as total body PET) is helping, there is always a need to improve reconstructed image quality due to the time and count limited imaging conditions. Artificial intelligence (AI) methods are now at the frontier of research for PET image reconstruction. While AI can learn the imaging physics as well as the noise in the data (when given sufficient examples), one of the most common uses of AI arises from exploiting databases of high-quality reference examples, to provide advanced noise compensation and resolution recovery. There are three main AI reconstruction approaches: (i) direct data-driven AI methods which rely on supervised learning from reference data, (ii) iterative (unrolled) methods which combine our physics and statistical models with AI learning from data, and (iii) methods which exploit AI with our known models, but crucially can offer benefits even in the absence of any example training data whatsoever. This article reviews these methods, considering opportunities and challenges of AI for PET reconstruction.
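The physics-and-statistics model that the unrolled and model-based AI approaches build on is the classical ML-EM update. A self-contained NumPy sketch with a toy system matrix (dimensions and data are purely illustrative, not a realistic scanner model):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM for Poisson data y ~ A @ x.
    A: (n_bins, n_voxels) system matrix; y: measured sinogram counts."""
    x = np.ones(A.shape[1])                  # uniform non-negative start
    sens = A.sum(axis=0)                     # sensitivity (backprojected ones)
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        ratio = y / np.maximum(proj, 1e-12)  # data / model ratio
        x *= (A.T @ ratio) / sens            # multiplicative EM update
    return x

rng = np.random.default_rng(1)
A = rng.random((12, 4))                      # toy system matrix
x_true = np.array([1.0, 2.0, 0.5, 3.0])
y = A @ x_true                               # noise-free data for the sketch
x_hat = mlem(A, y)                           # non-negative estimate fitting y
```

Unrolled AI methods replace or regularize steps of exactly this iteration with learned components, while retaining the forward model `A`.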
Affiliation(s)
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
39
Farag A, Huang J, Kohan A, Mirshahvalad SA, Basso Dias A, Fenchel M, Metser U, Veit-Haibach P. Evaluation of MR anatomically-guided PET reconstruction using a convolutional neural network in PSMA patients. Phys Med Biol 2023; 68:185014. [PMID: 37625418 DOI: 10.1088/1361-6560/acf439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 08/25/2023] [Indexed: 08/27/2023]
Abstract
Background. Recently, approaches have utilized the superior anatomical information provided by magnetic resonance imaging (MRI) to guide the reconstruction of positron emission tomography (PET). One of these approaches is Bowsher's prior, which has lately been accelerated with a convolutional neural network (CNN) to reconstruct MR-guided PET in the imaging domain in routine clinical imaging. Two differently trained Bowsher-CNN methods (B-CNN0 and B-CNN) have been trained and tested on brain PET/MR images with non-PSMA tracers, but have not yet been evaluated in other anatomical regions. Methods. A NEMA phantom with five of its six spheres filled with the same, calibrated concentration of 18F-DCFPyL-PSMA, and thirty-two patients (mean age 64 ± 7 years) with biopsy-confirmed PCa were used in this study. Reconstruction with either of the two available Bowsher-CNN methods was performed on the conventional MR-based attenuation correction (MRAC) and T1-MR images in the imaging domain. Detectable volume of the spheres and tumors, relative contrast recovery (CR), and background variation (BV) were measured for the MRAC and the Bowsher-CNN images, and qualitative assessment was conducted by two experienced readers ranking image sharpness and quality. Results. For the phantom study, the B-CNN produced 12.7% better CR compared to conventional reconstruction. The small sphere volume (<1.8 ml) detectability improved from MRAC to B-CNN by nearly 13%, while measured activity was higher than the ground truth by 8%. The signal-to-noise ratio, CR, and BV were significantly improved (p < 0.05) in B-CNN images of the tumor. The qualitative analysis determined that tumor sharpness was excellent in 76% of the PET images reconstructed with the B-CNN method, compared to conventional reconstruction. Conclusions. Applying the MR-guided B-CNN in clinical prostate PET/MR imaging improves some quantitative as well as qualitative imaging measures.
The measured improvements in the phantom are also clearly translated into clinical application.
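The contrast recovery and background variability figures quoted above follow NEMA-style definitions; a minimal sketch of their common forms (assumed here, not the study's exact analysis code; the numbers are toy values):

```python
import numpy as np

def contrast_recovery(mean_sphere, mean_bkg, true_ratio):
    """NEMA-style percent contrast recovery for a hot sphere:
    measured contrast divided by the true activity contrast."""
    return 100.0 * (mean_sphere / mean_bkg - 1.0) / (true_ratio - 1.0)

def background_variability(bkg_roi_means):
    """Percent background variability: SD over mean of background ROI means."""
    b = np.asarray(bkg_roi_means, dtype=float)
    return 100.0 * b.std(ddof=1) / b.mean()

# toy numbers: sphere measured at 3.4x background, true activity ratio 4:1
cr = contrast_recovery(3.4, 1.0, 4.0)            # 100*(2.4/3) = 80.0
bv = background_variability([0.95, 1.0, 1.05])   # 100*(0.05/1.0) = 5.0
```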
Affiliation(s)
- Adam Farag
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Jin Huang
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Andres Kohan
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Seyed Ali Mirshahvalad
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Adriano Basso Dias
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Ur Metser
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Patrick Veit-Haibach
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
40
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384 PMCID: PMC10495310 DOI: 10.1186/s40658-023-00569-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 08/07/2023] [Indexed: 09/12/2023] Open
Abstract
It has been thirteen years since the installation of the first PET-MR system, yet these scanners constitute a very small proportion of the total number of hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community in a continuous effort to develop a robust and accurate alternative. These can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, the last of which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image for a new patient, given their MR image, by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that can predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to the more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features.
This up-to-date review goes through the literature of attenuation correction approaches in PET-MR after categorising them. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches along with a comparison of the four outlined categories.
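The first (segmentation-based) category can be sketched in a few lines: label each voxel by tissue class and look up a predefined linear attenuation coefficient at 511 keV. The class labels and μ-values below are approximate, illustrative numbers, not a validated clinical table:

```python
import numpy as np

# illustrative linear attenuation coefficients at 511 keV (cm^-1);
# approximate values for demonstration only
MU_511 = {
    0: 0.000,   # air
    1: 0.096,   # soft tissue / water
    2: 0.022,   # lung (approximate)
    3: 0.130,   # bone (approximate)
}

def segmentation_mu_map(labels):
    """Turn an integer tissue-label volume into a mu-map by table lookup."""
    lut = np.zeros(max(MU_511) + 1)
    for cls, mu in MU_511.items():
        lut[cls] = mu
    return lut[labels]

labels = np.array([[0, 1, 1],
                   [3, 1, 2]])      # toy 2x3 'segmented MR'
mu_map = segmentation_mu_map(labels)
```

The simplicity of the lookup is also the method's weakness: every voxel in a class gets the same coefficient, which is exactly the source of the quantitative discrepancies the review discusses.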
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK.
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
41
Li J, Xi C, Dai H, Wang J, Lv Y, Zhang P, Zhao J. Enhanced PET imaging using progressive conditional deep image prior. Phys Med Biol 2023; 68:175047. [PMID: 37582392 DOI: 10.1088/1361-6560/acf091] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2022] [Accepted: 08/15/2023] [Indexed: 08/17/2023]
Abstract
Objective. Unsupervised learning-based methods have been proven to be an effective way to improve the image quality of positron emission tomography (PET) images when a large dataset is not available. However, when the gap between the input image and the target PET image is large, direct unsupervised learning can be challenging and easily lead to reduced lesion detectability. We aim to develop a new unsupervised learning method to improve lesion detectability in patient studies. Approach. We applied the deep progressive learning strategy to bridge the gap between the input image and the target image. The one-step unsupervised learning is decomposed into two unsupervised learning steps. The input image of the first network is an anatomical image and the input image of the second network is a PET image with a low noise level. The output of the first network is also used as the prior image to generate the target image of the second network by an iterative reconstruction method. Results. The performance of the proposed method was evaluated through phantom and patient studies and compared with non-deep learning, supervised learning and unsupervised learning methods. The results showed that the proposed method was superior to non-deep learning and unsupervised methods, and was comparable to the supervised method. Significance. A progressive unsupervised learning method was proposed, which can improve image noise performance and lesion detectability.
Affiliation(s)
- Jinming Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi
- United Imaging Healthcare, Shanghai, People's Republic of China
- Houjiao Dai
- United Imaging Healthcare, Shanghai, People's Republic of China
- Jing Wang
- Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Shaanxi, Xi'an, People's Republic of China
- Yang Lv
- United Imaging Healthcare, Shanghai, People's Republic of China
- Puming Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
42
Hashimoto F, Onishi Y, Ote K, Tashima H, Yamaya T. Fully 3D implementation of the end-to-end deep image prior-based PET image reconstruction using block iterative algorithm. Phys Med Biol 2023; 68:155009. [PMID: 37406637 DOI: 10.1088/1361-6560/ace49c] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 07/05/2023] [Indexed: 07/07/2023]
Abstract
Objective. Deep image prior (DIP) has recently attracted attention owing to its unsupervised positron emission tomography (PET) image reconstruction method, which does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into a loss function. Approach. A practical implementation of fully 3D PET image reconstruction has not been feasible to date because of graphics processing unit memory limitations. Consequently, we modify the DIP optimization to a block iteration and sequential learning of an ordered sequence of block sinograms. Furthermore, the relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET image. Main results. We evaluated our proposed method using Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data. The proposed method was compared with the maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that, compared with other algorithms, the proposed method improved the PET image quality by reducing statistical noise and better preserved the contrast of brain structures and inserted tumors. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. Significance. The results indicated that the proposed method could produce high-quality images without a prior training dataset. Thus, the proposed method could be a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
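The relative difference penalty (RDP) added to the loss penalizes differences between neighbouring voxels relative to their magnitude, in its usual form (x_j - x_k)^2 / (x_j + x_k + γ|x_j - x_k|) summed over neighbour pairs. A minimal 1D NumPy sketch (left/right neighbours only; γ and the data are illustrative):

```python
import numpy as np

def rdp_1d(x, gamma=2.0, eps=1e-12):
    """Relative difference penalty over adjacent-voxel pairs:
    sum of (xj - xk)^2 / (xj + xk + gamma*|xj - xk| + eps)."""
    d = x[1:] - x[:-1]   # neighbour differences
    s = x[1:] + x[:-1]   # neighbour sums
    return np.sum(d ** 2 / (s + gamma * np.abs(d) + eps))

flat = np.array([1.0, 1.0, 1.0, 1.0])
edge = np.array([1.0, 1.0, 4.0, 4.0])
# a flat image incurs zero penalty; the single edge contributes
# (4-1)^2 / (4+1 + 2*|4-1|) = 9/11
```

The γ term in the denominator is what makes the prior edge-preserving: large differences are penalized roughly linearly rather than quadratically.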
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
43
Liao S, Mo Z, Zeng M, Wu J, Gu Y, Li G, Quan G, Lv Y, Liu L, Yang C, Wang X, Huang X, Zhang Y, Cao W, Dong Y, Wei Y, Zhou Q, Xiao Y, Zhan Y, Zhou XS, Shi F, Shen D. Fast and low-dose medical imaging generation empowered by hybrid deep-learning and iterative reconstruction. Cell Rep Med 2023; 4:101119. [PMID: 37467726 PMCID: PMC10394257 DOI: 10.1016/j.xcrm.2023.101119] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 05/16/2023] [Accepted: 06/19/2023] [Indexed: 07/21/2023]
Abstract
Fast and low-dose reconstructions of medical images are highly desired in clinical routine. We propose a hybrid deep-learning and iterative reconstruction (hybrid DL-IR) framework and apply it to fast magnetic resonance imaging (MRI), fast positron emission tomography (PET), and low-dose computed tomography (CT) image generation tasks. First, in a retrospective MRI study (6,066 cases), we demonstrate its capability of handling 3- to 10-fold under-sampled MR data, enabling organ-level coverage with only 10- to 100-s scan times; second, a low-dose CT study (142 cases) shows that our framework can successfully alleviate the noise and streak artifacts in scans performed with only 10% of the standard radiation dose (0.61 mGy); and finally, a fast whole-body PET study (131 cases) allows us to faithfully reconstruct tumor-induced lesions, including small ones (<4 mm), from 2- to 4-fold-accelerated PET acquisitions (30-60 s per bed position). This study offers a promising avenue for accurate and high-quality image reconstruction with broad clinical value.
Affiliation(s)
- Shu Liao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Zhanhao Mo
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Mengsu Zeng
- Department of Radiology, Shanghai Institute of Medical Imaging, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yuning Gu
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Guobin Li
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Guotao Quan
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Yang Lv
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Lin Liu
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Chun Yang
- Department of Radiology, Shanghai Institute of Medical Imaging, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Xinglie Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Xiaoqian Huang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yang Zhang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Wenjing Cao
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Yun Dong
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Qing Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yongqin Xiao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China.
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai Clinical Research and Trial Center, Shanghai 200122, China.
44
Cui ZX, Jia S, Cao C, Zhu Q, Liu C, Qiu Z, Liu Y, Cheng J, Wang H, Zhu Y, Liang D. K-UNN: k-space interpolation with untrained neural network. Med Image Anal 2023; 88:102877. [PMID: 37399681 DOI: 10.1016/j.media.2023.102877] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 05/24/2023] [Accepted: 06/22/2023] [Indexed: 07/05/2023]
Abstract
Recently, untrained neural networks (UNNs) have shown satisfactory performance for MR image reconstruction on random sampling trajectories without using additional fully sampled training data. However, existing UNN-based approaches lack the modeling of physical priors, resulting in poor performance in some common scenarios (e.g., partial Fourier (PF) and regular sampling) and a lack of theoretical guarantees for reconstruction accuracy. To bridge this gap, we propose a safeguarded k-space interpolation method for MRI using a specially designed UNN with a tripled architecture driven by three physical priors of the MR images (or k-space data): transform sparsity, coil sensitivity smoothness, and phase smoothness. We also prove that the proposed method guarantees tight bounds on the accuracy of the interpolated k-space data. Ablation experiments show that the proposed method characterizes the physical priors of MR images well. Additionally, experiments show that the proposed method consistently outperforms traditional parallel imaging methods and existing UNNs, and is even competitive against supervised-trained deep learning methods in PF and regular undersampling reconstruction.
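The "transform sparsity" prior named above has a simple classical analogue that is worth keeping in mind: the L1 norm of finite differences (total variation). The sketch below is that classical analogue for intuition only; K-UNN builds the prior into the network architecture itself rather than computing an explicit penalty like this.

```python
import numpy as np

def tv_sparsity(img):
    """L1 norm of horizontal and vertical finite differences of an image.

    Natural images have small values under this measure (piecewise-smooth
    regions contribute nothing), which is what sparsity priors exploit.
    """
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal jumps
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical jumps
    return dx + dy
```

A constant image scores zero, while edges and noise both raise the score, so minimizing it favors piecewise-smooth reconstructions.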
Affiliation(s)
- Zhuo-Xu Cui
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chentao Cao
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Congcong Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhilang Qiu
- Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Yuanyuan Liu
- National Innovation Center for Advanced Medical Devices, Shenzhen, Guangdong, China
- Jing Cheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Haifeng Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Pazhou Lab, Guangzhou, Guangdong, China.
45
Qayyum A, Ilahi I, Shamshad F, Boussaid F, Bennamoun M, Qadir J. Untrained Neural Network Priors for Inverse Imaging Problems: A Survey. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:6511-6536. [PMID: 36063506 DOI: 10.1109/tpami.2022.3204527] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
In recent years, advancements in machine learning (ML) techniques, in particular, deep learning (DL) methods have gained a lot of momentum in solving inverse imaging problems, often surpassing the performance provided by hand-crafted approaches. Traditionally, analytical methods have been used to solve inverse imaging problems such as image restoration, inpainting, and superresolution. Unlike analytical methods for which the problem is explicitly defined and the domain knowledge is carefully engineered into the solution, DL models do not benefit from such prior knowledge and instead make use of large datasets to predict an unknown solution to the inverse problem. Recently, a new paradigm of training deep models using a single image, named untrained neural network prior (UNNP) has been proposed to solve a variety of inverse tasks, e.g., restoration and inpainting. Since then, many researchers have proposed various applications and variants of UNNP. In this paper, we present a comprehensive review of such studies and various UNNP applications for different tasks and highlight various open research problems which require further research.
46
Wang YRJ, Wang P, Adams LC, Sheybani ND, Qu L, Sarrami AH, Theruvath AJ, Gatidis S, Ho T, Zhou Q, Pribnow A, Thakor AS, Rubin D, Daldrup-Link HE. Low-count whole-body PET/MRI restoration: an evaluation of dose reduction spectrum and five state-of-the-art artificial intelligence models. Eur J Nucl Med Mol Imaging 2023; 50:1337-1350. [PMID: 36633614 PMCID: PMC10387227 DOI: 10.1007/s00259-022-06097-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 12/24/2022] [Indexed: 01/13/2023]
Abstract
PURPOSE To provide a holistic and complete comparison of the five most advanced AI models in the augmentation of low-dose 18F-FDG PET data over the entire dose reduction spectrum. METHODS In this multicenter study, five AI models were investigated for restoring low-count whole-body PET/MRI, covering convolutional benchmarks - U-Net, enhanced deep super-resolution network (EDSR), generative adversarial network (GAN) - and the most cutting-edge image reconstruction transformer models in computer vision to date - Swin transformer image restoration network (SwinIR) and EDSR-ViT (vision transformer). The models were evaluated against six count levels representing the simulated 75%, 50%, 25%, 12.5%, 6.25%, and 1% (extremely ultra-low-count) of the clinical standard 3 MBq/kg 18F-FDG dose. The comparisons were performed on two independent cohorts - (1) a primary cohort from Stanford University and (2) a cross-continental external validation cohort from Tübingen University - to ensure that the findings are generalizable. A total of 476 original-count and simulated low-count whole-body PET/MRI scans were incorporated into this analysis. RESULTS For low-count PET restoration on the primary cohort, the mean structural similarity index (SSIM) scores for dose 6.25% were 0.898 (95% CI, 0.887-0.910) for EDSR, 0.893 (0.881-0.905) for EDSR-ViT, 0.873 (0.859-0.887) for GAN, 0.885 (0.873-0.898) for U-Net, and 0.910 (0.900-0.920) for SwinIR. SwinIR and U-Net were then evaluated separately at each simulated radiotracer dose level. Using the primary Stanford cohort, the mean diagnostic image quality (DIQ; 5-point Likert scale) scores of SwinIR restoration were 5 (SD, 0) for dose 75%, 4.50 (0.535) for dose 50%, 3.75 (0.463) for dose 25%, 3.25 (0.463) for dose 12.5%, 4 (0.926) for dose 6.25%, and 2.5 (0.534) for dose 1%.
CONCLUSION Compared to low-count PET images, with near-to or nondiagnostic images at higher dose reduction levels (up to 6.25%), both SwinIR and U-Net significantly improve the diagnostic quality of PET images. A radiotracer dose reduction to 1% of the current clinical standard radiotracer dose is out of scope for current AI techniques.
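The SSIM metric reported above can be illustrated with a simplified single-window version. This is a generic sketch of the SSIM formula, not the study's evaluation code; published evaluations typically use a local sliding-window implementation such as scikit-image's `structural_similarity`.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window structural similarity index between two images.

    Computes SSIM = ((2*mx*my + c1)(2*cov + c2)) /
                    ((mx^2 + my^2 + c1)(vx + vy + c2))
    over the whole image instead of local windows.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical images score 1.0, and the score drops as structure diverges, which is why SSIM near 0.9 at 6.25% dose indicates well-preserved image structure.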
Affiliation(s)
- Yan-Ran Joyce Wang
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA.
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA.
- Pengcheng Wang
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
- Lisa Christine Adams
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Natasha Diba Sheybani
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
- Liangqiong Qu
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
- Amir Hossein Sarrami
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Ashok Joseph Theruvath
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Sergios Gatidis
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
- Tina Ho
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Quan Zhou
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Allison Pribnow
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
- Avnesh S Thakor
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
- Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
- Heike E Daldrup-Link
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA.
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA.
47
Rajagopal A, Natsuaki Y, Wangerin K, Hamdi M, An H, Sunderland JJ, Laforest R, Kinahan PE, Larson PEZ, Hope TA. Synthetic PET via Domain Translation of 3-D MRI. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2023; 7:333-343. [PMID: 37396797 PMCID: PMC10311993 DOI: 10.1109/trpms.2022.3223275] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/12/2023]
Abstract
Historically, patient datasets have been used to develop and validate various reconstruction algorithms for PET/MRI and PET/CT. To enable such algorithm development, without the need for acquiring hundreds of patient exams, in this article we demonstrate a deep learning technique to generate synthetic but realistic whole-body PET sinograms from abundantly available whole-body MRI. Specifically, we use a dataset of 56 18F-FDG-PET/MRI exams to train a 3-D residual UNet to predict physiologic PET uptake from whole-body T1-weighted MRI. In training, we implemented a balanced loss function to generate realistic uptake across a large dynamic range and computed losses along tomographic lines of response to mimic the PET acquisition. The predicted PET images are forward projected to produce synthetic PET (sPET) time-of-flight (ToF) sinograms that can be used with vendor-provided PET reconstruction algorithms, including using CT-based attenuation correction (CTAC) and MR-based attenuation correction (MRAC). The resulting synthetic data recapitulates physiologic 18F-FDG uptake, e.g., high uptake localized to the brain and bladder, as well as uptake in liver, kidneys, heart, and muscle. To simulate abnormalities with high uptake, we also insert synthetic lesions. We demonstrate that this sPET data can be used interchangeably with real PET data for the PET quantification task of comparing CTAC and MRAC methods, achieving ≤ 7.6% error in mean-SUV compared to using real data. These results together show that the proposed sPET data pipeline can be reasonably used for development, evaluation, and validation of PET/MRI reconstruction methods.
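The forward-projection step described above (turning a predicted PET image into sinograms) can be illustrated with a toy parallel-beam projector: rotate the image to each view angle and sum along one axis. This pure-NumPy, nearest-neighbor sketch is for intuition only; the paper projects to time-of-flight sinograms with vendor-grade projectors.

```python
import numpy as np

def forward_project(img, angles_deg):
    """Toy parallel-beam forward projector for a square 2D image.

    For each view angle, rotates the sampling grid about the image center
    (nearest-neighbor interpolation) and sums along rows, producing one
    sinogram row per angle.
    """
    n = img.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.indices(img.shape) - c  # grid coordinates about the center
    sino = []
    for a in np.deg2rad(angles_deg):
        xs = np.rint(c + xx * np.cos(a) - yy * np.sin(a)).astype(int)
        ys = np.rint(c + xx * np.sin(a) + yy * np.cos(a)).astype(int)
        valid = (xs >= 0) & (xs < n) & (ys >= 0) & (ys < n)
        rot = np.where(valid, img[ys.clip(0, n - 1), xs.clip(0, n - 1)], 0.0)
        sino.append(rot.sum(axis=0))  # line integrals along columns
    return np.stack(sino)
```

At 0 degrees the projection is simply the column sums of the image, and total counts are preserved across views, mirroring how PET measures line integrals of tracer activity.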
Affiliation(s)
- Abhejit Rajagopal
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- Yutaka Natsuaki
- Department of Radiation Oncology, University of New Mexico, Albuquerque, NM 87131 USA
- Mahdjoub Hamdi
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- Hongyu An
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- John J Sunderland
- Department of Radiology, The University of Iowa, Iowa City, IA 52242 USA
- Richard Laforest
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- Paul E Kinahan
- Department of Radiology, University of Washington, Seattle, WA 98195 USA
- Peder E Z Larson
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
48
Anatomically guided reconstruction improves lesion quantitation and detectability in bone SPECT/CT. Nucl Med Commun 2023; 44:330-337. [PMID: 36804873 DOI: 10.1097/mnm.0000000000001675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/23/2023]
Abstract
Bone single-photon emission computed tomography (SPECT)/computed tomography (CT) imaging suffers from poor spatial resolution, but the image quality can be improved during SPECT reconstruction by using anatomical information derived from CT imaging. The purpose of this work was to compare two different anatomically guided SPECT reconstruction methods to ordered subsets expectation maximization (OSEM) which is the most commonly used reconstruction method in nuclear medicine. The comparison was done in terms of lesion quantitation and lesion detectability. Anatomically guided Bayesian reconstruction (AMAP) and kernelized ordered subset expectation maximization (KEM) algorithms were implemented and compared against OSEM. Artificial lesions with a wide range of lesion-to-background contrasts were added to normal bone SPECT/CT studies. The quantitative accuracy was assessed by the error in lesion standardized uptake values and lesion detectability by the area under the receiver operating characteristic curve generated by a non-prewhitening matched filter. AMAP and KEM provided significantly better quantitative accuracy than OSEM at all contrast levels. Accuracy was the highest when SPECT lesions were matched to a lesion on CT. Correspondingly, AMAP and KEM also had significantly better lesion detectability than OSEM at all contrast levels and reconstructions with matching CT lesions performed the best. Quantitative differences between AMAP and KEM algorithms were minor. Visually AMAP and KEM images looked similar. Anatomically guided reconstruction improves lesion quantitation and detectability markedly compared to OSEM. Differences between AMAP and KEM algorithms were small and thus probably clinically insignificant.
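Anatomically guided kernel methods like KEM share information between voxels that look similar on CT. A minimal sketch of that idea, assuming Gaussian similarity weights over CT feature vectors (the function name, feature layout, and `sigma` are illustrative choices, not the paper's settings):

```python
import numpy as np

def anatomical_kernel_row(ct_features, voxel_idx, neighbor_idx, sigma=1.0):
    """Normalized Gaussian similarity weights from CT features for one voxel.

    `ct_features` is an (n_voxels, n_features) array; the returned weights
    say how strongly each neighbor's activity contributes to this voxel
    during kernelized reconstruction.
    """
    d = ct_features[neighbor_idx] - ct_features[voxel_idx]
    w = np.exp(-np.sum(d * d, axis=1) / (2.0 * sigma**2))
    return w / w.sum()  # normalize so the kernel row sums to one
```

Voxels with identical CT appearance receive equal weight, while anatomically dissimilar neighbors are down-weighted, which is how CT edges sharpen the SPECT estimate without being copied into it directly.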
49
Pal S, Dutta S, Maitra R. Personalized synthetic MR imaging with deep learning enhancements. Magn Reson Med 2023; 89:1634-1643. [PMID: 36420834 PMCID: PMC10100029 DOI: 10.1002/mrm.29527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 10/25/2022] [Accepted: 10/27/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE Personalized synthetic MRI (syn-MRI) uses MR images of an individual subject acquired at a few design parameters (echo time, repetition time, flip angle) to obtain the underlying parametric (ρ, T1, T2) maps, from which MR images of that individual at other design parameter settings are synthesized. However, classical methods that use least-squares (LS) or maximum likelihood estimators (MLE) are unsatisfactory at higher noise levels because the underlying inverse problem is ill-posed. This article provides a pipeline to enhance the synthesis of such images in three dimensions (3D) using a deep learning (DL) neural network architecture for spatial regularization in a personalized setting where having more than a few training images is impractical. METHODS Our DL enhancements employ a deep image prior (DIP) with a U-net-type denoising architecture suited to situations with minimal training data, such as personalized syn-MRI. We provide a general workflow for syn-MRI from three or more training images. Our workflow, called DIPsyn-MRI, uses DIP to enhance training images, then obtains parametric images using LS or MLE before synthesizing images at the desired design parameter settings. DIPsyn-MRI is implemented in our publicly available Python package DeepSynMRI, available at https://github.com/StatPal/DeepSynMRI. RESULTS We demonstrate the feasibility and improved performance of DIPsyn-MRI on 3D datasets acquired using the Brainweb interface for spin-echo and FLASH imaging sequences, at different noise levels. Our DL enhancements improve syn-MRI in the presence of different intensity nonuniformity levels of the magnetic field, for all but very low noise levels. CONCLUSION This article provides recipes and software to realistically facilitate DL-enhanced personalized syn-MRI.
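The synthesis step rests on the standard spin-echo signal model: once the (ρ, T1, T2) maps are estimated, an image at any new (TE, TR) follows voxel-wise from S = ρ(1 − e^(−TR/T1)) e^(−TE/T2). A minimal sketch (the numeric values in the test are illustrative, not from the paper):

```python
import numpy as np

def spin_echo_signal(rho, t1, t2, te, tr):
    """Spin-echo signal model used in synthetic MRI.

    S = rho * (1 - exp(-TR/T1)) * exp(-TE/T2); works element-wise on
    whole parameter maps as well as on scalars.
    """
    return rho * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)
```

With TE → 0 and TR ≫ T1 the signal approaches the proton density ρ, and lengthening TE attenuates the signal by T2 decay, which is exactly the contrast behavior syn-MRI manipulates.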
Affiliation(s)
- Subrata Pal
- Department of Statistics, Iowa State University, Ames, Iowa, USA
- Somak Dutta
- Department of Statistics, Iowa State University, Ames, Iowa, USA
- Ranjan Maitra
- Department of Statistics, Iowa State University, Ames, Iowa, USA
50
Li S, Gong K, Badawi RD, Kim EJ, Qi J, Wang G. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:785-796. [PMID: 36288234 PMCID: PMC10081957 DOI: 10.1109/tmi.2022.3217543] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information in the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach to further improve the kernel method is to add an explicit regularization, which however leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network. To solve the maximum-likelihood neural-network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations and real patient data have demonstrated that the neural KEM can outperform existing KEM and deep image prior methods.
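The KEM step of each iteration is the ordinary MLEM multiplicative update applied to the kernel coefficients α through the combined system matrix PK (image x = Kα). A toy dense-matrix sketch of one such update (real systems use sparse projectors, and the paper's neural KEM additionally re-fits α with a CNN between updates):

```python
import numpy as np

def kem_update(alpha, K, P, y, eps=1e-12):
    """One kernelized-EM iteration for PET reconstruction.

    alpha : current kernel coefficient image (flattened)
    K     : kernel matrix, so the image is x = K @ alpha
    P     : projection (system) matrix mapping image to sinogram
    y     : measured sinogram counts
    """
    PK = P @ K
    sens = PK.T @ np.ones(P.shape[0])  # sensitivity: backprojection of ones
    ybar = PK @ alpha + eps            # expected sinogram under current alpha
    return alpha * (PK.T @ (y / ybar)) / (sens + eps)
```

The update is multiplicative, so nonnegative coefficients stay nonnegative, and with P = K = I a single step already matches the data exactly, which makes the fixed-point behavior easy to check.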