Li Y, Hu J, Sari H, Xue S, Ma R, Kandarpa S, Visvikis D, Rominger A, Liu H, Shi K. A deep neural network for parametric image reconstruction on a large axial field-of-view PET. Eur J Nucl Med Mol Imaging 2023;50:701-714. PMID: 36326869. DOI: 10.1007/s00259-022-06003-4. [Received: 06/22/2022; Accepted: 10/09/2022]
Abstract
PURPOSE: PET scanners with a long axial field of view (AFOV) offer roughly 20 times the sensitivity of conventional scanners, creating new opportunities for parametric imaging, but the volume and complexity of the resulting dynamic data increase dramatically. This study reconstructed high-quality direct Patlak Ki images from five-frame sinograms, without an input function, using a deep learning framework based on DeepPET, to explore the potential of artificial intelligence to reduce both the acquisition time and the dependence on an input function in parametric imaging.

METHODS: The study was implemented on a large-AFOV PET/CT scanner (Biograph Vision Quadra); twenty patients underwent 18F-fluorodeoxyglucose (18F-FDG) dynamic scans. During training and testing of the proposed framework, the last five frames of sinograms (25 min, 40-65 min post-injection) served as input, and Patlak Ki images reconstructed by the vendor's nested EM algorithm served as ground truth. To evaluate the quality of the predicted Ki images, the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) were calculated. In addition, linear regression was performed between predicted and reference Ki means over avid malignant lesions and tumor volumes of interest (VOIs).

RESULTS: In the testing phase, the proposed method achieved an MSE below 0.03%, an SSIM of ~0.98, and a PSNR of ~38 dB. Moreover, predicted Ki means correlated strongly with conventionally reconstructed Patlak Ki means over eleven lesions (DeepPET: R² = 0.73; self-attention DeepPET: R² = 0.82).

CONCLUSIONS: The deep learning-based method produced high-quality parametric images from a small number of projection-data frames without an input function.
It has strong potential to address the long scan times and the dependence on an input function that still hamper the clinical translation of dynamic PET.
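For context, the Patlak graphical analysis behind the Ki images can be summarized as follows. This is the standard formulation of the model, not text taken from the paper; the symbols are the conventional ones.

```latex
% Patlak graphical analysis (standard form; symbols are conventional, not from the paper).
% C_T(t): tissue activity concentration, C_p(t): plasma input function.
% For times t beyond an equilibration time t*, the plot of C_T(t)/C_p(t)
% against the "normalized time" \int_0^t C_p(\tau)\,d\tau / C_p(t)
% becomes linear with slope K_i and intercept V_b:
\[
  \frac{C_T(t)}{C_p(t)}
  \;=\;
  K_i \,\frac{\int_0^t C_p(\tau)\,d\tau}{C_p(t)} \;+\; V_b ,
  \qquad t > t^* .
\]
% K_i is therefore estimated from the late, linear part of the curve --
% consistent with the network taking only the last five frames
% (40-65 min post-injection) as input.
```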
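The image-quality metrics named in the abstract (MSE, PSNR, SSIM) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' evaluation code: the abstract does not specify window sizes or dynamic ranges, so a global (single-window) SSIM variant and a caller-supplied `data_range` are assumed here.

```python
# Hedged sketch of the evaluation metrics from the abstract (MSE, PSNR, SSIM).
# Not the authors' code: a global (single-window) SSIM is assumed; the standard
# SSIM formulation averages over local windows instead.
import numpy as np

def mse(ref, pred):
    """Mean squared error between a reference and a predicted image."""
    return float(np.mean((ref - pred) ** 2))

def psnr(ref, pred, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    err = mse(ref, pred)
    return float(10.0 * np.log10(data_range ** 2 / err))

def ssim_global(ref, pred, data_range):
    """Global (single-window) SSIM with the usual c1, c2 stabilizers."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = ((ref - mu_x) * (pred - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

As a sanity check, identical images give MSE 0 and SSIM 1, and a uniform error of 0.1 on a unit-range image gives a PSNR of 20 dB.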
Affiliation(s)
- Y Li: College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China; College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
- J Hu: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- H Sari: Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- S Xue: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- R Ma: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of Engineering Physics, Tsinghua University, Beijing, China
- S Kandarpa: LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- D Visvikis: LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- A Rominger: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- H Liu: College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
- K Shi: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Computer Aided Medical Procedures and Augmented Reality, Institute of Informatics I16, Technical University of Munich, Munich, Germany