1. Hussain D, Al-Masni MA, Aslam M, Sadeghi-Niaraki A, Hussain J, Gu YH, Naqvi RA. Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: methods, applications and limitations. J Xray Sci Technol 2024:XST230429. [PMID: 38701131] [DOI: 10.3233/XST-230429]
Abstract
BACKGROUND The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking.
OBJECTIVE This review comprehensively examines DL methods for transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress.
METHODS Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness.
RESULTS Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese networks, fusion-based models, attention-based models, and generative adversarial networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT.
FUTURE DIRECTIONS The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain.
CONCLUSION The literature analysis underscores the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Muhammad Aslam
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Abolghasem Sadeghi-Niaraki
- Department of Computer Science & Engineering and Convergence Engineering for Intelligent Drone, XR Research Center, Sejong University, Seoul, Republic of Korea
- Jamil Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Republic of Korea
2. Pan S, Abouei E, Peng J, Qian J, Wynne JF, Wang T, Chang CW, Roper J, Nye JA, Mao H, Yang X. Full-dose whole-body PET synthesis from low-dose PET using high-efficiency denoising diffusion probabilistic model: PET consistency model. Med Phys 2024. [PMID: 38588512] [DOI: 10.1002/mp.17068]
Abstract
PURPOSE Positron emission tomography (PET) is a commonly used imaging modality with broad clinical applications. One of the most important tradeoffs in PET imaging is between image quality and radiation dose: high image quality comes with high radiation exposure. Improving image quality is desirable for all clinical applications, while minimizing radiation exposure is needed to reduce risk to patients.
METHODS We introduce the PET Consistency Model (PET-CM), an efficient diffusion-based method for generating high-quality full-dose PET images from low-dose PET images. It employs a two-step process: adding Gaussian noise to full-dose PET images in the forward diffusion, and then denoising them with a PET Shifted-window Vision Transformer (PET-VIT) network in the reverse diffusion. The PET-VIT network learns a consistency function that enables direct denoising of Gaussian noise into clean full-dose PET images. PET-CM achieves state-of-the-art image quality while requiring significantly less computation time than other methods. Evaluation with normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), multi-scale structural similarity index (SSIM), normalized cross-correlation (NCC), and clinical evaluation, including human ranking score (HRS) and standardized uptake value (SUV) error analysis, shows its superiority in synthesizing full-dose PET images from low-dose inputs.
RESULTS In experiments comparing eighth-dose to full-dose images, PET-CM achieved an NMAE of 1.278 ± 0.122%, PSNR of 33.783 ± 0.824 dB, SSIM of 0.964 ± 0.009, NCC of 0.968 ± 0.011, HRS of 4.543, and SUV error of 0.255 ± 0.318%, with an average generation time of 62 s per patient, reaching this result 12× faster than the state-of-the-art diffusion-based model. Similarly, in the quarter-dose to full-dose experiments, PET-CM delivered competitive outcomes, achieving an NMAE of 0.973 ± 0.066%, PSNR of 36.172 ± 0.801 dB, SSIM of 0.984 ± 0.004, NCC of 0.990 ± 0.005, HRS of 4.428, and SUV error of 0.151 ± 0.192% using the same generation process, underscoring its high quantitative and clinical precision in both denoising scenarios.
CONCLUSIONS We propose PET-CM, the first efficient diffusion-model-based method for estimating full-dose PET images from low-dose images. PET-CM provides quality comparable to the state-of-the-art diffusion model with higher efficiency. This approach makes it possible to maintain high-quality PET images suitable for clinical use while mitigating the risks associated with radiation. The code is available at https://github.com/shaoyanpan/Full-dose-Whole-body-PET-Synthesis-from-Low-dose-PET-Using-Consistency-Model.
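The image-similarity metrics cited in this abstract (NMAE, PSNR, NCC) admit compact definitions. Below is a minimal numpy sketch under common conventions (range-normalized NMAE, reference-peak PSNR); the paper's exact normalization choices may differ.

```python
import numpy as np

def nmae(pred, ref):
    """Normalized mean absolute error (%), normalized by the reference dynamic range."""
    return 100.0 * np.mean(np.abs(pred - ref)) / (ref.max() - ref.min())

def psnr(pred, ref):
    """Peak signal-to-noise ratio (dB), using the reference maximum as the peak."""
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def ncc(pred, ref):
    """Normalized cross-correlation of two images (1.0 for identical images)."""
    p = (pred - pred.mean()) / pred.std()
    r = (ref - ref.mean()) / ref.std()
    return float(np.mean(p * r))
```

A uniform +1 bias on a reference spanning [0, 10] gives NMAE = 10% and PSNR = 20 dB under these conventions, which is a quick sanity check for any reimplementation.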
Affiliation(s)
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Junbo Peng
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Joshua Qian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob F Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jonathon A Nye
- Radiology and Radiological Science, Medical University of South Carolina, Charleston, South Carolina, USA
- Hui Mao
- Department of Radiology and Imaging Science, and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
3. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
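As background for the iterative methods this review surveys, the classical MLEM update can be written in a few lines. The following is a toy numpy sketch of textbook MLEM with a hypothetical 3×2 system matrix, not code from the review.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Toy MLEM reconstruction: x <- x / (A^T 1) * A^T (y / (A x)).
    A: (n_bins, n_voxels) forward projection matrix; y: measured sinogram counts."""
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    x = np.ones(A.shape[1])               # uniform non-negative initialization
    for _ in range(n_iter):
        proj = A @ x                      # forward projection of current estimate
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The third category of methods described above keeps iterations of this kind but couples them with a neural network, e.g. by applying a trained denoiser between updates or by unrolling the iteration into a network.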
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan.
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan.
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan.
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
4. Zhou X, Fu Y, Dong S, Li L, Xue S, Chen R, Huang G, Liu J, Shi K. Intelligent ultrafast total-body PET for sedation-free pediatric [18F]FDG imaging. Eur J Nucl Med Mol Imaging 2024. [PMID: 38383744] [DOI: 10.1007/s00259-024-06649-2]
Abstract
PURPOSE This study aims to develop deep learning techniques for total-body PET to bolster the feasibility of sedation-free pediatric PET imaging.
METHODS A deformable 3D U-Net was developed based on 245 adult subjects with standard total-body PET imaging for quality enhancement of simulated rapid imaging. The developed method was first tested on 16 children receiving total-body [18F]FDG PET scans with a standard 300-s acquisition time under sedation. Sixteen sets of rapid scans (acquisition times of about 3 s, 6 s, 15 s, 30 s, and 75 s) were retrospectively simulated by selecting the reconstruction time window. Finally, the developed methodology was prospectively tested on five children without sedation to demonstrate routine feasibility.
RESULTS The approach significantly improved subjective image quality and lesion conspicuity in the abdominal and pelvic regions of the generated 6-s data. In the first test set, the proposed method enhanced the objective image quality metrics of the 6-s data, such as PSNR (from 29.13 to 37.09, p < 0.01) and SSIM (from 0.906 to 0.921, p < 0.01). Furthermore, the errors of mean standardized uptake values (SUVmean) for lesions between the 300-s and 6-s data were reduced from 12.9 to 4.1% (p < 0.01), and the errors of maximum SUV (SUVmax) were reduced from 17.4 to 6.2% (p < 0.01). In the prospective test, radiologists reached a high degree of consistency on the clinical feasibility of the enhanced PET images.
CONCLUSION The proposed method can effectively enhance the image quality of total-body PET scanning with ultrafast acquisition times, meeting clinical diagnostic requirements for lesion detectability and quantification in the abdominal and pelvic regions. It has considerable potential to resolve the dilemma posed by sedation and long acquisition times, both of which affect the health of pediatric patients.
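The SUV error figures in this abstract compare lesion uptake between the full-time and fast reconstructions. A minimal numpy sketch of one plausible definition (percent error of SUVmean and SUVmax over a lesion mask) follows; the array names and the exact error definition are assumptions, not the authors' code.

```python
import numpy as np

def lesion_suv_errors(pet_full, pet_fast, lesion_mask):
    """Percent error of lesion SUVmean and SUVmax of a fast (or enhanced)
    reconstruction relative to the full 300-s reference image."""
    full = pet_full[lesion_mask]          # reference lesion voxels
    fast = pet_fast[lesion_mask]          # fast-scan lesion voxels
    err_mean = 100.0 * abs(fast.mean() - full.mean()) / full.mean()
    err_max = 100.0 * abs(fast.max() - full.max()) / full.max()
    return err_mean, err_max
```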
Affiliation(s)
- Xiang Zhou
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yu Fu
- College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Shunjie Dong
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lianghua Li
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Song Xue
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Ruohua Chen
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Gang Huang
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jianjun Liu
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
5. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A review on low-dose emission tomography post-reconstruction denoising with neural network approaches. arXiv 2024:arXiv:2401.00232v2. [PMID: 38313194] [PMCID: PMC10836084]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
6. Yang H, Chen S, Qi M, Chen W, Kong Q, Zhang J, Song S. Investigation of PET image quality with acquisition time/bed and enhancement of lesion quantification accuracy through deep progressive learning. EJNMMI Phys 2024; 11:7. [PMID: 38195785] [PMCID: PMC10776545] [DOI: 10.1186/s40658-023-00607-x]
Abstract
OBJECTIVE To improve PET image quality with a deep progressive learning (DPL) reconstruction algorithm and to evaluate the performance of DPL in lesion quantification.
METHODS We reconstructed PET images from 48 oncological patients using ordered subset expectation maximization (OSEM) and deep progressive learning (DPL) methods. The patients were enrolled in three overlapping studies: 11 patients for image quality assessment (study 1), 34 patients for sub-centimeter lesion quantification (study 2), and 28 patients for imaging of overweight or obese individuals (study 3). In study 1, we evaluated image quality visually based on four criteria: overall score, image sharpness, image noise, and diagnostic confidence. We also measured image quality quantitatively using the signal-to-background ratio (SBR), signal-to-noise ratio (SNR), contrast-to-background ratio (CBR), and contrast-to-noise ratio (CNR). To evaluate the performance of the DPL algorithm in quantifying lesions, we compared the maximum standardized uptake values (SUVmax), SBR, CBR, SNR, and CNR of 63 sub-centimeter lesions in study 2 and 44 lesions in study 3.
RESULTS DPL produced better PET image quality than OSEM on visual evaluation when the acquisition time was 0.5, 1.0, or 1.5 min/bed; no discernible differences were found between the two methods at 2.0, 2.5, or 3.0 min/bed. Quantitative results showed that DPL had significantly higher SBR, CBR, SNR, and CNR values than OSEM for each acquisition time. For sub-centimeter lesion quantification, the SUVmax, SBR, CBR, SNR, and CNR of DPL were significantly enhanced compared with OSEM. Similarly, for lesion quantification in overweight and obese patients, DPL significantly increased these parameters compared with OSEM.
CONCLUSION The DPL algorithm dramatically enhanced PET image quality and enabled more accurate quantification of sub-centimeter lesions and of lesions in overweight or obese patients. This is particularly beneficial for overweight or obese patients, who usually have lower image quality due to increased attenuation.
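The region-based metrics used in study 1 (SBR, SNR, CBR, CNR) can be written down from a lesion ROI and a background ROI. The sketch below uses one common set of definitions (lesion mean over background mean/noise), which may differ from the exact formulas used in the study.

```python
import numpy as np

def region_metrics(img, signal_mask, background_mask):
    """Signal-to-background (SBR), signal-to-noise (SNR), contrast-to-background
    (CBR), and contrast-to-noise (CNR) ratios from lesion and background ROIs."""
    s = img[signal_mask].mean()        # mean lesion uptake
    b = img[background_mask].mean()    # mean background uptake
    sd = img[background_mask].std()    # background noise (SD)
    return {"SBR": s / b, "SNR": s / sd, "CBR": (s - b) / b, "CNR": (s - b) / sd}
```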
Affiliation(s)
- Hongxing Yang
- Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, People's Republic of China
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, No. 130, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Center for Biomedical Imaging, Fudan University, No. 270, Dong'an Road, Shanghai, 200032, People's Republic of China
- Shanghai Engineering Research Center for Molecular Imaging Probes, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Institute of Modern Physics, Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, People's Republic of China
- Shihao Chen
- Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, People's Republic of China
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, No. 130, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Center for Biomedical Imaging, Fudan University, No. 270, Dong'an Road, Shanghai, 200032, People's Republic of China
- Shanghai Engineering Research Center for Molecular Imaging Probes, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Ming Qi
- Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, People's Republic of China
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, No. 130, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Center for Biomedical Imaging, Fudan University, No. 270, Dong'an Road, Shanghai, 200032, People's Republic of China
- Shanghai Engineering Research Center for Molecular Imaging Probes, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Wen Chen
- Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, People's Republic of China
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, No. 130, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Center for Biomedical Imaging, Fudan University, No. 270, Dong'an Road, Shanghai, 200032, People's Republic of China
- Shanghai Engineering Research Center for Molecular Imaging Probes, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Qing Kong
- Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, People's Republic of China
- Shanghai Engineering Research Center for Molecular Imaging Probes, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Jianping Zhang
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, No. 130, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Center for Biomedical Imaging, Fudan University, No. 270, Dong'an Road, Shanghai, 200032, People's Republic of China
- Shanghai Engineering Research Center for Molecular Imaging Probes, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Bioactive Small Molecules, Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200032, People's Republic of China
- Shaoli Song
- Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, People's Republic of China
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, No. 130, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
- Center for Biomedical Imaging, Fudan University, No. 270, Dong'an Road, Shanghai, 200032, People's Republic of China
- Shanghai Engineering Research Center for Molecular Imaging Probes, No. 270, Dong'an Road, Xuhui District, Shanghai, 200032, People's Republic of China
7. Yan L, Wang Z, Li D, Wang Y, Yang G, Zhao Y, Kong Y, Wang R, Wu R, Wang Z. Low 18F-fluorodeoxyglucose dose positron emission tomography assisted by a deep-learning image-denoising technique in patients with lymphoma. Quant Imaging Med Surg 2024; 14:111-122. [PMID: 38223079] [PMCID: PMC10784027] [DOI: 10.21037/qims-23-817]
Abstract
Background Patients with lymphoma receive multiple positron emission tomography/computed tomography (PET/CT) exams to monitor therapeutic response. With PET imaging, a reduced injected fluorine-18 fluorodeoxyglucose ([18F]FDG) activity can be administered while maintaining image quality. In this study, we investigated the effect of applying a deep learning (DL) denoising technique on image quality, the quantification of metabolic parameters, and the Deauville score (DS) of low-dose [18F]FDG PET in patients with lymphoma.
Methods This study retrospectively enrolled 62 patients who underwent [18F]FDG PET scans. Low-dose (LD) data were simulated by using 50% of the duration of the routine-dose (RD) PET list-mode data in the reconstruction, and a U-Net-based denoising neural network was applied to improve the LD PET images. The visual image quality score (1 = nondiagnostic, 5 = excellent) and DS were assessed in all patients by nuclear radiologists. The maximum, mean, and standard deviation (SD) of the standardized uptake value (SUV) in the liver and mediastinum were measured. In addition, lesions in some patients were segmented using a fixed threshold of 2.5, and their SUV, metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were measured. The correlation coefficients and limits of agreement between the RD and LD groups were analyzed.
Results The visual image quality of the LD group was improved compared with the RD group. The DS was similar between the RD and LD groups, and the negative (DS 1-3) and positive (DS 4-5) results remained unchanged. The correlation coefficients of SUV in the liver, mediastinum, and lesions were all >0.85. The mean differences in SUVmax and SUVmean between the RD and LD groups, respectively, were 0.22 [95% confidence interval (CI): -0.19 to 0.64] and 0.02 (95% CI: -0.17 to 0.20) in the liver, 0.13 (95% CI: -0.17 to 0.42) and 0.02 (95% CI: -0.12 to 0.16) in the mediastinum, and -0.75 (95% CI: -3.42 to 1.91) and -0.13 (95% CI: -0.57 to 0.31) in lesions. The mean differences in MTV and TLG between the RD and LD groups were 0.85 (95% CI: -2.27 to 3.98) and 4.06 (95% CI: -20.53 to 28.64), respectively.
Conclusions The DL denoising technique enables accurate tumor assessment and quantification with LD [18F]FDG PET imaging in patients with lymphoma.
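The limits-of-agreement analysis referenced in this abstract is a Bland-Altman computation. A minimal numpy sketch (mean difference with 1.96 SD limits), assuming paired RD and LD measurements, follows; this is a generic implementation, not the study's code.

```python
import numpy as np

def bland_altman(rd, ld):
    """Mean difference and 95% limits of agreement between paired
    routine-dose (RD) and low-dose (LD) measurements."""
    d = np.asarray(rd, dtype=float) - np.asarray(ld, dtype=float)
    mean_diff = d.mean()
    half_width = 1.96 * d.std(ddof=1)   # 1.96 x sample SD of the differences
    return mean_diff, mean_diff - half_width, mean_diff + half_width
```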
Affiliation(s)
- Lei Yan
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zhao Wang
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
- Dacheng Li
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yangyang Wang
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
- Guangjie Yang
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yujun Zhao
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yan Kong
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
- Rui Wang
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
- Runze Wu
- Central Research Institute, Beijing United Imaging Research Institute of Intelligent Imaging, Beijing, China
- Zhenguang Wang
- Department of Nuclear Medicine, The Affiliated Hospital of Qingdao University, Qingdao, China
8. Zhang Q, Hu Y, Zhou C, Zhao Y, Zhang N, Zhou Y, Yang Y, Zheng H, Fan W, Liang D, Hu Z. Reducing pediatric total-body PET/CT imaging scan time with multimodal artificial intelligence technology. EJNMMI Phys 2024; 11:1. [PMID: 38165551] [PMCID: PMC10761657] [DOI: 10.1186/s40658-023-00605-z]
Abstract
OBJECTIVES This study aims to decrease scan time and enhance image quality in pediatric total-body PET imaging by utilizing multimodal artificial intelligence techniques.
METHODS A total of 270 pediatric patients who underwent total-body PET/CT scans with a uEXPLORER at the Sun Yat-sen University Cancer Center were retrospectively enrolled. 18F-fluorodeoxyglucose (18F-FDG) was administered at a dose of 3.7 MBq/kg with an acquisition time of 600 s. Short-term scan PET images (acquired within 6, 15, 30, 60, and 150 s) were obtained by truncating the list-mode data. A three-dimensional (3D) neural network was developed with a residual network as the basic structure, fusing low-dose CT images as prior information, which were fed to the network at different scales. The short-term PET images and low-dose CT images were processed by the multimodal 3D network to generate full-length, high-dose PET images. The nonlocal means method and the same 3D network without the fused CT information were used as reference methods. The performance of the network model was evaluated by quantitative and qualitative analyses.
RESULTS Multimodal artificial intelligence techniques significantly improved PET image quality. When prior CT information was fused, the anatomical information of the images was enhanced, and 60 s of scan data produced images of quality comparable to that of the full-time data.
CONCLUSION Multimodal artificial intelligence techniques can effectively improve the quality of pediatric total-body PET/CT images acquired with ultrashort scan times, with the potential to decrease the use of sedation, enhance guardian confidence, and reduce the probability of motion artifacts.
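The short-term scans described above are obtained by truncating list-mode data to a shorter reconstruction window. The following is a toy numpy sketch of that simulation step, using synthetic uniform event timestamps rather than real list-mode data.

```python
import numpy as np

def truncate_listmode(event_times_s, short_s):
    """Keep only coincidence events recorded in the first `short_s` seconds,
    simulating a short-term acquisition from a full-length one."""
    t = np.asarray(event_times_s)
    return t[t < short_s]

# Synthetic example: roughly uniform event times over a 600-s acquisition,
# truncated to a simulated 60-s scan (about 10% of the counts).
rng = np.random.default_rng(0)
events = rng.uniform(0.0, 600.0, size=100_000)
short = truncate_listmode(events, 60.0)
```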
Affiliation(s)
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yingying Hu
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Yumo Zhao
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yun Zhou
- United Imaging Healthcare Group, Central Research Institute, Shanghai, 201807, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
9. Manoj Doss KK, Chen JC. Utilizing deep learning techniques to improve image quality and noise reduction in preclinical low-dose PET images in the sinogram domain. Med Phys 2024; 51:209-223. [PMID: 37966121] [DOI: 10.1002/mp.16830]
Abstract
BACKGROUND Low-dose positron emission tomography (LD-PET) imaging is commonly employed in preclinical research to minimize radiation exposure to animal subjects. However, LD-PET images often exhibit poor quality and high noise levels due to the low signal-to-noise ratio. Deep learning (DL) techniques such as generative adversarial networks (GANs) and convolutional neural networks (CNNs) have the capability to enhance the quality of images derived from noisy or low-quality PET data, which encode critical information about radioactivity distribution in the body. PURPOSE Our objective was to optimize the image quality and reduce noise in preclinical PET images by utilizing the sinogram domain as input for DL models, resulting in improved image quality compared to LD-PET images. METHODS A GAN and a CNN model were utilized to predict high-dose (HD) preclinical PET sinograms from the corresponding LD preclinical PET sinograms. To generate the datasets, experiments were conducted on micro-phantoms, animal subjects (rats), and virtual simulations. The quality of DL-generated images was assessed using the following quantitative measures: structural similarity index measure (SSIM), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Additionally, the spatial resolution of both DL input and output was calculated as full width at half maximum (FWHM) and full width at tenth maximum (FWTM). DL outcomes were then compared with conventional denoising algorithms such as non-local means (NLM) and block-matching and 3D filtering (BM3D). RESULTS The DL models effectively learned image features and produced high-quality images, as reflected in the quantitative metrics. Notably, the FWHM and FWTM values of DL PET images were significantly more accurate than those of LD, NLM, and BM3D PET images, and were as precise as those of HD PET images.
The MSE loss confirmed the strong performance of both models. To further improve the training, the generator loss (G loss) was increased to a value higher than the discriminator loss (D loss), thereby achieving convergence in the GAN model. CONCLUSIONS The sinograms generated by the GAN network closely resembled real HD preclinical PET sinograms and were more realistic than the LD sinograms. There was a noticeable improvement in image quality and noise in the predicted HD images. Importantly, the DL networks largely preserved the spatial resolution of the images.
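The quantitative measures named above (RMSE, PSNR, CNR, and related metrics) are straightforward to compute. A minimal sketch, assuming toy flattened image lists and an illustrative peak value rather than real PET data:

```python
import math

def rmse(ref, img):
    """Root mean squared error between a reference and a test image."""
    return math.sqrt(sum((r - i) ** 2 for r, i in zip(ref, img)) / len(ref))

def psnr(ref, img, peak):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

def cnr(roi, background):
    """Contrast-to-noise ratio: ROI/background mean difference over background SD."""
    mu_r = sum(roi) / len(roi)
    mu_b = sum(background) / len(background)
    sd_b = math.sqrt(sum((b - mu_b) ** 2 for b in background) / len(background))
    return (mu_r - mu_b) / sd_b

# Toy 2x2 "images" flattened to lists (illustrative values only)
hd = [1.0, 2.0, 3.0, 4.0]   # stands in for the HD reference
dl = [1.1, 1.9, 3.2, 3.8]   # stands in for a DL-denoised output
print(round(rmse(hd, dl), 4))        # 0.1581
print(round(psnr(hd, dl, 4.0), 2))   # 28.06
```

Real evaluations would run these over full sinograms or reconstructed volumes; the formulas are unchanged.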
Affiliation(s)
- Jyh-Cheng Chen
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Medical Imaging and Radiological Sciences, China Medical University, Taichung, Taiwan
- School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
10
Gong K, Johnson K, El Fakhri G, Li Q, Pan T. PET image denoising based on denoising diffusion probabilistic model. Eur J Nucl Med Mol Imaging 2024; 51:358-368. [PMID: 37787849 PMCID: PMC10958486 DOI: 10.1007/s00259-023-06417-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Accepted: 08/22/2023] [Indexed: 10/04/2023]
Abstract
PURPOSE Due to various physical degradation factors and the limited counts received, PET image quality needs further improvement. The denoising diffusion probabilistic model (DDPM) is a distribution-learning-based model that transforms a normal distribution into a specific data distribution through iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS Under the DDPM framework, one way to perform PET image denoising is to provide the PET image and/or the prior image as the input. Another way is to supply the prior image as the network input with the PET image included in the refinement steps, which can accommodate scenarios with different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were utilized to evaluate the proposed DDPM-based methods. RESULTS Quantification showed that the DDPM-based frameworks with PET information included generated better results than the nonlocal mean, Unet and generative adversarial network (GAN)-based denoising methods. Adding an MR prior to the model helped achieve better performance and further reduced the uncertainty during image denoising. Solely relying on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION DDPM-based PET image denoising is a flexible framework, which can efficiently utilize prior information and achieve better performance than the nonlocal mean, Unet and GAN-based denoising methods.
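The refinement-with-data-consistency idea described here can be illustrated with a deliberately simplified scalar toy: a hand-picked contraction stands in for the trained DDPM denoiser, and a mixing step pulls the estimate back toward the measured PET value at every iteration. All numbers and the `dc_weight` parameter are illustrative assumptions, not the authors' settings:

```python
def toy_reverse_refinement(measured_pet, prior_mr, steps=200, dc_weight=0.3):
    """Toy illustration of iterative refinement with a data-consistency step.

    A real DDPM replaces `denoise` with a trained score network; here the
    "denoiser" is a hand-picked contraction toward the prior, purely to show
    how the data-consistency mixing prevents collapse onto the prior alone.
    """
    x = prior_mr  # start from the anatomical prior (hypothetical choice)
    for _ in range(steps):
        x = x + 0.5 * (prior_mr - x)                        # stand-in "denoising" step
        x = (1 - dc_weight) * x + dc_weight * measured_pet  # data-consistency step
    return x

# Without data consistency the estimate would collapse onto the prior (1.0);
# with it, the estimate settles between the prior and the measurement (2.0).
est = toy_reverse_refinement(measured_pet=2.0, prior_mr=1.0)
```

This mirrors the abstract's finding: ignoring the PET data biases the result toward the prior, while embedding the measurement as a constraint keeps the estimate anchored to the data.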
Affiliation(s)
- Kuang Gong
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, 32611, FL, USA.
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.
- Keith Johnson
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, 77030, TX, USA
11
Huang Z, Li W, Wu Y, Guo N, Yang L, Zhang N, Pang Z, Yang Y, Zhou Y, Shang Y, Zheng H, Liang D, Wang M, Hu Z. Short-axis PET image quality improvement based on a uEXPLORER total-body PET system through deep learning. Eur J Nucl Med Mol Imaging 2023; 51:27-39. [PMID: 37672046 DOI: 10.1007/s00259-023-06422-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 08/30/2023] [Indexed: 09/07/2023]
Abstract
PURPOSE The axial field of view (AFOV) of a positron emission tomography (PET) scanner greatly affects the quality of PET images. Although a total-body PET scanner (uEXPLORER) with a large AFOV is more sensitive, it is more expensive and difficult to use widely. Therefore, we attempt to utilize high-quality images generated by uEXPLORER to optimize the quality of images from short-axis PET scanners through deep learning technology while controlling costs. METHODS The experiments were conducted using PET images of three anatomical locations (brain, lung, and abdomen) from 335 patients. To simulate PET images from different axes, two protocols were used to obtain PET image pairs (each patient was scanned once). For low-quality PET (LQ-PET) images with a 320-mm AFOV, we applied a 300-mm FOV for brain reconstruction and a 500-mm FOV for lung and abdomen reconstruction. For high-quality PET (HQ-PET) images, we applied a 1940-mm AFOV during the reconstruction process. A 3D Unet was utilized to learn the mapping relationship between LQ-PET and HQ-PET images. In addition, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were employed to evaluate the model performance. Furthermore, two nuclear medicine doctors evaluated the image quality based on clinical readings. RESULTS The generated PET images of the brain, lung, and abdomen were quantitatively and qualitatively comparable to the HQ-PET images. In particular, our method achieved PSNR values of 35.41 ± 5.45 dB (p < 0.05), 33.77 ± 6.18 dB (p < 0.05), and 38.58 ± 7.28 dB (p < 0.05) for the three beds. The overall mean SSIM was greater than 0.94 for all patients who underwent testing. Moreover, the total subjective quality levels of the generated PET images for the three beds were 3.74 ± 0.74, 3.69 ± 0.81, and 3.42 ± 0.99 (the highest possible score was 5, and the minimum was 1) from two experienced nuclear medicine experts.
Additionally, we evaluated the distribution of quantitative standardized uptake values (SUV) in the region of interest (ROI). Both the SUV distribution and the profile peaks show that our results are consistent with the HQ-PET images, confirming the effectiveness of our approach. CONCLUSION The findings demonstrate the potential of the proposed technique for improving the image quality of a PET scanner with a 320-mm or even shorter AFOV. Furthermore, this study explored the potential of utilizing uEXPLORER to achieve improved short-axis PET image quality at limited economic cost; related computer-aided diagnosis systems may in turn benefit patients and radiologists.
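The SUV quantification used in this evaluation follows the standard body-weight normalization. A minimal sketch, assuming unit tissue density and purely illustrative activity and weight values:

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight-normalized standardized uptake value.

    Dimensionless under the usual assumption of ~1 g/mL tissue density.
    """
    return activity_bq_per_ml * body_weight_g / injected_dose_bq

def roi_suv_stats(roi_activities, injected_dose_bq, body_weight_g):
    """SUVmean and SUVmax over a region of interest (voxel activities in Bq/mL)."""
    suvs = [suv(a, injected_dose_bq, body_weight_g) for a in roi_activities]
    return sum(suvs) / len(suvs), max(suvs)

# Illustrative numbers only: a 70 kg subject and 300 MBq injected activity
mean_suv, max_suv = roi_suv_stats([10_000.0, 12_000.0, 9_000.0],
                                  injected_dose_bq=300e6, body_weight_g=70_000.0)
```

Comparing such ROI statistics between generated and reference images is what the SUV-distribution check above amounts to.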
Affiliation(s)
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yaping Wu
- Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Nannan Guo
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- School of Mathematics and Statistics, Henan University, Kaifeng, 475004, China
- Lin Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- School of Mathematics and Statistics, Henan University, Kaifeng, 475004, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhifeng Pang
- School of Mathematics and Statistics, Henan University, Kaifeng, 475004, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yun Zhou
- Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Yue Shang
- Performance Strategy & Analytics, UCLA Health, Los Angeles, CA, 90001, USA
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Meiyun Wang
- School of Mathematics and Statistics, Henan University, Kaifeng, 475004, China.
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
12
Kaviani S, Sanaat A, Mokri M, Cohalan C, Carrier JF. Image reconstruction using UNET-transformer network for fast and low-dose PET scans. Comput Med Imaging Graph 2023; 110:102315. [PMID: 38006648 DOI: 10.1016/j.compmedimag.2023.102315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 09/26/2023] [Accepted: 11/15/2023] [Indexed: 11/27/2023]
Abstract
INTRODUCTION Low-dose and fast PET imaging (low-count PET) play a significant role in enhancing patient safety, healthcare efficiency, and patient comfort during medical imaging procedures. To achieve high-quality images with low-count PET scans, effective reconstruction models are crucial for denoising and enhancing image quality. The main goal of this paper is to develop an effective and accurate deep learning-based method for reconstructing low-count PET images, which is a challenging problem due to the limited amount of available data and the high level of noise in the acquired images. The proposed method aims to improve the quality of reconstructed PET images while preserving important features, such as edges and small details, by combining the strengths of UNET and Transformer networks. MATERIAL AND METHODS The proposed TrUNET-MAPEM model integrates a residual UNET-transformer regularizer into the unrolled maximum a posteriori expectation maximization (MAPEM) algorithm for PET image reconstruction. A loss function based on a combination of structural similarity index (SSIM) and mean squared error (MSE) is utilized to evaluate the accuracy of the reconstructed images. The simulated dataset was generated using the Brainweb phantom, while the real patient dataset was acquired using a Siemens Biograph mMR PET scanner. We also implemented state-of-the-art methods for comparison purposes: OSEM, MAPOSEM, and supervised learning using 3D-UNET network. The reconstructed images are compared to ground truth images using metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and relative root mean square error (rRMSE) to quantitatively evaluate the accuracy of the reconstructed images. RESULTS Our proposed TrUNET-MAPEM approach was evaluated using both simulated and real patient data. For the patient data, our model achieved an average PSNR of 33.72 dB, an average SSIM of 0.955, and an average rRMSE of 0.39. 
These results outperformed those of the other methods, which had average PSNRs of 36.89 dB, 34.12 dB, and 33.52 dB, average SSIMs of 0.944, 0.947, and 0.951, and average rRMSEs of 0.59, 0.49, and 0.42. For the simulated data, our model achieved an average PSNR of 31.23 dB, an average SSIM of 0.95, and an average rRMSE of 0.55. These results also outperformed other state-of-the-art methods, such as OSEM, MAPOSEM, and 3DUNET-MAPEM. The model demonstrates the potential for clinical use by successfully reconstructing smooth images while preserving edges. The comparison with other methods demonstrates the superiority of our approach, as it outperforms all other methods on all three metrics. CONCLUSION The proposed TrUNET-MAPEM model presents a significant advancement in the field of low-count PET image reconstruction. The results demonstrate the potential for clinical use, as the model can produce images with reduced noise levels and better edge preservation compared to other reconstruction and post-processing algorithms. The proposed approach may have important clinical applications in the early detection and diagnosis of various diseases.
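The unrolled MAPEM scheme builds on the classical EM update. A minimal sketch of the plain MLEM data-fidelity step (omitting the learned residual UNET-transformer regularizer that TrUNET-MAPEM interleaves with it) on a hypothetical two-pixel, two-detector system:

```python
def mlem_update(x, A, y):
    """One MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x)).

    In an unrolled MAPEM network, a learned regularizer (omitted here)
    would be applied between updates like this one.
    """
    n_pix, n_det = len(x), len(y)
    fwd = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_det)]
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    back = [sum(A[i][j] * y[i] / fwd[i] for i in range(n_det)) for j in range(n_pix)]
    return [x[j] / sens[j] * back[j] for j in range(n_pix)]

# Toy system matrix and noiseless data generated from the true image [2, 4]
A = [[1.0, 0.5], [0.5, 1.0]]
y = [4.0, 5.0]        # A @ [2, 4]
x = [1.0, 1.0]        # uniform initial estimate
for _ in range(100):
    x = mlem_update(x, A, y)
# x approaches the true image [2, 4]; estimates stay nonnegative by construction
```

The multiplicative form guarantees nonnegativity, which is why EM-family updates remain the backbone of learned PET reconstruction schemes.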
Affiliation(s)
- Sanaz Kaviani
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada.
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mersede Mokri
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada
- Claire Cohalan
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics and Biomedical Engineering, University of Montreal Hospital Centre, Montreal, Canada
- Jean-Francois Carrier
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics, University of Montreal, Montreal, QC, Canada; Department of Radiation Oncology, University of Montreal Hospital Centre (CHUM), Montreal, Canada
13
Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023; 197:106984. [PMID: 37940064 DOI: 10.1016/j.phrs.2023.106984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 10/04/2023] [Accepted: 11/04/2023] [Indexed: 11/10/2023]
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. First, we briefly analyse how these algorithms have evolved and which are most widely applied in this domain. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease-progression evaluation systems. These applications build on the algorithms' ability to analyse complex patterns and relationships within imaging data and to extract quantitative, objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK.
14
Roya M, Mostafapour S, Mohr P, Providência L, Li Z, van Snick JH, Brouwers AH, Noordzij W, Willemsen ATM, Dierckx RAJO, Lammertsma AA, Glaudemans AWJM, Tsoumpas C, Slart RHJA, van Sluis J. Current and Future Use of Long Axial Field-of-View Positron Emission Tomography/Computed Tomography Scanners in Clinical Oncology. Cancers (Basel) 2023; 15:5173. [PMID: 37958347 PMCID: PMC10648837 DOI: 10.3390/cancers15215173] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2023] [Revised: 10/23/2023] [Accepted: 10/24/2023] [Indexed: 11/15/2023] Open
Abstract
The latest technical development in the field of positron emission tomography/computed tomography (PET/CT) imaging has been the extension of the PET axial field-of-view. As a result of the increased number of detectors, long axial field-of-view (LAFOV) PET systems are characterized not only by larger anatomical coverage but also by substantially improved sensitivity, compared with conventional short axial field-of-view PET systems. In clinical practice, this innovation has led to the following optimizations: (1) improved overall image quality, (2) decreased duration of PET examinations, (3) decreased amount of radioactivity administered to the patient, or (4) a combination of any of the above. In this review, novel applications of LAFOV PET in oncology are highlighted and future directions are discussed.
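These trade-offs follow from first-order count statistics: detected counts scale roughly with sensitivity × injected activity × scan time, so a sensitivity gain can be spent on a shorter scan, a lower dose, or better image quality. A sketch with hypothetical numbers (the 10x gain, 300 MBq dose, and 900 s scan are illustrative, not measured values):

```python
def equivalent_protocols(sens_gain, dose_mbq, time_s):
    """Return (dose, time) options that keep the detected-count level of the
    original protocol after a PET sensitivity gain.

    Counts ~ sensitivity * dose * time: a first-order approximation that
    ignores dead time, randoms, and scatter.
    """
    return {
        "same dose, shorter scan": (dose_mbq, time_s / sens_gain),
        "lower dose, same scan": (dose_mbq / sens_gain, time_s),
    }

# Hypothetical 10x sensitivity gain of a LAFOV system over a conventional one
opts = equivalent_protocols(sens_gain=10, dose_mbq=300, time_s=900)
# e.g. the same counts in a 90 s scan, or with a 30 MBq injection
```

Any intermediate split of the gain between dose, time, and image quality is possible, which is exactly option (4) in the list above.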
Affiliation(s)
- Mostafa Roya, Samaneh Mostafapour, Philipp Mohr, Laura Providência, Zekai Li, Johannes H. van Snick, Adrienne H. Brouwers, Walter Noordzij, Antoon T. M. Willemsen, Rudi A. J. O. Dierckx, Adriaan A. Lammertsma, Andor W. J. M. Glaudemans, Charalampos Tsoumpas, Riemer H. J. A. Slart, Joyce van Sluis
- All authors: Department of Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, P.O. Box 30001, 9700 RB Groningen, The Netherlands
- Riemer H. J. A. Slart (additionally): Department of Biomedical Photonic Imaging, Faculty of Science and Technology, University of Twente, 7522 NB Enschede, The Netherlands
15
Liu L, Chen X, Wan L, Zhang N, Hu R, Li W, Liu S, Zhu Y, Pang H, Liang D, Chen Y, Hu Z. Feasibility of a deep learning algorithm to achieve the low-dose 68Ga-FAPI/the fast-scan PET images: a multicenter study. Br J Radiol 2023; 96:20230038. [PMID: 37393527 PMCID: PMC10461288 DOI: 10.1259/bjr.20230038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 05/26/2023] [Accepted: 06/04/2023] [Indexed: 07/03/2023] Open
Abstract
OBJECTIVES Our work aims to study the feasibility of a deep learning algorithm to reduce the 68Ga-FAPI radiotracer injected activity and/or shorten the scanning time, and to investigate its effects on image quality and lesion detection ability. METHODS The data of 130 patients who underwent 68Ga-FAPI positron emission tomography (PET)/CT in two centers were studied. Predicted full-dose images (DL-22%, DL-28% and DL-33%) were obtained from three groups of low-dose images using a deep learning method and compared with the standard-dose images (raw data). Injected activity for full-dose images was 2.16 ± 0.61 MBq/kg. The quality of the predicted full-dose PET images was subjectively evaluated by two nuclear physicians using a 5-point Likert scale, and objectively evaluated using the peak signal-to-noise ratio, structural similarity index and root mean square error. The maximum standardized uptake value (SUVmax) and the mean standardized uptake value (SUVmean) were used to quantitatively analyze the four volumes of interest (the brain, liver, left lung and right lung) and all lesions, and the lesion detection rate was calculated. RESULTS Data showed that the DL-33% images of the two test data sets met the clinical diagnosis requirements, and the overall lesion detection rate of the two centers reached 95.9%. CONCLUSION Through deep learning, we demonstrated that reducing the 68Ga-FAPI injected activity and/or shortening the scanning time in PET/CT imaging is feasible. In addition, a 68Ga-FAPI dose as low as 33% of the standard dose maintained acceptable image quality. ADVANCES IN KNOWLEDGE This is the first study of low-dose 68Ga-FAPI PET images from two centers using a deep learning algorithm.
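The dose-reduction arithmetic and the pooled detection rate reported here are simple to reproduce. A sketch in which the patient weight and lesion counts are illustrative assumptions; only the 2.16 MBq/kg level and the 33% fraction come from the study:

```python
def detection_rate(lesions_detected, lesions_total):
    """Overall lesion detection rate (%), pooled across centers."""
    return 100.0 * lesions_detected / lesions_total

def reduced_activity_mbq(weight_kg, full_dose_mbq_per_kg=2.16, fraction=0.33):
    """Injected activity for a fractional-dose protocol.

    The weight-based full-dose level mirrors the study's 2.16 MBq/kg mean;
    the patient weight passed in is an illustrative assumption.
    """
    return weight_kg * full_dose_mbq_per_kg * fraction

act = reduced_activity_mbq(weight_kg=70)   # roughly 49.9 MBq at one-third dose
rate = detection_rate(959, 1000)           # hypothetical counts giving 95.9%
```

The same fraction applies equally to scan time: acquiring 33% of the counts, whether by lower activity or shorter acquisition, produces the low-count input that the network restores.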
Affiliation(s)
- Liwen Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruibao Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wenbo Li
- Department of Nuclear Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Shengping Liu
- Chongqing University of Technology, Chongqing, China
- Hua Pang
- Department of Nuclear Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
16
Li J, Xi C, Dai H, Wang J, Lv Y, Zhang P, Zhao J. Enhanced PET imaging using progressive conditional deep image prior. Phys Med Biol 2023; 68:175047. [PMID: 37582392 DOI: 10.1088/1361-6560/acf091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2022] [Accepted: 08/15/2023] [Indexed: 08/17/2023]
Abstract
Objective. Unsupervised learning-based methods have been proven to be an effective way to improve the image quality of positron emission tomography (PET) images when a large dataset is not available. However, when the gap between the input image and the target PET image is large, direct unsupervised learning can be challenging and easily lead to reduced lesion detectability. We aim to develop a new unsupervised learning method to improve lesion detectability in patient studies. Approach. We applied a deep progressive learning strategy to bridge the gap between the input image and the target image. The one-step unsupervised learning is decomposed into two unsupervised learning steps. The input of the first network is an anatomical image, and the input of the second network is a PET image with a low noise level. The output of the first network is also used as the prior image to generate the target image of the second network by an iterative reconstruction method. Results. The performance of the proposed method was evaluated through phantom and patient studies and compared with non-deep-learning, supervised learning and unsupervised learning methods. The results showed that the proposed method was superior to the non-deep-learning and unsupervised methods and comparable to the supervised method. Significance. A progressive unsupervised learning method was proposed that can improve image noise performance and lesion detectability.
Affiliation(s)
- Jinming Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi
- United Imaging Healthcare, Shanghai, People's Republic of China
- Houjiao Dai
- United Imaging Healthcare, Shanghai, People's Republic of China
- Jing Wang
- Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Shaanxi, Xi'an, People's Republic of China
- Yang Lv
- United Imaging Healthcare, Shanghai, People's Republic of China
- Puming Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
17
Sanaei B, Faghihi R, Arabi H. Employing Multiple Low-Dose PET Images (at Different Dose Levels) as Prior Knowledge to Predict Standard-Dose PET Images. J Digit Imaging 2023; 36:1588-1596. [PMID: 36988836 PMCID: PMC10406788 DOI: 10.1007/s10278-023-00815-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 03/13/2023] [Accepted: 03/15/2023] [Indexed: 03/30/2023] Open
Abstract
Existing deep learning-based denoising methods that predict standard-dose PET images (S-PET) from low-dose versions (L-PET) rely solely on a single dose level of PET images as the network input. In this work, we exploited prior knowledge in the form of multiple low-dose levels of PET images to estimate the S-PET images. To this end, a high-resolution ResNet architecture was used to predict S-PET images from 6% and 4% L-PET images. For 6% L-PET imaging, two models were developed: the first was trained with a single input of 6% L-PET, and the second with three inputs of 6%, 4%, and 2% L-PET to predict S-PET images. Similarly, for 4% L-PET imaging, one model was trained using a single input of 4% low-dose data, and a three-channel model was developed taking 4%, 3%, and 2% L-PET images as input. The performance of the four models was evaluated using the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) within the entire head region and within malignant lesions. The 4% multi-input model led to improved SSI and PSNR and a significant decrease in RMSE, by 22.22% and 25.42% within the entire head region and malignant lesions, respectively. Furthermore, the 4% multi-input network remarkably decreased the lesions' SUVmean bias and SUVmax bias, by 64.58% and 37.12% compared to the single-input network. In addition, the 6% multi-input network decreased the RMSE within the entire head region, the RMSE within the lesions, the lesions' SUVmean bias, and the SUVmax bias by 37.5%, 39.58%, 86.99%, and 45.60%, respectively. This study demonstrated the significant benefits of using prior knowledge in the form of multiple L-PET images to predict S-PET images.
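For readers unfamiliar with the metrics above, RMSE and PSNR reduce to a few lines of NumPy. This is only an illustrative sketch using common definitions and toy data, not the study's images or code (SSI/SSIM needs a windowed implementation, e.g. from scikit-image):

```python
import numpy as np

def rmse(pred, ref):
    """Root mean square error between predicted and reference images."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def psnr(pred, ref, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    dynamic range of the reference image (one common convention)."""
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    mse = np.mean((pred - ref) ** 2)
    return float(20 * np.log10(data_range) - 10 * np.log10(mse))

# Toy example: a noisy "low-dose" prediction vs. its clean reference.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, (64, 64))
pred = ref + rng.normal(0.0, 0.05, (64, 64))
print(rmse(pred, ref), psnr(pred, ref))
```

With Gaussian noise of standard deviation 0.05, the RMSE lands near 0.05 and the PSNR near 26 dB, which is the kind of head-to-head comparison the abstract reports between single- and multi-input models.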
Affiliation(s)
- Behnoush Sanaei
  - Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Reza Faghihi
  - Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Hossein Arabi
  - Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
18. Pratt EC, Lopez-Montes A, Volpe A, Crowley MJ, Carter LM, Mittal V, Pillarsetty N, Ponomarev V, Udías JM, Grimm J, Herraiz JL. Simultaneous quantitative imaging of two PET radiotracers via the detection of positron-electron annihilation and prompt gamma emissions. Nat Biomed Eng 2023;7:1028-1039. PMID: 37400715; PMCID: PMC10810307; DOI: 10.1038/s41551-023-01060-y
Abstract
In conventional positron emission tomography (PET), only one radiotracer can be imaged at a time, because all PET isotopes produce the same two 511 keV annihilation photons. Here we describe an image reconstruction method for the simultaneous in vivo imaging of two PET tracers and thereby the independent quantification of two molecular signals. This method of multiplexed PET imaging leverages the 350-700 keV range to maximize the capture of 511 keV annihilation photons and prompt γ-ray emission in the same energy window, hence eliminating the need for energy discrimination during reconstruction or for signal separation beforehand. We used multiplexed PET to track, in mice with subcutaneous tumours, the biodistributions of intravenously injected [124I]I-trametinib and 2-deoxy-2-[18F]fluoro-D-glucose, [124I]I-trametinib and its nanoparticle carrier [89Zr]Zr-ferumoxytol, and the prostate-specific membrane antigen (PSMA) and infused PSMA-targeted chimaeric antigen receptor T cells after the systemic administration of [68Ga]Ga-PSMA-11 and [124I]I. Multiplexed PET provides more information depth, gives new uses to prompt γ-ray-emitting isotopes, reduces radiation burden by omitting the need for an additional computed-tomography scan and can be implemented on preclinical and clinical systems without any modifications in hardware or image acquisition software.
Affiliation(s)
- Edwin C Pratt
  - Department of Pharmacology, Weill Cornell Graduate School, New York, NY, USA
  - Molecular Pharmacology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Alejandro Lopez-Montes
  - Nuclear Physics Group, EMFTEL and IPARCOS, Complutense University of Madrid, Madrid, Spain
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Alessia Volpe
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Michael J Crowley
  - Department of Cell and Developmental Biology, Weill Cornell Graduate School, New York, NY, USA
- Lukas M Carter
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Vivek Mittal
  - Department of Cell and Developmental Biology, Weill Cornell Graduate School, New York, NY, USA
  - Department of Cardiothoracic Surgery, Weill Cornell Medicine, New York, USA
  - Neuberger Berman Lung Cancer Center, Weill Cornell Medicine, New York, USA
  - Sandra and Edward Meyer Cancer Center, Weill Cornell Medicine, New York, USA
- Vladimir Ponomarev
  - Molecular Pharmacology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Jose M Udías
  - Nuclear Physics Group, EMFTEL and IPARCOS, Complutense University of Madrid, Madrid, Spain
  - Instituto de Investigación Sanitaria Hospital Clínico San Carlos, Madrid, Spain
- Jan Grimm
  - Department of Pharmacology, Weill Cornell Graduate School, New York, NY, USA
  - Molecular Pharmacology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Joaquin L Herraiz
  - Nuclear Physics Group, EMFTEL and IPARCOS, Complutense University of Madrid, Madrid, Spain
  - Instituto de Investigación Sanitaria Hospital Clínico San Carlos, Madrid, Spain
19. Loft M, Ladefoged CN, Johnbeck CB, Carlsen EA, Oturai P, Langer SW, Knigge U, Andersen FL, Kjaer A. An Investigation of Lesion Detection Accuracy for Artificial Intelligence-Based Denoising of Low-Dose 64Cu-DOTATATE PET Imaging in Patients with Neuroendocrine Neoplasms. J Nucl Med 2023;64:951-959. PMID: 37169532; PMCID: PMC10241012; DOI: 10.2967/jnumed.122.264826
Abstract
Frequent somatostatin receptor PET, for example, 64Cu-DOTATATE PET, is part of the diagnostic work-up of patients with neuroendocrine neoplasms (NENs), resulting in high accumulated radiation doses. Scan-related radiation exposure should be minimized in accordance with the as-low-as-reasonably-achievable principle, for example, by reducing the injected radiotracer activity. Previous investigations found that reducing 64Cu-DOTATATE activity to below 50 MBq results in inadequate image quality and lesion detection. We therefore investigated whether image quality and lesion detection with less than 50 MBq of 64Cu-DOTATATE PET could be restored using artificial intelligence (AI). Methods: We implemented a parameter-transferred Wasserstein generative adversarial network for patients with NENs on simulated low-dose 64Cu-DOTATATE PET images corresponding to 25% (PET25%), or about 48 MBq, of the injected activity of the reference full dose (PET100%), or about 191 MBq, to generate denoised PET images (PETAI). We included 38 patients in the training sets for network optimization. We analyzed the PET intensity correlation, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean-square error (MSE) of PETAI/PET100% versus PET25%/PET100%. Two readers assessed Likert-scale-defined image quality (1, very poor; 2, poor; 3, moderate; 4, good; 5, excellent) and identified lesion-suspicious foci on PETAI and PET100% in a subset of the patients with no more than 20 lesions per organ (n = 33), to allow comparison of all foci on a 1:1 basis. Detected foci were scored (C1, definite lesion; C0, lesion-suspicious focus) and matched with PET100% as the reference. True-positive (TP), false-positive (FP), and false-negative (FN) lesions were assessed. Results: For PETAI/PET100% versus PET25%/PET100%, the PET intensity correlation had a goodness-of-fit value of 0.94 versus 0.81, PSNR was 58.1 versus 53.0, SSIM was 0.908 versus 0.899, and MSE was 2.6 versus 4.7.
Likert-scale-defined image quality was rated good or excellent in 33 of 33 and 32 of 33 patients on PET100% and PETAI, respectively. The total number of detected lesions was 118 on PET100% and 115 on PETAI. Only 78 PETAI lesions were TP, 40 were FN, and 37 were FP, yielding a detection sensitivity (TP/(TP+FN)) and a false discovery rate (FP/(TP+FP)) of 66% (78/118) and 32% (37/115), respectively. In 62% (23/37) of cases, the FP lesion was scored C1, suggesting a definite lesion. Conclusion: PETAI improved visual similarity with PET100% compared with PET25%, and PETAI and PET100% had similar Likert-scale-defined image quality. However, lesion detection analysis performed by physicians showed high proportions of FP and FN lesions on PETAI, highlighting the need for clinical validation of AI algorithms.
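The detection figures quoted above follow directly from the TP/FP/FN counts with PET100% as the reference. A minimal sketch using the abstract's own numbers:

```python
# Per-lesion detection metrics: PET100% is the reference, so TP + FN equals
# the reference lesion count (118) and TP + FP the count detected on the
# AI-denoised images (115). Counts are taken from the abstract.
tp, fn, fp = 78, 40, 37

sensitivity = tp / (tp + fn)           # fraction of reference lesions found
false_discovery_rate = fp / (tp + fp)  # fraction of AI detections that are spurious

print(f"sensitivity = {sensitivity:.0%}")            # 66%
print(f"false discovery rate = {false_discovery_rate:.0%}")  # 32%
```

This makes explicit why a similar total lesion count (115 vs. 118) can still hide a large fraction of mismatched detections.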
Affiliation(s)
- Mathias Loft
  - Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
  - ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Claes N Ladefoged
  - Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
- Camilla B Johnbeck
  - Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
  - ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Esben A Carlsen
  - Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
  - ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Peter Oturai
  - Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
  - ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Seppo W Langer
  - ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
  - Department of Oncology, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
  - Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Ulrich Knigge
  - ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
  - Departments of Clinical Endocrinology and Surgical Gastroenterology, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Flemming L Andersen
  - Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
- Andreas Kjaer
  - Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
  - ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
20. Margail C, Merlin C, Billoux T, Wallaert M, Otman H, Sas N, Molnar I, Guillemin F, Boyer L, Guy L, Tempier M, Levesque S, Revy A, Cachin F, Chanchou M. Imaging quality of an artificial intelligence denoising algorithm: validation in 68Ga PSMA-11 PET for patients with biochemical recurrence of prostate cancer. EJNMMI Res 2023;13:50. PMID: 37231229; DOI: 10.1186/s13550-023-00999-y
Abstract
BACKGROUND 68Ga-PSMA PET is the leading prostate cancer imaging technique, but the images remain noisy and could be further improved using an artificial intelligence-based denoising algorithm. To address this issue, we analyzed the overall quality of reprocessed images compared to standard reconstructions. We also analyzed the diagnostic performance of the different sequences and the impact of the algorithm on lesion intensity and background measures. METHODS We retrospectively included 30 patients with biochemical recurrence of prostate cancer who had undergone 68Ga-PSMA-11 PET-CT. We simulated images produced using only a quarter, half, three-quarters, or all of the acquired data, reprocessed using the SubtlePET® denoising algorithm. Three physicians with different levels of experience blindly analyzed every sequence and used a 5-level Likert scale to assess the series. The binary criterion of lesion detectability was compared between series. We also compared lesion SUV, background uptake, and the diagnostic performance of the series (sensitivity, specificity, accuracy). RESULTS VPFX-derived series were classified differently from, but better than, standard reconstructions (p < 0.001) using half the data. Q.Clear series were not classified differently using half the signal. Some series were noisy, but this had no significant effect on lesion detectability (p > 0.05). The SubtlePET® algorithm significantly decreased lesion SUV (p < 0.005), increased liver background (p < 0.005), and had no substantial effect on the diagnostic performance of each reader. CONCLUSION We show that SubtlePET® can be used for 68Ga-PSMA scans with half the signal, with image quality similar to Q.Clear series and superior to VPFX series. However, it significantly modifies quantitative measurements and should not be used for comparative examinations if the standard algorithm is applied during follow-up.
Affiliation(s)
- Charles Margail
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Charles Merlin
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Tommy Billoux
  - Inserm UMR 1240 IMOST, Physique Médicale, CLCC Jean Perrin, Université Clermont Auvergne, Clermont-Ferrand, France
- Hosameldin Otman
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Nicolas Sas
  - Inserm UMR 1240 IMOST, Physique Médicale, CLCC Jean Perrin, Université Clermont Auvergne, Clermont-Ferrand, France
- Ioana Molnar
  - Biostatistics, CLCC Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Louis Boyer
  - Radiology, UMR 6602 UCA/CNRS/SIGMA, Hôpital Gabriel-Montpied TGI - Institut Pascal, Clermont-Ferrand, France
- Laurent Guy
  - Urology, Hôpital Gabriel-Montpied, Clermont-Ferrand, France
  - Université Clermont Auvergne, Clermont-Ferrand, France
- Marion Tempier
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Sophie Levesque
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Alban Revy
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Florent Cachin
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
  - Université Clermont Auvergne, Clermont-Ferrand, France
- Marion Chanchou
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
  - Université Clermont Auvergne, Clermont-Ferrand, France
21. Wang H, Wu Y, Huang Z, Li Z, Zhang N, Fu F, Meng N, Wang H, Zhou Y, Yang Y, Liu X, Liang D, Zheng H, Mok GSP, Wang M, Hu Z. Deep learning-based dynamic PET parametric Ki image generation from lung static PET. Eur Radiol 2023;33:2676-2685. PMID: 36399164; DOI: 10.1007/s00330-022-09237-w
Abstract
OBJECTIVES PET/CT is a first-line tool for the diagnosis of lung cancer. The accuracy of quantification may suffer from various factors throughout the acquisition process. The dynamic PET parametric Ki image provides better quantification and improves specificity for cancer detection. However, parametric imaging is difficult to implement clinically due to the long acquisition time (~1 h). We propose a dynamic parametric imaging method based on conventional static PET using deep learning. METHODS Based on imaging data from 203 participants, an improved cycle generative adversarial network incorporating a squeeze-and-excitation attention block was introduced to learn the potential mapping between static PET and Ki parametric images. The quality of the synthesized images was qualitatively and quantitatively evaluated using several physical and clinical metrics. Statistical analyses of correlation and consistency were also performed on the synthetic images. RESULTS Compared with other networks, the images synthesized by our proposed network exhibited superior performance in qualitative and quantitative evaluation, statistical analysis, and clinical scoring. Our synthesized Ki images showed significant correlation (Pearson correlation coefficient, 0.93) and consistency with the Ki images obtained in standard dynamic PET practice, along with excellent quantitative evaluation results. CONCLUSIONS Our proposed deep learning method can be used to synthesize highly correlated and consistent dynamic parametric images from static lung PET. KEY POINTS • Compared with conventional static PET, dynamic PET parametric Ki imaging provides better quantification and improved specificity for cancer detection. • This work developed a dynamic parametric imaging method based on static PET images using deep learning. • The proposed network can synthesize highly correlated and consistent dynamic parametric images, providing an additional quantitative diagnostic reference for clinicians.
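The Pearson correlation reported between synthesized and reference Ki maps is a voxel-wise statistic. A minimal sketch of how such a correlation is computed; the Ki values below are hypothetical placeholders, not the study's data:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two parametric images (flattened voxel-wise)."""
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Toy check: a synthesized Ki map that tracks the reference closely.
rng = np.random.default_rng(1)
ki_ref = rng.uniform(0.0, 0.05, (32, 32))                 # hypothetical Ki map (1/min)
ki_syn = 0.95 * ki_ref + rng.normal(0.0, 0.002, ki_ref.shape)
print(pearson_r(ki_syn, ki_ref))
```

A near-unity value here, as in the toy example, is necessary but not sufficient: the abstract also reports consistency analysis, since a scaled or biased map can still correlate highly.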
Affiliation(s)
- Haiyan Wang
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
  - Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, 999078, SAR, China
- Yaping Wu
  - Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Zhenxing Huang
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhicheng Li
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Na Zhang
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Fangfang Fu
  - Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Nan Meng
  - Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Haining Wang
  - Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
- Yun Zhou
  - Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Yongfeng Yang
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xin Liu
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Greta S P Mok
  - Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, 999078, SAR, China
- Meiyun Wang
  - Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Zhanli Hu
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
22. Sadremomtaz A, Mohammadi Ghalebin M. Dose evaluation of the one-year-old child in PET imaging by 18F-(DOPA, FDG, FLT, FET) and 68Ga-EDTA using reference voxel phantoms. Biomed Phys Eng Express 2023;9. PMID: 36758232; DOI: 10.1088/2057-1976/acba9e
Abstract
Because children's organs are more sensitive owing to high growth rates, evaluating the absorbed dose is essential to prevent irreparable damage. Therefore, the whole-body effective dose and organ absorbed doses of a one-year-old child were evaluated for various PET imaging radiopharmaceuticals: 18F-DOPA, 18F-FDG, 18F-FLT, 18F-FET, and 68Ga-EDTA. To this end, one-year-old child reference voxel phantoms and GATE Monte Carlo simulation were used, and the results were compared with the ICRP 128 report (based on stylized phantoms). Among the 30 organs studied, the highest absorbed dose was in the bladder wall (for 18F-DOPA, 18F-FET, and 68Ga-EDTA), the heart wall (for 18F-FDG), and the liver (for 18F-FLT). Comparing the results with the ICRP 128 values for a one-year-old child shows a significant difference in some organs. Comparison of the effective dose with the ICRP 128 report shows relative differences of 22%, 12.5%, 11.8%, 10.8%, and 8.6% for 18F-DOPA, 68Ga-EDTA, 18F-FDG, 18F-FET, and 18F-FLT, respectively. In conclusion, using the new one-year-old voxel phantoms could provide a better estimate of organ absorbed doses and whole-body effective dose because of their exact anatomical structure.
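The relative differences quoted above are plain percent deviations from the ICRP 128 reference values. A minimal sketch; the dose numbers below are hypothetical placeholders chosen to mirror the 22% figure for 18F-DOPA, not the study's data:

```python
# Percent relative difference between a voxel-phantom effective dose and the
# corresponding ICRP 128 (stylized-phantom) value, with ICRP 128 as reference.
def relative_difference(voxel_dose_msv_per_mbq, icrp_dose_msv_per_mbq):
    """Relative difference in percent; dose units cancel."""
    return 100.0 * abs(voxel_dose_msv_per_mbq - icrp_dose_msv_per_mbq) \
        / icrp_dose_msv_per_mbq

# Hypothetical example values (mSv/MBq), giving roughly a 22% difference.
print(relative_difference(0.122, 0.100))
```

The sign of the deviation is discarded here; the abstract reports magnitudes only.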
23. Salimi Y, Shiri I, Akavanallaf A, Mansouri Z, Arabi H, Zaidi H. Fully automated accurate patient positioning in computed tomography using anterior-posterior localizer images and a deep neural network: a dual-center study. Eur Radiol 2023;33:3243-3252. PMID: 36703015; PMCID: PMC9879741; DOI: 10.1007/s00330-023-09424-3
Abstract
OBJECTIVES This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network, to optimize image quality and radiation dose. METHODS We included 5754 chest CT axial and anterior-posterior (AP) images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerlines of patient bodies were indicated by creating a bounding box on the predicted images. The distance between the body centerline estimated by the deep learning model and the ground truth (BCAP) was compared with patient mis-centering during manual positioning (BCMP). We also evaluated the performance of our model in terms of the distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). RESULTS The error in terms of BCAP was -0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which had an error of 9.35 ± 14.94 mm and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 mm and 8.26 ± 6.96 mm for C1 and C2, respectively. The LCAP metric was 1.56 ± 10.8 mm and -0.27 ± 16.29 mm for C1 and C2, respectively. The errors in terms of BCAP and LCAP were higher for larger patients (p value < 0.01). CONCLUSION The accuracy of the proposed method was comparable to available alternative methods, with the advantage of being free from errors related to objects blocking camera visibility. KEY POINTS • Patient mis-centering in the anterior-posterior (AP) direction is a common problem in clinical practice that can degrade image quality and increase patient radiation dose. • We proposed a deep neural network for automatic patient positioning using only the CT localizer, achieving performance comparable to alternative techniques such as an external 3D visual camera. • The advantage of the proposed method is that it is free from errors related to objects blocking camera visibility, and it could be implemented on imaging consoles as a patient positioning support tool.
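The BCAP-style figures above read as mean ± SD of signed centering errors, with absolute errors reported separately (signed means near zero can mask large per-scan offsets). A sketch under that assumption, with made-up distances rather than the study's measurements:

```python
import numpy as np

def signed_and_absolute_error(predicted_mm, truth_mm):
    """Return (mean, SD) of the signed centering error and of its absolute
    value, in mm. Assumes one centerline position per scan."""
    err = np.asarray(predicted_mm, float) - np.asarray(truth_mm, float)
    signed = (err.mean(), err.std(ddof=1))
    absolute = (np.abs(err).mean(), np.abs(err).std(ddof=1))
    return signed, absolute

# Hypothetical centerline estimates vs. ground truth for five scans (mm).
pred = [251.0, 248.5, 255.2, 249.0, 252.3]
truth = [250.0, 250.0, 250.0, 250.0, 250.0]
(signed_mu, signed_sd), (abs_mu, abs_sd) = signed_and_absolute_error(pred, truth)
print(signed_mu, abs_mu)
```

Note how the signed mean (1.2 mm here) understates the typical per-scan offset captured by the absolute mean (2.2 mm), mirroring the gap between BCAP and absolute BCAP in the abstract.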
Affiliation(s)
- Yazdan Salimi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Azadeh Akavanallaf
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Zahra Mansouri
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Hossein Arabi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
  - Geneva University Neurocenter, Geneva University, Geneva, Switzerland
  - Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
  - Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
24. Hu Y, Zheng Z, Yu H, Wang J, Yang X, Shi H. Ultra-low-dose CT reconstructed with the artificial intelligence iterative reconstruction algorithm (AIIR) in 18F-FDG total-body PET/CT examination: a preliminary study. EJNMMI Phys 2023;10:1. PMID: 36592256; PMCID: PMC9807709; DOI: 10.1186/s40658-022-00521-8
Abstract
PURPOSE To investigate the feasibility of ultra-low-dose CT (ULDCT) reconstructed with the artificial intelligence iterative reconstruction (AIIR) algorithm in total-body PET/CT imaging. METHODS The study included a phantom part and a clinical part. An anthropomorphic phantom underwent CT imaging with ULDCT (10 mAs) and standard-dose CT (SDCT, 120 mAs). ULDCT was reconstructed with AIIR and with hybrid iterative reconstruction (HIR) (denoted ULDCT-AIIRphantom and ULDCT-HIRphantom), and SDCT was reconstructed with HIR (SDCT-HIRphantom) as control. In the clinical part, 52 patients with malignant tumors underwent a total-body PET/CT scan. ULDCT reconstructed with AIIR (ULDCT-AIIR) and with HIR (ULDCT-HIR) was used for PET attenuation correction, followed by SDCT reconstructed with HIR (SDCT-HIR) for anatomical localization. PET/CT image quality was qualitatively assessed by two readers. The CTmean, CT standard deviation (CTsd), SUVmax, SUVmean, and SUV standard deviation (SUVsd) were recorded. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated and compared. RESULTS The image quality of ULDCT-HIRphantom was inferior to that of SDCT-HIRphantom, but no significant difference was found between ULDCT-AIIRphantom and SDCT-HIRphantom. The subjective score of ULDCT-AIIR in the neck, chest, and lower limb was equivalent to that of SDCT-HIR. Apart from the brain and lower limb, the change rates of CTmean in the thyroid, neck muscle, lung, mediastinum, back muscle, liver, lumbar muscle, first lumbar spine, and sigmoid colon were -2.15%, -1.52%, 0.66%, 2.97%, 0.23%, 8.91%, 0.06%, -4.29%, and 8.78%, respectively, while all CTsd values of ULDCT-AIIR were lower than those of SDCT-HIR. Except for the brain, the CNR of ULDCT-AIIR was the same as that of SDCT-HIR, but the SNR was higher. The change rates of SUVmax, SUVmean, and SUVsd were within ±3% in all ROIs. For the lesions, the SUVmax, SUVsd, and TBR showed no significant difference between PET-AIIR and PET-HIR. CONCLUSION SDCT-HIR cannot yet be replaced by ULDCT-AIIR, but the AIIR algorithm decreased image noise and increased the SNR, and can be implemented under special circumstances in PET/CT examination.
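SNR and CNR can be sketched with their common ROI definitions (ROI mean over SD, and contrast over background SD); the paper's exact normalisation may differ, and the Hounsfield-unit values below are invented for illustration:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean / standard deviation."""
    roi = np.asarray(roi, float)
    return roi.mean() / roi.std(ddof=1)

def cnr(roi, background):
    """Contrast-to-noise ratio: (ROI mean - background mean) / background SD.
    One common definition among several in the CT literature."""
    roi = np.asarray(roi, float)
    bg = np.asarray(background, float)
    return (roi.mean() - bg.mean()) / bg.std(ddof=1)

# Hypothetical CT numbers (HU) sampled from a liver ROI and a background ROI.
rng = np.random.default_rng(2)
liver = rng.normal(60.0, 10.0, 500)  # a denoiser lowering this SD raises SNR
bg = rng.normal(0.0, 10.0, 500)
print(snr(liver), cnr(liver, bg))
```

This makes the abstract's result concrete: AIIR reduces the per-ROI SD (the denominator), which raises SNR even when mean CT numbers barely change.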
Affiliation(s)
- Yan Hu
- grid.8547.e0000 0001 0125 2443Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, 180 Fenglin Rd, Shanghai, 200032 China ,grid.8547.e0000 0001 0125 2443Nuclear Medicine Institute of Fudan University, Shanghai, 200032 China ,grid.413087.90000 0004 1755 3939Shanghai Institute of Medical Imaging, Shanghai, 200032 China
| | - Zhe Zheng
- grid.8547.e0000 0001 0125 2443Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, 180 Fenglin Rd, Shanghai, 200032 China ,grid.8547.e0000 0001 0125 2443Nuclear Medicine Institute of Fudan University, Shanghai, 200032 China ,grid.413087.90000 0004 1755 3939Shanghai Institute of Medical Imaging, Shanghai, 200032 China
| | - Haojun Yu
- grid.8547.e0000 0001 0125 2443Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, 180 Fenglin Rd, Shanghai, 200032 China ,grid.8547.e0000 0001 0125 2443Nuclear Medicine Institute of Fudan University, Shanghai, 200032 China ,grid.413087.90000 0004 1755 3939Shanghai Institute of Medical Imaging, Shanghai, 200032 China
| | - Jingyi Wang
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Xinlan Yang
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Hongcheng Shi
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, 180 Fenglin Rd, Shanghai, 200032, China; Nuclear Medicine Institute of Fudan University, Shanghai, 200032, China; Shanghai Institute of Medical Imaging, Shanghai, 200032, China
| |
|
25
|
Wang T, Qiao W, Wang Y, Wang J, Lv Y, Dong Y, Qian Z, Xing Y, Zhao J. Deep progressive learning achieves whole-body low-dose 18F-FDG PET imaging. EJNMMI Phys 2022; 9:82. [DOI: 10.1186/s40658-022-00508-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2022] [Accepted: 10/31/2022] [Indexed: 11/24/2022] Open
Abstract
Objectives
To validate a total-body PET-guided deep progressive learning reconstruction method (DPR) for low-dose 18F-FDG PET imaging.
Methods
List-mode data from the retrospective study (n = 26) were rebinned into short-duration scans and reconstructed with DPR. The standardized uptake value (SUV) and tumor-to-liver ratio (TLR) in lesions and the coefficient of variation (COV) in the liver in the DPR images were compared to the reference (OSEM images with full-duration data). In the prospective study, another 41 patients were injected with 1/3 of the activity based on the retrospective results. The DPR images (DPR_1/3(p)) were generated and compared with the reference (OSEM images with extended acquisition time). The SUV and COV were evaluated in three selected organs: liver, blood pool and muscle. Quantitative analyses of lesion SUV and TLR were performed, with further analysis of small lesions (≤ 10 mm in diameter). Additionally, a 5-point Likert scale visual analysis was performed on the following perspectives: contrast, noise and diagnostic confidence.
Results
In the retrospective study, DPR with one-third of the scan duration maintained image quality comparable to the reference. In the prospective study, good agreement among the SUVs was observed in all selected organs. The quantitative results showed no significant difference in COV between the DPR_1/3(p) group and the reference, while the visual analysis showed no significant differences in image contrast, noise and diagnostic confidence. The lesion SUVs and TLRs in the DPR_1/3(p) group were significantly enhanced compared with the reference, even for small lesions.
Conclusions
The proposed DPR method can reduce the administered activity of 18F-FDG by up to 2/3 in a real-world deployment while maintaining image quality.
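The liver COV and lesion TLR used above as image-quality and conspicuity measures can be sketched as follows; the liver samples and lesion SUV are hypothetical, and COV = SD/mean and TLR = lesion SUV over mean liver SUV are the usual definitions, assumed here.

```python
from statistics import mean, stdev

def cov_percent(roi):
    # Coefficient of variation (liver noise surrogate), in percent: 100 * SD / mean.
    return 100.0 * stdev(roi) / mean(roi)

def tlr(lesion_suv, liver_suv_mean):
    # Tumor-to-liver ratio: lesion uptake relative to mean liver uptake.
    return lesion_suv / liver_suv_mean

liver = [2.0, 2.2, 1.8, 2.1, 1.9]       # hypothetical liver SUV samples
print(round(cov_percent(liver), 1))      # liver noise of this reconstruction
print(round(tlr(8.4, mean(liver)), 2))   # -> 4.2
```

Denoising reconstructions such as DPR aim to lower the liver COV at reduced counts without suppressing lesion TLR.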
|
26
|
Zhao K, Jiang B, Zhang S, Zhang L, Zhang L, Feng Y, Li J, Zhang Y, Xie X. Measurement Accuracy and Repeatability of RECIST-Defined Pulmonary Lesions and Lymph Nodes in Ultra-Low-Dose CT Based on Deep Learning Image Reconstruction. Cancers (Basel) 2022; 14:5016. [PMID: 36291800 PMCID: PMC9599467 DOI: 10.3390/cancers14205016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Accepted: 10/11/2022] [Indexed: 08/16/2023] Open
Abstract
BACKGROUND Deep learning image reconstruction (DLIR) improves image quality. We aimed to compare the measured diameters of pulmonary lesions and lymph nodes between DLIR-based ultra-low-dose CT (ULDCT) and contrast-enhanced CT. METHODS Consecutive adult patients with noncontrast chest ULDCT (0.07-0.14 mSv) and contrast-enhanced CT (2.38 mSv) were prospectively enrolled. Patients with poor image quality or a body mass index ≥ 30 kg/m2 were excluded. The diameters of pulmonary target lesions and lymph nodes defined by the Response Evaluation Criteria in Solid Tumors (RECIST) were measured. The measurement variability between ULDCT and enhanced CT was evaluated by Bland-Altman analysis. RESULTS The 141 enrolled patients (62 ± 12 years) had 89 RECIST-defined measurable pulmonary target lesions (including 30 malignant lesions, mainly adenocarcinomas) and 45 measurable mediastinal lymph nodes (12 malignant). The measurement variation of pulmonary lesions between high-strength DLIR (DLIR-H) images of ULDCT and contrast-enhanced CT was 2.2% (95% CI: 1.7% to 2.6%), and the variation of lymph nodes was 1.4% (1.0% to 1.9%). CONCLUSIONS The measured diameters of pulmonary lesions and lymph nodes in DLIR-H images of ULDCT are very close to those of contrast-enhanced CT. DLIR-based ULDCT may facilitate evaluating target lesions with greatly reduced radiation exposure in tumor evaluation and lung cancer screening.
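The measurement variability above is assessed with Bland-Altman analysis; a minimal sketch with hypothetical lesion diameters (the bias-plus-or-minus-1.96-SD limits of agreement are the conventional choice, assumed here):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    # Bias (mean difference) and 95% limits of agreement between two methods.
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical lesion diameters (mm) measured on ULDCT and on enhanced CT.
uldct = [12.1, 8.4, 15.2, 22.8, 9.9]
cect = [11.8, 8.6, 15.0, 22.1, 10.0]
bias, (low, high) = bland_altman(uldct, cect)
print(round(bias, 2))  # -> 0.18
```

Narrow limits of agreement around a near-zero bias are what justify treating the two protocols as interchangeable for RECIST sizing.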
Affiliation(s)
- Keke Zhao
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
| | - Beibei Jiang
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
| | - Shuai Zhang
- CT Imaging Research Center, GE Healthcare China, Shanghai 201203, China
| | - Lu Zhang
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
| | - Lin Zhang
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
| | - Yan Feng
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
| | - Jianying Li
- CT Imaging Research Center, GE Healthcare China, Shanghai 201203, China
| | - Yaping Zhang
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
| | - Xueqian Xie
- Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai 200080, China
| |
|
27
|
Sanaat A, Akhavanalaf A, Shiri I, Salimi Y, Arabi H, Zaidi H. Deep-TOF-PET: Deep learning-guided generation of time-of-flight from non-TOF brain PET images in the image and projection domains. Hum Brain Mapp 2022; 43:5032-5043. [PMID: 36087092 PMCID: PMC9582376 DOI: 10.1002/hbm.26068] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 08/18/2022] [Indexed: 11/12/2022] Open
Abstract
We aim to synthesize brain time-of-flight (TOF) PET images/sinograms from their corresponding non-TOF information in the image space (IS) and sinogram space (SS) to increase the signal-to-noise ratio (SNR) and contrast of abnormalities, and to decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F-FDG PET/CT scans were collected to generate TOF and non-TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinogram was reconstructed, and the performance of both models (IS and SS) was compared with the reference TOF and non-TOF data. Wide-ranging quantitative and statistical analysis metrics, including the structural similarity index metric (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM and RMSE of 0.99 ± 0.03, 0.98 ± 0.02 and 0.12 ± 0.09, 0.16 ± 0.04 were achieved for the generated TOF-PET images in IS and SS, respectively. They were 0.97 ± 0.03 and 0.22 ± 0.12, respectively, for non-TOF-PET images. Bland-Altman analysis revealed that the lowest tracer uptake bias (-0.02%) and minimum variance (95% CI: -0.17%, +0.21%) were achieved for TOF-PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non-TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non-TOF PET images to achieve better image quality.
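RMSE, one of the similarity metrics used above to score the synthetic TOF images, can be sketched on flattened toy voxel values (SSIM is omitted here because it requires windowed local statistics; the values are illustrative, not the study's data):

```python
from math import sqrt

def rmse(pred, ref):
    # Root mean square error between a generated and a reference image (flattened).
    return sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

ref = [1.0, 2.0, 3.0, 4.0]      # toy reference-image voxel values
pred = [1.1, 1.9, 3.2, 3.8]     # toy synthetic-image voxel values
print(round(rmse(pred, ref), 3))  # -> 0.158
```

Lower RMSE and higher SSIM against the true TOF reconstruction are what the paper reports for both the image-space and sinogram-space models.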
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Azadeh Akhavanalaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland.,Geneva University Neurocenter, Geneva University, Geneva, Switzerland.,Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands.,Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
| |
|
28
|
Liu J, Ren S, Wang R, Mirian N, Tsai YJ, Kulon M, Pucar D, Chen MK, Liu C. Virtual high-count PET image generation using a deep learning method. Med Phys 2022; 49:5830-5840. [PMID: 35880541 PMCID: PMC9474624 DOI: 10.1002/mp.15867] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 06/07/2022] [Accepted: 07/18/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Recently, deep learning-based methods have been established to denoise low-count positron emission tomography (PET) images and predict their standard-count counterparts, which could reduce the injected dose and scan time and improve image quality for equivalent lesion detectability and clinical diagnosis. In clinical settings, the majority of scans are still acquired using the standard injected dose with standard scan time. In this work, we applied a 3D U-Net network to reduce the noise of standard-count PET images and obtain virtual-high-count (VHC) PET images, in order to identify the potential benefits of the obtained VHC PET images. METHODS The training datasets, including down-sampled standard-count PET images as the network input and high-count images as the desired network output, were derived from 27 whole-body PET datasets acquired using a 90-min dynamic scan. The down-sampled standard-count PET images were rebinned to match the noise level of 195 clinical static PET datasets, by matching the normalized standard deviation (NSTD) inside 3D liver regions of interest (ROIs). Cross-validation was performed on the 27 PET datasets. Normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and standard uptake value (SUV) bias of lesions were used for evaluation of standard-count and VHC PET images, with the real high-count 90-min PET image as the gold standard. In addition, the network trained with the 27 dynamic PET datasets was applied to the 195 clinical static datasets to obtain VHC PET images. The NSTD and mean/max SUV of hypermetabolic lesions in standard-count and VHC PET images were evaluated. Three experienced nuclear medicine physicians evaluated the overall image quality of the standard-count and VHC images of 50 randomly selected patients out of the 195 and conducted 5-score ranking. A Wilcoxon signed-rank test was used to compare differences in the grading of standard-count and VHC images.
RESULTS The cross-validation results showed that VHC PET images had better quantitative metric scores than the standard-count PET images. The mean/max SUVs of 35 lesions in the standard-count and true high-count PET images showed no statistically significant difference. Similarly, the mean/max SUVs of VHC and true high-count PET images showed no statistically significant difference. For the 195 clinical datasets, the VHC PET images had a significantly lower NSTD than the standard-count images. The mean/max SUVs of 215 hypermetabolic lesions in the VHC and standard-count images showed no statistically significant difference. In the image quality evaluation by three experienced nuclear medicine physicians, standard-count and VHC images received scores with mean and standard deviation of 3.34 ± 0.80 and 4.26 ± 0.72 from Physician 1, 3.02 ± 0.87 and 3.96 ± 0.73 from Physician 2, and 3.74 ± 1.10 and 4.58 ± 0.57 from Physician 3, respectively. The VHC images were consistently ranked higher than the standard-count images. The Wilcoxon signed-rank test also indicated a significant difference in image quality between standard-count and VHC images. CONCLUSIONS A DL method was proposed to convert standard-count images to VHC images. The VHC images had a reduced noise level, and no significant difference in mean/max SUV from the standard-count images was observed. VHC images improved image quality for better lesion detectability and clinical diagnosis.
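The NSTD-based noise matching described above (rebinning until the liver NSTD matches a clinical target) can be sketched as follows; the ROI values and target are hypothetical, and NSTD = SD/mean in a liver ROI is the definition assumed from the text:

```python
from statistics import mean, stdev

def nstd(roi):
    # Normalized standard deviation (SD / mean) in a liver ROI, used as a noise level.
    return stdev(roi) / mean(roi)

def closest_noise_match(candidates, target_nstd):
    # Pick the rebinned count level whose liver NSTD best matches a target noise level.
    return min(candidates, key=lambda item: abs(nstd(item[1]) - target_nstd))

# (label, hypothetical liver ROI values) for two rebinned count levels.
cands = [("1/2 counts", [2.0, 2.3, 1.7, 2.2, 1.8]),
         ("1/4 counts", [2.0, 2.6, 1.4, 2.4, 1.6])]
label, _ = closest_noise_match(cands, target_nstd=0.12)
print(label)  # -> 1/2 counts
```

Matching the training inputs' noise to the clinical noise level is what lets the network trained on dynamic data transfer to the 195 static scans.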
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
| | - Sijin Ren
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
| | - Rui Wang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
| | - Niloufarsadat Mirian
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
| | - Yu-Jung Tsai
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
| | - Michal Kulon
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
| | - Darko Pucar
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
| | - Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
| | - Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
| |
|
29
|
Fragoso Costa P, Jentzen W, Brahmer A, Mavroeidi IA, Zarrad F, Umutlu L, Fendler WP, Rischpler C, Herrmann K, Conti M, Seifert R, Sraieb M, Weber M, Kersting D. Phantom-based acquisition time and image reconstruction parameter optimisation for oncologic FDG PET/CT examinations using a digital system. BMC Cancer 2022; 22:899. [PMID: 35978274 PMCID: PMC9387080 DOI: 10.1186/s12885-022-09993-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2021] [Accepted: 08/08/2022] [Indexed: 11/18/2022] Open
Abstract
Background New-generation silicon-photomultiplier (SiPM)-based PET/CT systems exhibit an improved lesion detectability and image quality due to a higher detector sensitivity. Consequently, the acquisition time can be reduced while maintaining diagnostic quality. The aim of this study was to determine the lowest 18F-FDG PET acquisition time without loss of diagnostic information and to optimise image reconstruction parameters (image reconstruction algorithm, number of iterations, voxel size, Gaussian filter) by phantom imaging. Moreover, patient data are evaluated to confirm the phantom results. Methods Three phantoms were used: a soft-tissue tumour phantom, a bone-lung tumour phantom, and a resolution phantom. Phantom conditions (lesion sizes from 6.5 mm to 28.8 mm in diameter, lesion activity concentration of 15 kBq/mL, and signal-to-background ratio of 5:1) were derived from patient data. PET data were acquired on an SiPM-based Biograph Vision PET/CT system for 10 min in list-mode format and resampled into time frames from 30 to 300 s in 30-s increments to simulate different acquisition times. Different image reconstructions with varying iterations, voxel sizes, and Gaussian filters were probed. Contrast-to-noise-ratio (CNR), maximum, and peak signal were evaluated using the 10-min acquisition time image as reference. A threshold CNR value ≥ 5 and a maximum (peak) deviation of ± 20% were considered acceptable. 20 patient data sets were evaluated regarding lesion quantification as well as agreement and correlation between reduced and full acquisition time standard uptake values (assessed by Pearson correlation coefficient, intraclass correlation coefficient, Bland–Altman analyses, and Krippendorff’s alpha). 
Results An acquisition time of 60 s per bed position yielded acceptable detectability and quantification results for clinically relevant phantom lesions ≥ 9.7 mm in diameter using OSEM-TOF or OSEM-TOF+PSF image reconstruction, a 4-mm Gaussian filter, and a 1.65 × 1.65 × 2.00-mm3 or 3.30 × 3.30 × 3.00-mm3 voxel size. Correlation and agreement of patient lesion quantification between full and reduced acquisition times were excellent. Conclusion A threefold reduction in acquisition time is possible. Patients might benefit from more comfortable examinations or, if the applied activity is reduced instead of the acquisition time, from reduced radiation exposure. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-022-09993-4.
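The acceptance rule stated in the Methods (CNR ≥ 5 and a maximum/peak deviation within ±20% of the 10-min reference) lends itself to a short sketch for finding the minimum acceptable acquisition time; the per-frame metric values below are hypothetical:

```python
def acceptable(cnr, max_dev_pct, cnr_min=5.0, dev_tol_pct=20.0):
    # Acceptance rule from the phantom study: CNR >= 5 and quantification within +/-20%.
    return cnr >= cnr_min and abs(max_dev_pct) <= dev_tol_pct

def shortest_acceptable(frames):
    # Shortest acquisition time (s) whose metrics pass the acceptance rule, if any.
    passing = [t for t, cnr, dev in frames if acceptable(cnr, dev)]
    return min(passing) if passing else None

# (time_s, CNR, max deviation % vs. the 10-min reference) -- hypothetical values.
frames = [(30, 3.9, 28.0), (60, 5.6, 14.0), (90, 7.2, 9.0)]
print(shortest_acceptable(frames))  # -> 60
```

Scanning the 30-s increments with such a rule is how a 60-s-per-bed-position minimum can be established.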
Affiliation(s)
- Pedro Fragoso Costa
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - Walter Jentzen
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - Alissa Brahmer
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - Ilektra-Antonia Mavroeidi
- German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany.,Department of Medical Oncology, University Hospital Essen, West German Cancer Center (WTZ), University Duisburg-Essen, 45147, Essen, Germany
| | - Fadi Zarrad
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - Lale Umutlu
- German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany.,Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, University Duisburg-Essen, 45147, Essen, Germany
| | - Wolfgang P Fendler
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - Ken Herrmann
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | | | - Robert Seifert
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - Miriam Sraieb
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - Manuel Weber
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany.,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany
| | - David Kersting
- Department of Nuclear Medicine, University Hospital Essen, West German Cancer Center (WTZ), University of Duisburg-Essen, Hufelandstrasse 55, 45147, Essen, Germany. .,German Cancer Consortium (DKTK), Partner Site University Hospital Essen, Essen, Germany.
| |
|
30
|
Hosch R, Weber M, Sraieb M, Flaschel N, Haubold J, Kim MS, Umutlu L, Kleesiek J, Herrmann K, Nensa F, Rischpler C, Koitka S, Seifert R, Kersting D. Artificial intelligence guided enhancement of digital PET: scans as fast as CT? Eur J Nucl Med Mol Imaging 2022; 49:4503-4515. [PMID: 35904589 PMCID: PMC9606065 DOI: 10.1007/s00259-022-05901-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 06/30/2022] [Indexed: 12/03/2022]
Abstract
Purpose Both digital positron emission tomography (PET) detector technologies and artificial intelligence-based image post-reconstruction methods allow the PET acquisition time to be reduced while maintaining diagnostic quality. The aim of this study was to acquire ultra-low-count fluorodeoxyglucose (FDG) ExtremePET images on a digital PET/computed tomography (CT) scanner at an acquisition time comparable to a CT scan and to generate synthetic full-dose PET images using an artificial neural network. Methods This is a prospective, single-arm, single-center phase I/II imaging study. A total of 587 patients were included. For each patient, a standard and an ultra-low-count FDG PET/CT scan (whole-body acquisition time about 30 s) were acquired. A modified pix2pixHD deep-learning network was trained employing 387 datasets for training and 200 as the test cohort. Three models (PET-only and PET/CT with or without group convolution) were compared. Detectability and quantification were evaluated. Results The PET/CT input model with group convolution performed best regarding lesion signal recovery and was selected for detailed evaluation. Synthetic PET images were of high visual image quality; the mean absolute lesion SUVmax (maximum standardized uptake value) difference was 1.5. Patient-based sensitivity and specificity for lesion detection were 79% and 100%, respectively. Undetected lesions had lower tracer uptake and lesion volume. In a matched-pair comparison, the patient-based (lesion-based) detection rate was 89% (78%) for PERCIST (PET response criteria in solid tumors)-measurable and 36% (22%) for non-PERCIST-measurable lesions. Conclusion Lesion detectability and lesion quantification were promising in the context of extremely fast acquisition times. Possible application scenarios might include re-staging of late-stage cancer patients, in whom assessment of total tumor burden can be of higher relevance than detailed evaluation of small and low-uptake lesions.
Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05901-x.
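The patient-based sensitivity and specificity reported above come from a standard 2x2 confusion table; a minimal sketch, with hypothetical counts chosen only to be consistent with the reported 79%/100% (the study's raw counts are not given here):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Patient-based detection metrics from a 2x2 confusion table.
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported 79% sensitivity, 100% specificity.
sens, spec = sensitivity_specificity(tp=79, fn=21, tn=50, fp=0)
print(sens, spec)  # -> 0.79 1.0
```

A specificity of 100% (no false-positive patients) alongside 79% sensitivity matches the paper's observation that missed lesions tended to be small and low-uptake rather than spuriously detected.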
Affiliation(s)
- René Hosch
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany. .,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany.
| | - Manuel Weber
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Miriam Sraieb
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Nils Flaschel
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Johannes Haubold
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Moon-Sung Kim
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Lale Umutlu
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Ken Herrmann
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Christoph Rischpler
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Robert Seifert
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany.,Department of Nuclear Medicine, University Hospital Münster, University of Münster, Albert-Schweitzer-Campus 1, 48149, Münster, Germany
| | - David Kersting
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| |
|
31
|
Sari H, Teimoorisichani M, Mingels C, Alberts I, Panin V, Bharkhada D, Xue S, Prenosil G, Shi K, Conti M, Rominger A. Quantitative evaluation of a deep learning-based framework to generate whole-body attenuation maps using LSO background radiation in long axial FOV PET scanners. Eur J Nucl Med Mol Imaging 2022; 49:4490-4502. [PMID: 35852557 PMCID: PMC9606046 DOI: 10.1007/s00259-022-05909-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2022] [Accepted: 07/10/2022] [Indexed: 12/19/2022]
Abstract
Purpose Attenuation correction is a critically important step in data correction in positron emission tomography (PET) image formation. The current standard method involves conversion of Hounsfield units from a computed tomography (CT) image to construct attenuation maps (µ-maps) at 511 keV. In this work, the increased sensitivity of long axial field-of-view (LAFOV) PET scanners was exploited to develop and evaluate a deep learning (DL) and joint reconstruction-based method to generate µ-maps utilizing background radiation from lutetium-based (LSO) scintillators. Methods Data from 18 subjects were used to train convolutional neural networks to enhance initial µ-maps generated using a joint activity and attenuation reconstruction algorithm (MLACF) with transmission data from LSO background radiation acquired before and after the administration of 18F-fluorodeoxyglucose (18F-FDG) (µ-mapMLACF-PRE and µ-mapMLACF-POST, respectively). The deep learning-enhanced µ-maps (µ-mapDL-MLACF-PRE and µ-mapDL-MLACF-POST) were compared against MLACF-derived and CT-based maps (µ-mapCT). The performance of the method was also evaluated by assessing PET images reconstructed using each µ-map and computing volume-of-interest-based standard uptake value measurements, percentage relative mean error (rME), and relative mean absolute error (rMAE) relative to the CT-based method. Results No statistically significant difference was observed in rME values for µ-mapDL-MLACF-PRE and µ-mapDL-MLACF-POST in fat-based and water-based soft tissue as well as bone, suggesting that the presence of radiopharmaceutical activity in the body had negligible effects on the resulting µ-maps. The rMAE values of µ-mapDL-MLACF-POST were reduced by a factor of 3.3 on average compared to the rMAE of µ-mapMLACF-POST.
Similarly, the average rMAE values of PET images reconstructed using µ-mapDL-MLACF-POST (PETDL-MLACF-POST) were 2.6 times smaller than the average rMAE values of PET images reconstructed using µ-mapMLACF-POST. The mean absolute errors in SUV values of PETDL-MLACF-POST compared to PETCT were less than 5% in healthy organs, less than 7% in brain grey matter and 4.3% for all tumours combined. Conclusion We describe a deep learning-based method to accurately generate µ-maps from PET emission data and LSO background radiation, enabling CT-free attenuation and scatter correction in LAFOV PET scanners. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05909-3.
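The rME and rMAE metrics used above can be sketched as follows. The exact averaging convention is not spelled out in the abstract; an element-wise relative error averaged over voxels/VOIs is assumed here, and the SUV values are hypothetical:

```python
def rme_percent(test, ref):
    # Relative mean error (%): signed element-wise error, averaged over the reference.
    return 100.0 * sum((t - r) / r for t, r in zip(test, ref)) / len(ref)

def rmae_percent(test, ref):
    # Relative mean absolute error (%): same averaging, absolute errors.
    return 100.0 * sum(abs(t - r) / r for t, r in zip(test, ref)) / len(ref)

ref = [10.0, 20.0, 30.0, 40.0]     # e.g. SUVs from the CT-based mu-map
test = [10.5, 19.0, 31.0, 39.5]    # e.g. SUVs from the LSO-background-derived mu-map
print(round(rme_percent(test, ref), 2), round(rmae_percent(test, ref), 2))
```

Note that signed errors can cancel in rME while rMAE cannot, which is why both are reported.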
Affiliation(s)
- Hasan Sari
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland.
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland.
| | | | - Clemens Mingels
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
| | - Ian Alberts
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
| | | | | | - Song Xue
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
| | - George Prenosil
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
| | - Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
| | | | - Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
| |
|
32
|
Nai YH, Loi HY, O'Doherty S, Tan TH, Reilhac A. Comparison of the performances of machine learning and deep learning in improving the quality of low dose lung cancer PET images. Jpn J Radiol 2022; 40:1290-1299. [PMID: 35809210 DOI: 10.1007/s11604-022-01311-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 06/19/2022] [Indexed: 11/29/2022]
Abstract
PURPOSE To compare the performances of machine learning (ML) and deep learning (DL) in improving the quality of low-dose (LD) lung cancer PET images and to determine the minimum counts required. MATERIALS AND METHODS 33 standard-dose (SD) PET images were used to simulate LD PET images at seven count levels of 0.25, 0.5, 1, 2, 5, 7.5 and 10 million (M) counts. Image quality transfer (IQT), an ML algorithm that uses decision trees and patch sampling, was compared to two DL networks: HighResNet (HRN) and deep-boosted regression (DBR). Supervised training was performed by training the ML and DL algorithms with matched pairs of SD and LD images. Image quality evaluation and clinical lesion detection tasks were performed by three readers. Bias in 53 radiomic features, including mean SUV, was evaluated for all lesions. RESULTS ML- and DL-estimated images showed higher signal and smaller error than LD images, with optimal image quality recovery achieved using LD images down to 5 M counts. True positive rate and false discovery rate were fairly stable beyond 5 M counts for the detection of small and large true lesions. Readers gave average or higher ratings only to images estimated from LD images above 5 M counts, with higher confidence in detecting true lesions. CONCLUSION LD images with a minimum of 5 M counts (8.72 MBq for a 10-min scan or 25 MBq for a 3-min scan) are required for optimal clinical use of ML and DL, with slightly better but more varied performance shown by DL.
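The true positive rate and false discovery rate used above for the lesion-detection task follow their standard definitions; a minimal sketch with hypothetical reader counts (not the study's data):

```python
def tpr(tp, fn):
    # True positive rate: fraction of true lesions that were detected.
    return tp / (tp + fn)

def fdr(tp, fp):
    # False discovery rate: fraction of detections that were not true lesions.
    return fp / (tp + fp)

# Hypothetical reader counts at one simulated count level.
print(tpr(tp=18, fn=2), fdr(tp=18, fp=2))  # -> 0.9 0.1
```

Plotting both metrics against the simulated count level is how a stability threshold such as 5 M counts can be identified.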
Affiliation(s)
- Ying-Hwey Nai
- Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore, 117599, Singapore
- Hoi Yin Loi
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Sophie O'Doherty
- Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore, 117599, Singapore
- Teng Hwee Tan
- Department of Radiation Oncology, National University Cancer Institute, Singapore, Singapore
- Anthonin Reilhac
- Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore, 117599, Singapore
33
Evaluation of a High-Sensitivity Organ-Targeted PET Camera. Sensors 2022;22:4678. [PMID: 35808181] [PMCID: PMC9269056] [DOI: 10.3390/s22134678]
Abstract
The aim of this study is to evaluate the performance of the Radialis organ-targeted positron emission tomography (PET) Camera with standardized tests and through assessment of clinical imaging results. Sensitivity, count-rate performance, and spatial resolution were evaluated according to the National Electrical Manufacturers Association (NEMA) NU-4 standards, with necessary modifications to accommodate the planar detector design. The detectability of small objects was shown with micro hotspot phantom images. The clinical performance of the camera was also demonstrated through breast cancer images acquired with varying injected doses of 2-[fluorine-18]-fluoro-2-deoxy-D-glucose (18F-FDG) and qualitatively compared with sample digital full-field mammography, magnetic resonance imaging (MRI), and whole-body (WB) PET images. Micro hotspot phantom sources were visualized down to 1.35 mm-diameter rods. Spatial resolution was calculated to be 2.3 ± 0.1 mm in-plane and 6.8 ± 0.1 mm cross-plane using maximum likelihood expectation maximization (MLEM) reconstruction. The system peak noise equivalent count rate was 17.8 kcps at an 18F-FDG concentration of 10.5 kBq/mL. The system scatter fraction was 24%. The overall efficiency at the peak noise equivalent count rate was 5400 cps/MBq. The maximum axial sensitivity achieved was 3.5%, with an average system sensitivity of 2.4%. Selected results from clinical trials demonstrate the capability of imaging lesions at the chest wall and of identifying false-negative X-ray findings and false-positive MRI findings, even at up to a 10-fold dose reduction in comparison with standard 18F-FDG doses (i.e., at 37 MBq or 1 mCi). The evaluation of the organ-targeted Radialis PET Camera indicates that it is a promising technology for high-image-quality, low-dose PET imaging. High-efficiency radiotracer detection also opens an opportunity to reduce administered doses of radiopharmaceuticals and, therefore, patient exposure to radiation.
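The NEMA figures quoted above (peak noise equivalent count rate, scatter fraction) are tied together by the standard NECR formula, NECR = T²/(T + S + kR). The sketch below illustrates the arithmetic with made-up prompt and randoms rates, not the paper's measured data; function names are illustrative:

```python
def necr(trues, scatter, randoms, k=1.0):
    """Noise-equivalent count rate: NECR = T^2 / (T + S + k*R).
    k = 1 for smoothed-randoms estimation, k = 2 for direct
    delayed-window randoms subtraction."""
    return trues ** 2 / (trues + scatter + k * randoms)

def split_prompts(prompts, randoms, scatter_fraction):
    """Split prompt counts into (trues, scatter), given SF = S / (T + S)."""
    true_plus_scatter = prompts - randoms
    scatter = scatter_fraction * true_plus_scatter
    return true_plus_scatter - scatter, scatter

# Illustrative rates only (cps), using the 24% scatter fraction reported.
t, s = split_prompts(prompts=25_000, randoms=2_000, scatter_fraction=0.24)
rate = necr(t, s, randoms=2_000)
```

The NECR quantifies the trues rate that an ideal scatter- and randoms-free system would need to reach the same signal-to-noise ratio, which is why it peaks and then falls as activity concentration rises.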
34
Sanaat A, Shiri I, Ferdowsi S, Arabi H, Zaidi H. Robust-Deep: A Method for Increasing Brain Imaging Datasets to Improve Deep Learning Models' Performance and Robustness. J Digit Imaging 2022;35:469-481. [PMID: 35137305] [PMCID: PMC9156620] [DOI: 10.1007/s10278-021-00536-0]
Abstract
A small dataset commonly affects the generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we proposed an analytical method for producing a large, realistic, and diverse dataset. Clinical brain PET/CT/MR images, including full-dose (FD), low-dose (LD, corresponding to only 5% of the events acquired in the FD scan), non-attenuation-corrected (NAC) and CT-based measured attenuation-corrected (MAC) PET images, CT images, and T1 and T2 MR sequences of 35 patients were included. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to produce natural-looking composites from frequency-domain information of images from two separate patients together with a blending mask. This classical technique from the computer vision and image processing communities is still widely used and, unlike modern DNNs, does not require the availability of training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, LD to FD, LD+MR to FD, NAC to MAC, and MRI to CT, with and without the synthesized images. Quantitative analysis was performed using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and joint histogram analysis. The comparison between the registered small dataset of 35 patients and the large dataset of 350 synthesized plus 35 real image sets demonstrated improvement of the RMSE and SSIM by 29% and 8% for LD to FD, 40% and 7% for LD+MRI to FD, 16% and 8% for NAC to MAC, and 24% and 11% for MRI to CT mapping, respectively. The qualitative/quantitative analysis demonstrated that the proposed method improved the performance of all four DNN models, producing images of higher quality and lower quantitative bias and variance relative to the reference images.
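The Laplacian blending step above can be sketched in a few lines. This is a minimal numpy-only multi-band blend: high-frequency bands switch sharply at the mask boundary while low frequencies mix over a progressively wider transition. The helper names and the use of a simple separable Gaussian (instead of a full image pyramid) are illustrative simplifications, not the authors' exact implementation:

```python
import numpy as np

def _gauss_kernel(sigma):
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def _blur(img, sigma):
    """Separable Gaussian blur with edge replication."""
    k = _gauss_kernel(sigma)
    r = len(k) // 2
    out = np.pad(img, ((r, r), (0, 0)), mode="edge")
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)
    out = np.pad(out, ((0, 0), (r, r)), mode="edge")
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 1, out)

def laplacian_blend(img_a, img_b, mask, levels=3, sigma=2.0):
    """Multi-band blending of two images under a [0, 1] mask."""
    la, lb = img_a.astype(float), img_b.astype(float)
    out = np.zeros_like(la)
    for lvl in range(levels):
        ga, gb = _blur(la, sigma), _blur(lb, sigma)
        # Blend this band with a mask smoothed in proportion to scale.
        m = _blur(mask.astype(float), sigma * (lvl + 1))
        out += m * (la - ga) + (1 - m) * (lb - gb)
        la, lb = ga, gb
    # Residual low-pass band, blended with the widest transition.
    m = _blur(mask.astype(float), sigma * levels)
    return out + m * la + (1 - m) * lb
```

Because the per-level bands telescope back to the original image, blending an image with itself (or using an all-ones mask) reproduces the input, a convenient sanity check.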
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
35
Artificial intelligence-based PET denoising could allow a two-fold reduction in [18F]FDG PET acquisition time in digital PET/CT. Eur J Nucl Med Mol Imaging 2022;49:3750-3760. [PMID: 35593925] [PMCID: PMC9399218] [DOI: 10.1007/s00259-022-05800-1]
Abstract
Purpose We investigated whether artificial intelligence (AI)-based denoising allows the PET acquisition time in digital PET/CT to be halved. Methods One hundred ninety-five patients referred for [18F]FDG PET/CT were prospectively included. Body PET acquisitions were performed in list mode. The original “PET90” (90 s/bed position) was compared to reconstructed half-duration PET (45 s/bed position) with and without AI denoising (“PET45AI” and “PET45”). Denoising was performed by SubtlePET™ using deep convolutional neural networks. Visual global image quality (IQ) 3-point scores and lesion detectability were evaluated. Lesion maximal and peak standardized uptake values using lean body mass (SULmax and SULpeak), metabolic volumes (MV), and liver SULmean were measured, including both standard and EARL1 (European Association of Nuclear Medicine Research Ltd) compliant SUL. Lesion-to-liver SUL ratios (LLR) and liver coefficients of variation (CVliv) were calculated. Results PET45 showed mediocre IQ (scored poor in 8% and moderate in 68%) and a lesion concordance rate with PET90 of 88.7%. In PET45AI, IQ scores were similar to PET90 (P = 0.80), good in 92% and moderate in 8% for both. The lesion concordance rate between PET90 and PET45AI was 836/856 (97.7%), with 7 lesions (0.8%) detected only in PET90 and 13 (1.5%) exclusively in PET45AI. Lesion EARL1 SULpeak was not significantly different between the two PET series (P = 0.09). Lesion standard SULpeak, standard and EARL1 SULmax, LLR, and CVliv were lower in PET45AI than in PET90 (P < 0.0001), while lesion MV and liver SULmean were higher (P < 0.0001). Good to excellent intraclass correlation coefficients (ICC) between PET90 and PET45AI were observed for lesion SUL and MV (ICC ≥ 0.97) and for liver SULmean (ICC ≥ 0.87). Conclusion AI allows the [18F]FDG PET duration in digital PET/CT to be halved while restoring the degraded half-duration PET image quality. Future multicentric studies, including other PET radiopharmaceuticals, are warranted.
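The agreement analysis above rests on intraclass correlation coefficients between paired measurements. A sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure) with toy SUL pairs is shown below; the abstract does not state which ICC variant was used, so treat this as one plausible form, with illustrative names:

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    y has shape (n_subjects, k_methods): e.g. one lesion SUL per row,
    one column per reconstruction (PET90 vs PET45AI)."""
    n, k = y.shape
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()  # between methods
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy lesion SUL pairs from two reconstructions of the same scans.
suv = np.array([[2.1, 2.0], [5.3, 5.1], [3.2, 3.3], [7.8, 7.6]])
agreement = icc_2_1(suv)
```

Unlike a Pearson correlation, ICC(2,1) penalizes systematic offsets between the two reconstructions, which is why it is preferred for method-agreement studies.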
36
Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022;49:3717-3739. [PMID: 35451611] [DOI: 10.1007/s00259-022-05805-w]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in experimental environments over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. The identification of relevant publications was performed via approved publication indexing websites and repositories; Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The search identified a hundred articles that address PET imaging applications such as attenuation correction, denoising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented alongside the corresponding research works. CONCLUSION GANs are rapidly being adopted for PET imaging tasks. However, specific limitations must be eliminated for them to reach their full potential and gain the medical community's trust in everyday clinical practice.
37
Decentralized Distributed Multi-institutional PET Image Segmentation Using a Federated Deep Learning Framework. Clin Nucl Med 2022;47:606-617. [PMID: 35442222] [DOI: 10.1097/rlu.0000000000004194]
Abstract
PURPOSE The generalizability and trustworthiness of deep learning (DL)-based algorithms depend on the size and heterogeneity of training datasets. However, because of patient privacy concerns and ethical and legal issues, sharing medical images between different centers is restricted. Our objective is to build a federated DL-based framework for PET image segmentation utilizing a multicentric dataset and to compare its performance with the centralized DL approach. METHODS PET images from 405 head and neck cancer patients from 9 different centers formed the basis of this study. All tumors were segmented manually. PET images converted to SUV maps were resampled to isotropic voxels (3 × 3 × 3 mm3) and then normalized. PET image subvolumes (12 × 12 × 12 cm3) consisting of whole tumors and background were analyzed. Data from each center were divided into train/validation (80% of patients) and test sets (20% of patients). The modified R2U-Net was used as the core DL model. A parallel federated DL model was developed and compared with the centralized approach, where the datasets are pooled on one server. Segmentation metrics, including Dice similarity and Jaccard coefficients, and percent relative errors (RE%) of SUVpeak, SUVmean, SUVmedian, SUVmax, metabolic tumor volume, and total lesion glycolysis were computed and compared with manual delineations. RESULTS The performance of the centralized versus federated DL methods was nearly identical for segmentation metrics: Dice (0.84 ± 0.06 vs 0.84 ± 0.05) and Jaccard (0.73 ± 0.08 vs 0.73 ± 0.07). For quantitative PET parameters, we obtained comparable RE% for SUVmean (6.43% ± 4.72% vs 6.61% ± 5.42%), metabolic tumor volume (12.2% ± 16.2% vs 12.1% ± 15.89%), and total lesion glycolysis (6.93% ± 9.6% vs 7.07% ± 9.85%) and negligible RE% for SUVmax and SUVpeak. No significant differences in performance (P > 0.05) between the 2 frameworks (centralized vs federated) were observed.
CONCLUSION The developed federated DL model achieved comparable quantitative performance with respect to the centralized DL model. Federated DL models could provide robust and generalizable segmentation, while addressing patient privacy and legal and ethical issues in clinical data sharing.
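The parallel federated scheme above can be illustrated with FedAvg-style aggregation, in which only locally trained weights, never patient images, reach the server. The function names and the toy communication round below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def fedavg(center_weights, center_sizes):
    """Sample-size-weighted average of per-center parameter lists
    (FedAvg-style aggregation). Each element of center_weights is a
    list of parameter arrays from one center's locally trained model."""
    total = float(sum(center_sizes))
    n_params = len(center_weights[0])
    return [
        sum(w[i] * (s / total) for w, s in zip(center_weights, center_sizes))
        for i in range(n_params)
    ]

def federated_round(global_weights, local_train, center_data):
    """One communication round: broadcast the global model, let each
    center train on its private data, then aggregate the results."""
    local_weights = [local_train(global_weights, d) for d in center_data]
    sizes = [len(d) for d in center_data]
    return fedavg(local_weights, sizes)
```

Weighting by local sample size keeps the aggregate equivalent to training on the pooled data in the idealized single-step case, which is consistent with the near-identical centralized-versus-federated metrics reported above.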
38
Huang Z, Wu Y, Fu F, Meng N, Gu F, Wu Q, Zhou Y, Yang Y, Liu X, Zheng H, Liang D, Wang M, Hu Z. Parametric image generation with the uEXPLORER total-body PET/CT system through deep learning. Eur J Nucl Med Mol Imaging 2022;49:2482-2492. [DOI: 10.1007/s00259-022-05731-x]
39
Ote K, Hashimoto F. Deep-learning-based fast TOF-PET image reconstruction using direction information. Radiol Phys Technol 2022;15:72-82. [DOI: 10.1007/s12194-022-00652-8]
40
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022;36:133-143. [PMID: 35029818] [DOI: 10.1007/s12149-021-01710-8]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural network (CNN) and generative adversarial network (GAN) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation on PET. We categorized the studies for PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
41
Seifert R, Kersting D, Rischpler C, Opitz M, Kirchner J, Pabst KM, Mavroeidi IA, Laschinsky C, Grueneisen J, Schaarschmidt B, Catalano OA, Herrmann K, Umutlu L. Clinical Use of PET/MR in Oncology: An Update. Semin Nucl Med 2021;52:356-364. [PMID: 34980479] [DOI: 10.1053/j.semnuclmed.2021.11.012]
Abstract
The combination of PET and MRI is one of the recent advances in hybrid imaging. Yet to date, the adoption rate of PET/MRI systems has been rather slow. This seems to be partially caused by the high costs of PET/MRI systems and the need to verify an incremental benefit over PET/CT or sequential PET/CT and MRI. In analogy to PET/CT, the MRI part of PET/MRI was primarily used for anatomical imaging. Though this can be advantageous, for example in diseases where the superior soft tissue contrast of MRI is highly appreciated, the sole use of MRI for anatomical orientation underexploits the potential of PET/MRI. Consequently, more recent studies have focused on its multiparametric potential and employed diffusion-weighted and other functional imaging sequences in PET/MRI. This integration shifts the focus to a more holistic approach to PET/MR imaging: releasing its full potential for local primary staging based on multiparametric imaging, combined with a one-stop-shop approach for whole-body staging. This approach, as well as the implementation of computational analysis in terms of radiomics, has proven valuable in several oncological diseases, as discussed in this review article.
Affiliation(s)
- Robert Seifert
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; Department of Nuclear Medicine, University Hospital Münster, Münster, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- David Kersting
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- Marcel Opitz
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Julian Kirchner
- Department of Diagnostic and Interventional Radiology, University Dusseldorf, Medical Faculty, Dusseldorf, Germany
- Kim M Pabst
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- Ilektra-Antonia Mavroeidi
- West German Cancer Center, University Hospital Essen, Essen, Germany; Clinic for Internal Medicine (Tumor Research), University Hospital Essen, Essen, Germany
- Christina Laschinsky
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- Johannes Grueneisen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Benedikt Schaarschmidt
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Onofrio Antonio Catalano
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA; Abdominal Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Ken Herrmann
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- Lale Umutlu
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
42
Aide N, Lasnon C, Desmonts C, Armstrong IS, Walker MD, McGowan DR. Advances in PET-CT technology: An update. Semin Nucl Med 2021;52:286-301. [PMID: 34823841] [DOI: 10.1053/j.semnuclmed.2021.10.005]
Abstract
This article reviews the current evolution and future directions in PET-CT technology focusing on three areas: time of flight, image reconstruction, and data-driven gating. Image reconstruction is considered with advances in point spread function modelling, Bayesian penalised likelihood reconstruction, and artificial intelligence approaches. Data-driven gating is examined with reference to respiratory motion, cardiac motion, and head motion. For each of these technological advancements, theory will be briefly discussed, benefits of their use in routine practice will be detailed and potential future developments will be discussed. Representative clinical cases will be presented, demonstrating the huge opportunities given to the PET community by hardware and software advances in PET technology when it comes to lesion detection, disease characterization, accurate quantitation and quicker scans. Through this review, hospitals are encouraged to embrace, evaluate and appropriately implement the wide range of new PET technologies that are available now or in the near future, for the improvement of patient care.
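Data-driven respiratory gating, one of the advances reviewed above, is typically bootstrapped from a surrogate signal extracted from the raw data itself. The sketch below uses the axial centre of mass of short dynamic frames and amplitude binning; this is one common approach under assumed array conventions, not any vendor's specific method:

```python
import numpy as np

def respiratory_gates(frames, n_gates=4):
    """Amplitude-based data-driven gating sketch.

    frames: (t, z, y, x) array of short dynamic count frames.
    The axial centre of mass of each frame acts as a surrogate
    respiratory signal; frames are then binned into amplitude gates
    whose edges are quantiles of the signal (equal-count gates)."""
    z = np.arange(frames.shape[1], dtype=float)
    axial = frames.sum(axis=(2, 3))                    # (t, z) axial profiles
    com = (axial * z).sum(axis=1) / axial.sum(axis=1)  # surrogate signal
    edges = np.quantile(com, np.linspace(0.0, 1.0, n_gates + 1))
    gates = np.clip(np.searchsorted(edges, com, side="right") - 1,
                    0, n_gates - 1)
    return com, gates
```

Each gate can then be reconstructed separately (or motion-corrected and summed), trading count statistics per gate against motion blur.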
Affiliation(s)
- Nicolas Aide
- Nuclear Medicine, Caen University Hospital, Caen, France; INSERM ANTICIPE, Normandie University, Caen, France
- Charline Lasnon
- INSERM ANTICIPE, Normandie University, Caen, France; François Baclesse Cancer Center, Caen, France
- Cedric Desmonts
- Nuclear Medicine, Caen University Hospital, Caen, France; INSERM ANTICIPE, Normandie University, Caen, France
- Ian S Armstrong
- Nuclear Medicine, Manchester University NHS Foundation Trust, Manchester, UK
- Matthew D Walker
- Department of Medical Physics and Clinical Engineering, Oxford University Hospitals NHS FT, Oxford, UK
- Daniel R McGowan
- Department of Medical Physics and Clinical Engineering, Oxford University Hospitals NHS FT, Oxford, UK; Department of Oncology, University of Oxford, Oxford, UK
43
Amirrashedi M, Sarkar S, Mamizadeh H, Ghadiri H, Ghafarian P, Zaidi H, Ay MR. Leveraging deep neural networks to improve numerical and perceptual image quality in low-dose preclinical PET imaging. Comput Med Imaging Graph 2021;94:102010. [PMID: 34784505] [DOI: 10.1016/j.compmedimag.2021.102010]
Abstract
The amount of radiotracer injected into laboratory animals is still the most daunting challenge facing translational PET studies. Since low-dose imaging is characterized by a higher level of noise, the quality of the reconstructed images leaves much to be desired. As the most ubiquitous techniques in denoising applications, edge-aware denoising filters and reconstruction-based techniques have drawn significant attention in low-count applications. However, over the last few years, much of the credit has gone to deep learning (DL) methods, which provide more robust solutions across a variety of conditions. Although extensively explored in clinical studies, to the best of our knowledge, there is a lack of studies exploring the feasibility of DL-based image denoising in low-count small-animal PET imaging. Therefore, herein, we investigated different DL frameworks to map low-dose small-animal PET images to their full-dose equivalents with quality and visual similarity on a par with those of standard acquisitions. The performance of the DL model was also compared to other well-established filters, including Gaussian smoothing, nonlocal means, and anisotropic diffusion. Visual inspection and quantitative assessment based on quality metrics proved the superior performance of the DL methods in low-count small-animal PET studies, paving the way for a more detailed exploration of DL-assisted algorithms in this domain.
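One of the baseline filters named above, anisotropic diffusion, can be sketched in a few lines as the classic Perona-Malik scheme: it smooths flat regions while preserving edges. This is a generic textbook sketch (periodic boundaries chosen for brevity), not the paper's exact parameterization:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
    """Perona-Malik edge-aware smoothing: strong diffusion in flat
    regions, weak diffusion across strong gradients (edges).
    Boundaries are treated as periodic via np.roll for brevity."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # Differences toward the four nearest neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential conduction function g(d) = exp(-(d/kappa)^2):
        # small gradients diffuse freely, large gradients are preserved.
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        u += gamma * flux  # gamma <= 0.25 keeps the explicit scheme stable
    return u
```

The conduction threshold `kappa` sets which gradient magnitudes count as edges; gradients well below it are smoothed almost linearly, which is the behaviour expected on Poisson-noisy low-count images.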
Affiliation(s)
- Mahsa Amirrashedi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Saeed Sarkar
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hojjat Mamizadeh
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Ghadiri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Pardis Ghafarian
- Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran; PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohammad Reza Ay
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
44
Sanaat A, Shooli H, Ferdowsi S, Shiri I, Arabi H, Zaidi H. DeepTOFSino: A deep learning model for synthesizing full-dose time-of-flight bin sinograms from their corresponding low-dose sinograms. Neuroimage 2021;245:118697. [PMID: 34742941] [DOI: 10.1016/j.neuroimage.2021.118697]
Abstract
PURPOSE Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patient comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images to compare the performance of the DNN in sinogram space (SS) vs its implementation in image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established quantitative metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, and statistical analysis for 83 brain regions. RESULTS SSIM and PSNR values of 0.97 ± 0.01, 0.98 ± 0.01 and 33.70 ± 0.32, 39.36 ± 0.21 were obtained for IS and SS, respectively, compared to 0.86 ± 0.02 and 31.12 ± 0.22 for the reference LD images. The absolute average SUV bias was 0.96 ± 0.95% and 1.40 ± 0.72% for the SS and IS implementations, respectively. The joint histogram analysis revealed that the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) were achieved by SS, compared to IS (R2 = 0.97, MSE = 0.028). The Bland-Altman analysis showed that the lowest SUV bias (-0.4%) and minimum variance (95% CI: -2.6%, +1.9%) were achieved by the SS images. The voxel-wise t-test analysis revealed the presence of voxels with statistically significantly lower values in the LD, IS, and SS images compared to the FD images. CONCLUSION The results demonstrated that images reconstructed from the predicted TOF FD sinograms using the SS approach have higher image quality and lower bias compared to images predicted from LD images.
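The PSNR and SSIM figures reported above can be computed as follows. Note that the SSIM here is a single-window global simplification of the usual sliding-window, mean-of-local-maps implementation, so its values will not match a library implementation exactly; function names are illustrative:

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB against a reference image."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=None):
    """Single-window (global) SSIM: luminance, contrast, and structure
    terms computed once over the whole image."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if data_range is None:
        data_range = x.max() - x.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

PSNR is driven purely by pixelwise error, while SSIM also rewards preserved local structure, which is why both are typically reported together for denoising results.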
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Shooli
- Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
Collapse
|
45
|
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021; 16:553-576. [PMID: 34537130 PMCID: PMC8457531 DOI: 10.1016/j.cpet.2021.06.005] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Masoud Malekzadeh
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
46
Gong K, Kim K, Cui J, Wu D, Li Q. The Evolution of Image Reconstruction in PET: From Filtered Back-Projection to Artificial Intelligence. PET Clin 2021; 16:533-542. [PMID: 34537129 DOI: 10.1016/j.cpet.2021.06.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, several limitations still compromise its precision: photon absorption in the body attenuates the signal; the dead time of system components causes count-rate losses; scattered and random events received by the detector introduce additional noise; detector characteristics limit the spatial resolution; and scan-time limits (eg, in dynamic scans) and dose concerns lower the signal-to-noise ratio. Early PET reconstruction methods were analytical approaches based on an idealized mathematical model.
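The analytical approach this evolution starts from, filtered back-projection, can be sketched in a few lines. This is a toy 2-D parallel-beam version with a Ram-Lak (ramp) filter and nearest-neighbour interpolation, not a clinical implementation; real PET reconstruction adds the corrections listed above.

```python
import numpy as np

def ramp_filter_sinogram(sinogram):
    """Apply the ramp (Ram-Lak) filter to each projection row via the FFT."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))  # |frequency| response of the ramp filter
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def backproject(filtered, angles_deg, size):
    """Smear each filtered projection back across a size x size image grid."""
    recon = np.zeros((size, size))
    mid = size // 2
    xs, ys = np.meshgrid(np.arange(size) - mid, np.arange(size) - mid)
    for row, theta in zip(filtered, np.deg2rad(angles_deg)):
        # detector-bin coordinate of each pixel under this projection angle
        t = (xs * np.cos(theta) + ys * np.sin(theta) + mid).round().astype(int)
        valid = (t >= 0) & (t < size)
        recon[valid] += row[t[valid]]
    return recon * np.pi / (2 * len(angles_deg))
```

Reconstructing the sinogram of a centered point source recovers a peak at the image center, which is the usual sanity check for an FBP implementation.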
Affiliation(s)
- Kuang Gong
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jianan Cui
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Dufan Wu
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
47
Bouchareb Y, Moradi Khaniabadi P, Al Kindi F, Al Dhuhli H, Shiri I, Zaidi H, Rahmim A. Artificial intelligence-driven assessment of radiological images for COVID-19. Comput Biol Med 2021; 136:104665. [PMID: 34343890 PMCID: PMC8291996 DOI: 10.1016/j.compbiomed.2021.104665] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 07/11/2021] [Accepted: 07/17/2021] [Indexed: 12/24/2022]
Abstract
Artificial Intelligence (AI) methods have significant potential for the diagnosis and prognosis of COVID-19 infections. Rapid identification of COVID-19 and its severity in individual patients is expected to enable better control of the disease, both individually and at large. There has been remarkable interest by the scientific community in using imaging biomarkers to improve the detection and management of COVID-19. Exploratory tools such as AI-based models may help explain the complex biological mechanisms and provide better understanding of the underlying pathophysiological processes. The present review focuses on AI-based COVID-19 studies as applied to chest x-ray (CXR) and computed tomography (CT) imaging modalities, and the associated challenges. Explicit radiomics, deep learning methods, and hybrid methods that combine both have the potential to enhance the ability and usefulness of radiological images in assisting clinicians in the current COVID-19 pandemic. The aims of this review are: first, to outline COVID-19 AI-analysis workflows, including data acquisition, feature selection, segmentation methods, feature extraction, and multivariate model development and validation as appropriate for AI-based COVID-19 studies; second, to discuss existing limitations of AI-based COVID-19 analyses, highlighting potential improvements; and finally, to summarize the impact of AI and radiomics methods and the associated clinical outcomes. Pipelines that include the key steps for identifying AI-based COVID-19 signatures are elaborated. Sample size, non-standard imaging protocols, segmentation, availability of public COVID-19 databases, combination of imaging and clinical information, and full clinical validation remain major limitations and challenges. We conclude that AI-based assessment of CXR and CT images has significant potential as a viable pathway for the diagnosis, follow-up, and prognosis of COVID-19.
Affiliation(s)
- Yassine Bouchareb
- Department of Radiology and Molecular Imaging, College of Medicine and Health Science, Sultan Qaboos University, PO. Box 35, Al Khod, Muscat, 123, Oman
- Pegah Moradi Khaniabadi
- Department of Radiology and Molecular Imaging, College of Medicine and Health Science, Sultan Qaboos University, PO. Box 35, Al Khod, Muscat, 123, Oman
- Humoud Al Dhuhli
- Department of Radiology and Molecular Imaging, College of Medicine and Health Science, Sultan Qaboos University, PO. Box 35, Al Khod, Muscat, 123, Oman
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
48
Aide N, Lasnon C, Kesner A, Levin CS, Buvat I, Iagaru A, Hermann K, Badawi RD, Cherry SR, Bradley KM, McGowan DR. New PET technologies - embracing progress and pushing the limits. Eur J Nucl Med Mol Imaging 2021; 48:2711-2726. [PMID: 34081153 PMCID: PMC8263417 DOI: 10.1007/s00259-021-05390-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 04/25/2021] [Indexed: 12/11/2022]
Affiliation(s)
- Nicolas Aide
- Nuclear Medicine Department, University Hospital, Caen, France; INSERM ANTICIPE, Normandie University, Caen, France
- Charline Lasnon
- INSERM ANTICIPE, Normandie University, Caen, France; François Baclesse Cancer Centre, Caen, France
- Adam Kesner
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Craig S Levin
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, Stanford, CA, 94305, USA
- Irene Buvat
- Institut Curie, Université PSL, Inserm, U1288 LITO, Orsay, France
- Andrei Iagaru
- Department of Radiology, Division of Nuclear Medicine and Molecular Imaging, Stanford University, Stanford, CA, 94305, USA
- Ken Hermann
- Department of Nuclear Medicine, University of Duisburg-Essen and German Cancer Consortium (DKTK)-University Hospital Essen, Essen, Germany
- Ramsey D Badawi
- Departments of Radiology and Biomedical Engineering, University of California, Davis, CA, USA
- Simon R Cherry
- Departments of Radiology and Biomedical Engineering, University of California, Davis, CA, USA
- Kevin M Bradley
- Wales Research and Diagnostic PET Imaging Centre, Cardiff University, Cardiff, UK
- Daniel R McGowan
- Radiation Physics and Protection, Churchill Hospital, Oxford University Hospitals NHS FT, Oxford, UK; Department of Oncology, University of Oxford, Oxford, UK
49
Cox CPW, van Assema DME, Verburg FA, Brabander T, Konijnenberg M, Segbers M. A dedicated paediatric [ 18F]FDG PET/CT dosage regimen. EJNMMI Res 2021; 11:65. [PMID: 34279735 PMCID: PMC8289942 DOI: 10.1186/s13550-021-00812-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Accepted: 07/09/2021] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND The role of 2-[18F]fluoro-2-deoxy-D-glucose ([18F]FDG) positron emission tomography/computed tomography (PET/CT) in children is still expanding. Dedicated paediatric dosage regimens are needed to keep the radiation dose as low as reasonably achievable and reduce the risk of radiation-induced carcinogenesis. The aim of this study is to investigate the relation between patient-dependent parameters and [18F]FDG PET image quality in order to propose a dedicated paediatric dosage regimen. METHODS In this retrospective analysis, 102 children and 85 adults who underwent a diagnostic [18F]FDG PET/CT scan were included. Image quality of the PET scans was measured by the signal-to-noise ratio (SNR) in the liver. The liver SNR was normalized (SNRnorm) for administered activity and acquisition time, and curve fitting was applied against body weight, body length, body mass index, body weight/body length, and body surface area. Curve fitting was performed with two power fits: a nonlinear two-parameter model (α·p^-d) and a linear single-parameter model (α·p^-0.5). The fit parameters of the preferred model were combined with a user-preferred SNR, chosen to yield at least moderate or good image quality, to obtain the dosage regimen proposal. RESULTS Body weight demonstrated the highest coefficient of determination for both the nonlinear (R2 = 0.81) and linear (R2 = 0.80) models. The nonlinear model was preferred by the corrected Akaike information criterion. An SNR of 6.5 was chosen, based on the expert opinion of three nuclear medicine physicians. Comparison with the quadratic adult protocol confirmed the need for different dosage regimens for the two patient groups. In this study, the administered activity could be considerably reduced compared with the current paediatric guidelines. CONCLUSION Body weight has the strongest relation with [18F]FDG PET image quality in children. The proposed nonlinear dosage regimen based on body weight will provide constant and clinically sufficient image quality with a significant reduction of the effective dose compared with current guidelines. A dedicated paediatric dosage regimen is necessary, as a universal dosing regimen for paediatric and adult patients is not feasible.
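The nonlinear power model α·p^-d described above can be fitted by least squares in log-log space, since log(SNRnorm) is linear in log(weight). The sketch below (NumPy only; the variable names and the assumption that SNR scales with the square root of activity × time are ours, not the paper's code) shows both the fit and how an activity proposal follows from a target SNR.

```python
import numpy as np

def fit_power_model(weight_kg, snr_norm):
    """Fit SNR_norm = alpha * weight**(-d) by least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(weight_kg), np.log(snr_norm), 1)
    return np.exp(intercept), -slope  # (alpha, d)

def proposed_activity(weight_kg, target_snr, alpha, d, scan_time_s):
    """Invert the fitted model for a target SNR.

    Assumes SNR = SNR_norm * sqrt(activity * time); the exact normalisation
    convention used in the paper is an assumption here.
    """
    snr_norm_needed = target_snr / np.sqrt(scan_time_s)
    return (snr_norm_needed / (alpha * weight_kg ** (-d))) ** 2
```

On noiseless synthetic data the fit recovers the generating parameters exactly, which makes the inversion self-consistent: heavier patients need more activity for the same target SNR, which is the core of the dosage regimen proposal.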
Affiliation(s)
- Christina P W Cox
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus 2040, 3000 CA Rotterdam, The Netherlands
- Daniëlle M E van Assema
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus 2040, 3000 CA Rotterdam, The Netherlands
- Frederik A Verburg
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus 2040, 3000 CA Rotterdam, The Netherlands
- Tessa Brabander
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus 2040, 3000 CA Rotterdam, The Netherlands
- Mark Konijnenberg
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus 2040, 3000 CA Rotterdam, The Netherlands
- Marcel Segbers
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus 2040, 3000 CA Rotterdam, The Netherlands
50
Sanaat A, Mirsadeghi E, Razeghi B, Ginovart N, Zaidi H. Fast dynamic brain PET imaging using stochastic variational prediction for recurrent frame generation. Med Phys 2021; 48:5059-5071. [PMID: 34174787 PMCID: PMC8518550 DOI: 10.1002/mp.15063] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 05/30/2021] [Accepted: 06/08/2021] [Indexed: 12/03/2022] Open
Abstract
Purpose We assess the performance of a recurrent frame generation algorithm for predicting late frames from initial frames in dynamic brain PET imaging. Methods Clinical dynamic 18F-DOPA brain PET/CT studies of 46 subjects were retrospectively employed with ten-fold cross-validation. A novel stochastic adversarial video prediction model was implemented to predict the last 13 frames (25–90 minutes) from the initial 13 frames (0–25 minutes). Quantitative analysis of the predicted dynamic PET frames was performed for the test and validation datasets using established metrics. Results The predicted dynamic images demonstrated that the model is capable of predicting the trend of change in time-varying tracer biodistribution. The Bland-Altman plots reported the lowest tracer uptake bias (−0.04) for the putamen region and the smallest variance (95% CI: −0.38, +0.14) for the cerebellum. The region-wise Patlak graphical analysis in the caudate and putamen regions for eight subjects from the test and validation datasets showed average biases for Ki and distribution volume of 4.3%, 5.1% and 4.4%, 4.2% (P-value <0.05), respectively. Conclusion We have developed a novel deep learning approach for fast dynamic brain PET imaging capable of generating the final 65 minutes of time frames from the initial 25 minutes of frames, thus enabling a significant reduction in scanning time.
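The Patlak graphical analysis used for validation above fits a straight line whose slope is the influx rate Ki: after an equilibration time t*, the ratio C_tissue(t)/C_plasma(t) is linear in the normalized integral of the plasma input. A minimal sketch (the trapezoidal integration and the t* cutoff value are generic choices, not the paper's exact settings):

```python
import numpy as np

def patlak_fit(t_min, c_tissue, c_plasma, t_star_min=25.0):
    """Patlak plot: y = C_t(t)/C_p(t) vs x = (integral_0^t C_p dtau) / C_p(t).

    For irreversibly trapped tracers, y becomes linear in x after t*,
    with slope Ki (influx rate) and intercept V (distribution volume).
    """
    # cumulative trapezoidal integral of the plasma input function
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * np.diff(t_min) * (c_plasma[1:] + c_plasma[:-1])))
    )
    x = cum / c_plasma
    y = c_tissue / c_plasma
    late = t_min >= t_star_min  # restrict the fit to the linear late phase
    ki, intercept = np.polyfit(x[late], y[late], 1)
    return ki, intercept
```

With a constant plasma input the integral reduces to t, so a tissue curve Ki·t + V is recovered exactly; region-wise, the fitted Ki from predicted and measured frames is what the bias figures above compare.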
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ehsan Mirsadeghi
- Electrical Engineering Department, Amirkabir University of Technology, Tehran, Iran
- Behrooz Razeghi
- Department of Computer Sciences, University of Geneva, Geneva, Switzerland; School of Engineering and Applied Sciences, Harvard University, Boston, USA
- Nathalie Ginovart
- Department of Psychiatry, Geneva University, Geneva, Switzerland; Department of Basic Neurosciences, Geneva University, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark