1. Chen J, Ye Z, Zhang R, Li H, Fang B, Zhang LB, Wang W. Medical image translation with deep learning: Advances, datasets and perspectives. Med Image Anal 2025; 103:103605. [PMID: 40311301] [DOI: 10.1016/j.media.2025.103605]
Abstract
Traditional medical image generation often lacks patient-specific clinical information, limiting its clinical utility despite enhancing downstream task performance. In contrast, medical image translation precisely converts images from one modality to another, preserving both anatomical structures and cross-modal features, thus enabling efficient and accurate modality transfer and offering unique advantages for model development and clinical practice. This paper reviews the latest advancements in deep learning (DL)-based medical image translation. Initially, it elaborates on the diverse tasks and practical applications of medical image translation. Subsequently, it provides an overview of fundamental models, including convolutional neural networks (CNNs), transformers, and state space models (SSMs). Additionally, it delves into generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), autoregressive models (ARs), diffusion models, and flow models. Evaluation metrics for assessing translation quality are discussed, emphasizing their importance. Commonly used datasets in this field are also analyzed, highlighting their unique characteristics and applications. Looking ahead, the paper identifies future trends and challenges, and proposes research directions and solutions in medical image translation. It aims to serve as a valuable reference and inspiration for researchers, driving continued progress and innovation in this area.
Affiliation(s)
- Junxin Chen
- School of Software, Dalian University of Technology, Dalian 116621, China.
- Zhiheng Ye
- School of Software, Dalian University of Technology, Dalian 116621, China.
- Renlong Zhang
- Institute of Research and Clinical Innovations, Neusoft Medical Systems Co., Ltd., Beijing, China.
- Hao Li
- School of Computing Science, University of Glasgow, Glasgow G12 8QQ, United Kingdom.
- Bo Fang
- School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia.
- Li-Bo Zhang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang 110840, China.
- Wei Wang
- Guangdong-Hong Kong-Macao Joint Laboratory for Emotion Intelligence and Pervasive Computing, Artificial Intelligence Research Institute, Shenzhen MSU-BIT University, Shenzhen 518172, China; School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China.
2. Yu X, Hu D, Yao Q, Fu Y, Zhong Y, Wang J, Tian M, Zhang H. Diffused Multi-scale Generative Adversarial Network for low-dose PET images reconstruction. Biomed Eng Online 2025; 24:16. [PMID: 39924498] [PMCID: PMC11807330] [DOI: 10.1186/s12938-025-01348-x]
Abstract
PURPOSE The aim of this study is to convert low-dose PET (L-PET) images to full-dose PET (F-PET) images based on our Diffused Multi-scale Generative Adversarial Network (DMGAN) to offer a potential balance between reducing radiation exposure and maintaining diagnostic performance. METHODS The proposed method includes two modules: the diffusion generator and the U-Net discriminator. The first module extracts information at multiple levels, enhancing the generator's ability to generalize across images and improving training stability. Generated images are fed into the U-Net discriminator, which extracts details from both global and local perspectives to enhance the quality of the generated F-PET images. We conducted evaluations encompassing both qualitative assessments and quantitative measures. For the quantitative comparisons, we employed two metrics, the structural similarity index measure (SSIM) and the peak signal-to-noise ratio (PSNR), to evaluate the performance of the compared methods. RESULTS Our proposed method achieved the highest PSNR and SSIM scores among the compared methods, improving PSNR by at least 6.2% over the other methods. Compared with other methods, the full-dose PET images synthesized by our method exhibit a more accurate voxel-wise metabolic intensity distribution, resulting in a clearer depiction of the epilepsy focus. CONCLUSIONS The proposed method demonstrates improved restoration of original details from low-dose PET images compared with other models trained on the same datasets. This method offers a potential balance between minimizing radiation exposure and preserving diagnostic performance.
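For readers unfamiliar with the two metrics quoted above, the following minimal Python sketch shows one common way to compute PSNR and SSIM for a co-registered image pair with scikit-image; the random arrays are placeholders, not data from this study.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(reference, generated):
    """Compute PSNR and SSIM between a reference F-PET image and a synthesized one,
    using the reference image's dynamic range."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, generated, data_range=data_range)
    ssim = structural_similarity(reference, generated, data_range=data_range)
    return psnr, ssim

# Toy usage with random arrays standing in for co-registered PET slices
ref = np.random.rand(64, 64).astype(np.float32)
gen = (ref + 0.05 * np.random.randn(64, 64)).astype(np.float32)
print(evaluate_pair(ref, gen))
```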
Affiliation(s)
- Xiang Yu
- Polytechnic Institute, Zhejiang University, Hangzhou, China
- Daoyan Hu
- The College of Biomedical Engineering and Instrument Science of Zhejiang University, Hangzhou, China
- Qiong Yao
- Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Yu Fu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yan Zhong
- Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Jing Wang
- Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Mei Tian
- Human Phenome Institute, Fudan University, 825 Zhangheng Road, Shanghai, 201203, China.
- Hong Zhang
- The College of Biomedical Engineering and Instrument Science of Zhejiang University, Hangzhou, China.
- Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China.
- Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China.
3. Kanavos T, Birbas E, Zanos TP. A Systematic Review of the Applications of Deep Learning for the Interpretation of Positron Emission Tomography Images of Patients with Lymphoma. Cancers (Basel) 2024; 17:69. [PMID: 39796698] [PMCID: PMC11719749] [DOI: 10.3390/cancers17010069]
Abstract
Background: Positron emission tomography (PET) is a valuable tool for the assessment of lymphoma, while artificial intelligence (AI) holds promise as a reliable resource for the analysis of medical images. In this context, we systematically reviewed the applications of deep learning (DL) for the interpretation of lymphoma PET images. Methods: We searched PubMed until 11 September 2024 for studies developing DL models for the evaluation of PET images of patients with lymphoma. The risk of bias and applicability concerns were assessed using the prediction model risk of bias assessment tool (PROBAST). The articles included were categorized and presented based on the task performed by the proposed models. Our study was registered with the international prospective register of systematic reviews, PROSPERO, as CRD42024600026. Results: From 71 papers initially retrieved, 21 studies with a total of 9402 participants were ultimately included in our review. The proposed models achieved a promising performance in diverse medical tasks, namely, the detection and histological classification of lesions, the differential diagnosis of lymphoma from other conditions, the quantification of metabolic tumor volume, and the prediction of treatment response and survival with areas under the curve, F1-scores, and R2 values of up to 0.963, 87.49%, and 0.94, respectively. Discussion: The primary limitations of several studies were the small number of participants and the absence of external validation. In conclusion, the interpretation of lymphoma PET images can reliably be aided by DL models, which are not designed to replace physicians but to assist them in managing large volumes of scans through rapid and accurate calculations, alleviate their workload, and provide them with decision support tools for precise care and improved outcomes.
Affiliation(s)
- Theodoros P. Zanos
- Institute of Health System Science, Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA; (T.K.); (E.B.)
4. Montgomery ME, Andersen FL, Mathiasen R, Borgwardt L, Andersen KF, Ladefoged CN. CT-Free Attenuation Correction in Paediatric Long Axial Field-of-View Positron Emission Tomography Using Synthetic CT from Emission Data. Diagnostics (Basel) 2024; 14:2788. [PMID: 39767149] [PMCID: PMC11727418] [DOI: 10.3390/diagnostics14242788]
Abstract
Background/Objectives: Paediatric PET/CT imaging is crucial in oncology but poses significant radiation risks due to children's higher radiosensitivity and longer post-exposure life expectancy. This study aims to minimize radiation exposure by generating synthetic CT (sCT) images from emission PET data, eliminating the need for attenuation correction (AC) CT scans in paediatric patients. Methods: We utilized a cohort of 128 paediatric patients, resulting in 195 paired PET and CT images. Data were acquired using Siemens Biograph Vision 600 and Long Axial Field-of-View (LAFOV) Siemens Vision Quadra PET/CT scanners. A 3D parameter transferred conditional GAN (PT-cGAN) architecture, pre-trained on adult data, was adapted and trained on the paediatric cohort. The model's performance was evaluated qualitatively by a nuclear medicine specialist and quantitatively by comparing sCT-derived PET (sPET) with standard PET images. Results: The model demonstrated high qualitative and quantitative performance. Visual inspection showed either no differences (19/23) or only minor, clinically insignificant differences (4/23) in image quality between PET and sPET. Quantitative analysis revealed a mean SUV relative difference of -2.6 ± 5.8% across organs, with a high agreement in lesion overlap (Dice coefficient of 0.92 ± 0.08). The model also performed robustly in low-count settings, maintaining performance with reduced acquisition times. Conclusions: The proposed method effectively reduces radiation exposure in paediatric PET/CT imaging by eliminating the need for AC CT scans. It maintains high diagnostic accuracy and minimizes motion-induced artifacts, making it a valuable alternative for clinical application. Further testing in clinical settings is warranted to confirm these findings and enhance patient safety.
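The two quantitative endpoints reported above (mean SUV relative difference and Dice overlap) can be illustrated with a short, generic Python sketch; the volumes and masks below are synthetic stand-ins, and this is not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary lesion masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def mean_suv_relative_difference(suv_ref, suv_test, organ_mask):
    """Mean relative SUV difference (%) inside an organ mask,
    test (e.g., sCT-corrected PET) versus reference (CT-corrected PET)."""
    ref = suv_ref[organ_mask].mean()
    test = suv_test[organ_mask].mean()
    return 100.0 * (test - ref) / ref

# Toy example: a synthetic volume pair and a cubic "organ" mask
rng = np.random.default_rng(0)
vol_ref = rng.random((32, 32, 32))
vol_test = vol_ref * 0.97                       # simulated 3% underestimation
mask = np.zeros_like(vol_ref, dtype=bool)
mask[10:20, 10:20, 10:20] = True
print(dice_coefficient(mask, mask), mean_suv_relative_difference(vol_ref, vol_test, mask))
```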
Affiliation(s)
- Maria Elkjær Montgomery
- Department of Clinical Physiology and Nuclear Medicine, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (M.E.M.); (F.L.A.); (L.B.); (K.F.A.)
- Flemming Littrup Andersen
- Department of Clinical Physiology and Nuclear Medicine, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (M.E.M.); (F.L.A.); (L.B.); (K.F.A.)
- Department of Clinical Medicine, University of Copenhagen, 2200 Copenhagen, Denmark
- René Mathiasen
- Department of Clinical Medicine, University of Copenhagen, 2200 Copenhagen, Denmark
- Department of Paediatrics and Adolescent Medicine, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
- Lise Borgwardt
- Department of Clinical Physiology and Nuclear Medicine, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (M.E.M.); (F.L.A.); (L.B.); (K.F.A.)
- Kim Francis Andersen
- Department of Clinical Physiology and Nuclear Medicine, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (M.E.M.); (F.L.A.); (L.B.); (K.F.A.)
- Claes Nøhr Ladefoged
- Department of Clinical Physiology and Nuclear Medicine, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (M.E.M.); (F.L.A.); (L.B.); (K.F.A.)
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
5. Fu Y, Dong S, Huang Y, Niu M, Ni C, Yu L, Shi K, Yao Z, Zhuo C. MPGAN: Multi Pareto Generative Adversarial Network for the denoising and quantitative analysis of low-dose PET images of human brain. Med Image Anal 2024; 98:103306. [PMID: 39163786] [DOI: 10.1016/j.media.2024.103306]
Abstract
Positron emission tomography (PET) imaging is widely used in medical imaging for analyzing neurological disorders and related brain diseases. Usually, full-dose imaging for PET ensures image quality but raises concerns about potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be effectively addressed by reconstructing low-dose PET (L-PET) images to the same high quality as full-dose PET (F-PET) images. This paper introduces the Multi Pareto Generative Adversarial Network (MPGAN) to achieve 3D end-to-end denoising of L-PET images of the human brain. MPGAN consists of two key modules: the diffused multi-round cascade generator (G_Dmc) and the dynamic Pareto-efficient discriminator (D_Ped), both of which play a zero-sum game for n (n ∈ {1, 2, 3}) rounds to ensure the quality of synthesized F-PET images. The Pareto-efficient dynamic discrimination process is introduced in D_Ped to adaptively adjust the weights of sub-discriminators for improved discrimination output. We validated the performance of MPGAN using three datasets, including two independent datasets and one mixed dataset, and compared it with 12 recent competing models. Experimental results indicate that the proposed MPGAN provides an effective solution for 3D end-to-end denoising of L-PET images of the human brain, which meets clinical standards and achieves state-of-the-art performance on commonly used metrics.
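The abstract's idea of adaptively re-weighting several sub-discriminators can be sketched generically as below; the softmax-over-losses weighting is purely an illustrative assumption and not the paper's Pareto-efficient rule.

```python
import torch

def weighted_discriminator_loss(sub_losses, temperature=1.0):
    """Combine several sub-discriminator losses into one scalar with adaptive weights.
    Here the weights are a softmax over the (detached) current losses, so harder
    sub-discriminators receive more weight; this is only a stand-in to show the
    structure, not the Pareto-efficient scheme described in the paper."""
    losses = torch.stack(list(sub_losses))
    weights = torch.softmax(losses.detach() / temperature, dim=0)
    return (weights * losses).sum()

# Toy usage with three sub-discriminator losses
l1, l2, l3 = torch.tensor(0.9), torch.tensor(0.4), torch.tensor(0.7)
print(weighted_discriminator_loss([l1, l2, l3]))
```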
Affiliation(s)
- Yu Fu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China; College of Integrated Circuits, Zhejiang University, Hangzhou, China
- Shunjie Dong
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yanyan Huang
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Chao Ni
- Department of Breast Surgery, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Zhijun Yao
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Cheng Zhuo
- College of Integrated Circuits, Zhejiang University, Hangzhou, China.
6. Csikos C, Barna S, Kovács Á, Czina P, Budai Á, Szoliková M, Nagy IG, Husztik B, Kiszler G, Garai I. AI-Based Noise-Reduction Filter for Whole-Body Planar Bone Scintigraphy Reliably Improves Low-Count Images. Diagnostics (Basel) 2024; 14:2686. [PMID: 39682594] [DOI: 10.3390/diagnostics14232686]
Abstract
Background/Objectives: Artificial intelligence (AI) is a promising tool for the enhancement of physician workflow and serves to further improve the efficiency of diagnostic evaluations. This study aimed to assess the performance of an AI-based bone scan noise-reduction filter on noisy, low-count images in a routine clinical environment. Methods: The AI bone-scan filter (BS-AI filter) was retrospectively evaluated on 99mTc-MDP bone scintigraphy image pairs (anterior- and posterior-view images) from 47 patients, selected to represent the diverse characteristics of the general patient population. The BS-AI filter was tested on artificially degraded noisy images (75%, 50%, and 25% of total counts) generated by binomial sampling. The AI-filtered and unfiltered images were concurrently appraised for image quality and contrast by three nuclear medicine physicians. It was also determined whether there was any difference between the lesions seen on the unfiltered and filtered images. For quantitative analysis, an automatic lesion detector (BS-AI annotator) was utilized as a segmentation algorithm. The total number of lesions and their locations as detected by the BS-AI annotator in the BS-AI-filtered low-count images were compared with those in the total-count filtered images. The total number of pixels labeled as lesions in the filtered low-count images was also compared with that in the total-count filtered images, to ensure that the filtering process did not change lesion sizes significantly. The comparison of pixel numbers was performed using the reduced-count filtered images that contained only those lesions that were detected in the total-count images. Results: Based on visual assessment, observers agreed that image contrast and quality were better in the BS-AI-filtered images, increasing their diagnostic confidence. Similarities in lesion numbers and sites detected by the BS-AI annotator compared with filtered total-count images were 89%, 83%, and 75% for images degraded to 75%, 50%, and 25% of the counts, respectively. No significant difference was found in the number of annotated pixels between filtered images with different counts (p > 0.05). Conclusions: Our findings indicate that the BS-AI noise-reduction filter enhances image quality and contrast without loss of vital information. The implementation of this filter in routine diagnostic procedures reliably improves diagnostic confidence in low-count images and enables a reduction of the administered dose or acquisition time by at least 50% relative to the original.
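Binomial sampling of a count image is a standard way to emulate reduced-count acquisitions: each detected count is kept independently with a fixed probability. A minimal sketch (with a synthetic count map, not the study data) is shown below.

```python
import numpy as np

def binomial_thin(counts, fraction, seed=None):
    """Simulate a reduced-count acquisition by binomial sampling of an
    integer-valued count image; `fraction` is the retained count probability."""
    rng = np.random.default_rng(seed)
    return rng.binomial(counts.astype(np.int64), fraction)

# Degrade a toy planar count map to 75%, 50% and 25% of the original counts
full = np.random.default_rng(1).poisson(lam=20.0, size=(256, 256))
low = {f: binomial_thin(full, f, seed=2) for f in (0.75, 0.50, 0.25)}
for f, img in low.items():
    print(f, img.sum() / full.sum())  # retained count fraction is close to f
```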
Affiliation(s)
- Csaba Csikos
- Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Gyula Petrányi Doctoral School of Clinical Immunology and Allergology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Sándor Barna
- Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Scanomed Ltd., H-4032 Debrecen, Hungary
- Péter Czina
- Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Iván Gábor Nagy
- Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Ildikó Garai
- Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Gyula Petrányi Doctoral School of Clinical Immunology and Allergology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Scanomed Ltd., H-4032 Debrecen, Hungary
7. Seyyedi N, Ghafari A, Seyyedi N, Sheikhzadeh P. Deep learning-based techniques for estimating high-quality full-dose positron emission tomography images from low-dose scans: a systematic review. BMC Med Imaging 2024; 24:238. [PMID: 39261796] [PMCID: PMC11391655] [DOI: 10.1186/s12880-024-01417-y]
Abstract
This systematic review aimed to evaluate the potential of deep learning algorithms for converting low-dose positron emission tomography (PET) images to full-dose PET images in different body regions. A total of 55 articles published between 2017 and 2023, identified by searching the PubMed, Web of Science, Scopus, and IEEE databases, were included in this review. The included studies utilized various deep learning models, such as generative adversarial networks and U-Net, to synthesize high-quality PET images. The studies involved different datasets, image preprocessing techniques, input data types, and loss functions. The evaluation of the generated PET images was conducted using both quantitative and qualitative methods, including physician evaluations and various denoising techniques. The findings of this review suggest that deep learning algorithms have promising potential for generating high-quality PET images from low-dose PET images, which can be useful in clinical practice.
Affiliation(s)
- Negisa Seyyedi
- Nursing and Midwifery Care Research Center, Health Management Research Institute, Iran University of Medical Sciences, Tehran, Iran
- Ali Ghafari
- Research Center for Evidence-Based Medicine, Iranian EBM Centre: A JBI Centre of Excellence, Tabriz University of Medical Sciences, Tabriz, Iran
- Navisa Seyyedi
- Department of Health Information Management and Medical Informatics, School of Allied Medical Science, Tehran University of Medical Sciences, Tehran, Iran
- Peyman Sheikhzadeh
- Medical Physics and Biomedical Engineering Department, Medical Faculty, Tehran University of Medical Sciences, Tehran, Iran.
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran.
8. Singh SB, Sarrami AH, Gatidis S, Varniab ZS, Chaudhari A, Daldrup-Link HE. Applications of Artificial Intelligence for Pediatric Cancer Imaging. AJR Am J Roentgenol 2024; 223:e2431076. [PMID: 38809123] [PMCID: PMC11874589] [DOI: 10.2214/ajr.24.31076]
Abstract
Artificial intelligence (AI) is transforming the medical imaging of adult patients. However, its utilization in pediatric oncology imaging remains constrained, in part due to the inherent scarcity of data associated with childhood cancers. Pediatric cancers are rare, and imaging technologies are evolving rapidly, leading to insufficient data of a particular type to effectively train these algorithms. The small market size of pediatric patients compared with adult patients could also contribute to this challenge, as market size is a driver of commercialization. This review provides an overview of the current state of AI applications for pediatric cancer imaging, including applications for medical image acquisition, processing, reconstruction, segmentation, diagnosis, staging, and treatment response monitoring. Although current developments are promising, impediments due to the diverse anatomies of growing children and nonstandardized imaging protocols have led to limited clinical translation thus far. Opportunities include leveraging reconstruction algorithms to achieve accelerated low-dose imaging and automating the generation of metric-based staging and treatment monitoring scores. Transfer learning of adult-based AI models to pediatric cancers, multiinstitutional data sharing, and ethical data privacy practices for pediatric patients with rare cancers will be keys to unlocking the full potential of AI for clinical translation and improving outcomes for these young patients.
Affiliation(s)
- Shashi B. Singh
- Department of Radiology, Division of Pediatric Radiology, Stanford University School of Medicine, 1201 Welch Rd, Stanford, CA 94305
- Amir H. Sarrami
- Department of Radiology, Division of Pediatric Radiology, Stanford University School of Medicine, 1201 Welch Rd, Stanford, CA 94305
- Sergios Gatidis
- Department of Radiology, Division of Pediatric Radiology, Stanford University School of Medicine, 1201 Welch Rd, Stanford, CA 94305
- Zahra S. Varniab
- Department of Radiology, Division of Pediatric Radiology, Stanford University School of Medicine, 1201 Welch Rd, Stanford, CA 94305
- Akshay Chaudhari
- Department of Radiology, Integrative Biomedical Imaging Informatics (IBIIS), Stanford University School of Medicine, Stanford University, Stanford, CA
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford University, Stanford, CA
- Heike E. Daldrup-Link
- Department of Radiology, Division of Pediatric Radiology, Stanford University School of Medicine, 1201 Welch Rd, Stanford, CA 94305
- Department of Pediatrics, Pediatric Hematology-Oncology, Lucile Packard Children’s Hospital, Stanford University, Stanford, CA
9. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. IEEE Trans Radiat Plasma Med Sci 2024; 8:333-347. [PMID: 39429805] [PMCID: PMC11486494] [DOI: 10.1109/trpms.2023.3349194]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
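The statement that photon-counting noise is amplified at low dose follows from Poisson statistics: relative noise scales as one over the square root of the expected counts. The short sketch below illustrates this on a synthetic uniform activity map; it is a didactic example, not taken from the review.

```python
import numpy as np

# Relative (Poisson) noise grows as counts drop: std/mean ~ 1/sqrt(expected counts).
rng = np.random.default_rng(0)
true_activity = np.full((128, 128), 50.0)       # expected counts per pixel at full dose
for dose_fraction in (1.0, 0.25, 0.05):
    lam = true_activity * dose_fraction          # expected counts at reduced dose
    noisy = rng.poisson(lam)
    rel_noise = noisy.std() / noisy.mean()
    print(f"dose {dose_fraction:>4}: relative noise ~ {rel_noise:.2f} "
          f"(theory {1 / np.sqrt(lam.mean()):.2f})")
```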
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
10. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
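As background for the iterative reconstruction methods this review builds on, here is a toy MLEM implementation; the dense system matrix and tiny 1D example are didactic simplifications under assumed dimensions, not a realistic PET geometry.

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """Basic MLEM reconstruction: A is the (n_bins, n_voxels) system matrix,
    y the measured sinogram counts. Returns the estimated activity image."""
    x = np.ones(A.shape[1])                      # uniform, non-negative initial image
    sens = A.T @ np.ones(A.shape[0])             # sensitivity image (back-projection of ones)
    for _ in range(n_iters):
        proj = A @ x + eps                       # forward projection of current estimate
        x *= (A.T @ (y / proj)) / (sens + eps)   # multiplicative EM update
    return x

# Toy 1D problem: 3 voxels, 4 detector bins
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0],
              [0.3, 0.3, 0.3]])
true_x = np.array([2.0, 0.5, 1.5])
y = np.random.default_rng(0).poisson(100 * (A @ true_x))   # noisy counts
print(mlem(A, y) / 100)  # roughly recovers true_x
```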
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan.
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan.
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan.
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
11. Sartoretti T, Skawran S, Gennari AG, Maurer A, Euler A, Treyer V, Sartoretti E, Waelti S, Schwyzer M, von Schulthess GK, Burger IA, Huellner MW, Messerli M. Fully automated computational measurement of noise in positron emission tomography. Eur Radiol 2024; 34:1716-1723. [PMID: 37644149] [PMCID: PMC10873217] [DOI: 10.1007/s00330-023-10056-w]
Abstract
OBJECTIVES To introduce an automated computational algorithm that estimates the global noise level across the whole imaging volume of PET datasets. METHODS [18F]FDG PET images of 38 patients were reconstructed with simulated decreasing acquisition times (15-120 s) resulting in increasing noise levels, and with block sequential regularized expectation maximization with beta values of 450 and 600 (Q.Clear 450 and 600). One reader performed manual volume-of-interest (VOI) based noise measurements in liver and lung parenchyma and two readers graded subjective image quality as sufficient or insufficient. An automated computational noise measurement algorithm was developed and deployed on the whole imaging volume of each reconstruction, delivering a single value representing the global image noise (Global Noise Index, GNI). Manual noise measurement values and subjective image quality gradings were compared with the GNI. RESULTS Irrespective of the absolute noise values, there was no significant difference between the GNI and manual liver measurements in terms of the distribution of noise values (p = 0.84 for Q.Clear 450, and p = 0.51 for Q.Clear 600). The GNI showed a fair to moderately strong correlation with manual noise measurements in liver parenchyma (r = 0.6 in Q.Clear 450, r = 0.54 in Q.Clear 600, all p < 0.001), and a fair correlation with manual noise measurements in lung parenchyma (r = 0.52 in Q.Clear 450, r = 0.33 in Q.Clear 600, all p < 0.001). Classification performance of the GNI for subjective image quality was AUC 0.898 for Q.Clear 450 and 0.919 for Q.Clear 600. CONCLUSION An algorithm provides an accurate and meaningful estimation of the global noise level encountered in clinical PET imaging datasets. CLINICAL RELEVANCE STATEMENT An automated computational approach that measures the global noise level of PET imaging datasets may facilitate quality standardization and benchmarking of clinical PET imaging within and across institutions. KEY POINTS • Noise is an important quantitative marker that strongly impacts image quality of PET images. • An automated computational noise measurement algorithm provides an accurate and meaningful estimation of the global noise level encountered in clinical PET imaging datasets. • An automated computational approach that measures the global noise level of PET imaging datasets may facilitate quality standardization and benchmarking as well as protocol harmonization.
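The paper's exact GNI algorithm is not reproduced here; the sketch below only illustrates one plausible way to collapse voxel-wise noise into a single global number (median local coefficient of variation over a body mask) and should be read as an assumption, not the published method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def global_noise_index(volume, body_mask, patch=5):
    """Illustrative global noise estimate: median local coefficient of variation
    (local std / local mean) over all voxels in `body_mask`. Purely a sketch,
    NOT the published GNI algorithm."""
    mean = uniform_filter(volume, size=patch)
    mean_sq = uniform_filter(volume * volume, size=patch)
    var = np.clip(mean_sq - mean * mean, 0.0, None)
    cov = np.sqrt(var) / np.clip(mean, 1e-6, None)
    return float(np.median(cov[body_mask]))

# Toy usage: a noisy uniform "body" inside a zero background
rng = np.random.default_rng(0)
vol = rng.normal(loc=10.0, scale=1.5, size=(64, 64, 64)).astype(np.float32)
mask = np.zeros(vol.shape, dtype=bool)
mask[16:48, 16:48, 16:48] = True
print(global_noise_index(vol, mask))  # roughly 1.5 / 10 = 0.15 for this simulation
```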
Affiliation(s)
- Thomas Sartoretti
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- Stephan Skawran
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Antonio G Gennari
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Alexander Maurer
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- André Euler
- University of Zurich, Zurich, Switzerland
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- Valerie Treyer
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Elisabeth Sartoretti
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Stephan Waelti
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Department of Radiology and Nuclear Medicine, Children's Hospital of Eastern Switzerland, St. Gallen, Switzerland
- Moritz Schwyzer
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Health Sciences and Technology, Institute of Food, Nutrition and Health, ETH Zurich, Zurich, Switzerland
- Gustav K von Schulthess
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Irene A Burger
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Department of Nuclear Medicine, Kantonsspital Baden, Baden, Switzerland
- Martin W Huellner
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Michael Messerli
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland.
- University of Zurich, Zurich, Switzerland.
12. Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145] [DOI: 10.1016/j.media.2023.103046]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia.
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
13. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. arXiv 2024: arXiv:2401.00232v2. [PMID: 38313194] [PMCID: PMC10836084]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
14. Rudroff T. Artificial Intelligence's Transformative Role in Illuminating Brain Function in Long COVID Patients Using PET/FDG. Brain Sci 2024; 14:73. [PMID: 38248288] [PMCID: PMC10813353] [DOI: 10.3390/brainsci14010073]
Abstract
Cutting-edge brain imaging techniques, particularly positron emission tomography with Fluorodeoxyglucose (PET/FDG), are being used in conjunction with Artificial Intelligence (AI) to shed light on the neurological symptoms associated with Long COVID. AI, particularly deep learning algorithms such as convolutional neural networks (CNN) and generative adversarial networks (GAN), plays a transformative role in analyzing PET scans, identifying subtle metabolic changes, and offering a more comprehensive understanding of Long COVID's impact on the brain. It aids in early detection of abnormal brain metabolism patterns, enabling personalized treatment plans. Moreover, AI assists in predicting the progression of neurological symptoms, refining patient care, and accelerating Long COVID research. It can uncover new insights, identify biomarkers, and streamline drug discovery. Additionally, the application of AI extends to non-invasive brain stimulation techniques, such as transcranial direct current stimulation (tDCS), which have shown promise in alleviating Long COVID symptoms. AI can optimize treatment protocols by analyzing neuroimaging data, predicting individual responses, and automating adjustments in real time. While the potential benefits are vast, ethical considerations and data privacy must be rigorously addressed. The synergy of AI and PET scans in Long COVID research offers hope in understanding and mitigating the complexities of this condition.
Affiliation(s)
- Thorsten Rudroff
- Department of Health and Human Physiology, University of Iowa, Iowa City, IA 52242, USA; Tel.: +1-(319)-467-0363; Fax: +1-(319)-355-6669
- Department of Neurology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
15. Huang Z, Li W, Wu Y, Guo N, Yang L, Zhang N, Pang Z, Yang Y, Zhou Y, Shang Y, Zheng H, Liang D, Wang M, Hu Z. Short-axis PET image quality improvement based on a uEXPLORER total-body PET system through deep learning. Eur J Nucl Med Mol Imaging 2023; 51:27-39. [PMID: 37672046] [DOI: 10.1007/s00259-023-06422-x]
Abstract
PURPOSE The axial field of view (AFOV) of a positron emission tomography (PET) scanner greatly affects the quality of PET images. Although a total-body PET scanner (uEXPLORER) with a large AFOV is more sensitive, it is more expensive and difficult to deploy widely. Therefore, we attempt to utilize high-quality images generated by uEXPLORER to optimize the quality of images from short-axis PET scanners through deep learning technology while controlling costs. METHODS The experiments were conducted using PET images of three anatomical locations (brain, lung, and abdomen) from 335 patients. To simulate PET images from scanners with different AFOVs, two reconstruction protocols were used to obtain PET image pairs (each patient was scanned once). For low-quality PET (LQ-PET) images with a 320-mm AFOV, we applied a 300-mm FOV for brain reconstruction and a 500-mm FOV for lung and abdomen reconstruction. For high-quality PET (HQ-PET) images, we applied a 1940-mm AFOV during the reconstruction process. A 3D U-Net was utilized to learn the mapping relationship between LQ-PET and HQ-PET images. In addition, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were employed to evaluate the model performance. Furthermore, two nuclear medicine doctors evaluated the image quality based on clinical readings. RESULTS The generated PET images of the brain, lung, and abdomen were quantitatively and qualitatively comparable with the HQ-PET images. In particular, our method achieved PSNR values of 35.41 ± 5.45 dB (p < 0.05), 33.77 ± 6.18 dB (p < 0.05), and 38.58 ± 7.28 dB (p < 0.05) for the three beds. The overall mean SSIM was greater than 0.94 for all patients who underwent testing. Moreover, the total subjective quality levels of the generated PET images for the three beds were 3.74 ± 0.74, 3.69 ± 0.81, and 3.42 ± 0.99, as rated by two experienced nuclear medicine experts on a scale from 1 (lowest) to 5 (highest). Additionally, we evaluated the distribution of quantitative standardized uptake values (SUVs) in the regions of interest (ROIs). Both the SUV distribution and the peaks of the profile show that our results are consistent with the HQ-PET images, demonstrating the superiority of our approach. CONCLUSION The findings demonstrate the potential of the proposed technique for improving the image quality of a PET scanner with a 320 mm or even shorter AFOV. Furthermore, this study explored the potential of utilizing uEXPLORER to achieve improved short-axis PET image quality at a limited economic cost, and related computer-aided diagnosis systems can benefit both patients and radiologists.
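A 3D U-Net-style mapping from low-quality to high-quality volumes, of the kind used in this study, can be sketched in a few dozen lines of PyTorch; the architecture below (depth, channel counts, residual output) is a generic minimal example, not the authors' network.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with ReLU, as in a basic U-Net stage
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """Minimal 3D U-Net-style network mapping a low-quality PET volume to a
    higher-quality one (single-channel in, single-channel out)."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv3d(base, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full-resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return x + self.out(d1)              # residual: predict the HQ - LQ difference

# Example: one training step on a random 64^3 patch pair (LQ -> HQ placeholders)
model = TinyUNet3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
lq = torch.rand(1, 1, 64, 64, 64)
hq = torch.rand(1, 1, 64, 64, 64)
optimizer.zero_grad()
loss = nn.functional.l1_loss(model(lq), hq)
loss.backward()
optimizer.step()
```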
Affiliation(s)
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yaping Wu
- Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Nannan Guo
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- School of Mathematics and Statistics, Henan University, Kaifeng, 475004, China
- Lin Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- School of Mathematics and Statistics, Henan University, Kaifeng, 475004, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhifeng Pang
- School of Mathematics and Statistics, Henan University, Kaifeng, 475004, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yun Zhou
- Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Yue Shang
- Performance Strategy & Analytics, UCLA Health, Los Angeles, CA, 90001, USA
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Meiyun Wang
- School of Mathematics and Statistics, Henan University, Kaifeng, 475004, China.
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
16. Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. [PMID: 37802058] [DOI: 10.1055/a-2157-6670]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges associated with it and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
- Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
- Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
17. Tsang B, Gupta A, Takahashi MS, Baffi H, Ola T, Doria AS. Applications of artificial intelligence in magnetic resonance imaging of primary pediatric cancers: a scoping review and CLAIM score assessment. Jpn J Radiol 2023; 41:1127-1147. [PMID: 37395982] [DOI: 10.1007/s11604-023-01437-8]
Abstract
PURPOSES To review the uses of AI for magnetic resonance (MR) imaging assessment of primary pediatric cancer and identify common literature topics and knowledge gaps. To assess the adherence of the existing literature to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines. MATERIALS AND METHODS A scoping literature search using the MEDLINE, EMBASE and Cochrane databases was performed, including studies of > 10 subjects with a mean age of < 21 years. Relevant data were summarized into three categories based on AI application: detection, characterization, and treatment and monitoring. Readers independently scored each study using CLAIM guidelines, and inter-rater reproducibility was assessed using intraclass correlation coefficients. RESULTS Twenty-one studies were included. The most common AI application for pediatric cancer MR imaging was pediatric tumor diagnosis and detection (13/21 [62%] studies). The most commonly studied tumors were posterior fossa tumors (14 [67%] studies). Knowledge gaps included a lack of research in AI-driven tumor staging (0/21 [0%] studies), imaging genomics (1/21 [5%] studies), and tumor segmentation (2/21 [10%] studies). Adherence to CLAIM guidelines was moderate in primary studies, with an average (range) of 55% (34%-73%) of CLAIM items reported. Adherence has improved over time based on publication year. CONCLUSION The literature surrounding AI applications of MR imaging in pediatric cancers is limited. The existing literature shows moderate adherence to CLAIM guidelines, suggesting that better adherence is required for future studies.
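For the inter-rater reproducibility analysis mentioned above, a single-rater, absolute-agreement intraclass correlation coefficient, ICC(2,1), can be computed directly from the two-way ANOVA mean squares; the sketch below uses made-up reader scores purely for illustration, not data from this review.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects, k_raters) array, e.g., CLAIM scores from
    independent readers (standard Shrout & Fleiss formulation)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: hypothetical CLAIM percentage scores from two readers for five studies
scores = np.array([[55, 60], [34, 38], [73, 70], [48, 52], [61, 59]])
print(round(icc_2_1(scores), 3))
```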
Affiliation(s)
- Brian Tsang
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
- Aaryan Gupta
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
- Marcelo Straus Takahashi
- Instituto de Radiologia do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (InRad/HC-FMUSP), São Paulo, SP, Brazil
- Instituto da Criança do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (ICr/HC-FMUSP), São Paulo, SP, Brazil
- DasaInova, Diagnósticos da América SA (Dasa), São Paulo, SP, Brazil
- Tolulope Ola
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
- Andrea S Doria
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada.
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada.
18. Spadafora M, Sannino P, Mansi L, Mainolfi C, Capasso R, Di Giorgio E, Fiordoro S, Imbimbo S, Masone F, Evangelista L. Algorithm for Reducing Overall Biological Detriment Caused by PET/CT: an Age-Based Study. Nucl Med Mol Imaging 2023; 57:137-144. [PMID: 37181801] [PMCID: PMC10172419] [DOI: 10.1007/s13139-023-00788-4]
Abstract
Purpose The aim of this study was to use a simple algorithm based on patient age to reduce the overall biological detriment associated with PET/CT. Materials and Methods A total of 421 consecutive patients (mean age 64 ± 14 years) undergoing PET for various clinical indications were enrolled. For each scan, the effective dose (ED, in mSv) and the additional cancer risk (ACR) were computed both in a reference condition (REF) and after applying an original algorithm (ALGO). ALGO modified the mean FDG dose and the PET scan time parameters: a lower dose and a longer scan time were applied in younger patients, while a higher dose and a shorter scan time were applied in older patients. Patients were also classified by age bracket (18-29, 30-60, and 61-90 years). Results The ED was 4.57 ± 0.92 mSv in the REF condition. The ACR was 0.020 ± 0.016 in REF and 0.0187 ± 0.013 in ALGO. The ACR was significantly reduced from REF to ALGO in both males and females, with a more evident reduction in females (all p < 0.0001). Finally, the ACR was significantly reduced from REF to ALGO in all three age brackets (all p < 0.0001). Conclusion Implementation of ALGO protocols in PET can reduce the overall ACR, mainly in young and female patients.
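The age-dependent acquisition rule and the effective-dose bookkeeping described above can be sketched as follows; the dose and scan-time values are placeholders rather than the study's actual ALGO parameters, and the 0.019 mSv/MBq coefficient is the commonly quoted adult value for 18F-FDG, used here only as an assumption.

```python
def algo_acquisition_parameters(age_years):
    """Hypothetical age-bracket rule in the spirit of ALGO: younger patients get a
    lower injected dose and a longer scan time per bed position, older patients the
    opposite. The numbers below are placeholders, NOT the values used in the study."""
    if age_years < 30:
        return 2.0, 120   # MBq/kg, seconds per bed position
    elif age_years <= 60:
        return 2.5, 90
    else:
        return 3.0, 75

def effective_dose_msv(injected_mbq, dose_coeff_msv_per_mbq=0.019):
    """PET component of the effective dose; the default coefficient is the commonly
    quoted adult value for 18F-FDG (assumption for this sketch)."""
    return injected_mbq * dose_coeff_msv_per_mbq

dose_per_kg, scan_s = algo_acquisition_parameters(age_years=25)
print(effective_dose_msv(injected_mbq=dose_per_kg * 70), scan_s)  # 70 kg patient
```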
Affiliation(s)
- Luigi Mansi
- CIRPS, Interuniversity Research Center for Sustainability, Rome, Italy
- IOS–Medicina Futura, Acerra, Naples, Italy
- Ciro Mainolfi
- Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
- Laura Evangelista
- Nuclear Medicine Unit, Department of Medicine (DIMED), University of Padua, Via Giustiniani 2, 35128 Padua, Italy
| |
Collapse
|
19
|
Wang YR(J, Qu L, Sheybani ND, Luo X, Wang J, Hawk KE, Theruvath AJ, Gatidis S, Xiao X, Pribnow A, Rubin D, Daldrup-Link HE. AI Transformers for Radiation Dose Reduction in Serial Whole-Body PET Scans. Radiol Artif Intell 2023; 5:e220246. [PMID: 37293349 PMCID: PMC10245181 DOI: 10.1148/ryai.220246] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 03/30/2023] [Accepted: 04/12/2023] [Indexed: 06/10/2023]
Abstract
Purpose To develop a deep learning approach that enables ultra-low-dose, 1% of the standard clinical dosage (3 MBq/kg), ultrafast whole-body PET reconstruction in cancer imaging. Materials and Methods In this Health Insurance Portability and Accountability Act-compliant study, serial fluorine 18-labeled fluorodeoxyglucose PET/MRI scans of pediatric patients with lymphoma were retrospectively collected from two cross-continental medical centers between July 2015 and March 2020. Global similarity between baseline and follow-up scans was used to develop Masked-LMCTrans, a longitudinal multimodality coattentional convolutional neural network (CNN) transformer that provides interaction and joint reasoning between serial PET/MRI scans from the same patient. Image quality of the reconstructed ultra-low-dose PET was evaluated in comparison with a simulated standard 1% PET image. The performance of Masked-LMCTrans was compared with that of CNNs with pure convolution operations (classic U-Net family), and the effect of different CNN encoders on feature representation was assessed. Statistical differences in the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and visual information fidelity (VIF) were assessed with the Wilcoxon signed rank test. Results The study included 21 patients (mean age, 15 years ± 7 [SD]; 12 female) in the primary cohort and 10 patients (mean age, 13 years ± 4; six female) in the external test cohort. Masked-LMCTrans-reconstructed follow-up PET images demonstrated significantly less noise and more detailed structure compared with simulated 1% extremely ultra-low-dose PET images. SSIM, PSNR, and VIF were significantly higher for Masked-LMCTrans-reconstructed PET (P < .001), with improvements of 15.8%, 23.4%, and 186%, respectively. Conclusion Masked-LMCTrans achieved high-quality reconstruction of 1% low-dose whole-body PET images. Keywords: Pediatrics, PET, Convolutional Neural Network (CNN), Dose Reduction. Supplemental material is available for this article. © RSNA, 2023.
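As a concrete illustration of this kind of paired image-quality comparison, the sketch below computes per-scan SSIM and PSNR with scikit-image and applies a Wilcoxon signed-rank test with SciPy. The arrays are synthetic placeholders standing in for reconstructed and reference volumes; it is not the evaluation code of the study.

```python
# Sketch of paired image-quality comparison (SSIM, PSNR) with a
# Wilcoxon signed-rank test; inputs are placeholder arrays, not study data.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
reference = rng.random((8, 128, 128))                 # stand-in for full-dose PET slices
low_dose = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)
restored = np.clip(reference + 0.02 * rng.standard_normal(reference.shape), 0, 1)

def scores(pred, ref):
    ssim = [structural_similarity(p, r, data_range=1.0) for p, r in zip(pred, ref)]
    psnr = [peak_signal_noise_ratio(r, p, data_range=1.0) for p, r in zip(pred, ref)]
    return np.array(ssim), np.array(psnr)

ssim_low, psnr_low = scores(low_dose, reference)
ssim_res, psnr_res = scores(restored, reference)

# Paired, non-parametric comparison across scans
print("SSIM:", wilcoxon(ssim_res, ssim_low))
print("PSNR:", wilcoxon(psnr_res, psnr_low))
```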
Collapse
|
20
|
Wang YRJ, Wang P, Adams LC, Sheybani ND, Qu L, Sarrami AH, Theruvath AJ, Gatidis S, Ho T, Zhou Q, Pribnow A, Thakor AS, Rubin D, Daldrup-Link HE. Low-count whole-body PET/MRI restoration: an evaluation of dose reduction spectrum and five state-of-the-art artificial intelligence models. Eur J Nucl Med Mol Imaging 2023; 50:1337-1350. [PMID: 36633614 PMCID: PMC10387227 DOI: 10.1007/s00259-022-06097-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 12/24/2022] [Indexed: 01/13/2023]
Abstract
PURPOSE To provide a holistic and complete comparison of the five most advanced AI models in the augmentation of low-dose 18F-FDG PET data over the entire dose reduction spectrum. METHODS In this multicenter study, five AI models were investigated for restoring low-count whole-body PET/MRI, covering convolutional benchmarks - U-Net, enhanced deep super-resolution network (EDSR), generative adversarial network (GAN) - and the most cutting-edge image reconstruction transformer models in computer vision to date - Swin transformer image restoration network (SwinIR) and EDSR-ViT (vision transformer). The models were evaluated against six groups of count levels representing the simulated 75%, 50%, 25%, 12.5%, 6.25%, and 1% (extremely ultra-low-count) of the clinical standard 3 MBq/kg 18F-FDG dose. The comparisons were performed on two independent cohorts - (1) a primary cohort from Stanford University and (2) a cross-continental external validation cohort from Tübingen University - to ensure that the findings are generalizable. A total of 476 original count and simulated low-count whole-body PET/MRI scans were incorporated into this analysis. RESULTS For low-count PET restoration on the primary cohort, the mean structural similarity index (SSIM) scores for dose 6.25% were 0.898 (95% CI, 0.887-0.910) for EDSR, 0.893 (0.881-0.905) for EDSR-ViT, 0.873 (0.859-0.887) for GAN, 0.885 (0.873-0.898) for U-Net, and 0.910 (0.900-0.920) for SwinIR. Subsequently, the performances of SwinIR and U-Net were also evaluated separately at each simulated radiotracer dose level. Using the primary Stanford cohort, the mean diagnostic image quality (DIQ; 5-point Likert scale) scores of SwinIR restoration were 5 (SD, 0) for dose 75%, 4.50 (0.535) for dose 50%, 3.75 (0.463) for dose 25%, 3.25 (0.463) for dose 12.5%, 4 (0.926) for dose 6.25%, and 2.5 (0.534) for dose 1%. CONCLUSION Compared to low-count PET images, with near-to or nondiagnostic images at higher dose reduction levels (up to 6.25%), both SwinIR and U-Net significantly improve the diagnostic quality of PET images. A radiotracer dose reduction to 1% of the current clinical standard radiotracer dose is out of scope for current AI techniques.
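The SSIM results above are reported as means with 95% confidence intervals; the short sketch below shows one standard way of computing such an interval from per-scan SSIM scores using a t-distribution in SciPy. The scores in the example are placeholders, not the published data.

```python
# Sketch: mean SSIM with a 95% confidence interval over per-scan scores.
# The scores below are placeholders, not the published values.
import numpy as np
from scipy import stats

ssim_scores = np.array([0.91, 0.89, 0.92, 0.90, 0.88, 0.93, 0.90, 0.91])

mean = ssim_scores.mean()
sem = stats.sem(ssim_scores)                       # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(ssim_scores) - 1,
                                   loc=mean, scale=sem)
print(f"mean SSIM {mean:.3f} (95% CI, {ci_low:.3f}-{ci_high:.3f})")
```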
Collapse
Affiliation(s)
- Yan-Ran Joyce Wang
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA.
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA.
| | - Pengcheng Wang
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
| | - Lisa Christine Adams
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Natasha Diba Sheybani
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
| | - Liangqiong Qu
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
| | - Amir Hossein Sarrami
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Ashok Joseph Theruvath
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Sergios Gatidis
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
| | - Tina Ho
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Quan Zhou
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Allison Pribnow
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
| | - Avnesh S Thakor
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
| | - Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
| | - Heike E Daldrup-Link
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA.
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA.
| |
Collapse
|
21
|
Fujioka T, Satoh Y, Imokawa T, Mori M, Yamaga E, Takahashi K, Kubota K, Onishi H, Tateishi U. Proposal to Improve the Image Quality of Short-Acquisition Time-Dedicated Breast Positron Emission Tomography Using the Pix2pix Generative Adversarial Network. Diagnostics (Basel) 2022; 12:diagnostics12123114. [PMID: 36553120 PMCID: PMC9777139 DOI: 10.3390/diagnostics12123114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 11/26/2022] [Accepted: 12/07/2022] [Indexed: 12/14/2022] Open
Abstract
This study aimed to evaluate the ability of the pix2pix generative adversarial network (GAN) to improve the image quality of low-count dedicated breast positron emission tomography (dbPET). Pairs of full- and low-count dbPET images were collected from 49 breasts. An image synthesis model was constructed using the pix2pix GAN for each acquisition time with training (3776 pairs from 16 breasts) and validation data (1652 pairs from 7 breasts). Test data included dbPET images synthesized by our model from 26 breasts with short acquisition times. Two breast radiologists visually compared the overall image quality of the original and synthesized images derived from the short-acquisition time data (scores of 1−5). Further quantitative evaluation was performed using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In the visual evaluation, both readers assigned an average score of >3 to all images. The quantitative evaluation revealed significantly higher SSIM (p < 0.01) and PSNR (p < 0.01) for 26 s synthetic images and higher PSNR for 52 s images (p < 0.01) than for the original images. Our model improved the quality of images synthesized from low-count dbPET data, with a greater effect on images with lower counts.
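Although the study's training code is not reproduced here, the standard pix2pix objective it builds on combines a conditional adversarial term with a weighted L1 reconstruction term. The PyTorch sketch below shows that objective under the commonly used weight λ = 100; the generator output and discriminator are placeholders, and the study's actual hyperparameters may differ.

```python
# Sketch of the standard pix2pix training objective: conditional GAN loss
# plus a weighted L1 term. The discriminator and images are placeholders.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()
lambda_l1 = 100.0  # weight commonly used for pix2pix; may differ in the study

def generator_loss(discriminator, low_count, synthetic, full_count):
    """cGAN loss on (input, fake) pairs + L1 between synthetic and full-count."""
    pred_fake = discriminator(torch.cat([low_count, synthetic], dim=1))
    adversarial = adv_loss(pred_fake, torch.ones_like(pred_fake))
    reconstruction = l1_loss(synthetic, full_count)
    return adversarial + lambda_l1 * reconstruction

def discriminator_loss(discriminator, low_count, synthetic, full_count):
    """Real pairs should score 1, fake pairs 0."""
    pred_real = discriminator(torch.cat([low_count, full_count], dim=1))
    pred_fake = discriminator(torch.cat([low_count, synthetic.detach()], dim=1))
    real = adv_loss(pred_real, torch.ones_like(pred_real))
    fake = adv_loss(pred_fake, torch.zeros_like(pred_fake))
    return 0.5 * (real + fake)
```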
Collapse
Affiliation(s)
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
| | - Yoko Satoh
- Yamanashi PET Imaging Clinic, Chuo City 409-3821, Japan
- Department of Radiology, University of Yamanashi, Chuo City 409-3898, Japan
- Correspondence:
| | - Tomoki Imokawa
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
| | - Mio Mori
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
| | - Emi Yamaga
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
| | - Kanae Takahashi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
| | - Kazunori Kubota
- Department of Radiology, Dokkyo Medical University Saitama Medical Center, Koshigaya 343-8555, Japan
| | - Hiroshi Onishi
- Department of Radiology, University of Yamanashi, Chuo City 409-3898, Japan
| | - Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
| |
Collapse
|
22
|
Sun H, Jiang Y, Yuan J, Wang H, Liang D, Fan W, Hu Z, Zhang N. High-quality PET image synthesis from ultra-low-dose PET/MRI using bi-task deep learning. Quant Imaging Med Surg 2022; 12:5326-5342. [PMID: 36465830 PMCID: PMC9703111 DOI: 10.21037/qims-22-116] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 08/04/2022] [Indexed: 01/25/2023]
Abstract
BACKGROUND Lowering the dose for positron emission tomography (PET) imaging reduces patients' radiation burden but degrades image quality by increasing noise and reducing imaging detail and quantitative accuracy. This paper introduces a method for synthesizing high-quality PET images from ultra-low-dose acquisitions, achieving both high image quality and a low radiation burden. METHODS We developed a two-task-based end-to-end generative adversarial network, named bi-c-GAN, that incorporated the advantages of PET and magnetic resonance imaging (MRI) modalities to synthesize high-quality PET images from an ultra-low-dose input. Moreover, a combined loss, including the mean absolute error, structural loss, and bias loss, was created to improve the trained model's performance. Real integrated PET/MRI data from axial head scans of 67 patients (161 slices each) were used for training and validation. Synthesized images were quantified by the peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), structural similarity (SSIM), and contrast-to-noise ratio (CNR). The improvement ratios of these four selected quantitative metrics were used to compare the images produced by bi-c-GAN with other methods. RESULTS In the four-fold cross-validation, the proposed bi-c-GAN outperformed the other three selected methods (U-net, c-GAN, and multiple input c-GAN). For 5% low-dose PET, bi-c-GAN improved image quality over the other three methods by at least 6.7% in PSNR, 0.6% in SSIM, 1.3% in NMSE, and 8% in CNR. In the hold-out validation, bi-c-GAN improved the image quality compared to U-net and c-GAN in both 2.5% and 10% low-dose PET. For example, the PSNR improvement with bi-c-GAN was at least 4.46% in the 2.5% low-dose PET and up to 14.88% in the 10% low-dose PET. Visual examples also showed higher-quality images generated by the proposed method, demonstrating the denoising and enhancement ability of bi-c-GAN. CONCLUSIONS By taking advantage of integrated PET/MR images and multitask deep learning (MDL), the proposed bi-c-GAN can efficiently improve the image quality of ultra-low-dose PET and reduce radiation exposure.
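Two of the reported metrics, NMSE and CNR, have simple common definitions; the sketch below implements them with NumPy under those common definitions. The ROI masks are placeholders, and the paper's exact formulations and region choices may differ.

```python
# Sketch of two of the reported metrics under common definitions
# (the paper's exact formulations may differ).
import numpy as np

def nmse(pred, ref):
    """Normalized mean square error: ||pred - ref||^2 / ||ref||^2."""
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def cnr(image, roi_mask, background_mask):
    """Contrast-to-noise ratio between an ROI and a background region."""
    roi = image[roi_mask]
    bg = image[background_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()
```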
Collapse
Affiliation(s)
- Hanyu Sun
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yongluo Jiang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai, China
| | - Haining Wang
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China;,United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
| | - Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China;,United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
| |
Collapse
|
23
|
Castillo-Flores S, Gonzalez MR, Bryce-Alberti M, de Souza F, Subhawong TK, Kuker R, Pretell-Mazzini J. PET-CT in the Evaluation of Neoadjuvant/Adjuvant Treatment Response of Soft-tissue Sarcomas: A Comprehensive Review of the Literature. JBJS Rev 2022; 10:01874474-202212000-00003. [PMID: 36639875 DOI: 10.2106/jbjs.rvw.22.00131] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
➢ In soft-tissue sarcomas (STSs), the standardized uptake value reduction rate on positron emission tomography-computed tomography (PET-CT) correlates well with histopathological response to neoadjuvant treatment and with survival. ➢ PET-CT has shown a better sensitivity to diagnose systemic involvement compared with magnetic resonance imaging and CT; therefore, it has an important role in detecting recurrent systemic disease. However, it is essential to delay PET-CT scanning in order to differentiate tumor recurrence from benign fluorodeoxyglucose uptake changes after surgical treatment and radiotherapy. ➢ PET-CT limitations, such as difficult differentiation between benign inflammatory and malignant processes, inefficient discrimination between benign soft-tissue tumors and STSs, and low sensitivity when evaluating small pulmonary metastases, must be given special consideration.
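The standardized uptake value reduction rate mentioned in the first point is typically defined as the relative decrease in SUVmax from the pre-treatment to the post-treatment scan; the small sketch below implements that common definition (individual studies may use a different exact formula).

```python
# Sketch: SUV reduction rate as the relative decrease in SUVmax from the
# pre-treatment to the post-treatment scan (one common definition).
def suv_reduction_rate(suvmax_pre: float, suvmax_post: float) -> float:
    """Return the percentage reduction of SUVmax after neoadjuvant treatment."""
    return 100.0 * (suvmax_pre - suvmax_post) / suvmax_pre

print(suv_reduction_rate(12.0, 4.5))  # e.g. 62.5% reduction
```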
Collapse
Affiliation(s)
- Samy Castillo-Flores
- Medical Student at Facultad de Medicina Alberto Hurtado, Universidad Peruana Cayetano Heredia, Lima, Peru
| | - Marcos R Gonzalez
- Medical Student at Facultad de Medicina Alberto Hurtado, Universidad Peruana Cayetano Heredia, Lima, Peru
| | - Mayte Bryce-Alberti
- Medical Student at Facultad de Medicina Alberto Hurtado, Universidad Peruana Cayetano Heredia, Lima, Peru
| | - Felipe de Souza
- Division of Musculoskeletal Radiology, Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida
| | - Ty K Subhawong
- Division of Musculoskeletal Radiology, Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida
| | - Russ Kuker
- Division of Musculoskeletal Radiology, Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida
| | - Juan Pretell-Mazzini
- Division of Orthopedic Oncology, Miami Cancer Institute, Baptist Health System South Florida, Plantation, Florida
| |
Collapse
|
24
|
Carreras J, Roncador G, Hamoudi R. Artificial Intelligence Predicted Overall Survival and Classified Mature B-Cell Neoplasms Based on Immuno-Oncology and Immune Checkpoint Panels. Cancers (Basel) 2022; 14:5318. [PMID: 36358737 PMCID: PMC9657332 DOI: 10.3390/cancers14215318] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 10/20/2022] [Accepted: 10/24/2022] [Indexed: 08/01/2023] Open
Abstract
Artificial intelligence (AI) can identify actionable oncology biomarkers. This research integrates our previous analyses of non-Hodgkin lymphoma. We used gene expression and immunohistochemical data, focusing on the immune checkpoint, and added a new analysis of macrophages, including 3D rendering. The AI comprised machine learning (C5, Bayesian network, C&R, CHAID, discriminant analysis, KNN, logistic regression, LSVM, Quest, random forest, random trees, SVM, tree-AS, and XGBoost linear and tree) and artificial neural networks (multilayer perceptron and radial basis function). The series included chronic lymphocytic leukemia, mantle cell lymphoma, follicular lymphoma, Burkitt, diffuse large B-cell lymphoma, marginal zone lymphoma, and multiple myeloma, as well as acute myeloid leukemia and pan-cancer series. AI classified lymphoma subtypes and predicted overall survival accurately. Oncogenes and tumor suppressor genes were highlighted (MYC, BCL2, and TP53), along with immune microenvironment markers of tumor-associated macrophages (M2-like TAMs), T-cells and regulatory T lymphocytes (Tregs) (CD68, CD163, MARCO, CSF1R, CSF1, PD-L1/CD274, SIRPA, CD85A/LILRB3, CD47, IL10, TNFRSF14/HVEM, TNFAIP8, IKAROS, STAT3, NFKB, MAPK, PD-1/PDCD1, BTLA, and FOXP3), apoptosis (BCL2, CASP3, CASP8, PARP, and pathway-related MDM2, E2F1, CDK6, MYB, and LMO2), and metabolism (ENO3, GGA3). In conclusion, AI with immuno-oncology markers is a powerful predictive tool. A review of the recent literature is also provided.
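As a minimal illustration of the kind of supervised learner listed above, the sketch below trains one of them (a random forest) on synthetic marker-expression features with scikit-learn and reports cross-validated accuracy. The data, feature count, and class labels are placeholders, not the study's dataset.

```python
# Minimal sketch of one of the listed learners (random forest) classifying
# lymphoma subtype from marker-expression features; data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 25))            # stand-in for immuno-oncology marker levels
y = rng.integers(0, 4, size=200)     # stand-in for four subtype labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```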
Collapse
Affiliation(s)
- Joaquim Carreras
- Department of Pathology, School of Medicine, Tokai University, 143 Shimokasuya, Isehara 259-1193, Kanagawa, Japan
| | - Giovanna Roncador
- Monoclonal Antibodies Unit, Spanish National Cancer Research Center (Centro Nacional de Investigaciones Oncologicas, CNIO), Melchor Fernandez Almagro 3, 28029 Madrid, Spain
| | - Rifat Hamoudi
- Department of Clinical Sciences, College of Medicine, University of Sharjah, Sharjah P.O. Box 27272, United Arab Emirates
- Division of Surgery and Interventional Science, University College London, Gower Street, London WC1E 6BT, UK
| |
Collapse
|
25
|
Hosch R, Weber M, Sraieb M, Flaschel N, Haubold J, Kim MS, Umutlu L, Kleesiek J, Herrmann K, Nensa F, Rischpler C, Koitka S, Seifert R, Kersting D. Artificial intelligence guided enhancement of digital PET: scans as fast as CT? Eur J Nucl Med Mol Imaging 2022; 49:4503-4515. [PMID: 35904589 PMCID: PMC9606065 DOI: 10.1007/s00259-022-05901-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 06/30/2022] [Indexed: 12/03/2022]
Abstract
Purpose Both digital positron emission tomography (PET) detector technologies and artificial intelligence-based image post-reconstruction methods make it possible to reduce the PET acquisition time while maintaining diagnostic quality. The aim of this study was to acquire ultra-low-count fluorodeoxyglucose (FDG) ExtremePET images on a digital PET/computed tomography (CT) scanner at an acquisition time comparable to a CT scan and to generate synthetic full-dose PET images using an artificial neural network. Methods This is a prospective, single-arm, single-center phase I/II imaging study. A total of 587 patients were included. For each patient, a standard and an ultra-low-count FDG PET/CT scan (whole-body acquisition time about 30 s) were acquired. A modified pix2pixHD deep-learning network was trained with 387 data sets used for training and 200 as the test cohort. Three models (PET-only and PET/CT with or without group convolution) were compared. Detectability and quantification were evaluated. Results The PET/CT input model with group convolution performed best regarding lesion signal recovery and was selected for detailed evaluation. Synthetic PET images were of high visual image quality; the mean absolute lesion SUVmax (maximum standardized uptake value) difference was 1.5. Patient-based sensitivity and specificity for lesion detection were 79% and 100%, respectively. Lesions that were not detected had lower tracer uptake and smaller volumes. In a matched-pair comparison, the patient-based (lesion-based) detection rate was 89% (78%) for PERCIST (PET response criteria in solid tumors)-measurable and 36% (22%) for non-PERCIST-measurable lesions. Conclusion Lesion detectability and lesion quantification were promising in the context of extremely fast acquisition times. Possible application scenarios might include re-staging of late-stage cancer patients, in whom assessment of total tumor burden can be of higher relevance than detailed evaluation of small and low-uptake lesions. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05901-x.
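The "PET/CT with group convolution" variant can be pictured with the generic PyTorch sketch below, where a grouped convolution (groups=2) keeps the PET and CT input channels in separate filter groups, in contrast to a standard convolution that mixes them. This is an illustration of grouped convolution in general, not the published pix2pixHD architecture.

```python
# Illustration of a grouped convolution over a two-channel PET/CT input:
# with groups=2, the PET and CT channels are filtered by separate kernel
# groups. This is a generic sketch, not the published pix2pixHD model.
import torch
import torch.nn as nn

pet_ct = torch.randn(1, 2, 256, 256)           # channel 0: PET, channel 1: CT

grouped = nn.Conv2d(in_channels=2, out_channels=32, kernel_size=3,
                    padding=1, groups=2)        # 16 filters per modality
standard = nn.Conv2d(in_channels=2, out_channels=32, kernel_size=3,
                     padding=1)                 # filters mix both modalities

print(grouped(pet_ct).shape, standard(pet_ct).shape)
```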
Collapse
Affiliation(s)
- René Hosch
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany. .,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany.
| | - Manuel Weber
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Miriam Sraieb
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Nils Flaschel
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Johannes Haubold
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Moon-Sung Kim
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Lale Umutlu
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Ken Herrmann
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Christoph Rischpler
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Robert Seifert
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany.,Department of Nuclear Medicine, University Hospital Münster, University of Münster, Albert-Schweitzer-Campus 1, 48149, Münster, Germany
| | - David Kersting
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| |
Collapse
|
26
|
Nakamoto Y, Kitajima K, Toriihara A, Nakajo M, Hirata K. Recent topics of the clinical utility of PET/MRI in oncology and neuroscience. Ann Nucl Med 2022; 36:798-803. [PMID: 35896912 DOI: 10.1007/s12149-022-01780-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Accepted: 07/22/2022] [Indexed: 11/29/2022]
Abstract
Since the inline positron emission tomography (PET)/magnetic resonance imaging (MRI) system appeared in clinical practice, more than a decade has passed. In this article, we have reviewed recently published articles about PET/MRI. In oncology, PET/MRI with fluorodeoxyglucose (FDG) has shown higher diagnostic performance for staging rectal and breast cancers. Assessment of possible metastatic bone lesions is considered an appropriate application of FDG PET/MRI. Beyond FDG, PET/MRI with prostate-specific membrane antigen (PSMA)-targeted tracers or fibroblast activation protein inhibitors has been reported. In particular, PSMA PET/MRI has been reported to be a promising tool for determining appropriate biopsy sites. Independent of the tracer, the clinical application of artificial intelligence (AI) to images obtained by PET/MRI is one of the current topics in this field, with reported usefulness for differentiating breast lesions and grading prostate cancer. In addition, AI has been reported to be helpful for noise reduction in image reconstruction, which is promising for reducing radiation exposure. Furthermore, PET/MRI has a clinical role in neuroscience, including localization of the epileptogenic zone. PET/MRI with new PET tracers could be useful for differentiation among neurological disorders. Clinical applications of integrated PET/MRI in various fields are expected to be reported in the future.
Collapse
Affiliation(s)
- Yuji Nakamoto
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoinkawahara-cho, Sakyo-Ku, Kyoto, 606-8507, Japan.
| | - Kazuhiro Kitajima
- Department of Radiology, Division of Nuclear Medicine and PET Center, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
| | - Akira Toriihara
- PET Imaging Center, Asahi General Hospital, 1326 I, Asahi, Chiba, 289-2511, Japan
| | - Masatoyo Nakajo
- Department of Radiology, Graduate School of Medical and Dental Sciences, Kagoshima University, 8-35-1 Sakuragaoka, Kagoshima, 890-8544, Japan
| | - Kenji Hirata
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
| |
Collapse
|
27
|
Ng CKC. Artificial Intelligence for Radiation Dose Optimization in Pediatric Radiology: A Systematic Review. CHILDREN 2022; 9:children9071044. [PMID: 35884028 PMCID: PMC9320231 DOI: 10.3390/children9071044] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 07/11/2022] [Accepted: 07/11/2022] [Indexed: 01/19/2023]
Abstract
Radiation dose optimization is particularly important in pediatric radiology, as children are more susceptible to potential harmful effects of ionizing radiation. However, only one narrative review about artificial intelligence (AI) for dose optimization in pediatric computed tomography (CT) has been published to date. The purpose of this systematic review is to answer the question “What are the AI techniques and architectures introduced in pediatric radiology for dose optimization, their specific application areas, and performances?” Literature search with use of electronic databases was conducted on 3 June 2022. Sixteen articles that met selection criteria were included. The included studies showed that the deep convolutional neural network (CNN) was the most common AI technique and architecture used for dose optimization in pediatric radiology. All but three included studies evaluated AI performance in dose optimization of abdomen, chest, head, neck, and pelvis CT; CT angiography; and dual-energy CT through deep learning image reconstruction. Most studies demonstrated that AI could reduce radiation dose by 36–70% without losing diagnostic information. Despite the dominance of commercially available AI models based on deep CNN with promising outcomes, homegrown models could provide comparable performances. Further exploration of the value of AI for dose optimization in pediatric radiology is necessary because existing studies have small sample sizes and narrow scopes (covering only three modalities, namely CT, positron emission tomography/magnetic resonance imaging, and mobile radiography, and not all examination types).
Collapse
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Tel.: +61-8-9266-7314; Fax: +61-8-9266-2377
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
| |
Collapse
|
28
|
Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, Lassmann M, Rischpler C, Shi K, Pruim J. Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 2022; 49:4452-4463. [PMID: 35809090 PMCID: PMC9606092 DOI: 10.1007/s00259-022-05891-w] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 06/25/2022] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI) will change the face of nuclear medicine and molecular imaging as it will in everyday life. In this review, we focus on the potential applications of AI in the field, both from a physical (radiomics, underlying statistics, image reconstruction and data analysis) and a clinical (neurology, cardiology, oncology) perspective. Challenges for transferability from research to clinical practice are discussed, as is the concept of explainable AI. Finally, we outline the areas where challenges must be addressed to introduce AI into nuclear medicine and molecular imaging in a reliable manner.
Collapse
Affiliation(s)
| | - Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC +), Maastricht, The Netherlands.,Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC +), Maastricht, The Netherlands
| | - Kim Beuschau Mauridsen
- Center of Functionally Integrative Neuroscience and MindLab, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.,Department of Nuclear Medicine, University of Bern, Bern, Switzerland
| | - Roland Hustinx
- GIGA-CRC in Vivo Imaging, University of Liège, GIGA, Avenue de l'Hôpital 11, 4000, Liege, Belgium
| | - Michael Lassmann
- Klinik Und Poliklinik Für Nuklearmedizin, Universitätsklinikum Würzburg, Würzburg, Germany
| | - Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
| | - Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland.,Department of Informatics, Technical University of Munich, Munich, Germany
| | - Jan Pruim
- Medical Imaging Center, Dept. of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
| |
Collapse
|
29
|
Abstract
Purpose To evaluate the clinical feasibility of high-resolution dedicated breast positron emission tomography (dbPET) with real low-dose 18F-2-fluorodeoxy-d-glucose (18F-FDG) by comparing images acquired with full-dose FDG. Materials and methods Nine women with no history of breast cancer who had previously undergone dbPET after injection of a clinical 18F-FDG dose (3 MBq/kg) were enrolled. They were injected with 50% of the clinical 18F-FDG dose and scanned with dbPET for 10 min for each breast 60 and 90 min after injection. To investigate the effect of the scan start time and acquisition time on image quality, list-mode data were divided into 1, 3, 5, and 7 min (and 10 min with 50% FDG injected) from the start of acquisition and reconstructed. The reconstructed images were visually and quantitatively compared for contrast between mammary gland and fat (contrast) and for coefficient of variation (CV) in the mammary gland. Results In the visual evaluation, images acquired at a 50% dose for 7 min showed gland-to-fat contrast comparable to, and smoothness even better than, images acquired at a 100% dose. No visual difference between the images with a 50% dose was found with scan start times 60 and 90 min after injection. Quantitative evaluation showed a slightly lower contrast in the image at 60 min after 50% dosing, with no difference between acquisition times. There was no difference in CV between conditions; however, smoothness decreased with shorter acquisition time in all conditions. Conclusions The quality of dbPET images with a 50% FDG dose was high enough for clinical application. Although the optimal scan start time for improved lesion-to-background mammary gland contrast remained unknown in this study, it will be clarified in future studies of breast cancer patients.
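For orientation, the sketch below computes the two quantitative measures under common definitions: a Michelson-style gland-to-fat contrast and the coefficient of variation within the mammary-gland ROI. The ROI masks are placeholders and the study's exact formulas may differ.

```python
# Sketch of the two quantitative measures under common definitions
# (the study's exact formulas may differ); masks are placeholders.
import numpy as np

def gland_fat_contrast(image, gland_mask, fat_mask):
    """Michelson-style contrast between mammary-gland and fat ROIs."""
    g, f = image[gland_mask].mean(), image[fat_mask].mean()
    return (g - f) / (g + f)

def coefficient_of_variation(image, gland_mask):
    """CV (noise surrogate) within the mammary-gland ROI."""
    roi = image[gland_mask]
    return roi.std() / roi.mean()
```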
Collapse
|
30
|
Artificial intelligence-based PET denoising could allow a two-fold reduction in [ 18F]FDG PET acquisition time in digital PET/CT. Eur J Nucl Med Mol Imaging 2022; 49:3750-3760. [PMID: 35593925 PMCID: PMC9399218 DOI: 10.1007/s00259-022-05800-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Accepted: 04/10/2022] [Indexed: 11/18/2022]
Abstract
Purpose We investigated whether artificial intelligence (AI)-based denoising halves PET acquisition time in digital PET/CT. Methods One hundred ninety-five patients referred for [18F]FDG PET/CT were prospectively included. Body PET acquisitions were performed in list mode. Original “PET90” (90 s/bed position) was compared to reconstructed ½-duration PET (45 s/bed position) with and without AI denoising (“PET45AI” and “PET45”). Denoising was performed by SubtlePET™ using deep convolutional neural networks. Visual global image quality (IQ) 3-point scores and lesion detectability were evaluated. Lesion maximal and peak standardized uptake values using lean body mass (SULmax and SULpeak), metabolic volumes (MV), and liver SULmean were measured, including both standard and EARL1 (European Association of Nuclear Medicine Research Ltd) compliant SUL. Lesion-to-liver SUL ratios (LLR) and liver coefficients of variation (CVliv) were calculated. Results PET45 showed mediocre IQ (scored poor in 8% and moderate in 68%) and a lesion concordance rate with PET90 of 88.7%. In PET45AI, IQ scores were similar to PET90 (P = 0.80), good in 92% and moderate in 8% for both. The lesion concordance rate between PET90 and PET45AI was 836/856 (97.7%), with 7 lesions (0.8%) only detected in PET90 and 13 (1.5%) exclusively in PET45AI. Lesion EARL1 SULpeak was not significantly different between the two PET reconstructions (P = 0.09). Lesion standard SULpeak, standard and EARL1 SULmax, LLR and CVliv were lower in PET45AI than in PET90 (P < 0.0001), while lesion MV and liver SULmean were higher (P < 0.0001). Good to excellent intraclass correlation coefficients (ICC) between PET90 and PET45AI were observed for lesion SUL and MV (ICC ≥ 0.97) and for liver SULmean (ICC ≥ 0.87). Conclusion AI allows [18F]FDG PET duration in digital PET/CT to be halved, while restoring degraded ½-duration PET image quality. Future multicentric studies, including other PET radiopharmaceuticals, are warranted. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05800-1.
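For readers unfamiliar with the SUL metrics above, SUL is the standardized uptake value normalized to lean body mass rather than total body weight. The sketch below shows that normalization and the lesion-to-liver SUL ratio (LLR); lean body mass is passed in directly because several LBM formulas are in use, and all numeric values are illustrative only.

```python
# Sketch: SUV normalized to lean body mass (SUL) and the lesion-to-liver
# SUL ratio (LLR). Lean body mass is taken as an input because several
# LBM formulas are in use; values below are illustrative only.
def sul(activity_conc_kbq_per_ml: float,
        injected_activity_mbq: float,
        lean_body_mass_kg: float) -> float:
    """SUL = tissue activity concentration / (injected activity / LBM)."""
    injected_kbq = injected_activity_mbq * 1000.0
    lbm_g = lean_body_mass_kg * 1000.0      # assumes ~1 g/mL tissue density
    return activity_conc_kbq_per_ml / (injected_kbq / lbm_g)

lesion_sulpeak = sul(18.0, 250.0, 55.0)
liver_sulmean = sul(6.0, 250.0, 55.0)
llr = lesion_sulpeak / liver_sulmean        # lesion-to-liver SUL ratio
print(f"LLR = {llr:.2f}")
```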
Collapse
|
31
|
Medical Radiation Exposure Reduction in PET via Super-Resolution Deep Learning Model. Diagnostics (Basel) 2022; 12:diagnostics12040872. [PMID: 35453920 PMCID: PMC9025130 DOI: 10.3390/diagnostics12040872] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 03/24/2022] [Accepted: 03/29/2022] [Indexed: 11/16/2022] Open
Abstract
In positron emission tomography (PET) imaging, image quality correlates with the injected [18F]-fluorodeoxyglucose (FDG) dose and acquisition time. If the super-resolution (SR) deep learning technique can improve the quality of short-acquisition PET images, it becomes possible to reduce the injected FDG dose. Therefore, the aim of this study was to clarify whether the SR deep learning technique could improve the image quality of the 50%-acquisition-time image to the level of that of the 100%-acquisition-time image. One hundred and eight adult patients were enrolled in this retrospective observational study. The supervised data were divided into nine subsets for nested cross-validation. The mean peak signal-to-noise ratio and structural similarity in the SR-PET image were 31.3 dB and 0.931, respectively. The mean opinion scores of the 50% PET image, SR-PET image, and 100% PET image were 3.41, 3.96, and 4.23 for the lung level, 3.31, 3.80, and 4.27 for the liver level, and 3.08, 3.67, and 3.94 for the bowel level, respectively. Thus, the SR-PET image was more similar to the 100% PET image and showed subjectively improved image quality compared with the 50% PET image. The use of the SR deep-learning technique can reduce the injected FDG dose and thus lower radiation exposure.
Collapse
|
32
|
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031 PMCID: PMC9250483 DOI: 10.1007/s00259-022-05746-4] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 02/25/2022] [Indexed: 12/21/2022]
Abstract
Image processing plays a crucial role in maximising the diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is presented first. We then review methods that integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed, and future research directions to address these challenges are presented.
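As a toy example of the post-processing family of methods reviewed here, the PyTorch sketch below defines a small residual CNN that predicts the noise in a low-dose PET slice and subtracts it. It is a generic illustration, not any specific published network.

```python
# Minimal residual CNN for PET post-processing denoising, as a generic
# example of the post-reconstruction methods reviewed (not a specific model).
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, low_dose):
        # The network predicts the noise; subtracting it yields the estimate.
        return low_dose - self.net(low_dose)

denoised = ResidualDenoiser()(torch.randn(1, 1, 128, 128))
print(denoised.shape)
```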
Collapse
Affiliation(s)
- Cameron Dennis Pain
- Monash Biomedical Imaging, Monash University, Melbourne, Australia.
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia.
| | - Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
| | - Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Department of Data Science and AI, Monash University, Melbourne, Australia
| |
Collapse
|
33
|
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. ROFO-FORTSCHR RONTG 2022; 194:605-612. [PMID: 35211929 DOI: 10.1055/a-1718-4128] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and - for PET imaging - reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we will describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges associated with it and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET. CITATION FORMAT · Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Fortschr Röntgenstr 2022; DOI: 10.1055/a-1718-4128.
Collapse
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| |
Collapse
|
34
|
Daldrup-Link HE, Theruvath AJ, Baratto L, Hawk KE. One-stop local and whole-body staging of children with cancer. Pediatr Radiol 2022; 52:391-400. [PMID: 33929564 PMCID: PMC10874282 DOI: 10.1007/s00247-021-05076-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 02/04/2021] [Accepted: 03/30/2021] [Indexed: 12/19/2022]
Abstract
Accurate staging and re-staging of cancer in children is crucial for patient management. Currently, children with a newly diagnosed cancer must undergo a series of imaging tests, which are stressful, time-consuming, partially redundant, expensive, and can require repetitive anesthesia. New approaches for pediatric cancer staging can evaluate the primary tumor and metastases in a single session. However, traditional one-stop imaging tests, such as CT and positron emission tomography (PET)/CT, are associated with considerable radiation exposure. This is particularly concerning for children because they are more sensitive to ionizing radiation than adults and they live long enough to experience secondary cancers later in life. In this review article we discuss child-tailored imaging tests for tumor detection and therapy response assessment - tests that can be obtained with substantially reduced radiation exposure compared to traditional CT and PET/CT scans. This includes diffusion-weighted imaging (DWI)/MRI and integrated [F-18]2-fluoro-2-deoxyglucose (18F-FDG) PET/MRI scans. While several investigators have compared the value of DWI/MRI and 18F-FDG PET/MRI for staging pediatric cancer, the value of these novel imaging technologies for cancer therapy monitoring has received surprisingly little attention. In this article, we share our experiences and review existing literature on this subject.
Collapse
Affiliation(s)
- Heike E Daldrup-Link
- Department of Radiology, Molecular Imaging Program at Stanford (MIPS), Lucile Packard Children's Hospital, Stanford University, 725 Welch Road, Room 1665, Stanford, CA, 94305-5614, USA.
- Department of Pediatrics, Stanford University, Stanford, CA, USA.
- Cancer Imaging and Early Detection Program, Stanford Cancer Institute, Stanford, CA, USA.
| | - Ashok J Theruvath
- Department of Radiology, Molecular Imaging Program at Stanford (MIPS), Lucile Packard Children's Hospital, Stanford University, 725 Welch Road, Room 1665, Stanford, CA, 94305-5614, USA
- Cancer Imaging and Early Detection Program, Stanford Cancer Institute, Stanford, CA, USA
| | - Lucia Baratto
- Department of Radiology, Molecular Imaging Program at Stanford (MIPS), Lucile Packard Children's Hospital, Stanford University, 725 Welch Road, Room 1665, Stanford, CA, 94305-5614, USA
- Cancer Imaging and Early Detection Program, Stanford Cancer Institute, Stanford, CA, USA
| | - Kristina Elizabeth Hawk
- Department of Radiology, Molecular Imaging Program at Stanford (MIPS), Lucile Packard Children's Hospital, Stanford University, 725 Welch Road, Room 1665, Stanford, CA, 94305-5614, USA
- Cancer Imaging and Early Detection Program, Stanford Cancer Institute, Stanford, CA, USA
| |
Collapse
|
35
|
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818 DOI: 10.1007/s12149-021-01710-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Accepted: 12/09/2021] [Indexed: 12/16/2022]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques to image generation in PET. We categorized the studies for PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning, and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we discuss the limitations of applying deep learning techniques to PET image generation and the future prospects of this field.
Collapse
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
| | - Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
| | - Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
| | - Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
| | - Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan.
| |
Collapse
|
36
|
Mehranian A, Wollenweber SD, Walker MD, Bradley KM, Fielding PA, Su KH, Johnsen R, Kotasidis F, Jansen FP, McGowan DR. Image enhancement of whole-body oncology [ 18F]-FDG PET scans using deep neural networks to reduce noise. Eur J Nucl Med Mol Imaging 2022; 49:539-549. [PMID: 34318350 PMCID: PMC8803788 DOI: 10.1007/s00259-021-05478-x] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 06/20/2021] [Indexed: 02/07/2023]
Abstract
PURPOSE To enhance the image quality of oncology [18F]-FDG PET scans acquired in shorter times and reconstructed by faster algorithms using deep neural networks. METHODS List-mode data from 277 [18F]-FDG PET/CT scans, from six centres using GE Discovery PET/CT scanners, were split into ¾-, ½- and ¼-duration scans. Full-duration datasets were reconstructed using the convergent block sequential regularised expectation maximisation (BSREM) algorithm. Short-duration datasets were reconstructed with the faster OSEM algorithm. The 277 examinations were divided into training (n = 237), validation (n = 15) and testing (n = 25) sets. Three deep learning enhancement (DLE) models were trained to map full- and partial-duration OSEM images into their target full-duration BSREM images. In addition to standardised uptake value (SUV) evaluations in lesions, liver and lungs, two experienced radiologists scored the quality of testing set images and BSREM in a blinded clinical reading (175 series). RESULTS OSEM reconstructions demonstrated up to a 22% difference in lesion SUVmax across scan durations compared with full-duration BSREM. Application of the DLE models reduced this difference significantly for full-, ¾- and ½-duration scans, while simultaneously reducing the noise in the liver. The clinical reading showed that the standard DLE model with full- or ¾-duration scans provided image quality substantially comparable to that of full-duration scans with BSREM reconstruction, yet with a shorter reconstruction time. CONCLUSION Deep learning-based image enhancement models may allow a reduction in scan time (or injected activity) by up to 50%, and can decrease reconstruction time to a third, while maintaining image quality.
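The duration splitting described in the Methods can be pictured with the sketch below, which keeps only the list-mode events whose timestamps fall within the first fraction of the acquisition. The event timestamps here are synthetic, and the study's actual splitting procedure may differ.

```python
# Sketch of splitting list-mode data into fractional-duration subsets by
# keeping events acquired within the first fraction of the scan. Event
# timestamps here are synthetic; the study's exact splitting may differ.
import numpy as np

rng = np.random.default_rng(0)
event_times_s = np.sort(rng.uniform(0, 90, size=1_000_000))  # 90 s/bed position

def split_duration(times, full_duration_s, fraction):
    """Return the events falling within the first `fraction` of the scan."""
    return times[times <= fraction * full_duration_s]

for frac in (0.75, 0.5, 0.25):
    subset = split_duration(event_times_s, 90.0, frac)
    print(f"{int(frac * 100)}% duration: {subset.size} events")
```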
Collapse
Affiliation(s)
| | | | | | - Kevin M Bradley
- Wales Research and Diagnostic PET Imaging Centre, University Hospital of Wales, Cardiff, UK
| | | | | | | | | | | | - Daniel R McGowan
- Oxford University Hospitals NHS FT, Oxford, UK.
- Department of Oncology, University of Oxford, Oxford, UK.
| |
Collapse
|
37
|
Seifert R, Kersting D, Rischpler C, Opitz M, Kirchner J, Pabst KM, Mavroeidi IA, Laschinsky C, Grueneisen J, Schaarschmidt B, Catalano OA, Herrmann K, Umutlu L. Clinical Use of PET/MR in Oncology: An Update. Semin Nucl Med 2021; 52:356-364. [PMID: 34980479 DOI: 10.1053/j.semnuclmed.2021.11.012] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 11/21/2021] [Accepted: 11/23/2021] [Indexed: 12/30/2022]
Abstract
The combination of PET and MRI is one of the recent advances of hybrid imaging. Yet to date, the adoption rate of PET/MRI systems has been rather slow. This seems to be partially caused by the high costs of PET/MRI systems and the need to verify an incremental benefit over PET/CT or sequential PET/CT and MRI. In analogy to PET/CT, the MRI part of PET/MRI was primarily used for anatomical imaging. Though this can be advantageous, for example in diseases where the superior soft tissue contrast of MRI is highly appreciated, the sole use of MRI for anatomical orientation lessens the potential of PET/MRI. Consequently, more recent studies focused on its multiparametric potential and employed diffusion-weighted sequences and other functional imaging sequences in PET/MRI. This integration shifts the focus to a more holistic approach to PET/MR imaging, releasing its full potential for local primary staging based on multiparametric imaging together with a one-stop-shop approach for whole-body staging. This approach, together with the implementation of computational analysis in the form of radiomics, has been shown to be valuable in several oncological diseases, as will be discussed in this review article.
Collapse
Affiliation(s)
- Robert Seifert
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; Department of Nuclear Medicine, University Hospital Münster, Münster, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany.; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany.
| | - David Kersting
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany.; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
| | - Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany.; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
| | - Marcel Opitz
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
| | - Julian Kirchner
- Department of Diagnostic and Interventional Radiology, University Dusseldorf, Medical Faculty, Dusseldorf, Germany
| | - Kim M Pabst
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany.; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
| | - Ilektra-Antonia Mavroeidi
- West German Cancer Center, University Hospital Essen, Essen, Germany.; Clinic for Internal Medicine (Tumor Research), University Hospital Essen, Essen, Germany
| | - Christina Laschinsky
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany.; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
| | - Johannes Grueneisen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
| | - Benedikt Schaarschmidt
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
| | - Onofrio Antonio Catalano
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA; Abdominal Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
| | - Ken Herrmann
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany; West German Cancer Center, University Hospital Essen, Essen, Germany; German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
| | - Lale Umutlu
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
| |
Collapse
|
38
|
Abstract
Nuclear medicine provides methods and techniques that have benefited pediatric patients and their referring physicians for over 40 years. Nuclear medicine provides qualitative and quantitative information about overall and regional function of organs, systems, and lesions in the body. This includes applications in many organ systems, such as the skeleton, brain, kidneys, and heart, as well as in the diagnosis and treatment of cancer. The practice of nuclear medicine requires the administration of radiopharmaceuticals, which exposes the patient to very low levels of ionizing radiation. Advanced approaches for estimating the radiation dose from the internal distribution of radiopharmaceuticals in patients of various sizes and shapes have been developed over the past 20 years. Although there is considerable uncertainty in estimating the risk of adverse health effects from radiation at the very low exposure levels typically associated with nuclear medicine, some consider it prudent to be more cautious with children, as they are generally considered to be at higher risk than adults. Standard guidelines for administered activities in pediatric nuclear medicine procedures have been established, including the North American consensus guidelines and the Paediatric Dosage Card developed by the European Association of Nuclear Medicine. Moving forward, these guidelines will likely be revised in response to changes in clinical practice, a better understanding of radiation dosimetry as applied to children, new clinical applications, and advances in instrumentation, image reconstruction, and processing.
Collapse
Affiliation(s)
- S Ted Treves
- Harvard Medical School; Brigham and Women's Hospital.
| | | |
Collapse
|
39
|
Theruvath AJ, Siedek F, Yerneni K, Muehe AM, Spunt SL, Pribnow A, Moseley M, Lu Y, Zhao Q, Gulaka P, Chaudhari A, Daldrup-Link HE. Validation of Deep Learning-based Augmentation for Reduced 18F-FDG Dose for PET/MRI in Children and Young Adults with Lymphoma. Radiol Artif Intell 2021; 3:e200232. [PMID: 34870211 DOI: 10.1148/ryai.2021200232] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Revised: 08/30/2021] [Accepted: 09/17/2021] [Indexed: 11/11/2022]
Abstract
Purpose To investigate if a deep learning convolutional neural network (CNN) could enable low-dose fluorine 18 (18F) fluorodeoxyglucose (FDG) PET/MRI for correct treatment response assessment of children and young adults with lymphoma. Materials and Methods In this secondary analysis of prospectively collected data (ClinicalTrials.gov identifier: NCT01542879), 20 patients with lymphoma (mean age, 16.4 years ± 6.4 [standard deviation]) underwent 18F-FDG PET/MRI between July 2015 and August 2019 at baseline and after induction chemotherapy. Full-dose 18F-FDG PET data (3 MBq/kg) were simulated to lower 18F-FDG doses based on the percentage of coincidence events (representing simulated 75%, 50%, 25%, 12.5%, and 6.25% 18F-FDG dose [hereafter referred to as 75%Sim, 50%Sim, 25%Sim, 12.5%Sim, and 6.25%Sim, respectively]). A U.S. Food and Drug Administration-approved CNN was used to augment input simulated low-dose scans to full-dose scans. For each follow-up scan after induction chemotherapy, the standardized uptake value (SUV) response score was calculated as the maximum SUV (SUVmax) of the tumor normalized to the mean liver SUV; tumor response was classified as adequate or inadequate. Sensitivity and specificity in the detection of correct response status were computed using full-dose PET as the reference standard. Results With decreasing simulated radiotracer doses, tumor SUVmax increased. A dose below 75%Sim of the full dose led to erroneous upstaging of adequate responders to inadequate responders (43% [six of 14 patients] for 75%Sim; 93% [13 of 14 patients] for 50%Sim; and 100% [14 of 14 patients] below 50%Sim; P < .05 for all). CNN-enhanced low-dose PET/MRI scans at 75%Sim and 50%Sim enabled correct response assessments for all patients. Use of the CNN augmentation for assessing adequate and inadequate responses resulted in identical sensitivities (100%) and specificities (100%) between the assessment of 100% full-dose PET, augmented 75%Sim, and augmented 50%Sim images. Conclusion CNN enhancement of PET/MRI scans may enable 50% 18F-FDG dose reduction with correct treatment response assessment of children and young adults with lymphoma. Keywords: Pediatrics, PET/MRI, Computer Applications Detection/Diagnosis, Lymphoma, Tumor Response, Whole-Body Imaging, Technology Assessment. Clinical trial registration no.: NCT01542879. Supplemental material is available for this article. © RSNA, 2021.
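As a hedged illustration of the response metric described above, the short Python sketch below computes the SUV response score (tumor SUVmax normalized to the mean liver SUV) and classifies the response against a threshold; the helper names, the threshold value, and all numbers are illustrative assumptions, not values taken from the study.

```python
def suv_response_score(tumor_suv_max: float, liver_suv_mean: float) -> float:
    """Tumor SUVmax normalized to the mean liver SUV, as described in the abstract."""
    return tumor_suv_max / liver_suv_mean

def classify_response(score: float, threshold: float = 1.0) -> str:
    """Classify treatment response; the threshold is a placeholder, not the study's cut-off."""
    return "adequate" if score <= threshold else "inadequate"

# Illustrative numbers: noise in low-count data inflates SUVmax, which can push
# an adequate responder above the threshold (erroneous upstaging).
full_dose_score = suv_response_score(tumor_suv_max=1.8, liver_suv_mean=2.0)   # 0.9
low_dose_score  = suv_response_score(tumor_suv_max=2.6, liver_suv_mean=2.0)   # 1.3
print(classify_response(full_dose_score), classify_response(low_dose_score))
```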
Collapse
Affiliation(s)
- Ashok J Theruvath
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Florian Siedek
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Ketan Yerneni
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Anne M Muehe
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Sheri L Spunt
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Allison Pribnow
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Michael Moseley
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Ying Lu
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Qian Zhao
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Praveen Gulaka
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Akshay Chaudhari
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| | - Heike E Daldrup-Link
- Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
| |
Collapse
|
40
|
Aide N, Lasnon C, Desmonts C, Armstrong IS, Walker MD, McGowan DR. Advances in PET-CT technology: An update. Semin Nucl Med 2021; 52:286-301. [PMID: 34823841 DOI: 10.1053/j.semnuclmed.2021.10.005] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 10/18/2021] [Accepted: 10/19/2021] [Indexed: 11/11/2022]
Abstract
This article reviews the current evolution and future directions of PET-CT technology, focusing on three areas: time-of-flight imaging, image reconstruction, and data-driven gating. Image reconstruction is considered with respect to advances in point spread function modelling, Bayesian penalised likelihood reconstruction, and artificial intelligence approaches. Data-driven gating is examined with reference to respiratory, cardiac, and head motion. For each of these technological advances, the underlying theory is briefly discussed, the benefits for routine practice are detailed, and potential future developments are outlined. Representative clinical cases are presented, demonstrating the opportunities that hardware and software advances in PET technology offer the PET community for lesion detection, disease characterization, accurate quantitation, and faster scans. Through this review, hospitals are encouraged to embrace, evaluate, and appropriately implement the wide range of new PET technologies that are available now or in the near future, for the improvement of patient care.
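To make the data-driven gating concept more concrete, the Python sketch below shows one commonly described approach: extracting a respiratory surrogate signal from a series of short PET frames via principal component analysis and binning the frames into amplitude gates. This is a minimal sketch under simplifying assumptions (toy data, no band-pass filtering, hypothetical function names) and does not reproduce any vendor implementation discussed in the review.

```python
import numpy as np

def respiratory_surrogate(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (T, ...) holding short (sub-second) PET frames or
    down-sampled sinograms. Returns a 1-D surrogate signal over time, taken as
    the first principal component of the voxel time series."""
    X = frames.reshape(frames.shape[0], -1).astype(float)
    X -= X.mean(axis=0, keepdims=True)            # remove the static background
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, 0] * S[0]                         # first temporal component

def amplitude_gates(signal: np.ndarray, n_gates: int = 4) -> np.ndarray:
    """Assign each frame to an amplitude bin (equal-count gates)."""
    edges = np.quantile(signal, np.linspace(0, 1, n_gates + 1)[1:-1])
    return np.digitize(signal, edges)

# Toy demonstration: 120 noisy frames containing a sinusoidally 'breathing' structure
t = np.arange(120)
frames = np.random.poisson(5, size=(120, 16, 16, 16)).astype(float)
frames[:, 8, :, :] += 10 * np.sin(2 * np.pi * t / 20)[:, None, None]
signal = respiratory_surrogate(frames)
print(amplitude_gates(signal)[:20])
```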
Collapse
Affiliation(s)
- Nicolas Aide
- Nuclear Medicine, Caen University Hospital, Caen, France; INSERM ANTICIPE, Normandie University, Caen, France.
| | - Charline Lasnon
- INSERM ANTICIPE, Normandie University, Caen, France; François Baclesse Cancer Center, Caen, France
| | - Cedric Desmonts
- Nuclear Medicine, Caen University Hospital, Caen, France; INSERM ANTICIPE, Normandie University, Caen, France
| | - Ian S Armstrong
- Nuclear Medicine, Manchester University NHS Foundation Trust, Manchester
| | - Matthew D Walker
- Department of Medical Physics and Clinical Engineering, Oxford University Hospitals NHS FT, Oxford
| | - Daniel R McGowan
- Department of Medical Physics and Clinical Engineering, Oxford University Hospitals NHS FT, Oxford; Department of Oncology, University of Oxford, Oxford
| |
Collapse
|
41
|
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021; 16:553-576. [PMID: 34537130 PMCID: PMC8457531 DOI: 10.1016/j.cpet.2021.06.005] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
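As a minimal, hedged sketch of the kind of post-reconstruction enhancement model this review surveys, the PyTorch snippet below defines a tiny residual 3D CNN denoiser and performs one training step with an L1 loss on paired low-/full-count patches. The architecture, patch sizes, and random tensors are illustrative assumptions rather than any specific published network.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Tiny residual 3D CNN: predicts the enhanced image as input + learned correction."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # residual learning

model = ResidualDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()   # one common choice; the review also covers perceptual and adversarial losses

# One illustrative training step on random paired patches (low-count input, full-count target)
low_count = torch.rand(2, 1, 32, 32, 32)
full_count = torch.rand(2, 1, 32, 32, 32)
loss = loss_fn(model(low_count), full_count)
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(float(loss))
```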
Collapse
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Masoud Malekzadeh
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
| | - Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
| | - Chi Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA.
| | - Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
42
|
Chaudhari AS, Mittra E, Davidzon GA, Gulaka P, Gandhi H, Brown A, Zhang T, Srinivas S, Gong E, Zaharchuk G, Jadvar H. Low-count whole-body PET with deep learning in a multicenter and externally validated study. NPJ Digit Med 2021; 4:127. [PMID: 34426629 PMCID: PMC8382711 DOI: 10.1038/s41746-021-00497-2] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Accepted: 08/03/2021] [Indexed: 02/08/2023] Open
Abstract
More widespread use of positron emission tomography (PET) imaging is limited by its high cost and radiation dose. Reductions in PET scan time or radiotracer dosage typically degrade diagnostic image quality (DIQ). Deep-learning-based reconstruction may improve DIQ, but such methods have not been clinically evaluated in a realistic multicenter, multivendor environment. In this study, we evaluated the performance and generalizability of a deep-learning-based image-quality enhancement algorithm applied to fourfold reduced-count whole-body PET in a realistic clinical oncologic imaging environment with multiple blinded readers, institutions, and scanner types. We demonstrate that the low-count-enhanced scans were noninferior to the standard scans in DIQ (p < 0.05) and overall diagnostic confidence (p < 0.001), independent of the underlying PET scanner used. Lesion detection for the low-count-enhanced scans had a high patient-level sensitivity of 0.94 (0.83-0.99) and specificity of 0.98 (0.95-0.99). Interscan kappa agreement of 0.85 was comparable to intra-reader agreement (0.88) and pairwise inter-reader agreement (maximum of 0.72). SUV quantification was comparable in the reference regions and lesions (lowest p value = 0.59) and showed high correlation (lowest CCC = 0.94). Thus, we demonstrated that deep learning can be used to restore diagnostic image quality and maintain SUV accuracy for fourfold reduced-count PET scans, with interscan variation in lesion depiction lower than intra- and inter-reader variation. This method generalized to an external validation set of clinical patients from multiple institutions and scanner types. Overall, this method may enable either dose or exam-duration reduction, increasing safety and lowering the cost of PET imaging.
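The agreement statistics reported above can be illustrated with a short NumPy sketch that computes patient-level sensitivity/specificity against a reference standard and Lin's concordance correlation coefficient (CCC) for paired SUV measurements. The input arrays and function names are made-up examples for illustration, not the study's data.

```python
import numpy as np

def sensitivity_specificity(reference: np.ndarray, test: np.ndarray):
    """Patient-level lesion detection: 1 = lesion-positive, 0 = lesion-negative,
    with the full-count read serving as the reference standard."""
    tp = np.sum((reference == 1) & (test == 1))
    tn = np.sum((reference == 0) & (test == 0))
    fp = np.sum((reference == 0) & (test == 1))
    fn = np.sum((reference == 1) & (test == 0))
    return tp / (tp + fn), tn / (tn + fp)

def lin_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient for paired SUV measurements."""
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Made-up example values, not the study's data
reference = np.array([1, 1, 0, 0, 1, 0, 1, 0])
enhanced = np.array([1, 1, 0, 0, 1, 0, 1, 1])
print(sensitivity_specificity(reference, enhanced))

suv_standard = np.array([3.2, 5.1, 2.4, 7.8, 4.0])
suv_enhanced = np.array([3.0, 5.3, 2.6, 7.5, 4.2])
print(lin_ccc(suv_standard, suv_enhanced))
```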
Collapse
Affiliation(s)
- Akshay S Chaudhari
- Department of Radiology, Stanford University, Palo Alto, CA, USA.
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA.
- Subtle Medical, Menlo Park, CA, USA.
| | - Erik Mittra
- Division of Diagnostic Radiology, Oregon Health & Science University, Portland, OR, USA
| | - Guido A Davidzon
- Department of Radiology, Stanford University, Palo Alto, CA, USA
| | | | | | - Adam Brown
- Division of Diagnostic Radiology, Oregon Health & Science University, Portland, OR, USA
| | | | - Shyam Srinivas
- Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | | | - Greg Zaharchuk
- Department of Radiology, Stanford University, Palo Alto, CA, USA
- Subtle Medical, Menlo Park, CA, USA
| | - Hossein Jadvar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
| |
Collapse
|
43
|
Aide N, Lasnon C, Kesner A, Levin CS, Buvat I, Iagaru A, Herrmann K, Badawi RD, Cherry SR, Bradley KM, McGowan DR. New PET technologies - embracing progress and pushing the limits. Eur J Nucl Med Mol Imaging 2021; 48:2711-2726. [PMID: 34081153 PMCID: PMC8263417 DOI: 10.1007/s00259-021-05390-4] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 04/25/2021] [Indexed: 12/11/2022]
Affiliation(s)
- Nicolas Aide
- Nuclear medicine Department, University Hospital, Caen, France.
- INSERM ANTICIPE, Normandie University, Caen, France.
| | - Charline Lasnon
- INSERM ANTICIPE, Normandie University, Caen, France
- François Baclesse Cancer Centre, Caen, France
| | - Adam Kesner
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
| | - Craig S Levin
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, Stanford, CA, 94305, USA
| | - Irene Buvat
- Institut Curie, Université PSL, Inserm, U1288 LITO, Orsay, France
| | - Andrei Iagaru
- Department of Radiology, Division of Nuclear Medicine and Molecular Imaging, Stanford University, Stanford, CA, 94305, USA
| | - Ken Herrmann
- Department of Nuclear Medicine, University of Duisburg-Essen and German Cancer Consortium (DKTK)-University Hospital Essen, Essen, Germany
| | - Ramsey D Badawi
- Departments of Radiology and Biomedical Engineering, University of California, Davis, CA, USA
| | - Simon R Cherry
- Departments of Radiology and Biomedical Engineering, University of California, Davis, CA, USA
| | - Kevin M Bradley
- Wales Research and Diagnostic PET Imaging Centre, Cardiff University, Cardiff, UK
| | - Daniel R McGowan
- Radiation Physics and Protection, Churchill Hospital, Oxford University Hospitals NHS FT, Oxford, UK.
- Department of Oncology, University of Oxford, Oxford, UK.
| |
Collapse
|
44
|
Cox CPW, van Assema DME, Verburg FA, Brabander T, Konijnenberg M, Segbers M. A dedicated paediatric [18F]FDG PET/CT dosage regimen. EJNMMI Res 2021; 11:65. [PMID: 34279735 PMCID: PMC8289942 DOI: 10.1186/s13550-021-00812-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Accepted: 07/09/2021] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND The role of 2-[18F]fluoro-2-deoxy-D-glucose ([18F]FDG) positron emission tomography/computed tomography (PET/CT) in children is still expanding. Dedicated paediatric dosage regimens are needed to keep the radiation dose as low as reasonably achievable and to reduce the risk of radiation-induced carcinogenesis. The aim of this study was to investigate the relation between patient-dependent parameters and [18F]FDG PET image quality in order to propose a dedicated paediatric dosage regimen. METHODS In this retrospective analysis, 102 children and 85 adults who underwent a diagnostic [18F]FDG PET/CT scan were included. Image quality of the PET scans was measured as the signal-to-noise ratio (SNR) in the liver. The liver SNR was normalized (SNRnorm) for administered activity and acquisition time, and curve fitting was applied against body weight, body length, body mass index, body weight/body length, and body surface area. Curve fitting was performed with two power models: a nonlinear two-parameter model α·p^(−d) and a linear single-parameter model α·p^(−0.5), where p denotes the patient parameter. The fit parameters of the preferred model were combined with a user-preferred SNR, chosen to obtain at least moderate or good image quality, to derive the proposed dosage regimen. RESULTS Body weight showed the highest coefficient of determination for both the nonlinear (R2 = 0.81) and linear (R2 = 0.80) models. The nonlinear model was preferred by Akaike's corrected information criterion. An SNR of 6.5 was chosen based on the expert opinion of three nuclear medicine physicians. Comparison with the quadratic adult protocol confirmed the need for different dosage regimens for the two patient groups. With the proposed regimen, the administered activity can be considerably reduced in comparison with the current paediatric guidelines. CONCLUSION Body weight has the strongest relation with [18F]FDG PET image quality in children. The proposed nonlinear dosage regimen based on body weight provides constant and clinically sufficient image quality with a significant reduction of the effective dose compared with the current guidelines. A dedicated paediatric dosage regimen is necessary, as a universal dosing regimen for paediatric and adult patients is not feasible.
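A hedged Python sketch of the curve-fitting and dosage-derivation steps described above follows. It assumes the common normalization SNRnorm = SNR / sqrt(administered activity × acquisition time), fits a nonlinear power model with SciPy, and inverts the fit for a target SNR; the data points, the acquisition time per bed position, and the function names are illustrative assumptions, not the study's values (only the target SNR of 6.5 comes from the abstract).

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(weight, alpha, d):
    """SNRnorm as a function of body weight: alpha * weight**(-d)."""
    return alpha * weight ** (-d)

# Illustrative data points only (weight in kg); SNRnorm is assumed to be
# SNR_liver / sqrt(administered activity [MBq] * acquisition time [min]).
weight = np.array([10.0, 20.0, 30.0, 40.0, 55.0, 70.0, 85.0])
snrnorm = np.array([0.95, 0.62, 0.48, 0.41, 0.33, 0.29, 0.26])

(alpha, d), _ = curve_fit(power_model, weight, snrnorm, p0=(3.0, 0.5))

def required_activity(weight_kg, target_snr=6.5, time_per_bed_min=2.0):
    """Administered activity (MBq) needed to reach the target liver SNR,
    assuming SNR = SNRnorm * sqrt(activity * time). The acquisition time per
    bed position is an illustrative assumption."""
    predicted_snrnorm = power_model(weight_kg, alpha, d)
    return (target_snr / predicted_snrnorm) ** 2 / time_per_bed_min

print(round(required_activity(25.0), 1))   # e.g. activity for a 25 kg child
```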
Collapse
Affiliation(s)
- Christina P W Cox
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus, 2040 3000 CA, Rotterdam, The Netherlands.
| | - Daniëlle M E van Assema
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus, 2040 3000 CA, Rotterdam, The Netherlands
| | - Frederik A Verburg
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus, 2040 3000 CA, Rotterdam, The Netherlands
| | - Tessa Brabander
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus, 2040 3000 CA, Rotterdam, The Netherlands
| | - Mark Konijnenberg
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus, 2040 3000 CA, Rotterdam, The Netherlands
| | - Marcel Segbers
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Postbus, 2040 3000 CA, Rotterdam, The Netherlands
| |
Collapse
|