1. Lopes L, Lopez-Montes A, Chen Y, Koller P, Rathod N, Blomgren A, Caobelli F, Rominger A, Shi K, Seifert R. The Evolution of Artificial Intelligence in Nuclear Medicine. Semin Nucl Med 2025;55:313-327. PMID: 39934005. DOI: 10.1053/j.semnuclmed.2025.01.006.
Abstract
Nuclear medicine has continuously evolved since its beginnings, constantly improving the diagnosis and treatment of various diseases. The integration of artificial intelligence (AI) is one of its latest revolutionary chapters, promising significant advances in diagnosis, prognosis, segmentation, image quality enhancement, and theranostics. Early AI applications in nuclear medicine focused on improving diagnostic accuracy, leveraging machine learning algorithms for disease classification and outcome prediction. Advances in deep learning, including convolutional and, more recently, transformer-based neural networks, have further enabled more precise diagnosis and image segmentation, low-dose imaging, and patient-specific dosimetry for personalized treatment. Generative AI, driven by large language models and diffusion techniques, now allows the processing, interpretation, and generation of complex medical language and images. Despite these achievements, challenges such as data scarcity, heterogeneity, and ethical concerns remain barriers to clinical translation. Addressing these issues through interdisciplinary collaboration will pave the way for broader adoption of AI in nuclear medicine, potentially enhancing patient care and optimizing diagnostic and therapeutic outcomes.
Affiliation(s)
- Leonor Lopes
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Alejandro Lopez-Montes
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Yizhou Chen
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Pia Koller
  Department of Computer Science, Ludwig-Maximilians-University of Munich, Munich, Germany
- Narendra Rathod
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- August Blomgren
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Federico Caobelli
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Axel Rominger
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Kuangyu Shi
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
- Robert Seifert
  Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
2. Chen J, Ye Z, Zhang R, Li H, Fang B, Zhang LB, Wang W. Medical image translation with deep learning: Advances, datasets and perspectives. Med Image Anal 2025;103:103605. PMID: 40311301. DOI: 10.1016/j.media.2025.103605.
Abstract
Traditional medical image generation often lacks patient-specific clinical information, limiting its clinical utility despite enhancing downstream task performance. In contrast, medical image translation precisely converts images from one modality to another, preserving both anatomical structures and cross-modal features, thus enabling efficient and accurate modality transfer and offering unique advantages for model development and clinical practice. This paper reviews the latest advances in deep learning (DL)-based medical image translation. It first elaborates on the diverse tasks and practical applications of medical image translation, then provides an overview of fundamental models, including convolutional neural networks (CNNs), transformers, and state space models (SSMs). It also delves into generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), autoregressive models (ARs), diffusion models, and flow models. Evaluation metrics for assessing translation quality are discussed, emphasizing their importance, and commonly used datasets are analyzed, highlighting their unique characteristics and applications. Looking ahead, the paper identifies future trends and challenges and proposes research directions and solutions in medical image translation, aiming to serve as a valuable reference and inspiration for researchers and to drive continued progress and innovation in this area.
Affiliation(s)
- Junxin Chen
  School of Software, Dalian University of Technology, Dalian 116621, China
- Zhiheng Ye
  School of Software, Dalian University of Technology, Dalian 116621, China
- Renlong Zhang
  Institute of Research and Clinical Innovations, Neusoft Medical Systems Co., Ltd., Beijing, China
- Hao Li
  School of Computing Science, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Bo Fang
  School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia
- Li-Bo Zhang
  Department of Radiology, General Hospital of Northern Theater Command, Shenyang 110840, China
- Wei Wang
  Guangdong-Hong Kong-Macao Joint Laboratory for Emotion Intelligence and Pervasive Computing, Artificial Intelligence Research Institute, Shenzhen MSU-BIT University, Shenzhen 518172, China; School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
3. Huang B, Liu X, Fang L, Liu Q, Li B. Diffusion transformer model with compact prior for low-dose PET reconstruction. Phys Med Biol 2025;70:045015. PMID: 39832449. DOI: 10.1088/1361-6560/adac25.
Abstract
Objective. Positron emission tomography (PET) is an advanced medical imaging technique that plays a crucial role in non-invasive clinical diagnosis. However, while reducing radiation exposure through low-dose PET scans benefits patient safety, it often yields insufficient count statistics. This scarcity of data poses significant challenges for accurately reconstructing the high-quality images that reliable diagnosis requires.
Approach. We propose a diffusion transformer model (DTM) guided by a joint compact prior to enhance the reconstruction quality of low-dose PET imaging. The model integrates diffusion and transformer models for joint optimization, combining the powerful distribution-mapping abilities of diffusion models with the capacity of transformers to capture long-range dependencies, which offers significant advantages for low-dose PET reconstruction. Additionally, the incorporation of a lesion refining block and the alternating direction method of multipliers improves the recovery of lesion regions and preserves detail, addressing the blurring of lesion areas and texture details common to most deep learning frameworks.
Main results. Experimental results validate the effectiveness of DTM in reconstructing high-quality images from low-dose PET data. DTM achieves state-of-the-art performance across various metrics, including PSNR, SSIM, NRMSE, CR, and COV, demonstrating its ability to reduce noise while preserving critical clinical details such as lesion structure and texture. Compared with baseline methods, DTM delivers the best denoising and lesion-preservation results across dose levels of 10%, 25%, and 50%, and even at an ultra-low-dose level of 1%. DTM also shows robust generalization on phantom and patient datasets, highlighting its adaptability to varying imaging conditions.
Significance. This approach reduces radiation exposure while ensuring reliable imaging for early disease detection and clinical decision-making, offering a promising tool for both clinical and research applications.
Affiliation(s)
- Bin Huang
  School of Mathematics and Computer Sciences, Nanchang University, Nanchang, People's Republic of China
- Xubiao Liu
  School of Information Engineering, Nanchang University, Nanchang, People's Republic of China
- Lei Fang
  Department of Biomedical Engineering, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Qiegen Liu
  School of Information Engineering, Nanchang University, Nanchang, People's Republic of China
- Bingxuan Li
  Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
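The graded dose levels evaluated above (50% down to 1%) are commonly emulated from a full-dose acquisition by statistically thinning the detected counts. A minimal sketch of that setup, using synthetic counts (the paper's actual data pipeline may differ):

```python
import numpy as np

def simulate_low_dose(counts, fraction, rng):
    """Binomial thinning: keep each detected event independently with
    probability `fraction`, mimicking a shorter scan or lower injected dose."""
    return rng.binomial(counts, fraction)

rng = np.random.default_rng(0)
full_dose = rng.poisson(lam=100.0, size=10_000)  # synthetic sinogram bin counts

for frac in (0.50, 0.25, 0.10, 0.01):
    low = simulate_low_dose(full_dose, frac, rng)
    # Mean counts scale with the dose fraction; relative noise grows as it shrinks
    print(frac, round(low.mean() / full_dose.mean(), 2))
```

Thinning each event independently preserves the Poisson statistics of the data, which is why it is a standard stand-in for a genuinely lower injected dose.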
4. Yu B, Ozdemir S, Dong Y, Shao W, Pan T, Shi K, Gong K. Robust whole-body PET image denoising using 3D diffusion models: evaluation across various scanners, tracers, and dose levels. Eur J Nucl Med Mol Imaging 2025. PMID: 39912940. DOI: 10.1007/s00259-025-07122-4.
Abstract
PURPOSE: Whole-body PET imaging plays an essential role in cancer diagnosis and treatment but suffers from low image quality. Traditional deep learning-based denoising methods work well for a specific acquisition but are less effective in handling diverse PET protocols. In this study, we proposed and validated a 3D denoising diffusion probabilistic model (3D DDPM) as a robust and universal solution for whole-body PET image denoising. METHODS: The proposed 3D DDPM gradually injects noise into the images during the forward diffusion phase, allowing the model to learn to reconstruct the clean data during the reverse diffusion process. A 3D convolutional network was trained on high-quality data from the Biograph Vision Quadra PET/CT scanner to generate the score function, enabling the model to capture accurate PET distribution information from the total-body datasets. The trained 3D DDPM was evaluated on datasets from four scanners, four tracer types, and six dose levels, representing a broad spectrum of clinical scenarios. RESULTS: The proposed 3D DDPM consistently outperformed 2D DDPM, 3D UNet, and 3D GAN, demonstrating superior denoising performance across all tested conditions. Additionally, the model's uncertainty maps exhibited lower variance, reflecting higher confidence in its outputs. CONCLUSIONS: The proposed 3D DDPM can effectively handle various clinical settings, including variations in dose levels, scanners, and tracers, establishing it as a promising foundational model for PET image denoising. The trained model can be used off the shelf by researchers as a whole-body PET image denoising solution. The code and model are available at https://github.com/Miche11eU/PET-Image-Denoising-Using-3D-Diffusion-Model.
Affiliation(s)
- Boxiao Yu
  J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Savas Ozdemir
  Department of Radiology, University of Florida, Jacksonville, FL, USA
- Yafei Dong
  Yale PET Center, Yale School of Medicine, New Haven, CT, USA
- Wei Shao
  Department of Medicine, University of Florida, Gainesville, FL, USA
- Tinsu Pan
  Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Kuangyu Shi
  Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Kuang Gong
  J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
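The forward/reverse diffusion mechanics summarized in the METHODS above can be sketched with the standard DDPM parameterization. The snippet below uses a toy 1-D signal and an illustrative linear noise schedule, not the authors' trained score network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (illustrative values, not the paper's)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def forward_diffuse(x0, t, eps):
    """Closed-form forward process q(x_t | x_0): scale the clean signal
    down and mix in Gaussian noise according to the schedule."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def predict_x0(xt, t, eps_pred):
    """Invert the forward parameterization given a noise estimate.
    In a real DDPM, eps_pred comes from the trained score network."""
    return (xt - np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alpha_bar[t])

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))  # stand-in for a clean PET profile
eps = rng.standard_normal(64)

xt = forward_diffuse(x0, 500, eps)
x0_hat = predict_x0(xt, 500, eps)  # a perfect noise estimate recovers x0 exactly

print(np.allclose(x0_hat, x0))  # True: the parameterization is invertible
print(alpha_bar[-1] < 1e-4)     # by t = T-1 the signal is almost pure noise
```

Training replaces the known `eps` with a network's prediction at every timestep; denoising then runs the reverse chain from pure noise (or from a noisy input, as in image restoration variants).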
5. Xie H, Guo L, Velo A, Liu Z, Liu Q, Guo X, Zhou B, Chen X, Tsai YJ, Miao T, Xia M, Liu YH, Armstrong IS, Wang G, Carson RE, Sinusas AJ, Liu C. Noise-aware dynamic image denoising and positron range correction for Rubidium-82 cardiac PET imaging via self-supervision. Med Image Anal 2025;100:103391. PMID: 39579623. PMCID: PMC11647511. DOI: 10.1016/j.media.2024.103391.
Abstract
Rubidium-82 (82Rb) is a radioactive isotope widely used for cardiac PET imaging. Despite its numerous benefits, several factors limit its image quality and quantitative accuracy. First, the short half-life of 82Rb results in noisy dynamic frames; the low signal-to-noise ratio leads to inaccurate, biased image quantification and highly noisy parametric images, and noise levels vary substantially across dynamic frames owing to radiotracer decay. Existing denoising methods are not applicable to this task because of the lack of paired training inputs/labels and their inability to generalize across varying noise levels. Second, 82Rb emits high-energy positrons: compared with tracers such as 18F, a 82Rb positron travels a longer distance before annihilation, which degrades spatial resolution. The goal of this study is to propose a self-supervised method for simultaneous (1) noise-aware dynamic image denoising and (2) positron range correction for 82Rb cardiac PET imaging. Tested on a series of PET scans from a cohort of normal volunteers, the proposed method produced images with superior visual quality. To demonstrate the improvement in image quantification, we compared image-derived input functions (IDIFs) with arterial input functions (AIFs) from continuous arterial blood samples. The IDIF derived from the proposed method yielded lower AUC differences, decreasing from 11.09% to 7.58% on average relative to the original dynamic frames. The proposed method also improved the quantification of myocardial blood flow (MBF), as validated against 15O-water scans, with mean MBF differences decreasing from 0.43 to 0.09 compared with the original dynamic frames. A generalizability experiment on 37 patient scans obtained in a different country on a different scanner showed that the method enhanced defect contrast and yielded lower regional MBF in areas with perfusion defects. Lastly, a comparison with other related methods demonstrates the effectiveness of the proposed approach.
Affiliation(s)
- Huidong Xie
  Department of Biomedical Engineering, Yale University, USA
- Liang Guo
  Department of Biomedical Engineering, Yale University, USA
- Alexandre Velo
  Department of Radiology and Biomedical Imaging, Yale University, USA
- Zhao Liu
  Department of Radiology and Biomedical Imaging, Yale University, USA
- Qiong Liu
  Department of Biomedical Engineering, Yale University, USA
- Xueqi Guo
  Department of Biomedical Engineering, Yale University, USA
- Bo Zhou
  Department of Biomedical Engineering, Yale University, USA
- Xiongchao Chen
  Department of Biomedical Engineering, Yale University, USA
- Yu-Jung Tsai
  Department of Radiology and Biomedical Imaging, Yale University, USA
- Tianshun Miao
  Department of Radiology and Biomedical Imaging, Yale University, USA
- Menghua Xia
  Department of Radiology and Biomedical Imaging, Yale University, USA
- Yi-Hwa Liu
  Department of Internal Medicine (Cardiology), Yale University, USA
- Ian S Armstrong
  Department of Nuclear Medicine, University of Manchester, UK
- Ge Wang
  Department of Biomedical Engineering, Rensselaer Polytechnic Institute, USA
- Richard E Carson
  Department of Biomedical Engineering, Yale University, USA; Department of Radiology and Biomedical Imaging, Yale University, USA
- Albert J Sinusas
  Department of Biomedical Engineering, Yale University, USA; Department of Radiology and Biomedical Imaging, Yale University, USA; Department of Internal Medicine (Cardiology), Yale University, USA
- Chi Liu
  Department of Biomedical Engineering, Yale University, USA; Department of Radiology and Biomedical Imaging, Yale University, USA
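The AUC comparison between image-derived and arterial input functions reported above reduces to trapezoidal integration of the two time-activity curves followed by a percent difference. A sketch with synthetic curves (the 8% figure below is illustrative, not the paper's data):

```python
import numpy as np

def auc(t, y):
    """Area under a sampled time-activity curve (trapezoidal rule)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def auc_percent_diff(t, idif, aif):
    """Percent AUC difference of an image-derived input function (IDIF)
    against the arterial input function (AIF) reference."""
    return 100.0 * abs(auc(t, idif) - auc(t, aif)) / auc(t, aif)

t = np.linspace(0.0, 10.0, 101)  # minutes
aif = t * np.exp(-t)             # synthetic arterial curve (gamma-like shape)
idif = 0.92 * aif                # an IDIF that under-recovers activity by 8%

print(round(auc_percent_diff(t, aif, aif), 2))   # 0.0
print(round(auc_percent_diff(t, idif, aif), 2))  # 8.0
```

A smaller percent difference between the IDIF and the gold-standard AIF indicates more faithful image quantification, which is the sense in which the study's drop from 11.09% to 7.58% is an improvement.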
6. Han J, Zhang H, Ning K. Techniques for learning and transferring knowledge for microbiome-based classification and prediction: review and assessment. Brief Bioinform 2024;26:bbaf015. PMID: 39820436. PMCID: PMC11737891. DOI: 10.1093/bib/bbaf015.
Abstract
The volume of microbiome data is growing at an exponential rate, and current methodologies for big data mining are encountering substantial obstacles. Effectively managing and extracting valuable insights from these vast microbiome datasets has emerged as a significant challenge in contemporary microbiome research. This comprehensive review examines the use of foundation models and transfer learning for microbiome-based classification and prediction tasks, advocating a transition away from traditional task- or scenario-specific models towards more adaptable, continuous learning models. The article underscores the practicality and benefits of first constructing a robust foundation model, which can then be fine-tuned via transfer learning to tackle specific tasks. In real-world scenarios, transfer learning empowers models to leverage disease-related data from one geographical area to enhance diagnostic precision in other regions. This transition from relying on "good models" to embracing "adaptive models" resonates with the philosophy of "teaching a man to fish," paving the way for advances in personalized medicine and accurate diagnosis. Empirical research suggests that integrating foundation models with transfer learning substantially boosts model performance on large-scale, diverse microbiome datasets, effectively mitigating the challenges posed by data heterogeneity.
Affiliation(s)
- Jin Han
  Key Laboratory of Molecular Biophysics of the Ministry of Education, Hubei Key Laboratory of Bioinformatics and Molecular-imaging, Center of AI Biology, Department of Bioinformatics and Systems Biology, College of Life Science and Technology, Huazhong University of Science and Technology, Luoyu Road 1037, Wuhan 430074, Hubei, China
- Haohong Zhang
  Key Laboratory of Molecular Biophysics of the Ministry of Education, Hubei Key Laboratory of Bioinformatics and Molecular-imaging, Center of AI Biology, Department of Bioinformatics and Systems Biology, College of Life Science and Technology, Huazhong University of Science and Technology, Luoyu Road 1037, Wuhan 430074, Hubei, China
- Kang Ning
  Key Laboratory of Molecular Biophysics of the Ministry of Education, Hubei Key Laboratory of Bioinformatics and Molecular-imaging, Center of AI Biology, Department of Bioinformatics and Systems Biology, College of Life Science and Technology, Huazhong University of Science and Technology, Luoyu Road 1037, Wuhan 430074, Hubei, China
7. Kuang X, Li B, Lyu T, Xue Y, Huang H, Xie Q, Zhu W. PET image reconstruction using weighted nuclear norm maximization and deep learning prior. Phys Med Biol 2024;69:215023. PMID: 39374634. DOI: 10.1088/1361-6560/ad841d.
Abstract
The ill-posed positron emission tomography (PET) reconstruction problem usually results in limited resolution and significant noise. Recently, deep neural networks have been incorporated into the PET iterative reconstruction framework to improve image quality. In this paper, we propose a new neural network-based iterative reconstruction method using weighted nuclear norm (WNN) maximization, which aims to recover image details during reconstruction. The novelty of our method is the application of WNN maximization, rather than WNN minimization, in PET image reconstruction, with a neural network used to control the noise that WNN maximization introduces. Our method is evaluated on simulated and clinical datasets. The simulation results show that the proposed approach outperforms state-of-the-art neural network-based iterative methods, achieving the best contrast/noise tradeoff with a remarkable improvement in lesion contrast recovery. The study on clinical datasets also demonstrates that our method can recover lesions of different sizes while suppressing noise in various low-dose PET image reconstruction tasks. Our code is available at https://github.com/Kuangxd/PETReconstruction.
Affiliation(s)
- Xiaodong Kuang
  Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Bingxuan Li
  Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Tianling Lyu
  Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Yitian Xue
  Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Hailiang Huang
  Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Qingguo Xie
  Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Wentao Zhu
  Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
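The weighted nuclear norm at the heart of the method above is simply a weighted sum of a matrix's singular values. A minimal illustration via SVD (the weights are chosen for demonstration, not taken from the paper's scheme):

```python
import numpy as np

def weighted_nuclear_norm(X, weights):
    """Weighted nuclear norm: sum_i w_i * sigma_i, where sigma_1 >= sigma_2 >= ...
    are the singular values of X."""
    sigma = np.linalg.svd(X, compute_uv=False)  # returned in descending order
    return float(np.sum(weights * sigma))

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))  # stand-in for a patch-group matrix

w_flat = np.ones(4)                       # uniform weights: plain nuclear norm
w_small = np.array([0.1, 0.4, 0.7, 1.0])  # down-weight the leading components

nn = weighted_nuclear_norm(X, w_flat)
wnn = weighted_nuclear_norm(X, w_small)
print(nn >= wnn)  # True: weights <= 1 can only shrink the weighted sum
```

Minimizing this quantity promotes low-rank (smooth) solutions; the paper's twist is to maximize it instead, so that detail-carrying singular components are amplified, with a network suppressing the resulting noise.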
8. Kim D, Kang SK, Shin SA, Choi H, Lee JS. Improving 18F-FDG PET Quantification Through a Spatial Normalization Method. J Nucl Med 2024;65:1645-1651. PMID: 39209545. PMCID: PMC11448607. DOI: 10.2967/jnumed.123.267360.
Abstract
Quantification of 18F-FDG PET images is useful for accurate diagnosis and evaluation of various brain diseases, including brain tumors, epilepsy, dementia, and Parkinson disease. However, accurate quantification of 18F-FDG PET images requires matched 3-dimensional T1 MRI scans of the same individuals to provide detailed information on brain anatomy. In this paper, we propose a transfer learning approach that adapts a deep neural network pretrained on amyloid PET to spatially normalize 18F-FDG PET images without the need for 3-dimensional MRI. Methods: The proposed method is based on a deep learning model for automatic spatial normalization of 18F-FDG brain PET images, developed by fine-tuning a pretrained amyloid PET model using only 103 18F-FDG PET and MR images. After training, the algorithm was tested on 65 internal and 78 external test sets. All T1 MR images with a 1-mm isotropic voxel size were processed with FreeSurfer software to provide cortical segmentation maps, which were used to extract a ground-truth regional SUV ratio with cerebellar gray matter as the reference region. These values were compared with those from spatial normalization-based quantification using the proposed method and statistical parametric mapping software. Results: The proposed method showed superior spatial normalization compared with statistical parametric mapping, as evidenced by increased normalized mutual information and better size and shape matching in PET images. Quantitative evaluation revealed consistently higher SUV ratio correlations and intraclass correlation coefficients for the proposed method across various brain regions in both internal and external datasets. The remarkably good correlation and intraclass correlation coefficient values for the external dataset are noteworthy, considering its different ethnic distribution and the use of different PET scanners and image reconstruction algorithms. Conclusion: This study successfully applied transfer learning to a deep neural network for 18F-FDG PET spatial normalization, demonstrating its resource efficiency and improved performance. This highlights the efficacy of transfer learning, which requires far fewer datasets than training the original network, thus increasing the potential for broader use of deep learning-based brain PET spatial normalization techniques for various clinical and research radiotracers.
Affiliation(s)
- Daewoon Kim
  Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, South Korea
  Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Seung Kwan Kang
  Brightonix Imaging Inc., Seoul, South Korea
  Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
- Hongyoon Choi
  Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
  Department of Nuclear Medicine, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, South Korea
- Jae Sung Lee
  Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, South Korea
  Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
  Brightonix Imaging Inc., Seoul, South Korea
  Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
  Department of Nuclear Medicine, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, South Korea
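The fine-tuning strategy described above (reuse a pretrained network and retrain only what the new task needs, here with just 103 samples) can be caricatured in a few lines: freeze the feature extractor and refit only the task head. Everything below is a synthetic stand-in, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen layers of the pretrained (amyloid-PET) network:
# a fixed random nonlinear projection from 64 inputs to 16 features.
W_frozen = rng.standard_normal((64, 16)) / 8.0

def features(x):
    return np.tanh(x @ W_frozen)

# Small fine-tuning set (103 samples, echoing the study's dataset size)
X = rng.standard_normal((103, 64))
true_head = rng.standard_normal(16)
y = features(X) @ true_head + 0.01 * rng.standard_normal(103)

# Transfer learning step: W_frozen stays fixed; only the head is refit
F = features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

rmse = float(np.sqrt(np.mean((y - F @ head) ** 2)))
print(rmse < 0.02)  # True: the head is recovered down to the noise level
```

Because only the head's 16 parameters are estimated, far fewer labeled examples suffice than would be needed to train the whole 64-to-16 mapping from scratch, which mirrors the paper's resource-efficiency argument.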
9. Rai S, Bhatt JS, Patra SK. An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT. J Imaging Inform Med 2024;37:2047-2062. PMID: 38491236. PMCID: PMC11522248. DOI: 10.1007/s10278-024-01062-5.
Abstract
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow that achieves diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with high-resolution CT (HRCT) at 100 mSv. To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first network learns a restoration function in an unsupervised manner from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution; here, we combine perceptual and adversarial losses in a novel GAN to establish closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then presented to the segmentation network, which first separates the chest portion and then performs lobe-wise colorization; finally, we extract the five lobes to account for the presence of ground glass opacity (GGO) in the lung. Hence, our AI-based system provides low-risk visualization of the input degraded LR-ULDCT at various stages, i.e., restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT, and achieves the diagnostic power of HRCT. We perform case studies on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion, comparing our results with the state of the art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.
Affiliation(s)
- Swati Rai
  Indian Institute of Information Technology Vadodara, Vadodara, India
- Jignesh S Bhatt
  Indian Institute of Information Technology Vadodara, Vadodara, India
10. Jafaritadi M, Teuho J, Lehtonen E, Klén R, Saraste A, Levin CS. Deep generative denoising networks enhance quality and accuracy of gated cardiac PET data. Ann Nucl Med 2024;38:775-788. PMID: 38842629. DOI: 10.1007/s12149-024-01945-1.
Abstract
BACKGROUND Cardiac positron emission tomography (PET) can visualize and quantify the molecular and physiological pathways of cardiac function. However, cardiac and respiratory motion can introduce blurring that reduces PET image quality and quantitative accuracy. Dual cardiac- and respiratory-gated PET reconstruction can mitigate motion artifacts but increases noise as only a subset of data are used for each time frame of the cardiac cycle. AIM The objective of this study is to create a zero-shot image denoising framework using a conditional generative adversarial networks (cGANs) for improving image quality and quantitative accuracy in non-gated and dual-gated cardiac PET images. METHODS Our study included retrospective list-mode data from 40 patients who underwent an 18F-fluorodeoxyglucose (18F-FDG) cardiac PET study. We initially trained and evaluated a 3D cGAN-known as Pix2Pix-on simulated non-gated low-count PET data paired with corresponding full-count target data, and then deployed the model on an unseen test set acquired on the same PET/CT system including both non-gated and dual-gated PET data. RESULTS Quantitative analysis demonstrated that the 3D Pix2Pix network architecture achieved significantly (p value<0.05) enhanced image quality and accuracy in both non-gated and gated cardiac PET images. At 5%, 10%, and 15% preserved count statistics, the model increased peak signal-to-noise ratio (PSNR) by 33.7%, 21.2%, and 15.5%, structural similarity index (SSIM) by 7.1%, 3.3%, and 2.2%, and reduced mean absolute error (MAE) by 61.4%, 54.3%, and 49.7%, respectively. When tested on dual-gated PET data, the model consistently reduced noise, irrespective of cardiac/respiratory motion phases, while maintaining image resolution and accuracy. Significant improvements were observed across all gates, including a 34.7% increase in PSNR, a 7.8% improvement in SSIM, and a 60.3% reduction in MAE. 
CONCLUSION The findings of this study indicate that dual-gated cardiac PET images, which often have post-reconstruction artifacts potentially affecting diagnostic performance, can be effectively improved using a generative pre-trained denoising network.
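The PSNR, SSIM, and MAE figures reported above are standard full-reference image-quality metrics computed against the full-count target. As a minimal illustration (not the authors' evaluation code; the function names and toy arrays are ours), PSNR and MAE can be computed from a reference image and a denoised estimate like this:

```python
import math

def mae(reference, estimate):
    """Mean absolute error between two equally sized images (flat lists)."""
    return sum(abs(r - e) for r, e in zip(reference, estimate)) / len(reference)

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given intensity dynamic range."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

full_count = [0.0, 1.0, 0.5, 0.25]   # toy "full-count" reference voxels
denoised   = [0.0, 0.9, 0.5, 0.25]   # toy network output
print(round(mae(full_count, denoised), 4))   # 0.025
print(round(psnr(full_count, denoised), 2))  # 26.02
```

SSIM additionally compares local luminance, contrast, and structure statistics, so it needs windowed means and variances rather than a single per-voxel error term.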
Affiliation(s)
- Jarmo Teuho
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
- Eero Lehtonen
- Turku PET Center, University of Turku, Turku, Finland
- Riku Klén
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
- Antti Saraste
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
- Heart Center, Turku University Hospital, Turku, Finland
- Craig S Levin
- Department of Radiology, Stanford University, Stanford, CA, USA.
- Department of Physics, Stanford University, Stanford, CA, USA.
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
- Department of Bioengineering, Stanford University, Stanford, CA, USA.
11
Seyyedi N, Ghafari A, Seyyedi N, Sheikhzadeh P. Deep learning-based techniques for estimating high-quality full-dose positron emission tomography images from low-dose scans: a systematic review. BMC Med Imaging 2024; 24:238. [PMID: 39261796] [PMCID: PMC11391655] [DOI: 10.1186/s12880-024-01417-y]
Abstract
This systematic review aimed to evaluate the potential of deep learning algorithms for converting low-dose positron emission tomography (PET) images to full-dose PET images in different body regions. A total of 55 articles published between 2017 and 2023, identified by searching the PubMed, Web of Science, Scopus, and IEEE databases, were included in this review. The included studies utilized various deep learning models, such as generative adversarial networks and U-Net, to synthesize high-quality PET images. The studies involved different datasets, image preprocessing techniques, input data types, and loss functions. The evaluation of the generated PET images was conducted using both quantitative and qualitative methods, including physician evaluations and various denoising techniques. The findings of this review suggest that deep learning algorithms have promising potential for generating high-quality PET images from low-dose PET images, which can be useful in clinical practice.
Affiliation(s)
- Negisa Seyyedi
- Nursing and Midwifery Care Research Center, Health Management Research Institute, Iran University of Medical Sciences, Tehran, Iran
- Ali Ghafari
- Research Center for Evidence-Based Medicine, Iranian EBM Centre: A JBI Centre of Excellence, Tabriz University of Medical Sciences, Tabriz, Iran
- Navisa Seyyedi
- Department of Health Information Management and Medical Informatics, School of Allied Medical Science, Tehran University of Medical Sciences, Tehran, Iran
- Peyman Sheikhzadeh
- Medical Physics and Biomedical Engineering Department, Medical Faculty, Tehran University of Medical Sciences, Tehran, Iran.
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran.
12
Stefano A. Challenges and limitations in applying radiomics to PET imaging: Possible opportunities and avenues for research. Comput Biol Med 2024; 179:108827. [PMID: 38964244] [DOI: 10.1016/j.compbiomed.2024.108827]
Abstract
Radiomics, the high-throughput extraction of quantitative imaging features from medical images, holds immense potential for advancing precision medicine in oncology and beyond. While radiomics applied to positron emission tomography (PET) imaging offers unique insights into tumor biology and treatment response, it is imperative to elucidate the challenges and constraints inherent in this domain to facilitate their translation into clinical practice. This review examines the challenges and limitations of applying radiomics to PET imaging, synthesizing findings from the last five years (2019-2023) and highlights the significance of addressing these challenges to realize the full clinical potential of radiomics in oncology and molecular imaging. A comprehensive search was conducted across multiple electronic databases, including PubMed, Scopus, and Web of Science, using keywords relevant to radiomics issues in PET imaging. Only studies published in peer-reviewed journals were eligible for inclusion in this review. Although many studies have highlighted the potential of radiomics in predicting treatment response, assessing tumor heterogeneity, enabling risk stratification, and personalized therapy selection, various challenges regarding the practical implementation of the proposed models still need to be addressed. This review illustrates the challenges and limitations of radiomics in PET imaging across various cancer types, encompassing both phantom and clinical investigations. The analyzed studies highlight the importance of reproducible segmentation methods, standardized pre-processing and post-processing methodologies, and the need to create large multicenter studies registered in a centralized database to promote the continuous validation and clinical integration of radiomics into PET imaging.
Affiliation(s)
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy.
13
Li K, Li H, Anastasio MA. Investigating the use of signal detection information in supervised learning-based image denoising with consideration of task-shift. J Med Imaging (Bellingham) 2024; 11:055501. [PMID: 39247217] [PMCID: PMC11376226] [DOI: 10.1117/1.jmi.11.5.055501]
Abstract
Purpose Recently, learning-based denoising methods that incorporate task-relevant information into the training procedure have been developed to enhance the utility of the denoised images. However, this line of research is relatively new and underdeveloped, and some fundamental issues remain unexplored. Our purpose is to yield insights into general issues related to these task-informed methods. This includes understanding the impact of denoising on objective measures of image quality (IQ) when the specified task at inference time is different from that employed for model training, a phenomenon we refer to as "task-shift." Approach A virtual imaging test bed comprising a stylized computational model of a chest X-ray computed tomography imaging system was employed to enable a controlled and tractable study design. A canonical, fully supervised, convolutional neural network-based denoising method was purposely adopted to understand the underlying issues that may be relevant to a variety of applications and more advanced denoising or image reconstruction methods. Signal detection and signal detection-localization tasks under signal-known-statistically with background-known-statistically conditions were considered, and several distinct types of numerical observers were employed to compute estimates of the task performance. Studies were designed to reveal how a task-informed transfer-learning approach can influence the tradeoff between conventional and task-based measures of image quality within the context of the considered tasks. In addition, the impact of task-shift on these image quality measures was assessed. Results The results indicated that certain tradeoffs can be achieved such that the resulting AUC value was significantly improved and the degradation of physical IQ measures was statistically insignificant. It was also observed that introducing task-shift degrades the task performance as expected. 
The degradation was significant when a relatively simple task was considered for network training and observer performance on a more complex one was assessed at inference time. Conclusions The presented results indicate that the task-informed training method can improve the observer performance while providing control over the tradeoff between traditional and task-based measures of image quality. The behavior of a task-informed model fine-tuning procedure was demonstrated, and the impact of task-shift on task-based image quality measures was investigated.
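The task-based figure of merit above is the AUC achieved by numerical observers on the detection task. A common nonparametric way to estimate AUC from observer test statistics is the Mann-Whitney rank statistic; the sketch below (our illustration, not the paper's observer code) counts how often a signal-present score outranks a signal-absent score:

```python
def auc(signal_scores, noise_scores):
    """Empirical AUC: probability that a signal-present score outranks a
    signal-absent score, counting ties as half (Mann-Whitney statistic)."""
    pairs = 0.0
    for s in signal_scores:
        for n in noise_scores:
            if s > n:
                pairs += 1.0
            elif s == n:
                pairs += 0.5
    return pairs / (len(signal_scores) * len(noise_scores))

# toy observer outputs for signal-present and signal-absent images
print(auc([0.9, 0.8, 0.7], [0.4, 0.3, 0.2]))  # 1.0 (perfect separation)
print(auc([0.6, 0.4], [0.6, 0.4]))            # 0.5 (chance performance)
```

An AUC of 0.5 corresponds to a chance-level observer, so "significantly improved AUC" means the denoised images moved the observer's scores toward better separation of the two hypotheses.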
Affiliation(s)
- Kaiyan Li
- University of Illinois Urbana-Champaign, Department of Bioengineering, Urbana, Illinois, United States
- Hua Li
- University of Illinois Urbana-Champaign, Department of Bioengineering, Urbana, Illinois, United States
- Washington University School of Medicine in St. Louis, Department of Radiation Oncology, Saint Louis, Missouri, United States
- Mark A. Anastasio
- University of Illinois Urbana-Champaign, Department of Bioengineering, Urbana, Illinois, United States
14
Sun H, Huang Y, Hu D, Hong X, Salimi Y, Lv W, Chen H, Zaidi H, Wu H, Lu L. Artificial intelligence-based joint attenuation and scatter correction strategies for multi-tracer total-body PET. EJNMMI Phys 2024; 11:66. [PMID: 39028439] [PMCID: PMC11264498] [DOI: 10.1186/s40658-024-00666-8]
Abstract
BACKGROUND Low-dose ungated CT is commonly used for total-body PET attenuation and scatter correction (ASC). However, CT-based ASC (CT-ASC) is limited by radiation dose risks of CT examinations, propagation of CT-based artifacts and potential mismatches between PET and CT. We demonstrate the feasibility of direct ASC for multi-tracer total-body PET in the image domain. METHODS Clinical uEXPLORER total-body PET/CT datasets of [18F]FDG (N = 52), [18F]FAPI (N = 46) and [68Ga]FAPI (N = 60) were retrospectively enrolled in this study. We developed an improved 3D conditional generative adversarial network (cGAN) to directly estimate attenuation and scatter-corrected PET images from non-attenuation and scatter-corrected (NASC) PET images. The feasibility of the proposed 3D cGAN-based ASC was validated using four training strategies: (1) Paired 3D NASC and CT-ASC PET images from three tracers were pooled into one centralized server (CZ-ASC). (2) Paired 3D NASC and CT-ASC PET images from each tracer were individually used (DL-ASC). (3) Paired NASC and CT-ASC PET images from one tracer ([18F]FDG) were used to train the networks, while the other two tracers were used for testing without fine-tuning (NFT-ASC). (4) The pre-trained networks of (3) were fine-tuned with two other tracers individually (FT-ASC). We trained all networks in fivefold cross-validation. The performance of all ASC methods was evaluated by qualitative and quantitative metrics using CT-ASC as the reference. RESULTS CZ-ASC, DL-ASC and FT-ASC showed comparable visual quality with CT-ASC for all tracers. CZ-ASC and DL-ASC resulted in a normalized mean absolute error (NMAE) of 8.51 ± 7.32% versus 7.36 ± 6.77% (p < 0.05), outperforming NASC (p < 0.0001) in [18F]FDG dataset. CZ-ASC, FT-ASC and DL-ASC led to NMAE of 6.44 ± 7.02%, 6.55 ± 5.89%, and 7.25 ± 6.33% in [18F]FAPI dataset, and NMAE of 5.53 ± 3.99%, 5.60 ± 4.02%, and 5.68 ± 4.12% in [68Ga]FAPI dataset, respectively. 
CZ-ASC, FT-ASC and DL-ASC were superior to NASC (p < 0.0001) and NFT-ASC (p < 0.0001) in terms of NMAE results. CONCLUSIONS CZ-ASC, DL-ASC and FT-ASC demonstrated the feasibility of providing accurate and robust ASC for multi-tracer total-body PET, thereby reducing the radiation hazards to patients from redundant CT examinations. CZ-ASC and FT-ASC could outperform DL-ASC for cross-tracer total-body PET AC.
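The NMAE values above quantify the voxel-wise error of each network's output against the CT-ASC reference. The abstract does not state the exact normalization used, so the sketch below uses one common convention, normalizing MAE by the reference dynamic range; the function name and toy voxel arrays are ours:

```python
def nmae_percent(reference, estimate):
    """Mean absolute error normalized by the reference dynamic range, in
    percent (one common NMAE convention; the paper's exact normalization
    is not stated in the abstract)."""
    n = len(reference)
    mae = sum(abs(r - e) for r, e in zip(reference, estimate)) / n
    rng = max(reference) - min(reference)
    return 100.0 * mae / rng

ct_asc = [0.0, 2.0, 4.0, 6.0]  # toy CT-ASC reference voxels
dl_asc = [0.5, 2.0, 3.5, 6.0]  # toy network ASC output
print(round(nmae_percent(ct_asc, dl_asc), 2))  # 4.17
```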
Affiliation(s)
- Hao Sun
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Yanchao Huang
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Debin Hu
- Department of Medical Engineering, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Xiaotong Hong
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Wenbing Lv
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, 650091, China
- Hongwen Chen
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Hubing Wu
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China.
- Lijun Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China.
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China.
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China.
- Pazhou Lab, Guangzhou, 510330, China.
15
Maus J, Nikulin P, Hofheinz F, Petr J, Braune A, Kotzerke J, van den Hoff J. Deep learning based bilateral filtering for edge-preserving denoising of respiratory-gated PET. EJNMMI Phys 2024; 11:58. [PMID: 38977533] [PMCID: PMC11231129] [DOI: 10.1186/s40658-024-00661-z]
Abstract
BACKGROUND Residual image noise is substantial in positron emission tomography (PET) and is one of the factors limiting lesion detection, quantification, and overall image quality. Thus, improving noise reduction remains of considerable interest. This is especially true for respiratory-gated PET investigations. The only broadly used approach for noise reduction in PET imaging has been the application of low-pass filters, usually Gaussians, which however leads to loss of spatial resolution and increased partial volume effects, affecting detectability of small lesions and quantitative data evaluation. The bilateral filter (BF), a locally adaptive image filter, allows image noise to be reduced while preserving well-defined object edges, but manual optimization of the filter parameters for a given PET scan can be tedious and time-consuming, hampering its clinical use. In this work we have investigated to what extent a suitable deep learning based approach can resolve this issue by training a suitable network with the target of reproducing the results of manually adjusted, case-specific bilateral filtering. METHODS Altogether, 69 respiratory-gated clinical PET/CT scans with three different tracers ([18F]FDG, [18F]L-DOPA, [68Ga]DOTATATE) were used for the present investigation. Prior to data processing, the gated data sets were split, resulting in a total of 552 single-gate image volumes. For each of these image volumes, four 3D ROIs were delineated: one ROI for image noise assessment and three ROIs for focal uptake (e.g. tumor lesions) measurements at different target/background contrast levels. An automated procedure was used to perform a brute force search of the two-dimensional BF parameter space for each data set to identify the "optimal" filter parameters to generate user-approved ground truth input data consisting of pairs of original and optimally BF-filtered images.
For reproducing the optimal BF filtering, we employed a modified 3D U-Net CNN incorporating the residual learning principle. The network training and evaluation was performed using a 5-fold cross-validation scheme. The influence of filtering on lesion SUV quantification and image noise level was assessed by calculating absolute and fractional differences between the CNN, manual BF, or original (STD) data sets in the previously defined ROIs. RESULTS The automated procedure used for filter parameter determination chose adequate filter parameters for the majority of the data sets, with only 19 patient data sets requiring manual tuning. Evaluation of the focal uptake ROIs revealed that CNN- as well as BF-based filtering essentially maintains the focal SUVmax values of the unfiltered images, with a low mean ± SD difference of δSUVmax(CNN, STD) = (-3.9 ± 5.2)% and δSUVmax(BF, STD) = (-4.4 ± 5.3)%. Regarding the relative performance of CNN versus BF, both methods lead to very similar SUVmax values in the vast majority of cases, with an overall average difference of δSUVmax(CNN, BF) = (0.5 ± 4.8)%. Evaluation of the noise properties showed that CNN filtering mostly satisfactorily reproduces the noise level and characteristics of BF, with δNoise(CNN, BF) = (5.6 ± 10.5)%. No significant tracer-dependent differences between CNN and BF were observed. CONCLUSIONS Our results show that a neural network based denoising can reproduce the results of a case-by-case optimized BF in a fully automated way. Apart from rare cases, it led to images of practically identical quality regarding noise level, edge preservation, and signal recovery. We believe such a network might prove especially useful in the context of improved motion correction of respiratory-gated PET studies, but it could also help to establish BF-equivalent edge-preserving CNN filtering in clinical PET since it obviates time-consuming manual BF parameter tuning.
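The bilateral filter at the heart of this study weights each neighbour by both spatial distance and intensity difference, which is what makes it edge-preserving. A minimal 1D sketch (our illustration, not the authors' 3D implementation; parameter names are ours) shows the two-parameter space the brute-force search explores, namely the spatial width sigma_s and the range width sigma_r:

```python
import math

def bilateral_1d(signal, sigma_s=1.0, sigma_r=1.0, radius=3):
    """Edge-preserving bilateral filter on a 1D signal: each output sample is
    a weighted mean of its neighbours, with weights falling off both with
    spatial distance (sigma_s) and with intensity difference (sigma_r)."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((signal[j] - center) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

step = [0.0, 0.0, 0.0, 0.0, 10.0, 10.0, 10.0, 10.0]
smoothed = bilateral_1d(step)
# the step edge survives: cross-edge range weights are ~exp(-50), negligible
print(smoothed[3] < 0.01, smoothed[4] > 9.99)  # True True
```

With a large sigma_r the range term becomes irrelevant and the filter degenerates to a plain Gaussian, which is exactly the resolution-destroying behavior the case-specific parameter tuning avoids.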
Affiliation(s)
- Jens Maus
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany.
- Pavel Nikulin
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Frank Hofheinz
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Jan Petr
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Anja Braune
- Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Carl Gustav Carus, Fetscherstraße 74, 01307, Dresden, Germany
- Jörg Kotzerke
- Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Carl Gustav Carus, Fetscherstraße 74, 01307, Dresden, Germany
- Jörg van den Hoff
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Carl Gustav Carus, Fetscherstraße 74, 01307, Dresden, Germany
16
Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Trans Image Process 2024; 33:4075-4089. [PMID: 38941203] [DOI: 10.1109/tip.2024.3418347]
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks offer a way to exploit the power of deep learning in this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be efficiently solved by utilizing the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes: a PET activity image update; a gCT image update; and least-square neural-network learning in the gCT image domain. This algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data and real patient data have demonstrated that the proposed method can significantly improve gCT image quality and consequent multi-material decomposition as compared to other methods.
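The monotonicity guarantee mentioned above follows from the general optimization transfer (majorize-minimize) argument; the sketch below uses generic notation for a log-likelihood L and surrogate Q, not the paper's exact formulation:

```latex
% Surrogate Q minorizes the data log-likelihood L at the current iterate:
Q(\theta \mid \theta^{(n)}) \le L(\theta) \quad \forall \theta,
\qquad
Q(\theta^{(n)} \mid \theta^{(n)}) = L(\theta^{(n)}).
% Maximizing the surrogate then cannot decrease the likelihood:
\theta^{(n+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(n)})
\;\Rightarrow\;
L(\theta^{(n+1)}) \ge Q(\theta^{(n+1)} \mid \theta^{(n)})
\ge Q(\theta^{(n)} \mid \theta^{(n)}) = L(\theta^{(n)}).
```

Choosing Q quadratic, as the authors do, makes each surrogate maximization a closed-form or least-squares step, which is what reduces the network-parameter update to least-square learning in the gCT image domain.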
17
Champendal M, Ribeiro RST, Müller H, Prior JO, Sá Dos Reis C. Nuclear medicine technologists practice impacted by AI denoising applications in PET/CT images. Radiography (Lond) 2024; 30:1232-1239. [PMID: 38917681] [DOI: 10.1016/j.radi.2024.06.010]
Abstract
PURPOSE Artificial intelligence (AI) in positron emission tomography/computed tomography (PET/CT) can be used to improve image quality when it is useful to reduce the injected activity or the acquisition time. Particular attention must be paid to ensure that users adopt this technological innovation when outcomes can be improved by its use. The aim of this study was to identify the aspects that need to be analysed and discussed to implement an AI denoising PET/CT algorithm in clinical practice, based on the representations of Nuclear Medicine Technologists (NMT) from Western Switzerland, highlighting the associated barriers and facilitators. METHODS Two focus groups were organised in June and September 2023, involving ten voluntary participants recruited from all types of medical imaging departments, forming a diverse sample of NMT. The interview guide followed the first stage of the revised Ottawa Model of Research Use. A content analysis was performed following the three-stage approach described by Wanlin. The study received ethics clearance. RESULTS Clinical practice, workload, knowledge, and resources were the four themes identified by the ten NMT participants (aged 31-60), who were not familiar with this AI tool, as needing consideration before implementing an AI denoising PET/CT algorithm. The main barriers to implementing this algorithm included workflow challenges, resistance from professionals, and lack of education, while the main facilitators were explanations and the availability of support for questions, such as a "local champion". CONCLUSION To implement a denoising algorithm in PET/CT, several aspects of clinical practice need to be considered to reduce the barriers to its implementation, such as the procedures, the workload, and the available resources. Participants also emphasised the importance of clear explanations, education, and support for successful implementation.
IMPLICATIONS FOR PRACTICE To facilitate the implementation of AI tools in clinical practice, it is important to identify the barriers and propose strategies that can mitigate them.
Affiliation(s)
- M Champendal
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland: Lausanne, CH, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland.
- R S T Ribeiro
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland: Lausanne, CH, Switzerland.
- H Müller
- Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais) Sierre, CH, Switzerland; Medical Faculty, University of Geneva, CH, Switzerland.
- J O Prior
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV): Lausanne, CH, Switzerland.
- C Sá Dos Reis
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland: Lausanne, CH, Switzerland.
18
Jang SI, Pan T, Li Y, Heidari P, Chen J, Li Q, Gong K. Spach Transformer: Spatial and Channel-Wise Transformer Based on Local and Global Self-Attentions for PET Image Denoising. IEEE Trans Med Imaging 2024; 43:2036-2049. [PMID: 37995174] [PMCID: PMC11111593] [DOI: 10.1109/tmi.2023.3336237]
Abstract
Positron emission tomography (PET) is widely used in clinics and research due to its quantitative merits and high sensitivity, but suffers from a low signal-to-noise ratio (SNR). Recently, convolutional neural networks (CNNs) have been widely used to improve PET image quality. Though successful and efficient in local feature extraction, CNNs cannot capture long-range dependencies well due to their limited receptive field. Global multi-head self-attention (MSA) is a popular approach to capture long-range information. However, the calculation of global MSA for 3D images has high computational costs. In this work, we proposed an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs. Experiments based on datasets of different PET tracers, i.e., 18F-FDG, 18F-ACBC, 18F-DCFPyL, and 68Ga-DOTATATE, were conducted to evaluate the proposed framework. Quantitative results show that the proposed Spach Transformer framework outperforms state-of-the-art deep learning architectures.
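The local-versus-global MSA trade-off discussed above comes down to where attention is computed: global attention compares every token with every other token, O(N^2) in the token count, while windowed (local) attention restricts comparisons to small neighbourhoods. A pure-Python single-head sketch (our simplification, not the Spach Transformer architecture) makes the distinction concrete:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(q, k, v):
    """Scaled dot-product self-attention for one head. Every query attends
    to every key, so the cost grows as O(N^2) in the token count N - the
    reason global MSA is expensive for 3D image volumes."""
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        weights = softmax(scores)
        out.append([sum(w * vj[c] for w, vj in zip(weights, v))
                    for c in range(len(v[0]))])
    return out

def windowed_attention(q, k, v, window=2):
    """Local MSA: attend only within non-overlapping windows, O(N * window)."""
    out = []
    for start in range(0, len(q), window):
        sl = slice(start, start + window)
        out.extend(self_attention(q[sl], k[sl], v[sl]))
    return out

# identical keys -> uniform weights -> each output is the mean of the values
q = [[1.0, 0.0]]
k = [[1.0, 1.0], [1.0, 1.0]]
v = [[2.0], [4.0]]
print(self_attention(q, k, v))  # [[3.0]]
```

Channel-wise attention, the other ingredient named in the title, applies the same machinery across feature channels instead of spatial positions, which keeps the quadratic term in the (much smaller) channel count.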
19
Dutta K, Laforest R, Luo J, Jha AK, Shoghi KI. Deep learning generation of preclinical positron emission tomography (PET) images from low-count PET with task-based performance assessment. Med Phys 2024; 51:4324-4339. [PMID: 38710222] [PMCID: PMC11423763] [DOI: 10.1002/mp.17105]
Abstract
BACKGROUND Preclinical low-count positron emission tomography (LC-PET) imaging offers numerous advantages such as facilitating imaging logistics, enabling longitudinal studies of long- and short-lived isotopes, as well as increasing scanner throughput. However, LC-PET is characterized by reduced photon-count levels resulting in low signal-to-noise ratio (SNR), segmentation difficulties, and quantification uncertainties. PURPOSE We developed and evaluated a novel deep-learning (DL) architecture, Attention-based Residual-Dilated Net (ARD-Net), to generate standard-count PET (SC-PET) images from LC-PET images. The performance of the ARD-Net framework was evaluated for numerous low-count realizations using fidelity-based qualitative metrics, task-based segmentation, and quantitative metrics. METHODS Patient-derived tumor xenograft (PDX) models with tumors implanted in the mammary fat pad were subjected to preclinical [18F]-fluorodeoxyglucose (FDG)-PET/CT imaging. SC-PET images were derived from a 10 min static FDG-PET acquisition, 50 min post administration of FDG, and were resampled to generate four distinct LC-PET realizations corresponding to 10%, 5%, 1.6%, and 0.8% of the SC-PET count level. ARD-Net was trained and optimized using 48 preclinical FDG-PET datasets, while 16 datasets were utilized to assess performance. Further, the performance of ARD-Net was benchmarked against two leading DL-based methods (Residual UNet, RU-Net; and Dilated Network, D-Net) and non-DL methods (Non-Local Means, NLM; and Block Matching 3D Filtering, BM3D). The performance of the framework was evaluated using traditional fidelity-based image quality metrics such as Structural Similarity Index Metric (SSIM) and Normalized Root Mean Square Error (NRMSE), as well as human observer-based tumor segmentation performance (Dice score and volume bias) and quantitative analysis of Standardized Uptake Value (SUV) measurements.
Additionally, radiomics-derived features were utilized as a measure of quality assurance (QA) in comparison to true SC-PET. Finally, a performance ensemble score (EPS) was developed by integrating fidelity-based and task-based metrics. The Concordance Correlation Coefficient (CCC) was utilized to determine concordance between measures. The non-parametric Friedman test with Bonferroni correction was used to compare the performance of ARD-Net against benchmarked methods, with significance at adjusted p-value ≤0.01. RESULTS ARD-Net-generated SC-PET images exhibited significantly better (p ≤ 0.01 post Bonferroni correction) overall image fidelity scores in terms of SSIM and NRMSE at the majority of photon-count levels compared to benchmarked DL and non-DL methods. In terms of task-based quantitative accuracy evaluated by SUVMean and SUVPeak, ARD-Net exhibited less than 5% median absolute bias for SUVMean compared to true SC-PET and a lower degree of variability compared to benchmarked DL and non-DL based methods in generating SC-PET. Additionally, ARD-Net-generated SC-PET images displayed a higher degree of concordance to SC-PET images in terms of radiomics features compared to non-DL and other DL approaches. Finally, the ensemble score suggested that ARD-Net exhibited significantly superior performance compared to benchmarked algorithms (p ≤ 0.01 post Bonferroni correction). CONCLUSION ARD-Net provides a robust framework to generate SC-PET from LC-PET images. ARD-Net-generated SC-PET images exhibited superior performance compared to other DL and non-DL approaches in terms of image-fidelity based metrics, task-based segmentation metrics, and minimal bias in terms of task-based quantification performance for preclinical PET imaging.
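The LC-PET realizations above were obtained by resampling SC-PET data down to 10%, 5%, 1.6%, and 0.8% of the original count level. A common way to simulate such low-count data from standard-count data (our sketch under that assumption, not the authors' resampling pipeline) is binomial thinning, where each recorded count is kept independently with probability equal to the target fraction:

```python
import random

def thin_counts(counts, fraction, seed=0):
    """Binomial thinning: keep each of the n counts in a voxel/bin
    independently with probability `fraction`, simulating a low-count
    acquisition from standard-count data. This preserves the Poisson
    character of the data (a thinned Poisson process is still Poisson)."""
    rng = random.Random(seed)
    return [sum(1 for _ in range(n) if rng.random() < fraction)
            for n in counts]

standard = [500, 800, 120, 0, 950]          # toy standard-count sinogram bins
low = thin_counts(standard, fraction=0.10)  # ~10% count-level realization
print(low, sum(low) / sum(standard))        # retained fraction close to 0.10
```

Different seeds give different realizations at the same count level, which is how multiple noise realizations per fraction can be generated for evaluation.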
Affiliation(s)
- Kaushik Dutta
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Jingqin Luo
- Department of Surgery, Public Health Sciences, Washington University in St Louis, St Louis, Missouri, USA
- Abhinav K Jha
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Kooresh I Shoghi
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
20
Nazir N, Sarwar A, Saini BS. Recent developments in denoising medical images using deep learning: An overview of models, techniques, and challenges. Micron 2024; 180:103615. [PMID: 38471391] [DOI: 10.1016/j.micron.2024.103615]
Abstract
Medical imaging plays a critical role in diagnosing and treating various medical conditions. However, interpreting medical images can be challenging even for expert clinicians, as they are often degraded by noise and artifacts that can hinder the accurate identification and analysis of diseases, leading to severe consequences such as patient misdiagnosis or mortality. Various types of noise, including Gaussian, Rician, and salt-and-pepper noise, can corrupt the area of interest, limiting the precision and accuracy of algorithms. Denoising algorithms have shown potential in improving the quality of medical images by removing noise and other artifacts that obscure essential information. Deep learning has emerged as a powerful tool for image analysis and has demonstrated promising results in denoising different medical images such as MRIs, CT scans, and PET scans. This review paper provides a comprehensive overview of state-of-the-art deep learning algorithms used for denoising medical images. A total of 120 relevant papers were reviewed, and after screening with specific inclusion and exclusion criteria, 104 papers were selected for analysis. This study aims to provide a thorough understanding for researchers in the field of intelligent denoising by presenting an extensive survey of current techniques and highlighting significant challenges that remain to be addressed. The findings of this review are expected to contribute to the development of intelligent models that enable timely and accurate diagnoses of medical disorders. It was found that 40% of the researchers used models based on deep convolutional neural networks to denoise the images, followed by encoder-decoder architectures (18%) and other artificial-intelligence-based techniques (15%; e.g., deep image prior). Generative adversarial networks were used by 12%, transformer-based approaches by 13%, and multilayer perceptrons by 2% of the researchers.
Moreover, Gaussian noise was present in 35% of the images, followed by speckle noise (16%), Poisson noise (14%), artifacts (10%), Rician noise (7%), salt-and-pepper noise (6%), impulse noise (3%), and other types of noise (9%). While the progress in developing novel models for the denoising of medical images is evident, significant work remains to be done in creating standardized denoising models that perform well across a wide spectrum of medical images. Overall, this review highlights the importance of denoising medical images and provides a comprehensive understanding of the current state-of-the-art deep learning algorithms in this field.
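The review's point that noise type matters for denoiser choice can be seen in a toy example: a 3-tap median filter removes isolated salt-and-pepper impulses that a linear (mean) filter would only smear. This is a generic illustration, not code from the review; the tiny 1D "image" is invented.

```python
def median3(signal):
    """3-tap median filter; endpoints are left unchanged."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = sorted(signal[i - 1:i + 2])[1]
    return out

noisy = [10, 10, 255, 10, 10, 0, 10]   # salt (255) and pepper (0) impulses
print(median3(noisy))                  # [10, 10, 10, 10, 10, 10, 10]
```

Gaussian or Rician noise, by contrast, perturbs every pixel a little, which is why learned denoisers tuned to one corruption model often underperform on another.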
21
Liu X, Vafay Eslahi S, Marin T, Tiss A, Chemli Y, Huang Y, Johnson KA, El Fakhri G, Ouyang J. Cross noise level PET denoising with continuous adversarial domain generalization. Phys Med Biol 2024; 69:10.1088/1361-6560/ad341a. [PMID: 38484401] [PMCID: PMC11195012] [DOI: 10.1088/1361-6560/ad341a]
Abstract
Objective. Performing positron emission tomography (PET) denoising within the image space proves effective in reducing the variance in PET images. In recent years, deep learning has demonstrated superior denoising performance, but models trained on a specific noise level typically fail to generalize well to different noise levels, due to inherent distribution shifts between inputs. The distribution shift usually results in bias in the denoised images. Our goal is to tackle this problem using a domain generalization technique. Approach. We propose to utilize a domain generalization technique with a novel feature-space continuous discriminator (CD) for adversarial training, using the fraction of events as a continuous domain label. The core idea is to enforce the extraction of noise-level-invariant features, thus minimizing the distribution divergence of the latent feature representation across continuous noise levels and making the model generalizable to arbitrary noise levels. We created three sets of 10%, 13%-22% (uniformly randomly selected), or 25% fractions of events from 97 [18F]MK-6240 tau PET studies of 60 subjects. For each set, we generated 20 noise realizations. Training, validation, and testing were implemented using 1400, 120, and 420 pairs of 3D image volumes from the same or different sets. We used a 3D UNet as the baseline and applied the CD to the continuous-noise-level training data of the 13%-22% set. Main results. The proposed CD improves the denoising performance of our model trained on the 13%-22% fraction set when testing in both the 10% and 25% fraction sets, measured by bias and standard deviation using full-count images as references. In addition, our CD method can improve the SSIM and PSNR consistently for Alzheimer-related regions and the whole brain. Significance. To our knowledge, this is the first attempt to alleviate the performance degradation in cross-noise-level denoising from the perspective of domain generalization.
Our study is also pioneering work on continuous domain generalization utilizing continuously changing source domains.
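The "fraction of events" realizations described above are commonly simulated by binomial thinning of the recorded counts: each event is kept independently with probability p, preserving Poisson statistics at the lower count level. The sketch below illustrates that idea with invented values; it is not the authors' resampling code.

```python
import random

def thin_counts(counts, p, seed=0):
    """Keep each recorded event independently with probability p."""
    rng = random.Random(seed)
    return [sum(1 for _ in range(n) if rng.random() < p) for n in counts]

sc_voxels = [100, 400, 50]            # counts per voxel, full-count image (toy)
print(thin_counts(sc_voxels, 0.10))   # roughly 10% of the counts per voxel
```

Thinning to p = 0.10 or p = 0.25 gives the 10% and 25% domains; a value of p drawn uniformly from [0.13, 0.22] per realization gives a continuous domain label of the kind the discriminator trains on.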
Affiliation(s)
- Xiaofeng Liu
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, United States of America
- Samira Vafay Eslahi
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Thibault Marin
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, United States of America
- Amal Tiss
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Yanis Chemli
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, United States of America
- Yongsong Huang
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Keith A Johnson
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Georges El Fakhri
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, United States of America
- Jinsong Ouyang
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Department of Radiology, Harvard Medical School, Boston, MA 02115, United States of America
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, United States of America
22
Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. IEEE Trans Radiat Plasma Med Sci 2024; 8:333-347. [PMID: 39429805] [PMCID: PMC11486494] [DOI: 10.1109/trpms.2023.3349194]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
23
Fu M, Zhang N, Huang Z, Zhou C, Zhang X, Yuan J, He Q, Yang Y, Zheng H, Liang D, Wu FX, Fan W, Hu Z. OIF-Net: An Optical Flow Registration-Based PET/MR Cross-Modal Interactive Fusion Network for Low-Count Brain PET Image Denoising. IEEE Trans Med Imaging 2024; 43:1554-1567. [PMID: 38096101] [DOI: 10.1109/tmi.2023.3342809]
Abstract
The short frames of low-count positron emission tomography (PET) images generally cause high levels of statistical noise. Thus, improving the quality of low-count images by using image postprocessing algorithms to achieve better clinical diagnoses has attracted widespread attention in the medical imaging community. Most existing deep learning-based low-count PET image enhancement methods have achieved satisfactory results; however, few of them focus on denoising low-count PET images with the magnetic resonance (MR) image modality as guidance. The prior context features contained in MR images can provide abundant and complementary information for single low-count PET image denoising, especially in ultralow-count (2.5%) cases. To this end, we propose a novel two-stream dual PET/MR cross-modal interactive fusion network with an optical flow pre-alignment module, namely, OIF-Net. Specifically, the learnable optical flow registration module enables the spatial manipulation of MR imaging inputs within the network without any extra training supervision. Registered MR images fundamentally solve the problem of feature misalignment in the multimodal fusion stage, which greatly benefits the subsequent denoising process. In addition, we design a spatial-channel feature enhancement module (SC-FEM) that considers the interactive impacts of multiple modalities and provides additional information flexibility in both the spatial and channel dimensions. Furthermore, instead of simply concatenating two extracted features from these two modalities as an intermediate fusion method, the proposed cross-modal feature fusion module (CM-FFM) adopts cross-attention at multiple feature levels and greatly improves the two modalities' feature fusion procedure. Extensive experimental assessments conducted on real clinical datasets, as well as an independent clinical testing dataset, demonstrate that the proposed OIF-Net outperforms the state-of-the-art methods.
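The cross-attention fusion this abstract describes can be illustrated with a toy single-head scaled dot-product attention step, where a PET feature vector queries MR-derived keys and values to draw in complementary anatomical context. This is a generic sketch of the mechanism under invented dimensions and values, not the OIF-Net implementation.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product attention: queries attend over keys/values."""
    d = len(queries[0])
    fused = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        fused.append([sum(wi * v[j] for wi, v in zip(w, values))
                      for j in range(len(values[0]))])
    return fused

pet_query = [[1.0, 0.0]]               # feature vector from the PET stream (toy)
mr_keys = [[1.0, 0.0], [0.0, 1.0]]     # keys from the MR stream
mr_vals = [[5.0, 5.0], [-5.0, -5.0]]   # values carrying anatomical context
print(cross_attention(pet_query, mr_keys, mr_vals))
```

Because the query aligns with the first key, the fused output is pulled toward the first value vector; applying this at multiple feature levels is the idea behind the CM-FFM, and the optical-flow pre-alignment exists precisely so that these query-key correspondences are spatially meaningful.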
24
Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan.
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan.
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan.
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
25
Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. arXiv 2024; arXiv:2401.00232v2. [PMID: 38313194] [PMCID: PMC10836084]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
26
Wang Y, Luo Y, Zu C, Zhan B, Jiao Z, Wu X, Zhou J, Shen D, Zhou L. 3D multi-modality Transformer-GAN for high-quality PET reconstruction. Med Image Anal 2024; 91:102983. [PMID: 37926035] [DOI: 10.1016/j.media.2023.102983]
Abstract
Positron emission tomography (PET) scans can reveal abnormal metabolic activities of cells and provide favorable information for clinical patient diagnosis. Generally, standard-dose PET (SPET) images contain more diagnostic information than low-dose PET (LPET) images but higher-dose scans can also bring higher potential radiation risks. To reduce the radiation risk while acquiring high-quality PET images, in this paper, we propose a 3D multi-modality edge-aware Transformer-GAN for high-quality SPET reconstruction using the corresponding LPET images and T1 acquisitions from magnetic resonance imaging (T1-MRI). Specifically, to fully excavate the metabolic distributions in LPET and anatomical structural information in T1-MRI, we first use two separate CNN-based encoders to extract local spatial features from the two modalities, respectively, and design a multimodal feature integration module to effectively integrate the two kinds of features given the diverse contributions of features at different locations. Then, as CNNs can describe local spatial information well but have difficulty in modeling long-range dependencies in images, we further apply a Transformer-based encoder to extract global semantic information in the input images and use a CNN decoder to transform the encoded features into SPET images. Finally, a patch-based discriminator is applied to ensure the similarity of patch-wise data distribution between the reconstructed and real images. Considering the importance of edge information in anatomical structures for clinical disease diagnosis, besides voxel-level estimation error and adversarial loss, we also introduce an edge-aware loss to retain more edge detail information in the reconstructed SPET images. Experiments on the phantom dataset and clinical dataset validate that our proposed method can effectively reconstruct high-quality SPET images and outperform current state-of-the-art methods in terms of qualitative and quantitative metrics.
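The edge-aware loss this abstract motivates can be sketched as a voxel-wise L1 term plus a penalty on the difference of spatial gradients, so that a reconstruction that smears a sharp anatomical boundary is penalized even when its voxel-wise error is small. The 1D setting, finite-difference edge operator, and weighting below are illustrative assumptions, not the paper's exact loss.

```python
def grad(img):
    """Forward finite differences, a crude 1D edge detector."""
    return [b - a for a, b in zip(img, img[1:])]

def edge_aware_loss(pred, target, lam=1.0):
    """Voxel-wise L1 error plus lam-weighted L1 error on spatial gradients."""
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
    edge = sum(abs(gp - gt)
               for gp, gt in zip(grad(pred), grad(target))) / (len(pred) - 1)
    return l1 + lam * edge

target = [0.0, 0.0, 1.0, 1.0]       # a sharp edge
blurry = [0.0, 0.25, 0.75, 1.0]     # smeared edge, small voxel-wise error
offset = [0.25, 0.25, 1.25, 1.25]   # sharp edge, constant intensity offset
print(edge_aware_loss(blurry, target))  # edge term dominates the penalty
print(edge_aware_loss(offset, target))  # pure L1 error, zero edge penalty
```

Despite the blurry candidate having half the plain L1 error of the offset candidate, the edge term makes its total loss larger, which is the behavior an edge-aware objective is designed to produce.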
Affiliation(s)
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Yanmei Luo
- School of Computer Science, Sichuan University, Chengdu, China
- Chen Zu
- Department of Risk Controlling Research, JD.COM, China
- Bo Zhan
- School of Computer Science, Sichuan University, Chengdu, China
- Zhengyang Jiao
- School of Computer Science, Sichuan University, Chengdu, China
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, China
- Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Luping Zhou
- School of Electrical and Information Engineering, University of Sydney, Australia
27
Gong K, Johnson K, El Fakhri G, Li Q, Pan T. PET image denoising based on denoising diffusion probabilistic model. Eur J Nucl Med Mol Imaging 2024; 51:358-368. [PMID: 37787849] [PMCID: PMC10958486] [DOI: 10.1007/s00259-023-06417-8]
Abstract
PURPOSE Due to various physical degradation factors and limited counts received, PET image quality needs further improvements. The denoising diffusion probabilistic model (DDPM) is a distribution-learning-based model, which transforms a normal distribution into a specific data distribution based on iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS Under the DDPM framework, one way to perform PET image denoising is to provide the PET image and/or the prior image as the input. Another way is to supply the prior image as the network input with the PET image included in the refinement steps, which can fit scenarios of different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were utilized to evaluate the proposed DDPM-based methods. RESULTS Quantification showed that the DDPM-based frameworks with PET information included generated better results than the nonlocal mean, Unet, and generative adversarial network (GAN)-based denoising methods. Adding an additional MR prior in the model helped achieve better performance and further reduced the uncertainty during image denoising. Solely relying on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION DDPM-based PET image denoising is a flexible framework, which can efficiently utilize prior information and achieve better performance than the nonlocal mean, Unet, and GAN-based denoising methods.
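The "iterative refinements" a DDPM inverts are the forward noising process x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, with alpha_bar_t the cumulative product of (1 - beta) over the schedule. The sketch below shows only this forward process with a toy constant schedule; the schedule length and beta value are arbitrary illustrations, not the paper's settings.

```python
import math, random

def alpha_bar(betas, t):
    """Cumulative product of (1 - beta) up to and including step t."""
    prod = 1.0
    for b in betas[:t + 1]:
        prod *= 1.0 - b
    return prod

def forward_diffuse(x0, betas, t, rng):
    """x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, 1)."""
    ab = alpha_bar(betas, t)
    return [math.sqrt(ab) * x + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)
            for x in x0]

betas = [0.02] * 100                        # toy constant schedule
x0 = [1.0, -1.0, 0.5]                       # "clean" PET patch (toy values)
x_t = forward_diffuse(x0, betas, 99, random.Random(0))
print(alpha_bar(betas, 99))                 # ~0.133: signal mostly gone at t = 99
```

A denoising network trained to predict eps from x_t can then be run in reverse from pure noise; conditioning that reverse process on an MR prior, or constraining it with the measured PET image, corresponds to the variants compared in the abstract.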
Affiliation(s)
- Kuang Gong
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, 32611, FL, USA.
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.
- Keith Johnson
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, 77030, TX, USA
28
Li A, Yang B, Naganawa M, Fontaine K, Toyonaga T, Carson RE, Tang J. Dose reduction in dynamic synaptic vesicle glycoprotein 2A PET imaging using artificial neural networks. Phys Med Biol 2023; 68:245006. [PMID: 37857316] [PMCID: PMC10739622] [DOI: 10.1088/1361-6560/ad0535]
Abstract
Objective. Reducing dose in positron emission tomography (PET) imaging increases noise in reconstructed dynamic frames, which inevitably results in higher noise and possible bias in subsequently estimated images of kinetic parameters than those estimated in the standard-dose case. We report the development of a spatiotemporal denoising technique for reduced-count dynamic frames through integrating a cascade artificial neural network (ANN) with the highly constrained back-projection (HYPR) scheme to improve low-dose parametric imaging. Approach. We implemented and assessed the proposed method using imaging data acquired with 11C-UCB-J, a PET radioligand bound to synaptic vesicle glycoprotein 2A (SV2A) in the human brain. The patch-based ANN was trained with a reduced-count frame and its full-count correspondence of a subject and was used in cascade to process dynamic frames of other subjects to further take advantage of its denoising capability. The HYPR strategy was then applied to the spatially ANN-processed image frames to make use of the temporal information from the entire dynamic scan. Main results. In all the testing subjects, including healthy volunteers and Parkinson's disease patients, the proposed method reduced more noise while introducing minimal bias in dynamic frames and the resulting parametric images, as compared with conventional denoising methods. Significance. Achieving 80% noise reduction with a bias of -2% in dynamic frames, which translates into 75% and 70% noise reduction in the tracer uptake (bias, -2%) and distribution volume (bias, -5%) images, the proposed ANN+HYPR technique demonstrates denoising capability equivalent to an 11-fold dose increase for dynamic SV2A PET imaging with 11C-UCB-J.
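The HYPR step referenced above can be sketched as modulating a low-noise composite image (e.g., the sum over all dynamic frames) by the ratio of a smoothed frame to the smoothed composite, borrowing SNR from the whole scan while keeping each frame's temporal contrast. The box smoothing and 1D data below are illustrative assumptions, not the authors' implementation.

```python
def smooth3(img):
    """3-tap box smoothing; endpoints are left unchanged."""
    out = list(img)
    for i in range(1, len(img) - 1):
        out[i] = (img[i - 1] + img[i] + img[i + 1]) / 3.0
    return out

def hypr_frame(frame, composite, eps=1e-9):
    """HYPR: composite image modulated by smoothed(frame) / smoothed(composite)."""
    sf, sc = smooth3(frame), smooth3(composite)
    return [c * f / (s + eps) for c, f, s in zip(composite, sf, sc)]

composite = [10.0, 10.0, 10.0, 10.0, 10.0]  # low-noise sum over all frames (toy)
frame = [2.0, 1.0, 2.0, 3.0, 2.0]           # one noisy dynamic frame
print(hypr_frame(frame, composite))
```

In the paper's pipeline, the cascade ANN first denoises each frame spatially and HYPR then adds the temporal constraint; here the composite is flat, so the output reduces to the smoothed frame.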
Affiliation(s)
- Andi Li
- Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH, United States of America
- Bao Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, People's Republic of China
- Mika Naganawa
- Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Kathryn Fontaine
- Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga
- Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Richard E Carson
- Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Jing Tang
- Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH, United States of America
29
Pretorius PH, Liu J, Kalluri KS, Jiang Y, Leppo JA, Dahlberg ST, Kikut J, Parker MW, Keating FK, Licho R, Auer B, Lindsay C, Konik A, Yang Y, Wernick MN, King MA. Observer studies of image quality of denoising reduced-count cardiac single photon emission computed tomography myocardial perfusion imaging by three-dimensional Gaussian post-reconstruction filtering and deep learning. J Nucl Cardiol 2023; 30:2427-2437. [PMID: 37221409] [PMCID: PMC11401514] [DOI: 10.1007/s12350-023-03295-3]
Abstract
BACKGROUND The aim of this research was to assess perfusion-defect detection accuracy by human observers as a function of reduced counts for 3D Gaussian post-reconstruction filtering vs deep learning (DL) denoising, to determine if there was improved performance with DL. METHODS SPECT projection data of 156 normally interpreted patients were used for these studies. Half were altered to include hybrid perfusion defects with defect presence and location known. Ordered-subset expectation-maximization (OSEM) reconstruction was employed with the optional correction of attenuation (AC) and scatter (SC) in addition to distance-dependent resolution (RC). Count levels varied from full counts (100%) to 6.25% of full counts. The denoising strategies were previously optimized for defect detection using total perfusion deficit (TPD). Four medical physicist (PhD) and six physician (MD) observers rated the slices using a graphical user interface. Observer ratings were analyzed using the LABMRMC multi-reader, multi-case receiver-operating-characteristic (ROC) software to calculate and statistically compare the areas under the ROC curves (AUCs). RESULTS For the same count level, no statistically significant increase in AUCs for DL over Gaussian denoising was determined when counts were reduced to either 25% or 12.5% of full counts. The average AUC for full-count OSEM with solely RC and Gaussian filtering was lower than for the strategies with AC and SC, except for a reduction to 6.25% of full counts, thus verifying the utility of employing AC and SC with RC. CONCLUSION We did not find any indication that, at the dose levels investigated and with the DL network employed, DL denoising was superior in AUC to optimized 3D post-reconstruction Gaussian filtering.
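The AUC compared in this observer study can be estimated empirically from confidence ratings: the area under the ROC curve equals the normalized Mann-Whitney U statistic comparing ratings of defect-present vs defect-absent cases, with ties counted as half. The ratings below are invented; the study itself used the LABMRMC software for the full multi-reader, multi-case analysis.

```python
def auc_from_ratings(present, absent):
    """Empirical AUC: Mann-Whitney U on ratings, ties counted as 1/2."""
    wins = 0.0
    for p in present:
        for a in absent:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(present) * len(absent))

present_ratings = [4, 5, 3, 5]   # observer confidence, defect truly present
absent_ratings = [1, 2, 3, 2]    # defect truly absent
print(auc_from_ratings(present_ratings, absent_ratings))  # 0.96875
```

An AUC of 0.5 corresponds to chance-level detection and 1.0 to perfect separation; the multi-reader, multi-case machinery then tests whether such AUCs differ significantly between denoising strategies.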
Affiliation(s)
- P Hendrik Pretorius
- Division of Nuclear Medicine, Department of Radiology, University of Massachusetts Chan Medical School, Worcester, MA, USA.
- Junchi Liu
- Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL, USA
- Kesava S Kalluri
- Division of Nuclear Medicine, Department of Radiology, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Seth T Dahlberg
- Cardiovascular Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Janusz Kikut
- University of Vermont Medical Center, Burlington, VT, USA
- Matthew W Parker
- Cardiovascular Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Robert Licho
- UMass Memorial Medical Center - University Campus, Worcester, MA, USA
- Benjamin Auer
- Brigham and Women's Hospital Department of Radiology, Boston, MA, USA
- Clifford Lindsay
- Division of Nuclear Medicine, Department of Radiology, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Arda Konik
- Dana-Farber Cancer Institute Department of Radiation Oncology, Boston, MA, USA
- Yongyi Yang
- Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL, USA
- Miles N Wernick
- Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL, USA
- Michael A King
- Division of Nuclear Medicine, Department of Radiology, University of Massachusetts Chan Medical School, Worcester, MA, USA
Collapse
|
30
|
Sun H, Wang F, Yang Y, Hong X, Xu W, Wang S, Mok GSP, Lu L. Transfer learning-based attenuation correction for static and dynamic cardiac PET using a generative adversarial network. Eur J Nucl Med Mol Imaging 2023; 50:3630-3646. [PMID: 37474736 DOI: 10.1007/s00259-023-06343-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 07/12/2023] [Indexed: 07/22/2023]
Abstract
PURPOSE The goal of this work was to demonstrate the feasibility of directly generating attenuation-corrected PET images from non-attenuation-corrected (NAC) PET images for both rest- and stress-state static or dynamic [13N]ammonia myocardial perfusion (MP) PET, based on a generative adversarial network. METHODS We recruited 60 subjects for rest-only scans and 14 subjects for rest-stress scans, all of whom underwent [13N]ammonia cardiac PET/CT examinations to acquire static and dynamic frames with both 3D NAC and CT-based AC (CTAC) PET images. We developed a 3D pix2pix deep learning AC (DLAC) framework with a U-net + ResNet-based generator and a convolutional neural network-based discriminator. Paired static or dynamic NAC and CTAC PET images from the 60 rest-only subjects were used as network inputs and labels for static (S-DLAC) and dynamic (D-DLAC) training, respectively. The pre-trained S-DLAC network was then fine-tuned on paired dynamic NAC and CTAC PET frames of the 60 rest-only subjects to derive an improved D-DLAC-FT for dynamic PET images. The 14 rest-stress subjects served as an internal testing dataset and were tested separately on the different network models without further training. The proposed methods were evaluated using visual quality and quantitative metrics. RESULTS The proposed S-DLAC, D-DLAC, and D-DLAC-FT methods were consistent with clinical CTAC across various images and quantitative metrics. S-DLAC (slope = 0.9423, R2 = 0.947) showed a higher correlation with the reference static CTAC than static NAC (slope = 0.0992, R2 = 0.654). D-DLAC-FT yielded lower myocardial blood flow (MBF) errors in the whole left ventricular myocardium than D-DLAC, though without significant difference, both for the 60 rest-state subjects (6.63 ± 5.05% vs. 7.00 ± 6.84%, p = 0.7593) and the 14 stress-state subjects (1.97 ± 2.28% vs. 3.21 ± 3.89%, p = 0.8595). CONCLUSION The proposed S-DLAC, D-DLAC, and D-DLAC-FT methods achieve performance comparable to clinical CTAC. Transfer learning shows promising potential for dynamic MP PET.
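The slope/R2 agreement metric reported above can be reproduced with ordinary least-squares regression against the reference values. The synthetic "reference" and "estimate" arrays below are invented stand-ins for CTAC and DLAC measurements, not study data:

```python
import numpy as np

def slope_r2(pred, ref):
    """Least-squares slope and R^2 of pred vs. ref (reference on the x-axis)."""
    slope, intercept = np.polyfit(ref, pred, deg=1)
    r = np.corrcoef(ref, pred)[0, 1]
    return slope, r ** 2

rng = np.random.default_rng(1)
ref = rng.uniform(0.5, 8.0, size=200)            # stand-in reference values
good = ref + rng.normal(0.0, 0.2, size=200)      # well-corrected estimate
poor = 0.1 * ref + rng.normal(0.0, 0.2, size=200)  # severely biased estimate
s1, r2_1 = slope_r2(good, ref)
s2, r2_2 = slope_r2(poor, ref)
print(round(s1, 2), round(s2, 2))  # slope near 1 vs. slope far below 1
```

A slope near 1 with high R2 (as for S-DLAC) indicates agreement with the reference; a collapsed slope (as for uncorrected NAC) indicates systematic underestimation even if some correlation remains.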
Collapse
Affiliation(s)
- Hao Sun
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
| | - Fanghu Wang
- PET Center, Department of Nuclear Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Yuling Yang
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
| | - Xiaotong Hong
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
| | - Weiping Xu
- PET Center, Department of Nuclear Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Shuxia Wang
- PET Center, Department of Nuclear Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China.
| | - Greta S P Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China.
| | - Lijun Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China.
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China.
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China.
- Pazhou Lab, Guangzhou, 510330, China.
| |
Collapse
|
31
|
Huang Z, Li W, Wang Y, Liu Z, Zhang Q, Jin Y, Wu R, Quan G, Liang D, Hu Z, Zhang N. MLNAN: Multi-level noise-aware network for low-dose CT imaging implemented with constrained cycle Wasserstein generative adversarial networks. Artif Intell Med 2023; 143:102609. [PMID: 37673577 DOI: 10.1016/j.artmed.2023.102609] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Revised: 05/17/2023] [Accepted: 06/06/2023] [Indexed: 09/08/2023]
Abstract
Low-dose CT techniques attempt to minimize patients' radiation exposure, and thereby the risk of radiation-induced cancer, by estimating high-quality normal-dose CT images from low-dose acquisitions. In recent years, many deep learning methods have been proposed to solve this problem by building a mapping function between low-dose CT images and their normal-dose counterparts. However, most of these methods ignore the effect of different radiation doses on the final CT images, which results in large differences in the intensity of the noise observable in CT images. What's more, the noise intensity of low-dose CT images differs significantly across medical device manufacturers. In this paper, we propose a multi-level noise-aware network (MLNAN), implemented with constrained cycle Wasserstein generative adversarial networks, to recover low-dose CT images under uncertain noise levels. In particular, a noise-level classification is predicted and reused as a prior pattern in the generator network. Moreover, the discriminator network introduces noise-level determination. Under two dose-reduction strategies, experiments evaluating the proposed method were conducted on two datasets: the simulated clinical AAPM challenge dataset and commercial CT datasets from United Imaging Healthcare (UIH). The experimental results illustrate the effectiveness of the proposed method in terms of noise suppression and structural detail preservation compared with several other deep learning-based methods. Ablation studies validate the contribution of the individual components to the performance improvement. Further research on practical clinical applications and other medical modalities is left to future work.
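A minimal sketch of the noise-level-aware idea, assuming a simple standard-deviation-based classifier and one-hot conditioning channels. The thresholds and channel layout are illustrative assumptions, not the paper's design:

```python
import numpy as np

def noise_level_class(image, thresholds=(5.0, 15.0)):
    """Coarse noise-level label from an image's standard deviation.

    The thresholds are illustrative placeholders, not values from the paper.
    """
    return int(np.searchsorted(thresholds, image.std()))  # 0=low, 1=mid, 2=high

def with_noise_prior(image, n_classes=3):
    """Stack a one-hot noise-level map onto the image as extra channels,
    mimicking how a predicted class can condition a generator network."""
    label = noise_level_class(image)
    h, w = image.shape
    prior = np.zeros((n_classes, h, w))
    prior[label] = 1.0
    return np.concatenate([image[None], prior], axis=0)

rng = np.random.default_rng(2)
quiet = rng.normal(100.0, 2.0, (16, 16))    # near-normal-dose patch
noisy = rng.normal(100.0, 25.0, (16, 16))   # heavily dose-reduced patch
print(with_noise_prior(quiet).shape, noise_level_class(quiet), noise_level_class(noisy))
```

The point of the conditioning is that one network can then adapt its denoising strength to the estimated dose level instead of assuming a single fixed noise intensity.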
Collapse
Affiliation(s)
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing 101408, China
| | - Yunling Wang
- Department of Radiology, First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830011, China.
| | - Zhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Yuxi Jin
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Ruodai Wu
- Department of Radiology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen 518055, China
| | - Guotao Quan
- Shanghai United Imaging Healthcare, Shanghai 201807, China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
| |
Collapse
|
32
|
Sun J, Jiang H, Du Y, Li CY, Wu TH, Liu YH, Yang BH, Mok GSP. Deep learning-based denoising in projection-domain and reconstruction-domain for low-dose myocardial perfusion SPECT. J Nucl Cardiol 2023; 30:970-985. [PMID: 35982208 DOI: 10.1007/s12350-022-03045-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Accepted: 06/13/2022] [Indexed: 10/15/2022]
Abstract
BACKGROUND Low-dose (LD) myocardial perfusion (MP) SPECT suffers from high noise levels, leading to compromised diagnostic accuracy. Here we investigated the denoising performance for MP-SPECT of a conditional generative adversarial network (cGAN) applied in the projection domain (cGAN-prj) and in the reconstruction domain (cGAN-recon). METHODS Sixty-four noisy SPECT projections were simulated for a population of 100 XCAT phantoms with different anatomical variations and 99mTc-sestamibi distributions. Series of LD projections were obtained by scaling the full-dose (FD) count rate to 1/20 through 1/2 of the original. Twenty patients with 99mTc-sestamibi stress SPECT/CT scans were retrospectively analyzed. For each patient, LD SPECT images (7/10 to 1/10 of FD) were generated from the FD list-mode data. All projections were reconstructed with the quantitative OS-EM method. A 3D cGAN was implemented to predict FD images from the corresponding LD images in the projection and reconstruction domains. The denoised projections were reconstructed and analyzed using various quantitative indices, alongside the cGAN-recon, Gaussian-, and Butterworth-filtered images. RESULTS cGAN denoising improved image quality compared to LD images and conventional post-reconstruction filtering. cGAN-prj can further reduce the dose level compared to cGAN-recon without compromising image quality. CONCLUSIONS Denoising based on cGAN-prj is superior to cGAN-recon for MP-SPECT.
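The dose-reduction step described above (scaling the full-dose count rate to a fraction) is commonly idealized in simulation as Poisson thinning: scale the expected counts and redraw Poisson noise. A sketch under that assumption, with invented count levels:

```python
import numpy as np

def simulate_low_dose(full_dose_projections, fraction, seed=0):
    """Generate a low-count realization by scaling the expected counts
    and redrawing Poisson noise (an idealized dose-reduction model,
    used here only as a sketch of the simulation step)."""
    rng = np.random.default_rng(seed)
    return rng.poisson(full_dose_projections * fraction).astype(float)

# 64 projections with a uniform expected count of 40 per bin (toy numbers).
fd = np.full((64, 64, 64), 40.0)
ld = simulate_low_dose(fd, fraction=1 / 20)
print(round(ld.mean(), 2))  # expected counts drop to about 2 per bin
```

Because Poisson relative noise grows as counts shrink, the 1/20 projections are far noisier than the full-dose ones, which is exactly the regime the denoising networks are trained on.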
Collapse
Affiliation(s)
- Jingzhang Sun
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
| | - Han Jiang
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
| | - Yu Du
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
| | - Chien-Ying Li
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
| | - Tung-Hsin Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
| | - Yi-Hwa Liu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, USA
| | - Bang-Hung Yang
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC.
- Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC.
| | - Greta S P Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China.
| |
Collapse
|
33
|
Liu Z, Wolfe S, Yu Z, Laforest R, Mhlanga JC, Fraum TJ, Itani M, Dehdashti F, Siegel BA, Jha AK. Observer-study-based approaches to quantitatively evaluate the realism of synthetic medical images. Phys Med Biol 2023; 68:10.1088/1361-6560/acc0ce. [PMID: 36863028 PMCID: PMC10411234 DOI: 10.1088/1361-6560/acc0ce] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Accepted: 03/02/2023] [Indexed: 03/04/2023]
Abstract
Objective.Synthetic images generated by simulation studies have a well-recognized role in developing and evaluating imaging systems and methods. However, for clinically relevant development and evaluation, the synthetic images must be clinically realistic and, ideally, have the same distribution as that of clinical images. Thus, mechanisms that can quantitatively evaluate this clinical realism and, ideally, the similarity in distributions of the real and synthetic images, are much needed.Approach.We investigated two observer-study-based approaches to quantitatively evaluate the clinical realism of synthetic images. In the first approach, we presented a theoretical formalism for the use of an ideal-observer study to quantitatively evaluate the similarity in distributions between the real and synthetic images. This theoretical formalism provides a direct relationship between the area under the receiver operating characteristic curve, AUC, for an ideal observer and the distributions of real and synthetic images. The second approach is based on the use of expert-human-observer studies to quantitatively evaluate the realism of synthetic images. In this approach, we developed a web-based software to conduct two-alternative forced-choice (2-AFC) experiments with expert human observers. The usability of this software was evaluated by conducting a system usability scale (SUS) survey with seven expert human readers and five observer-study designers. Further, we demonstrated the application of this software to evaluate a stochastic and physics-based image-synthesis technique for oncologic positron emission tomography (PET). 
In this evaluation, the 2-AFC study with our software was performed by six expert human readers, who were highly experienced in reading PET scans, with expertise ranging from 7 to 40 years (median: 12 years, average: 20.4 years). Main results. In the ideal-observer-study-based approach, we theoretically demonstrated that the AUC for an ideal observer can be expressed, to an excellent approximation, by the Bhattacharyya distance between the distributions of the real and synthetic images. This relationship shows that a decrease in the ideal-observer AUC indicates a decrease in the distance between the two image distributions. Moreover, the lower bound of ideal-observer AUC = 0.5 is attained when the distributions of synthetic and real images exactly match. For the expert-human-observer-study-based approach, our software for performing the 2-AFC experiments is available at https://apps.mir.wustl.edu/twoafc. Results from the SUS survey demonstrate that the web application is very user friendly and accessible. As a secondary finding, evaluation of a stochastic and physics-based PET image-synthesis technique using our software showed that expert human readers had limited ability to distinguish the real images from the synthetic images. Significance. This work addresses the important need for mechanisms to quantitatively evaluate the clinical realism of synthetic images. The mathematical treatment in this paper shows that quantifying the similarity in the distributions of real and synthetic images is theoretically possible using an ideal-observer-study-based approach. Our software provides a platform for designing and performing 2-AFC experiments with human observers in a highly accessible, efficient, and secure manner. Additionally, our results on the evaluation of the stochastic and physics-based image-synthesis technique motivate its application to develop and evaluate a wide array of PET imaging methods.
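The qualitative link between ideal-observer AUC and the distance between image distributions can be illustrated numerically. For two equal-variance 1D Gaussians the likelihood ratio is monotone in the sample value, so ranking the samples themselves yields the ideal-observer AUC; as the distributions converge, both the Bhattacharyya distance and the AUC shrink toward their lower bounds. This is a toy illustration, not the paper's derivation:

```python
import numpy as np

def bhattacharyya_gauss(mu1, mu2, var1, var2):
    """Closed-form Bhattacharyya distance between two 1D Gaussians."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2))))

def empirical_auc(pos, neg):
    """AUC via the Mann-Whitney rank statistic (pairwise comparisons)."""
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

rng = np.random.default_rng(3)
n = 4000
results = []
for shift in (0.0, 0.5, 2.0):       # identical -> well-separated distributions
    real = rng.normal(0.0, 1.0, n)  # stand-in for real-image statistics
    synth = rng.normal(shift, 1.0, n)
    d = bhattacharyya_gauss(0.0, shift, 1.0, 1.0)
    results.append((d, empirical_auc(synth, real)))
for d, auc in results:
    print(round(d, 3), round(auc, 3))
```

When the synthetic distribution matches the real one (shift = 0), the distance is zero and the AUC sits at the chance level of 0.5; increasing separation raises both together.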
Collapse
Affiliation(s)
- Ziping Liu
- Department of Biomedical Engineering, Washington University, St. Louis, MO 63130, United States of America
| | - Scott Wolfe
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Zitong Yu
- Department of Biomedical Engineering, Washington University, St. Louis, MO 63130, United States of America
| | - Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Joyce C Mhlanga
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Tyler J Fraum
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Malak Itani
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Farrokh Dehdashti
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Barry A Siegel
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| | - Abhinav K Jha
- Department of Biomedical Engineering, Washington University, St. Louis, MO 63130, United States of America
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Alvin J. Siteman Cancer Center, Washington University School of Medicine, St. Louis, MO 63110, United States of America
| |
Collapse
|
34
|
Alberts I, Sari H, Mingels C, Afshar-Oromieh A, Pyka T, Shi K, Rominger A. Long-axial field-of-view PET/CT: perspectives and review of a revolutionary development in nuclear medicine based on clinical experience in over 7000 patients. Cancer Imaging 2023; 23:28. [PMID: 36934273 PMCID: PMC10024603 DOI: 10.1186/s40644-023-00540-3] [Citation(s) in RCA: 40] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2023] [Accepted: 02/25/2023] [Indexed: 03/20/2023] Open
Abstract
Recently introduced long-axial field-of-view (LAFOV) PET/CT systems represent one of the most significant advancements in nuclear medicine since the advent of multi-modality PET/CT imaging. The higher sensitivity exhibited by such systems allow for reductions in applied activity and short duration scans. However, we consider this to be just one small part of the story: Instead, the ability to image the body in its entirety in a single FOV affords insights which standard FOV systems cannot provide. For example, we now have the ability to capture a wider dynamic range of a tracer by imaging it over multiple half-lives without detrimental image noise, to leverage lower radiopharmaceutical doses by using dual-tracer techniques and with improved quantification. The potential for quantitative dynamic whole-body imaging using abbreviated protocols potentially makes these techniques viable for routine clinical use, transforming PET-reporting from a subjective analysis of semi-quantitative maps of radiopharmaceutical uptake at a single time-point to an accurate and quantitative, non-invasive tool to determine human function and physiology and to explore organ interactions and to perform whole-body systems analysis. This article will share the insights obtained from 2 years' of clinical operation of the first Biograph Vision Quadra (Siemens Healthineers) LAFOV system. It will also survey the current state-of-the-art in PET technology. Several technologies are poised to furnish systems with even greater sensitivity and resolution than current systems, potentially with orders of magnitude higher sensitivity. Current barriers which remain to be surmounted, such as data pipelines, patient throughput and the hindrances to implementing kinetic analysis for routine patient care will also be discussed.
Collapse
Affiliation(s)
- Ian Alberts
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Hasan Sari
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
| | - Clemens Mingels
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Ali Afshar-Oromieh
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Thomas Pyka
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland
| | - Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstr. 18, 3010, Bern, Switzerland.
| |
Collapse
|
35
|
Sun J, Yang BH, Li CY, Du Y, Liu YH, Wu TH, Mok GSP. Fast myocardial perfusion SPECT denoising using an attention-guided generative adversarial network. Front Med (Lausanne) 2023; 10:1083413. [PMID: 36817784 PMCID: PMC9935600 DOI: 10.3389/fmed.2023.1083413] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Accepted: 01/16/2023] [Indexed: 02/05/2023] Open
Abstract
Purpose Deep learning-based denoising is promising for myocardial perfusion (MP) SPECT. However, conventional convolutional neural network (CNN)-based methods use fixed-size convolutional kernels that convolve one region within the receptive field at a time, which is ineffective for learning feature dependencies across large regions. The attention mechanism (Att) is able to learn the relationships between the local receptive field and other voxels in the image. In this study, we propose a 3D attention-guided generative adversarial network (AttGAN) for denoising fast MP-SPECT images. Methods Fifty patients who underwent 1184 MBq 99mTc-sestamibi stress SPECT/CT scans were retrospectively recruited. Sixty projections were acquired over 180°, with an acquisition time of 10 s/view for the full-time (FT) mode. Fast MP-SPECT projection images (1 s to 7 s) were generated from the FT list-mode data. We further incorporated binary patient defect information (0 = without defect, 1 = with defect) into AttGAN (AttGAN-def). AttGAN, AttGAN-def, cGAN, and Unet were implemented in TensorFlow with the Adam optimizer, running up to 400 epochs. FT and fast MP-SPECT projection pairs from 35 patients were used to train the networks for each acquisition time, while 5 and 10 patients were used for validation and testing, respectively. Five-fold cross-validation was performed so that data from all 50 patients were tested. Voxel-based error indices, joint histograms, linear regression, and perfusion defect size (PDS) were analyzed. Results All quantitative indices of the AttGAN-based networks were superior to those of cGAN and Unet at all acquisition times. AttGAN-def further improved AttGAN performance. The mean absolute error of PDS with AttGAN-def was 1.60 at an acquisition time of 1 s/projection, compared to 2.36, 2.76, and 3.02 for AttGAN, cGAN, and Unet, respectively. Conclusion Denoising based on AttGAN is superior to conventional CNN-based networks for MP-SPECT.
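The attention mechanism contrasted here with fixed-size convolution kernels can be sketched as scaled dot-product self-attention, in which every position's output is a weighted sum over all positions rather than a local neighborhood. This is a generic sketch with random weights, not the AttGAN architecture:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a set of position features.

    Each output row is a softmax-weighted sum of *all* positions, which is
    how an attention block captures dependencies beyond a CNN's local kernel.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # row-wise softmax
    return w @ v, w

rng = np.random.default_rng(4)
x = rng.normal(size=(6, 8))                    # 6 positions, 8 features each
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
print(out.shape, np.allclose(attn.sum(axis=1), 1.0))
```

In a real 3D denoising network these positions would be voxels (or patch embeddings) and the projection weights would be learned jointly with the generator.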
Collapse
Affiliation(s)
- Jingzhang Sun
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macao SAR, China
| | - Bang-Hung Yang
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
| | - Chien-Ying Li
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei City, Taiwan
| | - Yu Du
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macao SAR, China
| | - Yi-Hwa Liu
- Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, United States
| | - Tung-Hsin Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
| | - Greta S. P. Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macao SAR, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Taipa, Macao SAR, China
- Ministry of Education Frontiers Science Center for Precision Oncology, Faculty of Health Science, University of Macau, Taipa, Macao SAR, China
| |
Collapse
|
36
|
Itagaki K, Miyake KK, Tanoue M, Oishi T, Kataoka M, Kawashima M, Toi M, Nakamoto Y. Feasibility of Dedicated Breast Positron Emission Tomography Image Denoising Using a Residual Neural Network. ASIA OCEANIA JOURNAL OF NUCLEAR MEDICINE & BIOLOGY 2023; 11:145-157. [PMID: 37324225 PMCID: PMC10261694 DOI: 10.22038/aojnmb.2023.71598.1501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Objectives This study aimed to create a deep learning (DL)-based denoising model using a residual neural network (Res-Net) trained to reduce noise in ring-type dedicated breast positron emission tomography (dbPET) images acquired in about half the emission time, and to evaluate the feasibility and effectiveness of the model in terms of noise-reduction performance and preservation of quantitative values compared to conventional post-image filtering techniques. Methods Low-count (LC) and full-count (FC) PET images with acquisition durations of 3 and 7 minutes, respectively, were reconstructed. A Res-Net was trained on fifteen patients' data to create the noise-reduction model. The inputs to the network were LC images and its outputs were denoised PET (LC + DL) images, which should resemble FC images. To evaluate the LC + DL images, Gaussian and non-local mean (NLM) filters were applied to the LC images (LC + Gaussian and LC + NLM, respectively). To create reference images, a Gaussian filter was applied to the FC images (FC + Gaussian). The usefulness of the denoising model was evaluated objectively and visually using a test dataset of thirteen patients. The coefficient of variation (CV) of background fibroglandular or fat tissue was measured to evaluate noise-reduction performance. The SUVmax and SUVpeak of lesions were also measured, and the agreement of the SUV measurements was evaluated with Bland-Altman plots. Results The CV of background fibroglandular tissue in the LC + DL images (9.10 ± 2.76) was significantly lower than in the LC (13.60 ± 3.66) and LC + Gaussian (11.51 ± 3.56) images. No significant difference was observed in either SUVmax or SUVpeak of lesions between the LC + DL and reference images. In the visual assessment, the smoothness rating for the LC + DL images was significantly better than for all other images except the reference images.
Conclusion Our model reduced the noise in dbPET images acquired in about half the emission time while preserving the quantitative values of lesions. This study demonstrates that DL-based denoising is feasible and potentially performs better than conventional post-image filtering for dbPET.
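The coefficient of variation used above as the noise metric is simply SD/mean over a background region of interest. A toy comparison, using a crude box average as a stand-in for any denoising step (the ROI and count level are invented):

```python
import numpy as np

def coefficient_of_variation(roi):
    """CV = standard deviation / mean over a background ROI; lower = less noise."""
    return roi.std() / roi.mean()

rng = np.random.default_rng(5)
# A flat background ROI with Poisson-like noise (toy numbers).
background = rng.poisson(25.0, size=(40, 40)).astype(float)
# A crude stand-in for denoising: a circular 3x3 box average via np.roll.
smoothed = sum(np.roll(np.roll(background, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
print(round(coefficient_of_variation(background), 3),
      round(coefficient_of_variation(smoothed), 3))
```

Any effective denoiser should lower the background CV; the study's additional check that lesion SUVmax/SUVpeak are unchanged is what distinguishes genuine noise reduction from over-smoothing.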
Collapse
Affiliation(s)
- Koji Itagaki
- Division of Clinical Radiology Service, Kyoto University Hospital, Kyoto, Japan
| | - Kanae K. Miyake
- Department of Advanced Medical Imaging Research, Graduate School of Medicine, Kyoto University, Kyoto, Japan
| | - Minori Tanoue
- Division of Clinical Radiology Service, Kyoto University Hospital, Kyoto, Japan
| | - Tae Oishi
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine Kyoto University, Kyoto, Japan
| | - Masako Kataoka
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine Kyoto University, Kyoto, Japan
| | - Masahiro Kawashima
- Department of Breast Surgery, Graduate School of Medicine Kyoto University, Kyoto, Japan
| | - Masakazu Toi
- Department of Breast Surgery, Graduate School of Medicine Kyoto University, Kyoto, Japan
| | - Yuji Nakamoto
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine Kyoto University, Kyoto, Japan
| |
Collapse
|
37
|
Dynamic PET images denoising using spectral graph wavelet transform. Med Biol Eng Comput 2023; 61:97-107. [PMID: 36323982 DOI: 10.1007/s11517-022-02698-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Accepted: 10/11/2022] [Indexed: 11/06/2022]
Abstract
Positron emission tomography (PET) is a non-invasive molecular imaging method for quantitative observation of physiological and biochemical changes in living organisms. The quality of the reconstructed PET image is limited by many physical degradation factors. Various denoising methods, including Gaussian filtering (GF) and non-local mean (NLM) filtering, have been proposed to improve image quality. However, denoising usually blurs edges, whose high-frequency components are filtered out as noise, and it is well known that edges in a PET image are important for the detection and recognition of lesions. Denoising while preserving the edges of PET images therefore remains an important yet challenging problem in PET image processing. In this paper, we propose a novel denoising method with good edge-preserving performance based on the spectral graph wavelet transform (SGWT) for dynamic PET image denoising. We first generate a composite image from the entire time series, then perform the SGWT on the PET images, and finally reconstruct the low graph-frequency content to obtain the denoised dynamic PET images. Experimental results on simulated and in vivo data show that the proposed approach significantly outperforms the GF, NLM, and graph filtering methods. Compared with a deep learning-based method, the proposed method has similar denoising performance, but it does not require large amounts of training data and has low computational complexity.
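The reconstruction step described above — keeping only the low graph-frequency content — can be sketched with a plain Laplacian eigendecomposition in place of the full wavelet machinery. This is a simplified stand-in for the SGWT, assuming a toy path graph and a hand-made noisy signal rather than real PET data:

```python
import numpy as np

def graph_lowpass(signal, adjacency, keep):
    """Project a graph signal onto its `keep` lowest graph-frequency modes.

    Spectral graph filtering: eigenvectors of the graph Laplacian play the
    role of Fourier modes; small eigenvalues correspond to smooth content.
    """
    A = np.asarray(adjacency, float)
    L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    coeffs = eigvecs.T @ signal             # graph Fourier transform
    coeffs[keep:] = 0.0                     # drop high graph frequencies
    return eigvecs @ coeffs                 # inverse transform

# Path graph of 8 nodes; a smooth ramp corrupted by oscillatory noise.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
x = np.linspace(0.0, 1.0, n)
noisy = x + np.array([0.0, 0.5, 0.0, -0.5, 0.0, 0.5, 0.0, -0.5])
denoised = graph_lowpass(noisy, A, keep=3)
err_noisy = float(np.linalg.norm(noisy - x))
err_denoised = float(np.linalg.norm(denoised - x))
```

Because the ramp lives in the low graph frequencies and the oscillation in the high ones, the low-pass projection recovers the smooth signal much more closely than the noisy input.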
Collapse
|
38
|
Abstract
Medical imaging is a great asset for modern medicine, since it allows physicians to spatially interrogate a disease site, resulting in precise intervention for diagnosis and treatment, and to observe particular aspects of patients' conditions that otherwise would not be noticeable. Computational analysis of medical images, moreover, can allow the discovery of disease patterns and correlations among cohorts of patients with the same disease, thus suggesting common causes or providing useful information for better therapies and cures. Machine learning and deep learning applied to medical images, in particular, have produced new, unprecedented results that can pave the way to advanced frontiers of medical discovery. While computational analysis of medical images has become easier, however, it has also become easier to make mistakes or to generate inflated or misleading results, hindering reproducibility and deployment. In this article, we provide ten quick tips for performing computational analysis of medical images while avoiding common mistakes and pitfalls that we have noticed in multiple studies in the past. We believe our ten guidelines, if put into practice, can help the computational medical imaging community perform better scientific research that will eventually have a positive impact on the lives of patients worldwide.
Collapse
Affiliation(s)
- Davide Chicco
- Institute of Health Policy Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
| | - Rakesh Shiradkar
- Department of Biomedical Engineering, Emory University, Atlanta, Georgia, United States of America
| |
Collapse
|
39
|
Sun H, Jiang Y, Yuan J, Wang H, Liang D, Fan W, Hu Z, Zhang N. High-quality PET image synthesis from ultra-low-dose PET/MRI using bi-task deep learning. Quant Imaging Med Surg 2022; 12:5326-5342. [PMID: 36465830 PMCID: PMC9703111 DOI: 10.21037/qims-22-116] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 08/04/2022] [Indexed: 01/25/2023]
Abstract
BACKGROUND Lowering the dose for positron emission tomography (PET) imaging reduces patients' radiation burden but degrades image quality by increasing noise and reducing imaging detail and quantitative accuracy. This paper introduces a method for acquiring high-quality PET images from an ultra-low-dose state to achieve both high image quality and a low radiation burden. METHODS We developed a two-task-based end-to-end generative adversarial network, named bi-c-GAN, that incorporated the advantages of the PET and magnetic resonance imaging (MRI) modalities to synthesize high-quality PET images from an ultra-low-dose input. Moreover, a combined loss, including the mean absolute error, structural loss, and bias loss, was created to improve the trained model's performance. Real integrated PET/MRI data from the axial heads of 67 patients (161 slices each) were used for training and validation. Synthesized images were quantified by the peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), structural similarity (SSIM), and contrast-to-noise ratio (CNR). The improvement ratios of these four quantitative metrics were used to compare the images produced by bi-c-GAN with those of other methods. RESULTS In four-fold cross-validation, the proposed bi-c-GAN outperformed the three other selected methods (U-net, c-GAN, and multiple-input c-GAN). With bi-c-GAN, at the 5% dose level, image quality was higher than that of the other three methods by at least 6.7% in PSNR, 0.6% in SSIM, 1.3% in NMSE, and 8% in CNR. In the hold-out validation, bi-c-GAN improved image quality compared to U-net and c-GAN at both the 2.5% and 10% dose levels; for example, the PSNR improvement using bi-c-GAN was at least 4.46% at the 2.5% dose level and at most 14.88% at the 10% dose level. Visual examples also showed the higher quality of images generated by the proposed method, demonstrating the denoising and image-improving ability of bi-c-GAN.
CONCLUSIONS By taking advantage of integrated PET/MR images and multitask deep learning (MDL), the proposed bi-c-GAN can efficiently improve the image quality of ultra-low-dose PET and reduce radiation exposure.
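The abstract names a combined loss of MAE, structural, and bias terms without giving the exact formulation. The sketch below uses plausible stand-ins — finite-difference gradients for the structural term and the absolute mean difference for the bias term — so the weights and term definitions are assumptions, not the paper's:

```python
import numpy as np

def combined_loss(pred, target, w_struct=1.0, w_bias=1.0):
    """Illustrative composite loss: MAE + a gradient-based structural term
    + a global-bias term. The exact terms in bi-c-GAN follow the paper;
    these are simple stand-ins for the same three ideas."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    mae = np.abs(pred - target).mean()
    # Structural stand-in: compare horizontal/vertical finite differences.
    gx = np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0)).mean()
    gy = np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1)).mean()
    # Bias stand-in: penalize any global shift of mean intensity.
    bias = abs(pred.mean() - target.mean())
    return mae + w_struct * (gx + gy) + w_bias * bias

target = np.ones((4, 4))
perfect = combined_loss(target, target)       # identical images -> 0
offset = combined_loss(target + 0.1, target)  # MAE 0.1 + bias 0.1 = 0.2
```

Combining a pixel term, a structure term, and a bias term this way is the general pattern such composite losses follow: each term catches a failure mode the others miss.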
Collapse
Affiliation(s)
- Hanyu Sun
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yongluo Jiang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai, China
| | - Haining Wang
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
| | - Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
| |
Collapse
|
40
|
Image denoising in the deep learning era. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10305-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
41
|
Liu H, Yousefi H, Mirian N, Lin M, Menard D, Gregory M, Aboian M, Boustani A, Chen MK, Saperstein L, Pucar D, Kulon M, Liu C. PET Image Denoising using a Deep-Learning Method for Extremely Obese Patients. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022; 6:766-770. [PMID: 37284026 PMCID: PMC10241407 DOI: 10.1109/trpms.2021.3131999] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/03/2023]
Abstract
The image quality of clinical PET scans can be severely degraded by the high noise levels encountered in extremely obese patients. Our work aimed to reduce the noise in clinical PET images of extremely obese subjects to the noise level of lean-subject images, to ensure consistent imaging quality. The noise level was measured by the normalized standard deviation (NSTD) derived from a liver region of interest. A deep learning-based noise reduction method with a fully 3D patch-based U-Net was used. Two U-Nets, A and B, were trained on datasets with 40% and 10% count levels derived from 100 lean subjects, respectively. The clinical PET images of 10 extremely obese subjects were denoised using the two U-Nets. The results showed that the noise levels of the 40%-count images of lean subjects were consistent with those of the extremely obese subjects. U-Net A effectively reduced the noise in the images of the extremely obese patients while preserving fine structures. The liver NSTD improved from 0.13±0.04 to 0.08±0.03 after noise reduction (p = 0.01). After denoising, the image noise level of the extremely obese subjects was similar to that of the lean subjects in terms of liver NSTD (0.08±0.03 vs. 0.08±0.02, p = 0.74). In contrast, U-Net B over-smoothed the images of the extremely obese patients, resulting in blurred fine structures. In a pilot reader study comparing images of extremely obese patients with and without U-Net A denoising, the difference was not significant. In conclusion, a U-Net trained on datasets from lean subjects with a matched count level can provide promising denoising performance for extremely obese subjects while maintaining image resolution, though further clinical evaluation is needed.
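A patch-based 3D U-Net is trained on sub-volumes rather than whole images, which keeps memory bounded and multiplies the number of training samples. A minimal sketch of the patch extraction step is below; the patch size and stride are illustrative choices, not values from the paper:

```python
import numpy as np

def extract_patches_3d(volume, patch=16, stride=8):
    """Slide a cubic window over a 3D volume and stack overlapping patches.
    Patch/stride values here are illustrative, not from the paper."""
    patches = []
    z, y, x = volume.shape
    for i in range(0, z - patch + 1, stride):
        for j in range(0, y - patch + 1, stride):
            for k in range(0, x - patch + 1, stride):
                patches.append(volume[i:i+patch, j:j+patch, k:k+patch])
    return np.stack(patches)

vol = np.zeros((32, 32, 32))          # stand-in for a PET volume
p = extract_patches_3d(vol)           # 3 offsets per axis -> 27 patches
```

With a 50% overlap (stride = patch/2), each voxel is seen in several patches, and the overlaps are typically averaged when the denoised volume is reassembled.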
Collapse
Affiliation(s)
- Hui Liu
- Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China, on leave from the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
| | - Hamed Yousefi
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - MingDe Lin
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Visage Imaging, Inc., San Diego, CA, USA
| | - David Menard
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Matthew Gregory
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Mariam Aboian
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Annemarie Boustani
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Lawrence Saperstein
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Darko Pucar
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Michal Kulon
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| |
Collapse
|
42
|
Schwenck J, Kneilling M, Riksen NP, la Fougère C, Mulder DJ, Slart RJHA, Aarntzen EHJG. A role for artificial intelligence in molecular imaging of infection and inflammation. Eur J Hybrid Imaging 2022; 6:17. [PMID: 36045228 PMCID: PMC9433558 DOI: 10.1186/s41824-022-00138-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 05/16/2022] [Indexed: 12/03/2022] Open
Abstract
The detection of occult infections and low-grade inflammation in clinical practice remains challenging and depends heavily on readers' expertise. Although molecular imaging, like [18F]FDG PET or radiolabeled leukocyte scintigraphy, offers quantitative and reproducible whole-body data on inflammatory responses, its interpretation is limited to visual analysis. This often leads to delayed diagnosis and treatment, as well as untapped areas of potential application. Artificial intelligence (AI) offers innovative approaches to mine the wealth of imaging data and has already led to disruptive breakthroughs in other medical domains. Here, we discuss how AI-based tools can improve the detection sensitivity of molecular imaging in infection and inflammation, but also how AI might push data analysis beyond current applications toward outcome prediction and long-term risk assessment.
Collapse
|
43
|
Manimegalai P, Suresh Kumar R, Valsalan P, Dhanagopal R, Vasanth Raj PT, Christhudass J. 3D Convolutional Neural Network Framework with Deep Learning for Nuclear Medicine. SCANNING 2022; 2022:9640177. [PMID: 35924105 PMCID: PMC9308558 DOI: 10.1155/2022/9640177] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 06/27/2022] [Indexed: 05/15/2023]
Abstract
Though artificial intelligence (AI) has been used in nuclear medicine for more than 50 years, recent progress in deep learning (DL) and machine learning (ML) has driven the development of new AI capabilities in the field. Artificial neural networks (ANNs) underpin both ML and DL applications in nuclear medicine. When a 3D convolutional neural network (CNN) is used, the inputs may be the actual images being analyzed rather than a set of hand-crafted features. In nuclear medicine, AI reimagines and reengineers the field's therapeutic and scientific capabilities. Understanding the concepts of 3D CNNs and U-Net in the context of nuclear medicine allows for deeper engagement with clinical and research applications, as well as the ability to troubleshoot problems when they emerge. Business analytics, risk assessment, quality assurance, and basic classification are all examples of simple ML applications. General nuclear medicine, SPECT, PET, MRI, and CT may benefit from more advanced DL applications for classification, detection, localization, segmentation, quantification, and radiomic feature extraction using 3D CNNs. An ANN can be applied to small datasets, where traditional statistical methods are also used, as well as to much larger ones. Until recently, nuclear medicine's clinical and research practices were largely unaffected by the introduction of AI; the advent of 3D CNN and U-Net applications has fundamentally altered both landscapes. Nuclear medicine professionals must now have at least an elementary understanding of AI principles such as ANNs and CNNs.
Collapse
Affiliation(s)
- P. Manimegalai
- Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
| | - R. Suresh Kumar
- Center for System Design, Chennai Institute of Technology, Chennai, India
| | - Prajoona Valsalan
- Department of Electrical and Computer Engineering, Dhofar University, Salalah, Oman
| | - R. Dhanagopal
- Center for System Design, Chennai Institute of Technology, Chennai, India
| | - P. T. Vasanth Raj
- Center for System Design, Chennai Institute of Technology, Chennai, India
| | - Jerome Christhudass
- Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
| |
Collapse
|
44
|
Ma R, Hu J, Sari H, Xue S, Mingels C, Viscione M, Kandarpa VSS, Li WB, Visvikis D, Qiu R, Rominger A, Li J, Shi K. An encoder-decoder network for direct image reconstruction on sinograms of a long axial field of view PET. Eur J Nucl Med Mol Imaging 2022; 49:4464-4477. [PMID: 35819497 DOI: 10.1007/s00259-022-05861-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Accepted: 06/02/2022] [Indexed: 11/04/2022]
Abstract
PURPOSE Deep learning is an emerging reconstruction method for positron emission tomography (PET) which can tackle complex PET corrections in an integrated procedure. This paper optimizes direct PET reconstruction from sinograms on a long axial field of view (LAFOV) PET scanner. METHODS This paper proposes a novel deep learning architecture to reduce the biases that arise during direct reconstruction from sinograms to images. The architecture is based on an encoder-decoder network, where a perceptual loss computed with pre-trained convolutional layers is used. It is trained and tested on data from 80 patients acquired on a recent Siemens Biograph Vision Quadra LAFOV PET/CT. The patients are randomly split into a training dataset of 60 patients, a validation dataset of 10 patients, and a test dataset of 10 patients. The 3D sinograms are converted into 2D sinogram slices and used as input to the network, and the vendor-reconstructed images are taken as ground truth. Finally, the proposed method is compared with DeepPET, a benchmark deep learning method for PET reconstruction. RESULTS Compared with DeepPET, the proposed network significantly reduces the normalized root-mean-squared error (NRMSE) from 0.63 to 0.60 (p < 0.01) and increases the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) from 0.93 to 0.95 (p < 0.01) and from 82.02 to 82.36 (p < 0.01), respectively. The reconstruction time is approximately 10 s per patient, 23 times shorter than the conventional method. The error of lesion mean standardized uptake values (SUVmean) between ground truth and the predicted result is reduced from 33.5% to 18.7% (p = 0.03), and the error of max SUV is reduced from 32.7% to 21.8% (p = 0.02). CONCLUSION The results demonstrate the feasibility of using deep learning to reconstruct images with acceptable image quality and short reconstruction time.
The proposed method improves the quality of deep learning-based reconstructed images without requiring additional CT images for attenuation and scatter correction, as demonstrated on actual clinical measurements from a LAFOV PET. Despite this progress, AI-based reconstruction does not work appropriately for untrained scenarios due to limited extrapolation capability and cannot yet completely replace conventional reconstruction.
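A perceptual loss compares feature maps instead of raw pixels. The paper derives its features from pre-trained convolutional layers; the sketch below substitutes fixed edge filters for those learned features purely to show the mechanism, so the kernels and images are illustrative assumptions:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 2D valid-mode correlation, enough for tiny fixed filters."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * kernel).sum()
    return out

def perceptual_loss(pred, target, kernels):
    """MSE between feature maps rather than raw pixels. Here the 'features'
    are fixed edge responses standing in for pre-trained conv layers."""
    return sum(((conv2d_valid(pred, k) - conv2d_valid(target, k)) ** 2).mean()
               for k in kernels)

edge_x = np.array([[1.0, -1.0]])
edge_y = edge_x.T
target = np.tile(np.arange(8.0), (8, 1))      # simple horizontal ramp
rng = np.random.default_rng(0)
loss_same = perceptual_loss(target, target, [edge_x, edge_y])
loss_offset = perceptual_loss(target + 1.0, target, [edge_x, edge_y])
loss_noisy = perceptual_loss(target + rng.normal(0, 0.5, target.shape),
                             target, [edge_x, edge_y])
```

Note that a constant intensity offset yields zero loss with edge features — perceptual terms emphasize structure, which is why they are usually paired with a pixel-wise term in practice.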
Collapse
Affiliation(s)
- Ruiyao Ma
- Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, 100084, China; Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Institute of Radiation Medicine, Helmholtz Zentrum München German Research Center for Environmental Health (GmbH), Neuherberg, Bavaria, Germany
| | - Jiaxi Hu
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Hasan Sari
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
| | - Song Xue
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Clemens Mingels
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Marco Viscione
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | | | - Wei Bo Li
- Institute of Radiation Medicine, Helmholtz Zentrum München German Research Center for Environmental Health (GmbH), Neuherberg, Bavaria, Germany
| | | | - Rui Qiu
- Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, 100084, China.
| | - Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Junli Li
- Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, 100084, China.
| | - Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| |
Collapse
|
45
|
Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, Lassmann M, Rischpler C, Shi K, Pruim J. Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 2022; 49:4452-4463. [PMID: 35809090 PMCID: PMC9606092 DOI: 10.1007/s00259-022-05891-w] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 06/25/2022] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI) will change the face of nuclear medicine and molecular imaging, as it will everyday life. In this review, we focus on the potential applications of AI in the field, both from a physical (radiomics, underlying statistics, image reconstruction and data analysis) and a clinical (neurology, cardiology, oncology) perspective. Challenges for transferability from research to clinical practice are discussed, as is the concept of explainable AI. Finally, we focus on the fields where challenges should be addressed to introduce AI into nuclear medicine and molecular imaging in a reliable manner.
Collapse
Affiliation(s)
| | - Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands
| | - Kim Beuschau Mauridsen
- Center of Functionally Integrative Neuroscience and MindLab, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Nuclear Medicine, University of Bern, Bern, Switzerland
| | - Roland Hustinx
- GIGA-CRC in Vivo Imaging, University of Liège, GIGA, Avenue de l'Hôpital 11, 4000, Liege, Belgium
| | - Michael Lassmann
- Klinik Und Poliklinik Für Nuklearmedizin, Universitätsklinikum Würzburg, Würzburg, Germany
| | - Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
| | - Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
| | - Jan Pruim
- Medical Imaging Center, Dept. of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
| |
Collapse
|
46
|
Sun J, Du Y, Li C, Wu TH, Yang B, Mok GSP. Pix2Pix generative adversarial network for low dose myocardial perfusion SPECT denoising. Quant Imaging Med Surg 2022; 12:3539-3555. [PMID: 35782241 PMCID: PMC9246746 DOI: 10.21037/qims-21-1042] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 02/18/2022] [Indexed: 11/12/2023]
Abstract
BACKGROUND Myocardial perfusion (MP) SPECT is a well-established method for diagnosing cardiac disease, yet its radiation risk poses safety concerns. This study aims to apply and evaluate the use of the Pix2Pix generative adversarial network (Pix2Pix GAN) in denoising low dose MP SPECT images. METHODS One hundred male and female patients with different 99mTc-sestamibi activity distributions and organ and body sizes were simulated using a population of digital 4D Extended Cardiac Torso (XCAT) phantoms. Realistic noisy SPECT projections at a full dose of 987 MBq injected activity with 16-min acquisition, and at low doses ranging from 1/20 to 1/2 of the full dose, were generated by an analytical projector from the right anterior oblique (RAO) to the left posterior oblique (LPO) positions. Additionally, twenty patients who underwent ~1,184 MBq 99mTc-sestamibi stress SPECT/CT scans were retrospectively recruited. For each patient, low dose SPECT images (7/10 to 1/10 of the full dose) were generated from the full dose list-mode data. Our Pix2Pix GAN model was trained with pairs of full dose and low dose reconstructed SPECT images. The normalized mean square error (NMSE), structural similarity index (SSIM), coefficient of variation (CV), full-width-at-half-maximum (FWHM), and relative defect size difference (RSD) of the Pix2Pix GAN-processed images were evaluated along with a reference convolutional autoencoder (CAE) network and post-reconstruction filters. RESULTS NMSE values of 0.0233±0.004 vs. 0.0249±0.004 and 0.0313±0.007 vs. 0.0579±0.016 were obtained at the 1/2 and 1/20 dose levels for Pix2Pix GAN and CAE in the simulation study, while they were 0.0376±0.010 vs. 0.0433±0.010 and 0.0907±0.020 vs. 0.1186±0.025 at the 7/10 and 1/10 dose levels in the clinical study. Similar results were obtained for the SSIM, CV, FWHM, and RSD values.
Overall, Pix2Pix GAN was superior to the other denoising methods in all physical indices, particularly at the lower dose levels, in both the simulation and clinical studies. CONCLUSIONS The Pix2Pix GAN method is effective in reducing the noise level of low dose MP SPECT. Further studies on clinical performance are warranted to demonstrate its full clinical effectiveness.
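The clinical low dose images were derived from full dose list-mode data; statistically, this amounts to keeping each detected event with probability equal to the dose fraction. Because thinning a Poisson process yields another Poisson process with a proportionally scaled mean, the simulated low dose data keep realistic counting statistics. A sketch of that thinning on a synthetic count map (not the study's data):

```python
import numpy as np

def simulate_low_dose(counts, fraction, rng):
    """Binomial thinning of a Poisson count map: keep each detected event
    with probability `fraction`, mimicking list-mode down-sampling."""
    return rng.binomial(np.asarray(counts, dtype=np.int64), fraction)

rng = np.random.default_rng(1)
full = rng.poisson(100.0, size=10000)        # full dose counts per bin
tenth = simulate_low_dose(full, 0.1, rng)    # simulated 1/10 dose
```

The thinned map has roughly one tenth of the counts bin by bin, and by the thinning property it is distributed as a Poisson map with mean 10 — exactly what a true 1/10 dose acquisition would produce.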
Collapse
Affiliation(s)
- Jingzhang Sun
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
| | - Yu Du
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau, China
| | - ChienYing Li
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei
- Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei
| | - Tung-Hsin Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei
| | - BangHung Yang
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei
- Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei
| | - Greta S. P. Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau, China
| |
Collapse
|
47
|
Cui J, Gong K, Guo N, Kim K, Liu H, Li Q. Unsupervised PET logan parametric image estimation using conditional deep image prior. Med Image Anal 2022; 80:102519. [PMID: 35767910 DOI: 10.1016/j.media.2022.102519] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 06/14/2022] [Accepted: 06/15/2022] [Indexed: 11/18/2022]
Abstract
Recently, deep learning-based denoising methods have gradually been adopted for PET image denoising and have shown great achievements. Among these methods, one interesting framework is the conditional deep image prior (CDIP), an unsupervised method that needs neither prior training nor a large number of training pairs. In this work, we combined CDIP with Logan parametric image estimation to generate high-quality parametric images. In our method, the kinetic model is the Logan reference tissue model, which avoids arterial sampling. A neural network was used to represent the images of the Logan slope and intercept, with the patient's computed tomography (CT) or magnetic resonance (MR) image used as the network input to provide anatomical information. The optimization problem was constructed and solved by the alternating direction method of multipliers (ADMM) algorithm. Both simulation and clinical patient datasets demonstrated that the proposed method could generate parametric images with more detailed structures. Quantification results showed that the proposed method achieved higher contrast-to-noise ratio (CNR) improvement ratios (PET/CT datasets: 62.25%±29.93%; striatum of brain PET datasets: 129.51%±32.13%; thalamus of brain PET datasets: 128.24%±31.18%) than Gaussian-filtered results (PET/CT datasets: 23.33%±18.63%; striatum: 74.71%±8.71%; thalamus: 73.02%±9.34%) and non-local mean (NLM)-denoised results (PET/CT datasets: 37.55%±26.56%; striatum: 100.89%±16.13%; thalamus: 103.59%±16.37%).
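The Logan reference tissue model reduces, after a start time t*, to a straight-line fit whose slope estimates the distribution volume ratio (DVR) — the slope and intercept images are what the network represents. The sketch below uses a simplified form that omits the k2' reference term, and the tissue curve is a synthetic toy constructed so the true slope is exactly 2:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral, same length as y (starts at 0)."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def logan_fit(ct, cref, t, t_star_idx):
    """Fit the (simplified) Logan reference-tissue plot: after t*, the
    points (int Cref / Ct, int Ct / Ct) fall on a line whose slope
    estimates the DVR. The k2' term of the full model is omitted here."""
    x = cumtrapz(cref, t)[t_star_idx:] / ct[t_star_idx:]
    y = cumtrapz(ct, t)[t_star_idx:] / ct[t_star_idx:]
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

t = np.linspace(0.0, 60.0, 61)          # minutes
cref = np.exp(-0.05 * t) + 0.2          # toy reference-region curve
ct = 2.0 * cref                         # toy target curve with DVR = 2
dvr, b = logan_fit(ct, cref, t, t_star_idx=20)
```

With ct proportional to cref, the transformed points are exactly collinear, so the least-squares fit recovers slope 2 and intercept 0; on real noisy data this fit is what the CDIP network regularizes.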
Collapse
Affiliation(s)
- Jianan Cui
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
| | - Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
| | - Ning Guo
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
| | - Kyungsang Kim
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
| | - Huafeng Liu
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; Jiaxing Key Laboratory of Photonic Sensing and Intelligent Imaging, Jiaxing, Zhejiang 314000, China; Intelligent Optics and Photonics Research Center, Jiaxing Research Institute, Zhejiang University, Zhejiang 314000, China.
| | - Quanzheng Li
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA.
| |
Collapse
|
48
|
Daveau RS, Law I, Henriksen OM, Hasselbalch SG, Andersen UB, Anderberg L, Højgaard L, Andersen FL, Ladefoged CN. Deep learning based low-activity PET reconstruction of [ 11C]PiB and [ 18F]FE-PE2I in neurodegenerative disorders. Neuroimage 2022; 259:119412. [PMID: 35753592 DOI: 10.1016/j.neuroimage.2022.119412] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Revised: 06/17/2022] [Accepted: 06/22/2022] [Indexed: 11/17/2022] Open
Abstract
PURPOSE Positron emission tomography (PET) can support a diagnosis of neurodegenerative disorder by identifying disease-specific pathologies. Our aim was to investigate the feasibility of using activity reduction in clinical [18F]FE-PE2I and [11C]PiB PET/CT scans, simulating low injected activity or reduced scanning time, in combination with AI-assisted denoising. METHODS A total of 162 patients with clinically uncertain Alzheimer's disease underwent amyloid [11C]PiB PET/CT, and 509 patients referred for clinically uncertain Parkinson's disease underwent dopamine transporter (DAT) [18F]FE-PE2I PET/CT. Simulated low-activity data were obtained by randomly sampling 5% of the events from the list-mode file and by extracting a 5% time window in the middle of the scan. A three-dimensional convolutional neural network (CNN) was trained to denoise the resulting PET images for each disease cohort. RESULTS Noise reduction of the low-activity PET images was successful for both cohorts using 5% of the original activity, with improvement in visual quality and in all similarity metrics with respect to the ground-truth images. Clinically relevant metrics extracted from the low-activity images deviated <2% from the ground-truth values, and this deviation was not significantly changed when the metrics were extracted from the denoised images. CONCLUSION The presented models were based on the same network architecture and proved to be a robust tool for denoising brain PET images with two widely different tracer distributions (the delocalized [11C]PiB and the highly localized [18F]FE-PE2I). This broad and robust applicability makes the presented network a good choice for improving the quality of brain images to the level of standard-activity images without degrading clinical metric extraction, allowing reduced dose or scan time in PET/CT to be implemented clinically.
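The two low-activity simulations described — random 5% event sampling (simulating reduced dose) and a central 5% time window (simulating reduced scan time) — can be sketched on a list of event timestamps. The events here are synthetic; the real inputs are full list-mode files:

```python
import numpy as np

def middle_time_window(timestamps, fraction):
    """Keep events inside a central window covering `fraction` of the
    scan duration (the paper's 5% time-window extraction)."""
    timestamps = np.asarray(timestamps, float)
    t0, t1 = timestamps.min(), timestamps.max()
    half = 0.5 * fraction * (t1 - t0)
    mid = 0.5 * (t0 + t1)
    return timestamps[(timestamps >= mid - half) & (timestamps <= mid + half)]

def random_sample(timestamps, fraction, rng):
    """Randomly keep `fraction` of all list-mode events (reduced dose)."""
    timestamps = np.asarray(timestamps, float)
    n_keep = int(round(fraction * len(timestamps)))
    return rng.choice(timestamps, size=n_keep, replace=False)

rng = np.random.default_rng(2)
events = rng.uniform(0.0, 600.0, size=20000)   # toy 10-min scan, seconds
win = middle_time_window(events, 0.05)
sub = random_sample(events, 0.05, rng)
```

Both subsets contain about 5% of the events, but they differ in kinetics: the random sample spans the whole acquisition, while the time window captures a single short interval, which is why the paper evaluates both.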
Collapse
Affiliation(s)
- Raphaël S Daveau
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
| | - Ian Law
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
| | - Otto Mølby Henriksen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
| | | | - Ulrik Bjørn Andersen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
| | - Lasse Anderberg
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
| | - Liselotte Højgaard
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
| | - Flemming Littrup Andersen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
| | - Claes Nøhr Ladefoged
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark.
| |
Collapse
|
49
|
A Hybrid Deep Learning Model for Brain Tumour Classification. ENTROPY 2022; 24:e24060799. [PMID: 35741521 PMCID: PMC9222774 DOI: 10.3390/e24060799] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Revised: 06/03/2022] [Accepted: 06/04/2022] [Indexed: 11/16/2022]
Abstract
A brain tumour is one of the major reasons for death in humans, and it is the tenth most common type of tumour that affects people of all ages. However, if detected early, it is one of the most treatable types of tumours. Brain tumours are classified using biopsy, which is not usually performed before definitive brain surgery. An image classification technique for tumour diseases is important for accelerating the treatment process and avoiding surgery and errors from manual diagnosis by radiologists. The advancement of technology and machine learning (ML) can assist radiologists in tumour diagnostics using magnetic resonance imaging (MRI) images without invasive procedures. This work introduced a new hybrid CNN-based architecture to classify three brain tumour types through MRI images. The method suggested in this paper uses hybrid deep learning classification based on CNN with two methods. The first method combines a pre-trained Google-Net model of the CNN algorithm for feature extraction with SVM for pattern classification. The second method integrates a finely tuned Google-Net with a soft-max classifier. The proposed approach was evaluated using MRI brain images that contain a total of 1426 glioma images, 708 meningioma images, 930 pituitary tumour images, and 396 normal brain images. The reported results showed that an accuracy of 93.1% was achieved from the finely tuned Google-Net model. However, the synergy of Google-Net as a feature extractor with an SVM classifier improved recognition accuracy to 98.1%.
Collapse
|
50
|
Deep Learning-Based Denoising in Brain Tumor CHO PET: Comparison with Traditional Approaches. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12105187] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
18F-choline (CHO) PET image remains noisy despite minimum physiological activity in the normal brain, and this study developed a deep learning-based denoising algorithm for brain tumor CHO PET. Thirty-nine presurgical CHO PET/CT data were retrospectively collected for patients with pathological confirmed primary diffuse glioma. Two conventional denoising methods, namely, block-matching and 3D filtering (BM3D) and non-local means (NLM), and two deep learning-based approaches, namely, Noise2Noise (N2N) and Noise2Void (N2V), were established for imaging denoising, and the methods were developed without paired data. All algorithms improved the image quality to a certain extent, with the N2N demonstrating the best contrast-to-noise ratio (CNR) (4.05 ± 3.45), CNR improvement ratio (13.60% ± 2.05%) and the lowest entropy (1.68 ± 0.17), compared with other approaches. Little changes were identified in traditional tumor PET features including maximum standard uptake value (SUVmax), SUVmean and total lesion activity (TLA), while the tumor-to-normal (T/N ratio) increased thanks to smaller noise. These results suggested that the N2N algorithm can acquire sufficient denoising performance while preserving the original features of tumors, and may be generalized for abundant brain tumor PET images.
Collapse
|