1. Perspectives of the European Association of Nuclear Medicine on the role of artificial intelligence (AI) in molecular brain imaging. Eur J Nucl Med Mol Imaging 2024; 51:1007-1011. PMID: 38097746; DOI: 10.1007/s00259-023-06553-1.
2. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. PMID: 38319563; PMCID: PMC10902118; DOI: 10.1007/s12194-024-00780-3.
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
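The trajectory this review traces, from filtered backprojection to iterative algorithms, can be made concrete with the core MLEM update, the ancestor of the OSEM-style methods that appear later in this list. This is a generic toy sketch (a tiny hand-made system matrix, no scanner physics), not code from the review:

```python
import numpy as np

def mlem(sino, A, n_iters=200):
    """Maximum-likelihood EM reconstruction for emission tomography.

    sino : measured sinogram counts, shape (m,)
    A    : system matrix mapping image -> sinogram, shape (m, n)
    """
    x = np.ones(A.shape[1])          # non-negative initial image
    sens = A.sum(axis=0)             # sensitivity image (back-projection of ones)
    for _ in range(n_iters):
        proj = A @ x                 # forward projection of current estimate
        ratio = sino / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative EM update
    return x

# Toy example: 2-pixel image, 3 detector bins, noiseless data
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
truth = np.array([4.0, 2.0])
recon = mlem(A @ truth, A)
```

With consistent (noiseless) data the iteration converges to the true activity; in practice, noise and early stopping make regularization or post-filtering necessary, which is where the deep learning methods reviewed here enter.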
3. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. arXiv 2024; arXiv:2401.00232v2. PMID: 38313194; PMCID: PMC10836084.
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness of the photon-counting process is a source of noise that is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches, and explores future directions for NN-based low-dose ET. This examination sheds light on the potential of deep learning to enhance the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
4. PET image denoising based on denoising diffusion probabilistic model. Eur J Nucl Med Mol Imaging 2024; 51:358-368. PMID: 37787849; PMCID: PMC10958486; DOI: 10.1007/s00259-023-06417-8.
Abstract
PURPOSE Due to various physical degradation factors and the limited counts received, PET image quality needs further improvement. The denoising diffusion probabilistic model (DDPM) is a distribution-learning model that transforms a normal distribution into a specific data distribution through iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS Under the DDPM framework, one way to perform PET image denoising is to provide the PET image and/or a prior image as the network input. Another way is to supply the prior image as the network input while including the PET image in the refinement steps, which fits scenarios with different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were used to evaluate the proposed DDPM-based methods. RESULTS Quantification showed that the DDPM-based frameworks that included PET information generated better results than nonlocal-mean, U-Net and generative adversarial network (GAN)-based denoising methods. Adding an additional MR prior to the model helped achieve better performance and further reduced the uncertainty during image denoising. Relying solely on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION DDPM-based PET image denoising is a flexible framework that can efficiently utilize prior information and achieves better performance than nonlocal-mean, U-Net and GAN-based denoising methods.
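The iterative refinement at the heart of DDPM can be sketched in a few lines: a forward process that progressively noises an image, and a reverse (ancestral sampling) step that denoises it using a noise predictor. Here the noise predictor is supplied externally and the schedule values and 4x4 "image" are illustrative placeholders, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (toy values; real PET models tune these)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Forward diffusion: noised image at step t given clean x0 and noise eps."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1 - alpha_bars[t]) * eps

def p_step(xt, t, eps_pred):
    """One reverse (denoising) step of ancestral sampling."""
    coef = betas[t] / np.sqrt(1 - alpha_bars[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:                        # no noise is injected at the final step
        mean += np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

# Toy "image" and noise; in the paper a network predicts eps from xt
x0 = np.ones((4, 4))
eps = rng.standard_normal(x0.shape)
xt = q_sample(x0, 5, eps)
```

The data-consistency variant described in the abstract corresponds to conditioning these reverse steps on the measured PET image while the MR prior enters as the network input.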
5. Image reconstruction using UNET-transformer network for fast and low-dose PET scans. Comput Med Imaging Graph 2023; 110:102315. PMID: 38006648; DOI: 10.1016/j.compmedimag.2023.102315.
Abstract
INTRODUCTION Low-dose and fast PET imaging (low-count PET) plays a significant role in enhancing patient safety, healthcare efficiency, and patient comfort during medical imaging procedures. To achieve high-quality images from low-count PET scans, effective reconstruction models are crucial for denoising and enhancing image quality. The main goal of this paper is to develop an effective and accurate deep learning-based method for reconstructing low-count PET images, a challenging problem due to the limited amount of available data and the high level of noise in the acquired images. The proposed method aims to improve the quality of reconstructed PET images while preserving important features, such as edges and small details, by combining the strengths of UNET and Transformer networks. MATERIAL AND METHODS The proposed TrUNET-MAPEM model integrates a residual UNET-transformer regularizer into the unrolled maximum a posteriori expectation maximization (MAPEM) algorithm for PET image reconstruction. A loss function based on a combination of the structural similarity index (SSIM) and mean squared error (MSE) is utilized. The simulated dataset was generated using the BrainWeb phantom, while the real patient dataset was acquired on a Siemens Biograph mMR PET scanner. State-of-the-art methods were implemented for comparison: OSEM, MAPOSEM, and supervised learning with a 3D-UNET network. The reconstructed images are compared to ground-truth images using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and relative root mean square error (rRMSE) to quantitatively evaluate reconstruction accuracy. RESULTS The proposed TrUNET-MAPEM approach was evaluated on both simulated and real patient data. For the patient data, the model achieved an average PSNR of 33.72 dB, an average SSIM of 0.955, and an average rRMSE of 0.39, compared with average PSNRs of 36.89 dB, 34.12 dB, and 33.52 dB, average SSIMs of 0.944, 0.947, and 0.951, and average rRMSEs of 0.59, 0.49, and 0.42 for the competing methods. For the simulated data, the model achieved an average PSNR of 31.23 dB, an average SSIM of 0.95, and an average rRMSE of 0.55, again outperforming OSEM, MAPOSEM, and 3DUNET-MAPEM. The model successfully reconstructs smooth images while preserving edges, demonstrating potential for clinical use. CONCLUSION The proposed TrUNET-MAPEM model presents a significant advancement in low-count PET image reconstruction, producing images with reduced noise and better edge preservation than other reconstruction and post-processing algorithms. The proposed approach may have important clinical applications in the early detection and diagnosis of various diseases.
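A combined SSIM+MSE objective like the one described can be sketched as follows; the single-window (global) SSIM and the weighting `alpha` are simplifying assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed with a single window over the whole image."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def combined_loss(pred, target, alpha=0.84):
    """Blend of (1 - SSIM) and MSE; alpha is a placeholder weighting."""
    mse = np.mean((pred - target) ** 2)
    return alpha * (1.0 - ssim_global(pred, target)) + (1.0 - alpha) * mse

# Toy usage: identical images give zero loss, a perturbed image a positive one
ref = np.linspace(0.0, 1.0, 64).reshape(8, 8)
loss_same = combined_loss(ref, ref)
loss_noisy = combined_loss(ref + 0.1, ref)
```

Production implementations compute SSIM over sliding local windows (e.g. 11x11 Gaussian), which preserves its sensitivity to local structure; the global version above only conveys the shape of the objective.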
6. Application of a validated prostate MRI deep learning system to independent same-vendor multi-institutional data: demonstration of transferability. Eur Radiol 2023; 33:7463-7476. PMID: 37507610; PMCID: PMC10598076; DOI: 10.1007/s00330-023-09882-9.
Abstract
OBJECTIVES To evaluate a fully automatic deep learning system to detect and segment clinically significant prostate cancer (csPCa) on same-vendor prostate MRI from two different institutions not contributing to training of the system. MATERIALS AND METHODS In this retrospective study, a previously bi-institutionally validated deep learning system (UNETM) was applied to bi-parametric prostate MRI data from one external institution (A), a PI-RADS distribution-matched internal cohort (B), and a csPCa stratified subset of single-institution external public challenge data (C). csPCa was defined as ISUP Grade Group ≥ 2 determined from combined targeted and extended systematic MRI/transrectal US-fusion biopsy. Performance of UNETM was evaluated by comparing ROC AUC and specificity at typical PI-RADS sensitivity levels. Lesion-level analysis between UNETM segmentations and radiologist-delineated segmentations was performed using Dice coefficient, free-response operating characteristic (FROC), and weighted alternative (waFROC). The influence of using different diffusion sequences was analyzed in cohort A. RESULTS In 250/250/140 exams in cohorts A/B/C, differences in ROC AUC were insignificant with 0.80 (95% CI: 0.74-0.85)/0.87 (95% CI: 0.83-0.92)/0.82 (95% CI: 0.75-0.89). At sensitivities of 95% and 90%, UNETM achieved specificity of 30%/50% in A, 44%/71% in B, and 43%/49% in C, respectively. Dice coefficient of UNETM and radiologist-delineated lesions was 0.36 in A and 0.49 in B. The waFROC AUC was 0.67 (95% CI: 0.60-0.83) in A and 0.7 (95% CI: 0.64-0.78) in B. UNETM performed marginally better on readout-segmented than on single-shot echo-planar-imaging. CONCLUSION For same-vendor examinations, deep learning provided comparable discrimination of csPCa and non-csPCa lesions and examinations between local and two independent external data sets, demonstrating the applicability of the system to institutions not participating in model training. 
CLINICAL RELEVANCE STATEMENT A previously bi-institutionally validated, fully automatic deep learning system maintained acceptable exam-level diagnostic performance in two independent external data sets, indicating the potential of deploying AI models without retraining or fine-tuning, and corroborating evidence that AI models extract a substantial amount of transferable domain knowledge about MRI-based prostate cancer assessment. KEY POINTS • A previously bi-institutionally validated fully automatic deep learning system maintained acceptable exam-level diagnostic performance in two independent external data sets. • Lesion detection performance and segmentation congruence were similar on the institutional and an external data set, as measured by the weighted alternative FROC AUC and Dice coefficient. • Although the system generalized to two external institutions without retraining, achieving expected sensitivity and specificity levels requires probability thresholds to be adjusted, underlining the importance of institution-specific calibration and quality control.
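Reporting specificity at fixed sensitivity levels, as done above for the 95% and 90% operating points, amounts to sweeping the probability threshold until the target sensitivity is reached. A generic sketch on toy data (not the study's cohorts):

```python
import numpy as np

def specificity_at_sensitivity(scores, labels, target_sens=0.95):
    """Highest threshold whose sensitivity reaches the target, then report
    specificity at that threshold (binary labels: 1 = clinically significant)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    for thr in np.sort(np.unique(scores))[::-1]:   # descending thresholds
        pred = scores >= thr
        sens = pred[labels == 1].mean()            # true-positive rate
        if sens >= target_sens:
            return (~pred[labels == 0]).mean()     # true-negative rate
    return 0.0

# Toy example with perfectly separated scores
scores = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
spec95 = specificity_at_sensitivity(scores, labels, 0.95)
```

The third key point above follows directly from this construction: the threshold achieving a given sensitivity shifts between institutions, so deployed models need site-specific calibration.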
7. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. PMID: 37802058; DOI: 10.1055/a-2157-6670.
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and - for PET imaging - reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches to hybrid imaging with MRI, CT, and PET, and discuss the specific challenges involved and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
8. International Nuclear Medicine Consensus on the Clinical Use of Amyloid Positron Emission Tomography in Alzheimer's Disease. Phenomics 2023; 3:375-389. PMID: 37589025; PMCID: PMC10425321; DOI: 10.1007/s43657-022-00068-9.
Abstract
Alzheimer's disease (AD) is the main cause of dementia, with its diagnosis and management remaining challenging. Amyloid positron emission tomography (PET) has become increasingly important in medical practice for patients with AD. To integrate and update previous guidelines in the field, a task group of experts of several disciplines from multiple countries was assembled, and they revised and approved the content related to the application of amyloid PET in the medical settings of cognitively impaired individuals, focusing on clinical scenarios, patient preparation, administered activities, as well as image acquisition, processing, interpretation and reporting. In addition, expert opinions, practices, and protocols of prominent research institutions performing research on amyloid PET of dementia are integrated. With the increasing availability of amyloid PET imaging, a complete and standard pipeline for the entire examination process is essential for clinical practice. This international consensus and practice guideline will help to promote proper clinical use of amyloid PET imaging in patients with AD.
9. AI Transformers for Radiation Dose Reduction in Serial Whole-Body PET Scans. Radiol Artif Intell 2023; 5:e220246. PMID: 37293349; PMCID: PMC10245181; DOI: 10.1148/ryai.220246.
Abstract
Purpose To develop a deep learning approach that enables ultra-low-dose (1% of the standard clinical dosage of 3 MBq/kg), ultrafast whole-body PET reconstruction in cancer imaging. Materials and Methods In this Health Insurance Portability and Accountability Act-compliant study, serial fluorine 18-labeled fluorodeoxyglucose PET/MRI scans of pediatric patients with lymphoma were retrospectively collected from two cross-continental medical centers between July 2015 and March 2020. Global similarity between baseline and follow-up scans was used to develop Masked-LMCTrans, a longitudinal multimodality coattentional convolutional neural network (CNN) transformer that provides interaction and joint reasoning between serial PET/MRI scans from the same patient. Image quality of the reconstructed ultra-low-dose PET was evaluated in comparison with a simulated standard 1% PET image. The performance of Masked-LMCTrans was compared with that of CNNs with pure convolution operations (the classic U-Net family), and the effect of different CNN encoders on feature representation was assessed. Statistical differences in the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and visual information fidelity (VIF) were assessed with the Wilcoxon signed rank test. Results The study included 21 patients (mean age, 15 years ± 7 [SD]; 12 female) in the primary cohort and 10 patients (mean age, 13 years ± 4; six female) in the external test cohort. Masked-LMCTrans-reconstructed follow-up PET images demonstrated significantly less noise and more detailed structure compared with simulated 1% ultra-low-dose PET images. SSIM, PSNR, and VIF were significantly higher for Masked-LMCTrans-reconstructed PET (P < .001), with improvements of 15.8%, 23.4%, and 186%, respectively.
Conclusion Masked-LMCTrans achieved high-quality reconstruction of 1% low-dose whole-body PET images. Keywords: Pediatrics, PET, Convolutional Neural Network (CNN), Dose Reduction. Supplemental material is available for this article. © RSNA, 2023.
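Simulated reduced-dose PET, such as the 1% reference images used here, is commonly produced by statistically thinning the recorded events, since keeping each count with probability p reproduces the Poisson statistics of a p-fraction dose. A generic binomial-thinning sketch on a hypothetical count map (not the authors' pipeline):

```python
import numpy as np

def thin_counts(counts, fraction, seed=0):
    """Simulate a reduced-dose acquisition by keeping each recorded event
    independently with probability `fraction` (binomial thinning)."""
    rng = np.random.default_rng(seed)
    return rng.binomial(counts.astype(np.int64), fraction)

# Hypothetical full-dose count map and its ~1% simulation
full = np.random.default_rng(1).poisson(1000.0, size=(16, 16))
low = thin_counts(full, 0.01)
```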
10. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. PMID: 36906112; DOI: 10.1016/j.semcancer.2023.03.005.
Abstract
Based on the advantages of revealing the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging has been performed in numerous types of malignant diseases for diagnosis and monitoring. However, insufficient image quality, the lack of a convincing evaluation tool and intra- and interobserver variation in human work are well-known limitations of nuclear medicine imaging and restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging due to its powerful information collection and interpretation ability. The combination of AI and PET imaging potentially provides great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features of images for further analysis. In this review, an overview of the applications of AI in PET imaging is provided, focusing on image enhancement, tumor detection, response and prognosis prediction and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and to focus on the description of possible future developments.
11. Ultra-low-dose in brain 18F-FDG PET/MRI in clinical settings. Sci Rep 2022; 12:15341. PMID: 36097015; PMCID: PMC9467977; DOI: 10.1038/s41598-022-18029-7.
Abstract
We previously showed that the injected activity could be reduced to 1 MBq/kg without significantly degrading image quality in the exploration of neurocognitive disorders with 18F-FDG PET/MRI. We now hypothesized that the injected activity could be reduced ten-fold. We simulated an 18F-FDG PET/MRI ultra-low-dose protocol (0.2 MBq/kg, PETULD) and compared it to our reference protocol (2 MBq/kg, PETSTD) in 50 patients with cognitive impairment. We tested the reproducibility between PETULD and PETSTD using SUV ratio measurements. We also assessed the impact of PETULD on between-group comparisons and on visual analysis performed by three physicians. The intra-operator agreement between visual assessments of PETSTD and PETULD in patients with severe anomalies was substantial to almost perfect (kappa > 0.79). For patients with normal metabolism or moderate hypometabolism, however, it was only moderate to substantial (kappa > 0.53). SUV ratios were strongly reproducible (SUV ratio difference ± SD = 0.09 ± 0.08). Between-group comparisons yielded very similar results using either PETULD or PETSTD. 18F-FDG activity may therefore be reduced to 0.2 MBq/kg without compromising quantitative measurements. Visual interpretation was reproducible between the ultra-low-dose and standard protocols for patients with severe hypometabolism, but less so for those with moderate hypometabolism. These results suggest that a low-dose protocol (1 MBq/kg) should be preferred in the context of neurodegenerative disease diagnosis.
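Reader agreement of the kind quantified above (kappa > 0.79, etc.) is typically Cohen's kappa, which corrects observed agreement for chance agreement. A generic two-rater sketch on toy labels (not the study's reads):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same cases."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                         # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)   # chance agreement
             for c in np.union1d(r1, r2))
    return (po - pe) / (1.0 - pe)

# Toy usage: two readers in perfect agreement
k_perfect = cohens_kappa([0, 1, 1, 0], [0, 1, 1, 0])
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative for systematic disagreement, which is why thresholds like "substantial" (>0.6) and "almost perfect" (>0.8) are conventionally applied to it.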
12. A personalized deep learning denoising strategy for low-count PET images. Phys Med Biol 2022; 67. PMID: 35697017; PMCID: PMC9321225; DOI: 10.1088/1361-6560/ac783d.
Abstract
Objective. Deep learning denoising networks are typically trained with images that are representative of the testing data. Due to the large variability of noise levels in positron emission tomography (PET) images, it is challenging to develop a proper training set for general clinical use. Our work aims to develop a personalized denoising strategy for low-count PET images at various noise levels. Approach. We first investigated the impact of the noise level of the training images on model performance. Five 3D U-Net models were trained on five groups of images at different noise levels, and a one-size-fits-all model was trained on images covering a wider range of noise levels. We then developed a personalized weighting method that linearly blends the results of two models trained on 20%-count-level and 60%-count-level images to balance the trade-off between noise reduction and spatial blurring. By adjusting the weighting factor, denoising can be conducted in a personalized and task-dependent way. Main results. The evaluation of the six models showed that models trained on noisier images denoised more strongly but introduced more spatial blurring, and the one-size-fits-all model did not generalize well when deployed on testing images with a wide range of noise levels. The personalized denoising results showed that noisier images require higher weights on noise reduction to maximize structural similarity and minimize mean squared error, and the model trained on 20%-count-level images produced the best liver lesion detectability. Significance. Our study demonstrated that in deep learning-based low-dose PET denoising, the noise level of the training input images has a substantial impact on model performance. The proposed personalized denoising strategy uses two training sets to overcome the drawbacks of each individual network and provides a series of denoised results for clinical reading.
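The personalized weighting described, linearly blending the outputs of models trained at 20% and 60% count levels, reduces to a convex combination of two denoised images. In this sketch the two "networks" are stand-in box smoothers of different strengths, purely illustrative of the stronger/weaker-denoiser trade-off:

```python
import numpy as np

def personalized_denoise(img, net_20, net_60, w):
    """Blend two denoisers: w in [0, 1] weights the stronger-denoising
    model (trained at the 20% count level) against the lighter one."""
    return w * net_20(img) + (1.0 - w) * net_60(img)

def smooth(img, k):
    """Box filter of half-width k (stand-in for a trained denoiser)."""
    pad = np.pad(img, k, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy:k + dy + img.shape[0], k + dx:k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

net_20 = lambda im: smooth(im, 2)   # heavier smoothing: more noise reduction
net_60 = lambda im: smooth(im, 1)   # lighter smoothing: better detail
img = np.random.default_rng(3).normal(size=(16, 16))
blended = personalized_denoise(img, net_20, net_60, 0.7)
```

Sweeping `w` generates the "series of denoised results" the abstract mentions, letting the reader pick an operating point per task (e.g. lesion detection vs. quantification).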
13. Automated knee cartilage segmentation for heterogeneous clinical MRI using generative adversarial networks with transfer learning. Quant Imaging Med Surg 2022; 12:2620-2633. PMID: 35502381; PMCID: PMC9014147; DOI: 10.21037/qims-21-459.
Abstract
BACKGROUND This study aimed to build a deep learning model to automatically segment heterogeneous clinical MRI scans by optimizing, with transfer learning, a pre-trained model built from a homogeneous research dataset. METHODS A conditional generative adversarial network pretrained on Osteoarthritis Initiative MR images was transferred to 30 sets of heterogeneous MR images collected from clinical routine. Two trained radiologists manually segmented the 30 sets of clinical MR images for model training, validation and testing. Model performance was compared to models trained from scratch with different datasets, as well as to the two radiologists. A 5-fold cross-validation was performed. RESULTS The transfer learning model obtained an overall average Dice coefficient of 0.819, an average 95th-percentile Hausdorff distance of 1.463 mm, and an average symmetric surface distance of 0.350 mm on the 5 random holdout test sets. The 5-fold cross-validation gave a mean Dice coefficient of 0.801, a mean 95th-percentile Hausdorff distance of 1.746 mm, and a mean average symmetric surface distance of 0.364 mm. The model outperformed the other models and performed similarly to the radiologists. CONCLUSIONS A transfer learning model was able to automatically segment knee cartilage from heterogeneous clinical MR images with a small training set, with performance comparable to that of humans. In addition, the model proved robust when tested through cross-validation and on images from a different vendor. We found it feasible to perform fully automated cartilage segmentation of clinical knee MR images, which would facilitate the clinical application of quantitative MRI techniques and other prediction models for improved patient treatment planning.
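The Dice coefficient used above to score segmentation overlap can be computed directly from binary masks; a minimal sketch with toy masks (the 8x8 squares are illustrative, not study data):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

# Toy masks: a model output and a shifted "radiologist" reference
mask_auto = np.zeros((8, 8), dtype=bool); mask_auto[2:6, 2:6] = True
mask_ref = np.zeros((8, 8), dtype=bool); mask_ref[3:7, 3:7] = True
```

Dice rewards volumetric overlap; the Hausdorff and surface distances also reported in the study complement it by penalizing boundary errors that Dice barely notices.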
14. Parametric image generation with the uEXPLORER total-body PET/CT system through deep learning. Eur J Nucl Med Mol Imaging 2022; 49:2482-2492. DOI: 10.1007/s00259-022-05731-x.
15. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. PMID: 35312031; PMCID: PMC9250483; DOI: 10.1007/s00259-022-05746-4.
Abstract
Image processing plays a crucial role in maximising diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is firstly presented. We then review methods which integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
16. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Fortschr Röntgenstr 2022; 194:605-612. PMID: 35211929; DOI: 10.1055/a-1718-4128.
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and - for PET imaging - reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches to hybrid imaging with MRI, CT, and PET, and discuss the specific challenges involved and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET. CITATION FORMAT · Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Fortschr Röntgenstr 2022; DOI: 10.1055/a-1718-4128.
17.
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural network (CNN) and generative adversarial network (GAN) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation on PET. We categorized the studies for PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
18. Spatial adaptive and transformer fusion network (STFNet) for low-count PET blind denoising with MRI. Med Phys 2021; 49:343-356. PMID: 34796526; DOI: 10.1002/mp.15368.
Abstract
PURPOSE Positron emission tomography (PET) has been widely used in various clinical applications. PET is a type of emission computed tomography that detects positron annihilation radiation. With magnetic resonance imaging (MRI) providing anatomical information, joint PET/MRI reduces the radiation exposure risk of patients. Improved hardware and imaging algorithms have been proposed to further decrease the dose from radioactive tracers or the bed duration, but few methods focus on denoising low-count PET with MRI input. The existing methods are based on fixed conventional convolution and local attention, which do not sufficiently extract and fuse contextual and complementary information from multimodal input, so there is still much room for improvement. Therefore, we propose a novel deep learning method for low-count PET/MRI denoising called the spatial-adaptive and transformer fusion network (STFNet), which consists of a Siamese encoder with a spatial-adaptive block (SA-block) and a transformer fusion encoder (TFE). METHODS Our proposed STFNet consists of a Siamese encoder with an SA-block, the TFE, and two decoder branches. First, in the encoder, we adopt the SA-block in the Siamese encoder. The SA-block comprises deformable convolution with fusion modulation (DCFM) and two convolutional operations, which can promote extraction of more relevant and long-range contextual features. Second, the pixel-to-pixel TFE helps the network establish local and global relationships between high-level feature maps of PET and MRI. In the decoder part, we design two branches for PET denoising and MRI translation, and predictions are obtained by trainable weighted summation. This proposed algorithm is implemented to predict synthetic standard-dose neck PET images from low-count neck PET images and MRI. Additionally, this method is compared with the existing U-Net and residual U-Net methods with and without MRI input.
RESULTS To demonstrate the advantages of our method, we present configuration studies of the TFE, ablation studies, and empirical comparisons. Quantitative analyses are based on root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and Pearson correlation coefficient (PCC). Additionally, qualitative results show the comparisons between our proposed method and other existing methods. All experimental results and visualizations show that our method achieves state-of-the-art performance both quantitatively and qualitatively. CONCLUSIONS Based on our experiments, STFNet performs better than existing methods both in quantitative measures and in visual quality. However, our proposed method may still be suboptimal because we apply only the L1 loss to train our data set, and the data set includes corrupted PET with different low counts. In the future, we may exploit a generative adversarial network (GAN)-based paradigm in our STFNet to further improve the visual quality.
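The trainable weighted summation of the two decoder branches and the L1 training objective described in this abstract can be sketched in a few lines. This is an illustrative NumPy toy, not the authors' STFNet code: the scalar blend weight, synthetic 8×8 patches, and finite-difference gradient step are all assumptions made for demonstration.

```python
import numpy as np

# Hedged sketch: a scalar weight w blends a "PET-denoising branch" output and
# an "MRI-translation branch" output, trained with an L1 loss against the
# standard-dose target. All data are synthetic.
rng = np.random.default_rng(0)
target = rng.random((8, 8))                               # synthetic standard-dose patch
pet_branch = target + 0.1 * rng.standard_normal((8, 8))   # less noisy branch prediction
mri_branch = target + 0.3 * rng.standard_normal((8, 8))   # noisier branch prediction

def fuse(w):
    """Trainable weighted summation of the two branch predictions."""
    a = 1.0 / (1.0 + np.exp(-w))        # sigmoid keeps the blend weight in (0, 1)
    return a * pet_branch + (1.0 - a) * mri_branch

def l1_loss(w):
    return np.abs(fuse(w) - target).mean()

# One finite-difference gradient-descent step on the blend weight.
w, lr, eps = 0.0, 1.0, 1e-5
grad = (l1_loss(w + eps) - l1_loss(w - eps)) / (2 * eps)
w_new = w - lr * grad
```

In the full network the "weight" would be a learned tensor and the branches full decoders; the toy only shows how a single L1 objective can train the fusion step.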
|
19
|
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
|
20
|
Radiomics, machine learning, and artificial intelligence-what the neuroradiologist needs to know. Neuroradiology 2021; 63:1957-1967. [PMID: 34537858 PMCID: PMC8449698 DOI: 10.1007/s00234-021-02813-9] [Citation(s) in RCA: 45] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Accepted: 09/09/2021] [Indexed: 01/04/2023]
Abstract
PURPOSE Artificial intelligence (AI) is playing an ever-increasing role in Neuroradiology. METHODS When designing AI-based research in neuroradiology and appraising the literature, it is important to understand the fundamental principles of AI. Training, validation, and test datasets must be defined and set apart as a priority. External validation and testing datasets are preferable, when feasible. The specific type of learning process (supervised vs. unsupervised) and the machine learning model also require definition. Deep learning (DL) is an AI-based approach that is modelled on the structure of neurons of the brain; convolutional neural networks (CNN) are a commonly used example in neuroradiology. RESULTS Radiomics is a frequently used approach in which a multitude of imaging features are extracted from a region of interest and subsequently reduced and selected to convey diagnostic or prognostic information. Deep radiomics uses CNNs to directly extract features and obviate the need for predefined features. CONCLUSION Common limitations and pitfalls in AI-based research in neuroradiology are limited sample sizes ("small-n-large-p problem"), selection bias, as well as overfitting and underfitting.
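The radiomics workflow summarized above (extract many features from a region of interest, then reduce and select them) can be illustrated with a minimal sketch. The first-order features, synthetic images, and variance-based selection below are illustrative assumptions, not taken from the reviewed paper; real pipelines extract hundreds of features and select with, e.g., LASSO across many patients.

```python
import numpy as np

# Toy radiomics pipeline: a few first-order features from an ROI, then a
# simple variance-based selection step across a synthetic "cohort".
rng = np.random.default_rng(1)
image = rng.random((32, 32))
roi = np.zeros((32, 32), dtype=bool)
roi[8:24, 8:24] = True                  # hypothetical ROI mask

def first_order_features(v):
    """A handful of first-order radiomics features from ROI voxel values."""
    hist, _ = np.histogram(v, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": v.mean(),
        "std": v.std(),
        "skewness": ((v - v.mean()) ** 3).mean() / v.std() ** 3,
        "entropy": -(p * np.log2(p)).sum(),   # histogram entropy
    }

features = first_order_features(image[roi])

# Feature reduction: rank features by their variance across subjects and
# keep the two most variable ones (a stand-in for real selection methods).
cohort = [first_order_features(rng.random((32, 32))[roi]) for _ in range(5)]
variances = {k: np.var([f[k] for f in cohort]) for k in features}
selected = [k for k, _ in sorted(variances.items(), key=lambda kv: -kv[1])][:2]
```

The selected features would then feed a classifier or survival model to convey the diagnostic or prognostic information the abstract describes.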
|
21
|
New PET technologies - embracing progress and pushing the limits. Eur J Nucl Med Mol Imaging 2021; 48:2711-2726. [PMID: 34081153 PMCID: PMC8263417 DOI: 10.1007/s00259-021-05390-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 04/25/2021] [Indexed: 12/11/2022]
|
22
|
True ultra-low-dose amyloid PET/MRI enhanced with deep learning for clinical interpretation. Eur J Nucl Med Mol Imaging 2021; 48:2416-2425. [PMID: 33416955 PMCID: PMC8891344 DOI: 10.1007/s00259-020-05151-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Accepted: 12/06/2020] [Indexed: 02/02/2023]
Abstract
PURPOSE While sampled or short-frame realizations have shown the potential power of deep learning to reduce radiation dose for PET images, evidence in true injected ultra-low-dose cases is lacking. Therefore, we evaluated deep learning enhancement using a significantly reduced injected radiotracer protocol for amyloid PET/MRI. METHODS Eighteen participants underwent two separate 18F-florbetaben PET/MRI studies in which an ultra-low dose (6.64 ± 3.57 MBq, 2.2 ± 1.3% of standard) or a standard dose (300 ± 14 MBq) was injected. The PET counts from the standard-dose list-mode data were also undersampled to approximate an ultra-low-dose session. A pre-trained convolutional neural network was fine-tuned using MR images and either the injected or sampled ultra-low-dose PET as inputs. Image quality of the enhanced images was evaluated using three metrics (peak signal-to-noise ratio, structural similarity, and root mean square error), as well as the coefficient of variation (CV) for regional standard uptake value ratios (SUVRs). Mean cerebral uptake was correlated across image types to assess the validity of the sampled realizations. To judge clinical performance, four trained readers scored image quality on a five-point scale (using 15% non-inferiority limits for the proportion of studies rated 3 or better) and classified cases into amyloid-positive and negative studies. RESULTS The deep learning-enhanced PET images showed marked improvement on all quality metrics compared with the low-dose images, as well as generally similar regional CVs to the standard-dose images. All enhanced images were non-inferior to their standard-dose counterparts. Accuracy for amyloid status was high (97.2% and 91.7% for images enhanced from injected and sampled ultra-low-dose data, respectively), which was similar to intra-reader reproducibility of standard-dose images (98.6%).
CONCLUSION Deep learning methods can synthesize diagnostic-quality PET images from ultra-low injected dose simultaneous PET/MRI data, demonstrating the general validity of sampled realizations and the potential to reduce dose significantly for amyloid imaging.
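The three image-quality metrics reported in this study can be computed directly. Below is a minimal sketch of RMSE, PSNR, and a single-window (global) SSIM on synthetic data; real SSIM evaluations use a sliding Gaussian window and average the local values, and none of this is the authors' code.

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return np.sqrt(((x - y) ** 2).mean())

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to data_range."""
    return 20 * np.log10(data_range) - 10 * np.log10(((x - y) ** 2).mean())

def ssim_global(x, y, data_range=1.0):
    # Single-window SSIM over the whole image (standard c1/c2 constants);
    # windowed SSIM would average this quantity over local patches.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                   # stand-in for standard-dose PET
noisy = np.clip(ref + 0.05 * rng.standard_normal((64, 64)), 0, 1)
```

Sanity checks: RMSE is 0 and SSIM is 1 for identical images, and PSNR increases as the enhanced image approaches the reference.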
|
23
|
Transfer Learning in Magnetic Resonance Brain Imaging: A Systematic Review. J Imaging 2021; 7:66. [PMID: 34460516 PMCID: PMC8321322 DOI: 10.3390/jimaging7040066] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 03/26/2021] [Accepted: 03/29/2021] [Indexed: 11/25/2022] Open
Abstract
(1) Background: Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks to improve generalization in the tasks of interest. In magnetic resonance imaging (MRI), transfer learning is important for developing strategies that address the variation in MR images from different imaging protocols or scanners. Additionally, transfer learning is beneficial for reutilizing machine learning models that were trained to solve different (but related) tasks to the task of interest. The aim of this review is to identify research directions, gaps in knowledge, applications, and widely used strategies among the transfer learning approaches applied in MR brain imaging; (2) Methods: We performed a systematic literature search for articles that applied transfer learning to MR brain imaging tasks. We screened 433 studies for their relevance, and we categorized and extracted relevant information, including task type, application, availability of labels, and machine learning methods. Furthermore, we closely examined brain MRI-specific transfer learning approaches and other methods that tackled issues relevant to medical imaging, including privacy, unseen target domains, and unlabeled data; (3) Results: We found 129 articles that applied transfer learning to MR brain imaging tasks. The most frequent applications were dementia-related classification tasks and brain tumor segmentation. The majority of articles utilized transfer learning techniques based on convolutional neural networks (CNNs). Only a few approaches utilized methodology clearly specific to brain MRI, or considered privacy issues, unseen target domains, or unlabeled data. We proposed a new categorization to group specific, widely used approaches such as pretraining and fine-tuning CNNs; (4) Discussion: There is increasing interest in transfer learning for brain MRI.
Well-known public datasets have clearly contributed to the popularity of Alzheimer's diagnostics/prognostics and tumor segmentation as applications. Likewise, the availability of pretrained CNNs has promoted their utilization. Finally, the majority of the surveyed studies did not examine in detail the interpretation of their strategies after applying transfer learning, and did not compare their approach with other transfer learning approaches.
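The "pretrain, then fine-tune" strategy that this review groups among the widely used approaches can be shown with a deliberately tiny stand-in model. The NumPy logistic-regression "network", the two synthetic tasks, and all parameter choices below are illustrative assumptions, not a brain-MRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def train(X, y, w, epochs=200, lr=0.5):
    """Plain gradient descent on logistic loss, starting from weights w."""
    for _ in range(epochs):
        z = np.clip(X @ w, -30, 30)          # clip to avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# "Source" task: plenty of labeled data.
Xs = rng.standard_normal((200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
ys = (Xs @ true_w > 0).astype(float)

# "Target" task: same underlying rule, but only a few labels -- the
# data-scarce regime that motivates transfer learning in medical imaging.
Xt = rng.standard_normal((10, 5))
yt = (Xt @ true_w > 0).astype(float)

w_scratch = train(Xt, yt, np.zeros(5))        # train from scratch on target
w_pre = train(Xs, ys, np.zeros(5))            # pretrain on source task
w_tuned = train(Xt, yt, w_pre, epochs=20)     # brief fine-tuning on target

def accuracy(w, X, y):
    return ((X @ w > 0).astype(float) == y).mean()

Xeval = rng.standard_normal((1000, 5))
yeval = (Xeval @ true_w > 0).astype(float)
acc_scratch = accuracy(w_scratch, Xeval, yeval)
acc_tuned = accuracy(w_tuned, Xeval, yeval)
```

In the surveyed papers the pretrained weights come from a CNN (often ImageNet- or public-dataset-trained) rather than a linear model, but the mechanics of initializing from source-task weights and briefly re-training on target data are the same.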
|
24
|
|
25
|
Abstract
Artificial intelligence (AI) has recently attracted much attention for its potential use in healthcare applications. The use of AI to improve and extract more information out of medical images, given their parallels with natural images and the immense progress in the area of computer vision, has been at the forefront of these advances. This is due to a convergence of factors, including the increasing numbers of scans performed, the availability of open source AI tools, and decreases in the costs of hardware required to implement these technologies. In this article, we review the progress in the use of AI toward optimizing PET/CT and PET/MRI studies. These two methods, which combine molecular information with structural and (in the case of MRI) functional imaging, are extremely valuable for a wide range of clinical indications. They are also tremendously data-rich modalities and as such are highly amenable to data-driven technologies such as AI. The first half of the article will focus on methods to improve PET reconstruction and image quality, which has multiple benefits including faster image acquisition, faster image reconstruction, and lower or even "zero" radiation dose imaging. It will also address the value of AI-driven methods to perform MR-based attenuation correction. The second half will address how some of these advances can be used to optimize diagnosis from the acquired images, with examples given for whole-body oncology, cardiology, and neurology indications. Overall, it is likely that the use of AI will markedly improve both the quality and safety of PET/CT and PET/MRI as well as enhance our ability to interpret the scans and follow lesions over time. This will hopefully lead to expanded clinical use cases for these valuable technologies, leading to better patient care.
|