1
Shim HS, Cho MJ, Lee JS. Energy estimation methods for positron emission tomography detectors composed of multiple scintillators. Biomed Eng Lett 2025;15:489-496. [PMID: 40271397] [PMCID: PMC12011668] [DOI: 10.1007/s13534-025-00464-w] [Received: 10/17/2024] [Revised: 01/13/2025] [Accepted: 01/29/2025]
Abstract
The performance and image quality of positron emission tomography (PET) systems can be enhanced by strategically employing multiple different scintillators, particularly those with different decay times. Two cutting-edge PET detector technologies that exploit scintillators with different decay times are the phoswich detector and the emerging metascintillator. In PET imaging, accurate and precise energy measurement is important for effectively rejecting scattered gamma rays and estimating the scatter distribution. However, traditional measures of light output, such as the amplitude or integral of photosensor output pulses, cannot accurately indicate the energy deposited by gamma rays across multiple scintillators. To address these issues, this study explores two methods for energy estimation in PET detectors that employ multiple scintillators. The first method uses a pseudo-inverse matrix generated from the unique pulse profile of each crystal, while the second employs an artificial neural network (ANN) to estimate the energy deposited in each crystal. The effectiveness of the proposed methods was experimentally evaluated using three heavy and dense inorganic scintillation crystals (BGO, LGSO, and GAGG) and three fast plastic scintillators (EJ200, EJ224, and EJ232). The ANN-based method consistently demonstrated superior accuracy across all crystal combinations compared with the pseudo-inverse matrix approach. In the pseudo-inverse matrix approach, there is a negligible difference in accuracy between integral-based and amplitude-based energy labels. In the ANN approach, by contrast, integral-based energy labels consistently outperform amplitude-based ones. This study contributes to the advancement of PET detector technology by proposing and evaluating two methods for estimating the energy deposited in detectors composed of multiple scintillators.
The ANN approach appears to be a promising solution for improving the accuracy of energy estimation, addressing the challenges posed by mixed scintillation pulses.
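The first of the two estimation methods above can be sketched as a linear pulse-mixing model: the digitized pulse is treated as a weighted sum of per-crystal template pulses, and the per-crystal energies are recovered with a precomputed pseudo-inverse. A minimal illustration, assuming idealized single-exponential templates and a noiseless pulse (the decay times and sampling below are illustrative, not the paper's values):

```python
import numpy as np

def template(decay_ns, t):
    # Idealized single-exponential scintillation pulse, normalized to unit area
    p = np.exp(-t / decay_ns)
    return p / p.sum()

t = np.arange(0.0, 400.0, 2.0)                  # 2-ns samples over 400 ns
P = np.column_stack([template(300.0, t),        # slow crystal (BGO-like decay)
                     template(40.0, t)])        # fast crystal (LGSO-like decay)
P_pinv = np.linalg.pinv(P)                      # computed once per crystal pair

# Simulated mixed pulse: fractions of the total energy deposited in each crystal
true_energy = np.array([0.6, 0.4])
pulse = P @ true_energy
est_energy = P_pinv @ pulse                     # per-crystal energy estimates
```

Because the two templates are linearly independent, the pseudo-inverse recovers the noiseless mixture exactly; with real, noisy pulses the same product gives the least-squares estimate.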
Affiliation(s)
- Hyeong Seok Shim
  - Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
  - Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Korea
  - Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Korea
- Min Jeong Cho
  - Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
  - Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Korea
  - Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Korea
- Jae Sung Lee
  - Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
  - Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Korea
  - Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Korea
  - Brightonix Imaging Inc., Seoul, Korea
2
Adeli Z, Hosseini SA, Salimi Y, Vahidfar N, Sheikhzadeh P. A CT-free deep-learning-based attenuation and scatter correction for copper-64 PET in different time-point scans. Radiol Phys Technol 2025. [PMID: 40261572] [DOI: 10.1007/s12194-025-00905-2] [Received: 11/10/2024] [Revised: 02/17/2025] [Accepted: 04/02/2025]
Abstract
This study aimed to develop and evaluate a deep-learning model for attenuation and scatter correction in whole-body ⁶⁴Cu-based PET imaging. A swinUNETR model was implemented using the MONAI framework. Whole-body PET-nonAC and PET-CTAC image pairs were used for training, where PET-nonAC served as the input and PET-CTAC as the output. Due to the limited number of Cu-based PET/CT images, a model pre-trained on 51 Ga-PSMA PET images was fine-tuned on 15 Cu-based PET images via transfer learning. The model was trained without freezing layers, adapting learned features to the Cu-based dataset. For testing, six additional Cu-based PET images were used, representing 1-h, 12-h, and 48-h time points, with two images per group. The model performed best at the 12-h time point, with an MSE of 0.002 ± 0.0004 SUV², PSNR of 43.14 ± 0.08 dB, and SSIM of 0.981 ± 0.002. At 48 h, accuracy slightly decreased (MSE = 0.036 ± 0.034 SUV²), but image quality remained high (PSNR = 44.49 ± 1.09 dB, SSIM = 0.981 ± 0.006). At 1 h, the model also showed strong results (MSE = 0.024 ± 0.002 SUV², PSNR = 45.89 ± 5.23 dB, SSIM = 0.984 ± 0.005), demonstrating consistency across time points. Despite the limited size of the training dataset, the use of fine-tuning from a previously pre-trained model yielded acceptable performance. The results demonstrate that the proposed deep learning model can effectively generate PET-DLAC images that closely resemble PET-CTAC images, with only minor errors.
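The similarity metrics reported above are straightforward to compute; a minimal sketch of MSE and PSNR, with synthetic arrays standing in for the PET-CTAC reference and PET-DLAC prediction (SSIM requires a windowed implementation and is omitted here):

```python
import numpy as np

def mse(ref, img):
    # Mean squared error between a reference and a test image
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img):
    # Peak signal-to-noise ratio in dB, using the reference dynamic range as peak
    data_range = float(ref.max() - ref.min())
    return 10.0 * np.log10(data_range ** 2 / mse(ref, img))

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 10.0, size=(64, 64))        # stand-in for PET-CTAC (SUV)
pred = ref + rng.normal(0.0, 0.1, size=ref.shape)  # stand-in for PET-DLAC
quality_db = psnr(ref, pred)
```

With noise of standard deviation 0.1 on a ~10-unit range, the PSNR lands near 40 dB, which is the same order as the values reported in the abstract.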
Affiliation(s)
- Zahra Adeli
  - Group of Medical Radiation Engineering, Department of Energy Engineering, Sharif University of Technology, Tehran, Iran
- Seyed Abolfazl Hosseini
  - Group of Medical Radiation Engineering, Department of Energy Engineering, Sharif University of Technology, Tehran, Iran
- Yazdan Salimi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Nasim Vahidfar
  - Department of Nuclear Medicine, Faculty of Medicine, IKHC, Tehran University of Medical Sciences, Tehran, Iran
- Peyman Sheikhzadeh
  - Department of Nuclear Medicine, Faculty of Medicine, IKHC, Tehran University of Medical Sciences, Tehran, Iran
  - Department of Biomedical Physics and Engineering, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran
  - Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
3
Yuan H, Zhu M, Yang R, Liu H, Li I, Hong C. Rethinking Domain-Specific Pretraining by Supervised or Self-Supervised Learning for Chest Radiograph Classification: A Comparative Study Against ImageNet Counterparts in Cold-Start Active Learning. Health Care Sci 2025;4:110-143. [PMID: 40241982] [PMCID: PMC11997468] [DOI: 10.1002/hcs2.70009] [Received: 10/10/2024] [Revised: 01/05/2025] [Accepted: 01/26/2025]
Abstract
Objective. Deep learning (DL) has become the prevailing method in chest radiograph analysis, yet its performance heavily depends on large quantities of annotated images. To mitigate the cost, cold-start active learning (AL), comprising an initialization stage followed by subsequent learning, selects a small subset of informative data points for labeling. Recent pretrained models, built by supervised or self-supervised learning tailored to chest radiographs, have shown broad applicability to diverse downstream tasks. However, their potential in cold-start AL remains unexplored. Methods. To validate the efficacy of domain-specific pretraining, we compared two foundation models, supervised TXRV and self-supervised REMEDIS, with their general-domain counterparts pretrained on ImageNet. Model performance was evaluated at both the initialization and subsequent learning stages on two diagnostic tasks: pediatric pneumonia and COVID-19. For initialization, we assessed their integration with three strategies: diversity, uncertainty, and hybrid sampling. For subsequent learning, we focused on uncertainty sampling powered by different pretrained models. We also conducted statistical tests to compare the foundation models with their ImageNet counterparts, investigate the relationship between initialization and subsequent learning, examine the performance of one-shot initialization against the full AL process, and assess the influence of the class balance of initialization samples on both stages. Results. First, domain-specific foundation models failed to outperform their ImageNet counterparts in six out of eight experiments on informative sample selection. Both domain-specific and general pretrained models were unable to generate representations that could substitute for the original images as model inputs in seven of the eight scenarios. However, pretrained-model-based initialization surpassed random sampling, the default approach in cold-start AL.
Second, initialization performance was positively correlated with subsequent learning performance, highlighting the importance of initialization strategies. Third, one-shot initialization performed comparably to the full AL process, demonstrating the potential to reduce experts' repeated waiting during AL iterations. Last, a U-shaped correlation was observed between the class balance of initialization samples and model performance, suggesting that class balance is more strongly associated with performance at middle budget levels than at low or high budgets. Conclusions. In this study, we highlighted the limitations of medical pretraining compared with general pretraining in the context of cold-start AL. We also identified promising outcomes related to cold-start AL, including initialization based on pretrained models, the positive influence of initialization on subsequent learning, the potential of one-shot initialization, and the influence of class balance on middle-budget AL. Researchers are encouraged to improve medical pretraining for versatile DL foundations and explore novel AL methods.
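Diversity sampling, one of the three initialization strategies compared above, is often implemented as greedy farthest-point selection in the pretrained model's embedding space. A minimal sketch under that assumption (the embeddings are random stand-ins for real model features, and the selection rule is a common heuristic, not necessarily the paper's exact variant):

```python
import numpy as np

def farthest_point_sampling(feats, k, seed=0):
    # Greedy diversity sampling: start from a random point, then repeatedly
    # pick the point farthest (in embedding space) from everything selected.
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(feats.shape[0]))]
    d = np.linalg.norm(feats - feats[selected[0]], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(d))                     # most distant remaining point
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(feats - feats[nxt], axis=1))
    return selected

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 16))   # stand-in for pretrained-model embeddings
batch = farthest_point_sampling(feats, k=10)   # indices to send for labeling
```

The selected indices form the initial labeling budget; uncertainty or hybrid strategies would replace the distance criterion with model-confidence scores.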
Affiliation(s)
- Han Yuan
  - Duke-NUS Medical School, Centre for Quantitative Medicine, Singapore, Singapore
- Mingcheng Zhu
  - Duke-NUS Medical School, Centre for Quantitative Medicine, Singapore, Singapore
  - Department of Engineering Science, University of Oxford, Oxford, UK
- Rui Yang
  - Duke-NUS Medical School, Centre for Quantitative Medicine, Singapore, Singapore
- Han Liu
  - Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Irene Li
  - Information Technology Center, University of Tokyo, Bunkyo-ku, Japan
- Chuan Hong
  - Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina, USA
4
Yamada T, Hanaoka K, Morimoto-Ishikawa D, Yamakawa Y, Kumakawa S, Ohtani A, Mizuta T, Kaida H, Ishii K. Crossover evaluation of time-of-flight-based attenuation correction in brain ¹⁸F-FDG and ¹⁸F-flutemetamol PET. Ann Nucl Med 2025;39:189-198. [PMID: 39347876] [DOI: 10.1007/s12149-024-01986-6] [Received: 07/31/2024] [Accepted: 09/23/2024]
Abstract
BACKGROUND Brain-dedicated positron emission tomography (PET) systems offer high spatial resolution and sensitivity for accurate clinical assessments. Attenuation correction (AC) is important in PET imaging, particularly in brain studies. This study assessed the reproducibility of attenuation maps (µ-maps) generated by a specialized time-of-flight (TOF) brain-dedicated PET system for imaging with different PET tracers. METHODS Twelve subjects underwent both ¹⁸F-fluorodeoxyglucose (FDG)-PET and ¹⁸F-flutemetamol (FMM) amyloid-PET scans. Images were reconstructed with µ-maps obtained by a maximum likelihood-based AC method. Voxel-based and region-based analyses were used to compare µ-maps obtained with FDG-PET versus FMM-PET; FDG-PET images reconstructed using an FDG-PET µ-map (FDG × FDG) versus those reconstructed with an FMM-PET µ-map (FDG × FMM); and FMM-PET images reconstructed using an FDG-PET µ-map (FMM × FDG) versus those reconstructed with an FMM-PET µ-map (FMM × FMM). RESULTS Small but significant differences in µ-maps were observed between tracers, primarily in bone regions. The µ-maps obtained with FDG-PET had higher µ-values than those obtained with FMM-PET in the parietal regions of the head and skull, in a portion of the cerebellar dentate nucleus, and on the surface of the frontal lobe. The comparison between FDG × FDG and FDG × FMM values in different regions yielded findings similar to those of the µ-map comparison. FDG × FMM values were significantly higher than FDG × FDG values in the bilateral temporal bones and a small part of the temporal lobe. Similarly, FMM × FMM values were significantly higher than FMM × FDG values in the bilateral temporal bones. FMM × FDG values were significantly higher than FMM × FMM values in a small area of the right cerebellar hemisphere. However, the relative errors in these µ-maps were within ± 4%, suggesting that they are clinically insignificant.
In PET images reconstructed with the original and swapped µ-maps, the relative errors were within ± 7% and the quality was nearly equivalent. CONCLUSION These findings suggest the clinical reliability of the AC method without an external radiation source in TOF brain-dedicated PET systems.
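The ± 4% and ± 7% figures above are voxel-wise relative errors; a minimal sketch of that comparison, with synthetic µ-maps standing in for the FDG- and FMM-derived maps (the µ-value and the 2% offset are illustrative, not the study's data):

```python
import numpy as np

def relative_error_pct(mu_ref, mu_test, eps=1e-6):
    # Voxel-wise relative error (%) between two attenuation maps,
    # restricted to voxels with non-negligible reference attenuation
    mask = mu_ref > eps
    err = np.zeros_like(mu_ref)
    err[mask] = 100.0 * (mu_test[mask] - mu_ref[mask]) / mu_ref[mask]
    return err

mu_fdg = np.full((8, 8, 8), 0.096)   # water-like mu at 511 keV, in 1/cm
mu_fmm = mu_fdg * 1.02               # hypothetical uniform 2% difference
err = relative_error_pct(mu_fdg, mu_fmm)
```

A map whose error stays within the ± 4% band, as here, would be judged clinically insignificant by the abstract's criterion.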
Affiliation(s)
- Takahiro Yamada
  - Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University Hospital, Osaka, Japan
- Kohei Hanaoka
  - Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University Hospital, Osaka, Japan
- Daisuke Morimoto-Ishikawa
  - Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University Hospital, Osaka, Japan
- Shiho Kumakawa
  - Medical Systems Division, Shimadzu Corporation, Kyoto, Japan
- Atsushi Ohtani
  - Medical Systems Division, Shimadzu Corporation, Kyoto, Japan
- Tetsuro Mizuta
  - Medical Systems Division, Shimadzu Corporation, Kyoto, Japan
- Hayato Kaida
  - Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University Hospital, Osaka, Japan
  - Department of Radiology, Faculty of Medicine, Kindai University, Osaka, Japan
- Kazunari Ishii
  - Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University Hospital, Osaka, Japan
  - Department of Radiology, Faculty of Medicine, Kindai University, Osaka, Japan
5
Farshchitabrizi AH, Sadeghi MH, Sina S, Alavi M, Feshani ZN, Omidi H. AI-enhanced PET/CT image synthesis using CycleGAN for improved ovarian cancer imaging. Pol J Radiol 2025;90:e26-e35. [PMID: 40070416] [PMCID: PMC11891552] [DOI: 10.5114/pjr/196804] [Received: 08/23/2024] [Accepted: 12/03/2024]
Abstract
Purpose. Ovarian cancer is the fifth-leading cause of cancer-related death among women. Positron emission tomography (PET), which offers detailed metabolic data, can be effectively used for early cancer screening. However, proper attenuation correction is essential for interpreting the data obtained by this imaging modality. Computed tomography (CT) imaging is commonly performed alongside PET imaging for attenuation correction. This approach may introduce issues in the spatial alignment and registration of the images obtained by the two modalities. This study aims to perform PET image attenuation correction using generative adversarial networks (GANs), without additional CT imaging. Material and methods. The PET/CT data from 55 ovarian cancer patients were used in this study. Three GAN architectures were evaluated for attenuation correction: conditional GAN, Wasserstein GAN, and CycleGAN. The statistical performance of each model was assessed by calculating the mean squared error (MSE) and mean absolute error (MAE). Radiological performance was assessed by comparing the standardised uptake values and Hounsfield unit values of the whole body and selected organs in the synthetic and real PET and CT images. Results. CycleGAN demonstrated effective attenuation correction and pseudo-CT generation with high accuracy. The MAE and MSE for all images were 2.15 ± 0.34 and 3.14 ± 0.56, respectively. For CT reconstruction, the corresponding values were 4.17 ± 0.96 and 5.66 ± 1.01. Conclusions. These results show the potential of deep learning to reduce radiation exposure and improve the quality of PET imaging. Further refinement and clinical validation are needed for full clinical applicability.
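CycleGAN, the best-performing architecture here, is trained with a cycle-consistency term alongside the adversarial losses: mapping PET to pseudo-CT and back should reproduce the input. A minimal numpy sketch of that term, with toy invertible affine maps standing in for the two generators (the maps, shapes, and weight are illustrative, not from the paper):

```python
import numpy as np

def l1(a, b):
    # Mean absolute difference, the norm used in the CycleGAN cycle term
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(G, F, pet, ct, lam=10.0):
    # F(G(pet)) should reproduce pet and G(F(ct)) should reproduce ct;
    # lam weights the cycle term against the adversarial losses
    return lam * (l1(F(G(pet)), pet) + l1(G(F(ct)), ct))

def G(x):
    # Toy PET -> pseudo-CT generator (hypothetical affine map)
    return 2.0 * x + 1.0

def F(y):
    # Toy pseudo-CT -> PET generator, the exact inverse of G,
    # so the cycle loss is ~zero by construction
    return (y - 1.0) / 2.0

rng = np.random.default_rng(0)
pet = rng.uniform(size=(32, 32))
ct = rng.uniform(size=(32, 32))
loss = cycle_consistency_loss(G, F, pet, ct)
```

In a real training run G and F are convolutional networks and the loss is minimized jointly with the two discriminator losses; the cycle term is what lets CycleGAN learn from unpaired PET and CT images.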
Affiliation(s)
- Amir Hossein Farshchitabrizi
  - Namazi Hospital, Shiraz University of Medical Sciences, Shiraz, Iran
  - Radiation Research Centre, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
- Mohammad Hossein Sadeghi
  - Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
- Sedigheh Sina
  - Radiation Research Centre, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
  - Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
- Mehrosadat Alavi
  - Ionising and Non-Ionising Radiation Protection Research Centre, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamid Omidi
  - Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
6
Zhang Q, Zhou C, Zhang X, Fan W, Zheng H, Liang D, Hu Z. Realization of high-end PET devices that assist conventional PET devices in improving image quality via diffusion modeling. EJNMMI Phys 2024;11:103. [PMID: 39692956] [DOI: 10.1186/s40658-024-00706-3] [Received: 06/11/2024] [Accepted: 11/21/2024]
Abstract
PURPOSE This study aimed to use high-end positron emission tomography (PET) equipment to assist conventional PET equipment in improving image quality via a distribution-learning-based diffusion model. METHODS A diffusion model was first trained on a dataset of high-quality (HQ) images acquired by a high-end PET device (uEXPLORER scanner), and the quality of conventional PET images was then improved on the basis of this trained model using null-space constraints. Data from 180 patients were used in this study. Among them, 137 patients who underwent total-body PET/computed tomography scans on a uEXPLORER scanner at the Sun Yat-sen University Cancer Center were retrospectively enrolled. The datasets of 50 of these patients were used to train the diffusion model. The remaining 87 cases and 43 PET images acquired from The Cancer Imaging Archive were used to quantitatively and qualitatively evaluate the proposed method. The nonlocal means (NLM) method, UNet, and a generative adversarial network (GAN) were used as reference methods. RESULTS Incorporating HQ imaging priors from high-end devices into the diffusion model through network training enables information sharing between scanners, pushing the limits of conventional scanners and improving their imaging quality. The quantitative results showed that the diffusion model based on null-space constraints produced better and more stable results than the NLM-, UNet-, and GAN-based methods and is well suited for cross-center and cross-device imaging. CONCLUSION A diffusion model based on null-space constraints is a flexible framework that can effectively utilize the prior information provided by high-end scanners to improve the image quality of conventional scanners in cross-center and cross-device scenarios.
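Null-space constraints of this kind are commonly implemented as a range-null-space decomposition: the component of the solution that the degradation operator A determines is taken directly from the measurement, and only A's null-space component is filled in by the learned prior. A toy linear-algebra sketch under that assumption (A, the dimensions, and the data are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 10))           # toy degradation operator (wide matrix)
A_pinv = np.linalg.pinv(A)             # Moore-Penrose pseudo-inverse of A

x_true = rng.normal(size=10)
y = A @ x_true                         # measurement from the conventional scanner

x_model = rng.normal(size=10)          # stand-in for a diffusion-model sample

# Range-null-space decomposition: A_pinv @ y fixes the data-consistent
# component; (I - A_pinv @ A) projects the prior sample onto A's null space.
x_hat = A_pinv @ y + (np.eye(10) - A_pinv @ A) @ x_model
```

By construction A @ x_hat equals y, so the learned prior can only alter what the measurement does not constrain; in the diffusion setting this projection is applied at each denoising step.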
Affiliation(s)
- Qiyang Zhang
  - Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Chao Zhou
  - Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Xu Zhang
  - Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Wei Fan
  - Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Hairong Zheng
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
  - Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
  - Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
7
Kim SM, Lee JS. A comprehensive review on Compton camera image reconstruction: from principles to AI innovations. Biomed Eng Lett 2024;14:1175-1193. [PMID: 39465108] [PMCID: PMC11502649] [DOI: 10.1007/s13534-024-00418-8] [Received: 04/28/2024] [Revised: 08/09/2024] [Accepted: 08/20/2024]
Abstract
Compton cameras have emerged as promising tools in biomedical imaging, offering sensitive gamma-ray imaging capabilities for diverse applications. This review paper provides a comprehensive overview of the latest advances in Compton camera image reconstruction technologies. Beginning with a discussion of the fundamental principles of Compton scattering and its relevance to gamma-ray imaging, the paper explores the key components and design considerations of Compton camera systems. We then review various image reconstruction algorithms employed in Compton camera systems, including analytical, iterative, and statistical approaches. Recent developments in machine learning-based reconstruction methods are also discussed, highlighting their potential to enhance image quality and reduce reconstruction time in biomedical applications. In particular, we focus on the challenges posed by conical back-projection in Compton camera image reconstruction and on how innovative signal processing techniques have addressed these challenges to improve image accuracy and spatial resolution. Furthermore, experimental validations of Compton camera imaging in preclinical and clinical settings, including multi-tracer and whole-gamma imaging studies, are introduced. In summary, this review surveys the current state of the art in Compton camera image reconstruction, offering a helpful guide for investigators new to this field.
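The conical back-projection discussed above rests on Compton kinematics: the scattering angle recovered from the energies of the two interactions sets the opening angle of each event's cone. A minimal sketch of that standard angle computation (the 511 keV photon and 100 keV deposit are illustrative values):

```python
import numpy as np

ME_C2_KEV = 511.0   # electron rest energy in keV

def compton_angle_deg(e_initial_kev, e_deposited_kev):
    # Scattering angle from Compton kinematics:
    #   cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_initial)
    # where E_scattered is the photon energy after the first interaction.
    e_scattered = e_initial_kev - e_deposited_kev
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_scattered - 1.0 / e_initial_kev)
    return float(np.degrees(np.arccos(cos_theta)))

# A 511 keV annihilation photon depositing 100 keV in the scatterer
angle = compton_angle_deg(511.0, 100.0)
```

The cone's axis is the line joining the two interaction positions and its half-angle is this value; image reconstruction then intersects many such cones.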
Affiliation(s)
- Soo Mee Kim
  - Maritime ICT & Mobility Research Department, Korea Institute of Ocean Science and Technology, Busan, Korea
- Jae Sung Lee
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Korea
  - Brightonix Imaging Inc., Seoul, Korea
8
Lee MS, Shim HS, Lee JS. Strategies for mitigating inter-crystal scattering effects in positron emission tomography: a comprehensive review. Biomed Eng Lett 2024;14:1243-1258. [PMID: 39465104] [PMCID: PMC11502689] [DOI: 10.1007/s13534-024-00427-7] [Received: 04/14/2024] [Revised: 09/05/2024] [Accepted: 09/09/2024]
Abstract
Inter-crystal scattering (ICS) events in positron emission tomography (PET) present challenges affecting system sensitivity and image quality. Understanding the physics and the factors influencing ICS occurrence is crucial for developing strategies to mitigate its impact. This review paper explores the physics behind ICS events and their occurrence within PET detectors. Various methodologies proposed for accurately identifying and recovering ICS events are introduced, including energy-based comparisons, Compton kinematics-based approaches, statistical methods, and artificial intelligence (AI) techniques. Energy-based methods offer simplicity by comparing energy depositions in crystals. Compton kinematics-based approaches utilize trajectory information to estimate the first interaction position, yielding reasonably good results. Statistical approaches and AI algorithms contribute further by optimizing likelihood analysis and neural network models for improved positioning accuracy. Experimental validations and simulation studies highlight the potential of recovering ICS events to enhance PET sensitivity and image quality; AI technologies in particular offer a promising avenue for addressing ICS challenges and improving PET image accuracy and resolution, ultimately improving diagnostic capabilities and patient outcomes. Further studies applying these approaches to real PET systems are needed to validate theoretical results and assess the feasibility of practical implementation.
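The energy-based comparison described above reduces to a few lines: sum the deposits, gate on a photopeak window, and assign the event to the crystal with the largest deposit. A minimal sketch, assuming a hypothetical 16-crystal block and an illustrative energy window (one common heuristic among several the review covers):

```python
import numpy as np

def recover_ics_event(energies_kev, window=(435.0, 585.0)):
    # Energy-based ICS recovery: accept the event only if the summed
    # deposit falls in the photopeak window, then assign it to the
    # crystal with the largest individual energy deposit.
    total = float(np.sum(energies_kev))
    if not (window[0] <= total <= window[1]):
        return None                       # reject as scatter / partial deposit
    return int(np.argmax(energies_kev))

# A 511 keV photon splitting its energy between crystals 3 and 7 of a block
deposits = np.zeros(16)
deposits[3], deposits[7] = 340.0, 171.0
crystal = recover_ics_event(deposits)     # index of the assigned crystal
```

Max-energy assignment is simple but not always correct, since the first interaction often deposits less energy than the second; the Compton-kinematics, statistical, and AI methods in the review aim to fix exactly that failure mode.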
Affiliation(s)
- Min Sun Lee
  - Environmental Radioactivity Assessment Team, Nuclear Emergency & Environmental Protection Division, Korea Atomic Energy Research Institute, Daejeon, Republic of Korea
- Hyeong Seok Shim
  - Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, Republic of Korea
  - Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Republic of Korea
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea
- Jae Sung Lee
  - Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, Republic of Korea
  - Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Republic of Korea
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea
  - Brightonix Imaging Inc., Seoul, Republic of Korea
9
Avanzo M, Stancanello J, Pirrone G, Drigo A, Retico A. The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning. Cancers (Basel) 2024;16:3702. [PMID: 39518140] [PMCID: PMC11545079] [DOI: 10.3390/cancers16213702] [Received: 09/27/2024] [Revised: 10/26/2024] [Accepted: 10/29/2024]
Abstract
Artificial intelligence (AI), the wide spectrum of technologies aiming to give machines or computers the ability to perform human-like cognitive functions, began in the 1940s with the first abstract models of intelligent machines. Soon after, in the 1950s and 1960s, machine learning algorithms such as neural networks and decision trees ignited significant enthusiasm. More recent advancements include the refinement of learning algorithms, the development of convolutional neural networks to efficiently analyze images, and methods to synthesize new images. This renewed enthusiasm was also due to the increase in computational power with graphical processing units and the availability of large digital databases to be mined by neural networks. AI soon began to be applied in medicine, first through expert systems designed to support the clinician's decision and later with neural networks for the detection, classification, or segmentation of malignant lesions in medical images. A recent prospective clinical trial demonstrated the non-inferiority of AI alone compared with double reading by two radiologists on screening mammography. Natural language processing, recurrent neural networks, transformers, and generative models have improved the capabilities of automated reading of medical images and moved AI into new domains, including the text analysis of electronic health records, image self-labeling, and self-reporting. The availability of open-source and free libraries, as well as powerful computing resources, has greatly facilitated the adoption of deep learning by researchers and clinicians. Key concerns surrounding AI in healthcare include the need for clinical trials to demonstrate efficacy, the perception of AI tools as 'black boxes' that require greater interpretability and explainability, and ethical issues related to ensuring fairness and trustworthiness in AI systems.
Thanks to its versatility and impressive results, AI is one of the most promising resources for frontier research and applications in medicine, in particular for oncological applications.
Affiliation(s)
- Michele Avanzo
  - Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Giovanni Pirrone
  - Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Annalisa Drigo
  - Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Alessandra Retico
  - National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
10
Yie SY, Kang SK, Gil J, Hwang D, Choi H, Kim YK, Paeng JC, Lee JS. Enhancing bone scan image quality: an improved self-supervised denoising approach. Phys Med Biol 2024;69:215020. [PMID: 39312947] [DOI: 10.1088/1361-6560/ad7e79] [Received: 03/07/2024] [Accepted: 09/23/2024]
Abstract
Objective. Bone scans play an important role in skeletal lesion assessment, but gamma cameras exhibit challenges with low sensitivity and high noise levels. Deep learning (DL) has emerged as a promising solution to enhance image quality without increasing radiation exposure or scan time. However, existing self-supervised denoising methods, such as Noise2Noise (N2N), may introduce deviations from the clinical standard in bone scans. This study proposes an improved self-supervised denoising technique to minimize discrepancies between DL-based denoising and full scan images. Approach. Retrospective analysis of 351 whole-body bone scan datasets was conducted. In this study, we used N2N and Noise2FullCount (N2F) denoising models, along with an interpolated version of N2N (iN2N). Denoising networks were separately trained for each reduced scan time from 5 to 50%, and also trained on mixed training datasets, which include all shortened scans. We performed quantitative analysis and clinical evaluation by nuclear medicine experts. Main results. The denoising networks effectively generated images resembling full scans, with N2F revealing distinctive patterns for different scan times, N2N producing smooth textures with slight blurring, and iN2N closely mirroring full scan patterns. Quantitative analysis showed that denoising improved with longer input times and that mixed-count training outperformed fixed-count training. Traditional denoising methods lagged behind DL-based denoising. N2N demonstrated limitations in long-scan images. Clinical evaluation favored N2N and iN2N in resolution, noise, blurriness, and findings, showcasing their potential for enhanced diagnostic performance in quarter-time scans. Significance. The improved self-supervised denoising technique presented in this study offers a viable solution to enhance bone scan image quality, minimizing deviations from clinical standards.
The method's effectiveness was demonstrated quantitatively and clinically, showing promise for quarter-time scans without compromising diagnostic performance. This approach holds potential for improving bone scan interpretations, aiding in more accurate clinical diagnoses.
Affiliation(s)
- Si Young Yie
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, Republic of Korea
- Integrated Major in Innovative Medical Science, Seoul National University, Seoul, Republic of Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Joonhyung Gil
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Donghwi Hwang
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Yu Kyeong Kim
- Department of Nuclear Medicine, Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Jin Chul Paeng
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Jae Sung Lee
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, Republic of Korea
- Integrated Major in Innovative Medical Science, Seoul National University, Seoul, Republic of Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Brightonix Imaging Inc., Seoul, Republic of Korea
11
Kang SK, Kim D, Shin SA, Kim YK, Choi H, Lee JS. Accurate Automated Quantification of Dopamine Transporter PET Without MRI Using Deep Learning-based Spatial Normalization. Nucl Med Mol Imaging 2024; 58:354-363. [PMID: 39308485 PMCID: PMC11415331 DOI: 10.1007/s13139-024-00869-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2024] [Revised: 06/16/2024] [Accepted: 06/17/2024] [Indexed: 09/25/2024] Open
Abstract
Purpose. Dopamine transporter imaging is crucial for assessing presynaptic dopaminergic neurons in Parkinson's disease (PD) and related parkinsonian disorders. While 18F-FP-CIT PET offers advantages in spatial resolution and sensitivity over 123I-β-CIT or 123I-FP-CIT SPECT imaging, accurate quantification remains essential. This study presents a novel automatic quantification method for 18F-FP-CIT PET images, utilizing an artificial intelligence (AI)-based robust PET spatial normalization (SN) technology that eliminates the need for anatomical images. Methods. The proposed SN engine consists of convolutional neural networks, trained using 213 paired datasets of 18F-FP-CIT PET and 3D structural MRI. Remarkably, only PET images are required as input during inference. A cyclic training strategy enables backward deformation from template to individual space. An additional 89 paired 18F-FP-CIT PET and 3D MRI datasets were used to evaluate the accuracy of striatal activity quantification. MRI-based PET quantification using FIRST software was also conducted for comparison. The proposed method was further validated using 135 external datasets. Results. The proposed AI-based method successfully generated spatially normalized 18F-FP-CIT PET images, obviating the need for CT or MRI. The striatal PET activity determined by the proposed PET-only method and by MRI-based PET quantification using the FIRST algorithm were highly correlated, with R² and slope ranging from 0.96 to 0.99 and from 0.98 to 1.02 in both internal and external datasets. Conclusion. Our AI-based SN method enables accurate automatic quantification of striatal activity in 18F-FP-CIT brain PET images without MRI support. This approach holds promise for evaluating presynaptic dopaminergic function in PD and related parkinsonian disorders.
Affiliation(s)
- Seung Kwan Kang
- Brightonix Imaging Inc., Seongsu-Yeok SK V1 Tower, 25 Yeonmujang 5Ga-Gil, Seongdong-Gu, Seoul, 04782 Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Daewoon Kim
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Seong A. Shin
- Brightonix Imaging Inc., Seongsu-Yeok SK V1 Tower, 25 Yeonmujang 5Ga-Gil, Seongdong-Gu, Seoul, 04782 Korea
- Yu Kyeong Kim
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
- Department of Nuclear Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Korea
- Hongyoon Choi
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
- Jae Sung Lee
- Brightonix Imaging Inc., Seongsu-Yeok SK V1 Tower, 25 Yeonmujang 5Ga-Gil, Seongdong-Gu, Seoul, 04782 Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
12
Kim D, Kang SK, Shin SA, Choi H, Lee JS. Improving 18F-FDG PET Quantification Through a Spatial Normalization Method. J Nucl Med 2024; 65:1645-1651. [PMID: 39209545 PMCID: PMC11448607 DOI: 10.2967/jnumed.123.267360] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2023] [Accepted: 08/02/2024] [Indexed: 09/04/2024] Open
Abstract
Quantification of 18F-FDG PET images is useful for accurate diagnosis and evaluation of various brain diseases, including brain tumors, epilepsy, dementia, and Parkinson disease. However, accurate quantification of 18F-FDG PET images requires matched 3-dimensional T1 MRI scans of the same individuals to provide detailed information on brain anatomy. In this paper, we propose a transfer learning approach to adapt a pretrained deep neural network model from amyloid PET to spatially normalize 18F-FDG PET images without the need for 3-dimensional MRI. Methods: The proposed method is based on a deep learning model for automatic spatial normalization of 18F-FDG brain PET images, which was developed by fine-tuning a pretrained model for amyloid PET using only 103 18F-FDG PET and MR images. After training, the algorithm was tested on 65 internal and 78 external test sets. All T1 MR images with a 1-mm isotropic voxel size were processed with FreeSurfer software to provide cortical segmentation maps used to extract a ground-truth regional SUV ratio using cerebellar gray matter as a reference region. These values were compared with those from spatial normalization-based quantification methods using the proposed method and statistical parametric mapping software. Results: The proposed method showed superior spatial normalization compared with statistical parametric mapping, as evidenced by increased normalized mutual information and better size and shape matching in PET images. Quantitative evaluation revealed a consistently higher SUV ratio correlation and intraclass correlation coefficients for the proposed method across various brain regions in both internal and external datasets. The remarkably good correlation and intraclass correlation coefficient values of the proposed method for the external dataset are noteworthy, considering the dataset's different ethnic distribution and the use of different PET scanners and image reconstruction algorithms. 
Conclusion: This study successfully applied transfer learning to a deep neural network for 18F-FDG PET spatial normalization, demonstrating its resource efficiency and improved performance. This highlights the efficacy of transfer learning, which requires a smaller number of datasets than does the original network training, thus increasing the potential for broader use of deep learning-based brain PET spatial normalization techniques for various clinical and research radiotracers.
Affiliation(s)
- Daewoon Kim
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Seung Kwan Kang
- Brightonix Imaging Inc., Seoul, South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
- Hongyoon Choi
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, South Korea
- Jae Sung Lee
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Brightonix Imaging Inc., Seoul, South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, South Korea
13
Chin M, Jafaritadi M, Franco AB, Nasir Ullah M, Chinn G, Innes D, Levin CS. Self-normalization for a 1 mm³ resolution clinical PET system using deep learning. Phys Med Biol 2024; 69:175004. [PMID: 39084640 DOI: 10.1088/1361-6560/ad69fb] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Accepted: 07/31/2024] [Indexed: 08/02/2024]
Abstract
Objective. This work proposes, for the first time, an image-based end-to-end self-normalization framework for positron emission tomography (PET) using conditional generative adversarial networks (cGANs). Approach. We evaluated different approaches by exploring each of the following three methodologies. First, we used images that were either unnormalized or corrected for geometric factors, which encompass all time-invariant factors, as input data types. Second, we set the input tensor shape as either a single axial slice (2D) or three contiguous axial slices (2.5D). Third, we chose either Pix2Pix or polarized self-attention (PSA) Pix2Pix, which we developed for this work, as the deep learning network. The targets for all approaches were the axial slices of images normalized using the direct normalization method. We performed Monte Carlo simulations of ten voxelized phantoms with the SimSET simulation tool and produced 26,000 pairs of axial image slices for training and testing. Main results. The results showed that 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the best performance among all the methods we tested. All approaches improved the general image quality figures of merit, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), by ∼15% to ∼55%, and 2.5D PSA Pix2Pix showed the highest PSNR (28.074) and SSIM (0.921). Lesion detectability, measured with region of interest (ROI) PSNR, SSIM, normalized contrast recovery coefficient, and contrast-to-noise ratio, was generally improved for all approaches, and 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the highest ROI PSNR (28.920) and SSIM (0.973). Significance. This study demonstrates the potential of an image-based end-to-end self-normalization framework using cGANs for improving PET image quality and lesion detectability without the need for separate normalization scans.
Affiliation(s)
- Myungheon Chin
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Mojtaba Jafaritadi
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Andrew B Franco
- Department of Mechanical Engineering, Stanford University, Stanford, CA, United States of America
- Muhammad Nasir Ullah
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Garry Chinn
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Derek Innes
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Craig S Levin
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Department of Physics, Stanford University, Stanford, CA, United States of America
- Department of Bioengineering, Stanford University, Stanford, CA, United States of America
14
Fezeu F, Jbara OF, Jbarah A, Choucha A, De Maria L, Ciaglia E, De Simone M, Samnick S. PET imaging for a very early detection of rapid eye movement sleep behaviour disorder and Parkinson's disease - A model-based cost-effectiveness analysis. Clin Neurol Neurosurg 2024; 243:108404. [PMID: 38944021 DOI: 10.1016/j.clineuro.2024.108404] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2024] [Accepted: 06/19/2024] [Indexed: 07/01/2024]
Abstract
Parkinson's disease (PD) is the second most prevalent neurodegenerative condition after Alzheimer's disease, and it represents one of the fastest emerging neurological diseases worldwide. PD is usually diagnosed after the third decade of life, with symptoms such as tremor at rest and muscle stiffness. Rapid eye movement sleep behavior disorder (RBD) is another disorder, caused by a loss of the typical muscle relaxation during sleep and accompanied by considerable motor activity. RBD is strongly associated with PD. Recent studies have shown that PD reduces patients' life expectancy to 10-20 years after diagnosis. In addition, delayed diagnosis and treatment of these neurological disorders have significant socio-economic impacts on patients, their partners, and the general public. The financial burden associated with PD is often unclear, in both low- and high-income countries. At the same time, PD triggers neurological changes that alter dopamine transporter (DAT) availability and glucose metabolism. Therefore, positron emission tomography (PET) using specific DAT radiotracers and fluorine-18-labeled deoxyglucose (FDG) has been considered a key imaging technique that could be applied clinically for the very early diagnosis of RBD and PD. However, a common misconception about PET is that it is very expensive. Here, we examined the cost of treating PD and RBD in relation to early PET imaging. Our findings suggest that PET imaging might also be a cost-saving diagnostic option in the management of patients with PD and RBD, not only in high-income countries, as is currently the case, but also in low-income countries. Therefore, PET is a cost-effective imaging technique for the very early diagnosis of RBD and PD.
Affiliation(s)
- Francis Fezeu
- Brain Global, Department of Neurology & Neurological Surgery, 27659 Arabian Drive, Salisbury, MD 21801, USA
- Omar F Jbara
- Neuropedia for Training and Scientific Research, Amman, Jordan
- Anis Choucha
- Department of Neurosurgery, Aix Marseille University, APHM, UH Timone, Marseille 13005, France
- Lucio De Maria
- Unit of Neurosurgery, Department of Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia 25123, Italy; Unit of Neurosurgery, Department of Clinical Neuroscience, Geneva University Hospitals (HUG), Geneva 1205, Switzerland
- Elena Ciaglia
- Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, Via S. Allende, Baronissi 84081, Italy
- Matteo De Simone
- Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, Via S. Allende, Baronissi 84081, Italy
- Samuel Samnick
- Interdisciplinary PET Centre and Radiopharmacy at the Department of Nuclear Medicine of the University Würzburg, Germany; Interdisciplinary PET-Centre at the Julius-Maximilians University Würzburg, Germany
15
Sun H, Huang Y, Hu D, Hong X, Salimi Y, Lv W, Chen H, Zaidi H, Wu H, Lu L. Artificial intelligence-based joint attenuation and scatter correction strategies for multi-tracer total-body PET. EJNMMI Phys 2024; 11:66. [PMID: 39028439 PMCID: PMC11264498 DOI: 10.1186/s40658-024-00666-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2023] [Accepted: 07/04/2024] [Indexed: 07/20/2024] Open
Abstract
BACKGROUND Low-dose ungated CT is commonly used for total-body PET attenuation and scatter correction (ASC). However, CT-based ASC (CT-ASC) is limited by the radiation dose risks of CT examinations, the propagation of CT-based artifacts, and potential mismatches between PET and CT. We demonstrate the feasibility of direct ASC for multi-tracer total-body PET in the image domain. METHODS Clinical uEXPLORER total-body PET/CT datasets of [18F]FDG (N = 52), [18F]FAPI (N = 46) and [68Ga]FAPI (N = 60) were retrospectively enrolled in this study. We developed an improved 3D conditional generative adversarial network (cGAN) to directly estimate attenuation- and scatter-corrected PET images from non-attenuation- and scatter-corrected (NASC) PET images. The feasibility of the proposed 3D cGAN-based ASC was validated using four training strategies: (1) Paired 3D NASC and CT-ASC PET images from three tracers were pooled into one centralized server (CZ-ASC). (2) Paired 3D NASC and CT-ASC PET images from each tracer were used individually (DL-ASC). (3) Paired NASC and CT-ASC PET images from one tracer ([18F]FDG) were used to train the networks, while the other two tracers were used for testing without fine-tuning (NFT-ASC). (4) The pre-trained networks of (3) were fine-tuned with the two other tracers individually (FT-ASC). All networks were trained with fivefold cross-validation. The performance of all ASC methods was evaluated by qualitative and quantitative metrics using CT-ASC as the reference. RESULTS CZ-ASC, DL-ASC and FT-ASC showed visual quality comparable to CT-ASC for all tracers. CZ-ASC and DL-ASC resulted in a normalized mean absolute error (NMAE) of 8.51 ± 7.32% versus 7.36 ± 6.77% (p < 0.05), outperforming NASC (p < 0.0001), in the [18F]FDG dataset. CZ-ASC, FT-ASC and DL-ASC led to NMAE of 6.44 ± 7.02%, 6.55 ± 5.89%, and 7.25 ± 6.33% in the [18F]FAPI dataset, and NMAE of 5.53 ± 3.99%, 5.60 ± 4.02%, and 5.68 ± 4.12% in the [68Ga]FAPI dataset, respectively. CZ-ASC, FT-ASC and DL-ASC were superior to NASC (p < 0.0001) and NFT-ASC (p < 0.0001) in terms of NMAE. CONCLUSIONS CZ-ASC, DL-ASC and FT-ASC demonstrated the feasibility of providing accurate and robust ASC for multi-tracer total-body PET, thereby reducing the radiation hazard to patients from redundant CT examinations. CZ-ASC and FT-ASC could outperform DL-ASC for cross-tracer total-body PET ASC.
Affiliation(s)
- Hao Sun
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Yanchao Huang
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Debin Hu
- Department of Medical Engineering, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Xiaotong Hong
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Wenbing Lv
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, 650091, China
- Hongwen Chen
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Hubing Wu
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Lijun Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Pazhou Lab, Guangzhou, 510330, China
16
Correia PMM, Cruzeiro B, Dias J, Encarnação PMCC, Ribeiro FM, Rodrigues CA, Silva ALM. Precise positioning of gamma ray interactions in multiplexed pixelated scintillators using artificial neural networks. Biomed Phys Eng Express 2024; 10:045038. [PMID: 38779912 DOI: 10.1088/2057-1976/ad4f73] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2024] [Accepted: 05/22/2024] [Indexed: 05/25/2024]
Abstract
Introduction. The positioning of γ-ray interactions in positron emission tomography (PET) detectors is commonly determined through the evaluation of Anger logic flood histograms. Machine learning techniques, leveraging features extracted from the signal waveform, have demonstrated successful applications in addressing various challenges in PET instrumentation. Aim. This paper evaluates the use of artificial neural networks (NN) for γ-ray interaction positioning in pixelated scintillators coupled to a multiplexed array of silicon photomultipliers (SiPM). Methods. An array of 16 cerium-doped lutetium-based (LYSO) crystal pixels (cross-section 2 × 2 mm²) coupled to 16 SiPMs (S13360-1350) was used for the experimental setup. Data from each of the 16 LYSO pixels were recorded, for a total of 160,000 events. The detectors were irradiated by 511 keV annihilation γ-rays from a sodium-22 (22Na) source. Another LYSO crystal was used for electronic collimation. Features extracted from the signal waveform were used to train the models. Two models were tested: (i) a single multiple-class neural network (mcNN) with 16 possible outputs followed by a softmax, and (ii) 16 binary classification neural networks (bNN), each specialized in identifying events that occurred in one position. Results. Both NN models showed a mean positioning accuracy above 85% on the evaluation dataset, although the mcNN is faster to train. Discussion. The method's accuracy is affected by events that interacted in neighbouring crystals and were mislabelled during dataset acquisition. Electronic collimation reduces this effect; however, results could be improved using a more complex acquisition setup, such as a light-sharing configuration. Conclusions. The comparison showed that the mcNN and bNN can surpass Anger logic, demonstrating the feasibility of using these models in positioning procedures for future multiplexed detector systems in a linear configuration.
Affiliation(s)
- P M M Correia
- Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- B Cruzeiro
- Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- J Dias
- Faculdade de Economia, CeBER, Universidade de Coimbra, Av. Dias da Silva, 165, 3004-512 Coimbra, Portugal
- INESC-Coimbra, Universidade de Coimbra, Rua Sílvio Lima, Pólo II, 3030-290 Coimbra, Portugal
- P M C C Encarnação
- Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- F M Ribeiro
- Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- C A Rodrigues
- Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- A L M Silva
- Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
17
Kang SK, Heo M, Chung JY, Kim D, Shin SA, Choi H, Chung A, Ha JM, Kim H, Lee JS. Clinical Performance Evaluation of an Artificial Intelligence-Powered Amyloid Brain PET Quantification Method. Nucl Med Mol Imaging 2024; 58:246-254. [PMID: 38932756 PMCID: PMC11196433 DOI: 10.1007/s13139-024-00861-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2023] [Revised: 04/05/2024] [Accepted: 04/09/2024] [Indexed: 06/28/2024] Open
Abstract
Purpose. This study assesses the clinical performance of BTXBrain-Amyloid, artificial intelligence-powered software for quantifying amyloid uptake in brain PET images. Methods. 150 amyloid brain PET images were visually assessed by experts and categorized as negative or positive. The standardized uptake value ratio (SUVR) was calculated with cerebellar grey matter as the reference region, and receiver operating characteristic (ROC) and precision-recall (PR) analyses for BTXBrain-Amyloid were conducted. For comparison, the same image processing and analysis were performed using the Statistical Parametric Mapping (SPM) program. In addition, to evaluate spatial normalization (SN) performance, mutual information (MI) between the MRI template and spatially normalized PET images was calculated, and SPM group analysis was conducted. Results. Both the BTXBrain and SPM methods discriminated between negative and positive groups. However, BTXBrain exhibited a lower SUVR standard deviation (0.06 and 0.21 for negative and positive, respectively) than the SPM method (0.11 and 0.25). In ROC analysis, BTXBrain had an AUC of 0.979, compared to 0.959 for SPM, while PR curves showed an AUC of 0.983 for BTXBrain and 0.949 for SPM. At the optimal cut-off, the sensitivity and specificity were 0.983 and 0.921 for BTXBrain and 0.917 and 0.921 for SPM12, respectively. MI evaluation also favored BTXBrain (0.848 vs. 0.823), indicating improved SN. In SPM group analysis, BTXBrain exhibited higher sensitivity in detecting basal ganglia differences between negative and positive groups. Conclusion. BTXBrain-Amyloid outperformed SPM in clinical performance evaluation, also demonstrating superior SN and improved detection of deep brain differences. These results suggest the potential of BTXBrain-Amyloid as a valuable tool for clinical amyloid PET image evaluation.
Affiliation(s)
- Seung Kwan Kang
- Brightonix Imaging Inc., Seoul, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Mina Heo
- Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Ji Yeon Chung
- Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Daewoon Kim
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Hongyoon Choi
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
- Ari Chung
- Department of Nuclear Medicine, College of Medicine, Chosun University and Chosun University Hospital, Gwangju, Korea
- Jung-Min Ha
- Department of Nuclear Medicine, College of Medicine, Chosun University and Chosun University Hospital, Gwangju, Korea
- Hoowon Kim
- Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Jae Sung Lee
- Brightonix Imaging Inc., Seoul, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
18
Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563 PMCID: PMC10902118 DOI: 10.1007/s12194-024-00780-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 01/03/2024] [Accepted: 01/04/2024] [Indexed: 02/07/2024]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
19. Kobayashi T, Shigeki Y, Yamakawa Y, Tsutsumida Y, Mizuta T, Hanaoka K, Watanabe S, Morimoto-Ishikawa D, Yamada T, Kaida H, Ishii K. Generating PET Attenuation Maps via Sim2Real Deep Learning-Based Tissue Composition Estimation Combined with MLACF. J Imaging Inform Med 2024;37:167-179. [PMID: 38343219] [DOI: 10.1007/s10278-023-00902-0]
Abstract
Deep learning (DL) has recently attracted attention for data processing in positron emission tomography (PET), and attenuation correction (AC) without computed tomography (CT) data is one topic of interest. Here, we present, to our knowledge, the first attempt to generate an attenuation map of the human head via Sim2Real DL-based tissue composition estimation, with model training using only a simulated PET dataset. The DL model accepts a two-dimensional non-attenuation-corrected PET image as input and outputs a four-channel tissue-composition map of soft tissue, bone, cavity, and background. An attenuation map is then generated as a linear combination of the tissue composition maps and, finally, used as input for scatter+random estimation and as an initial estimate for attenuation map reconstruction by the maximum likelihood attenuation correction factor (MLACF) algorithm, i.e., the DL estimate is refined by the MLACF. Preliminary results using clinical brain PET data showed that the proposed DL model tended to estimate anatomical details inaccurately, especially in the neck-side slices. However, it succeeded in estimating overall anatomical structures, and the PET quantitative accuracy with DL-based AC was comparable to that with CT-based AC. Thus, the proposed DL-based approach combined with the MLACF is a promising CT-less AC approach.
20. Lee JS, Lee MS. Advancements in Positron Emission Tomography Detectors: From Silicon Photomultiplier Technology to Artificial Intelligence Applications. PET Clin 2024;19:1-24. [PMID: 37802675] [DOI: 10.1016/j.cpet.2023.06.003]
Abstract
This review article focuses on PET detector technology, which is the most crucial factor in determining PET image quality. The article highlights the desired properties of PET detectors, including high detection efficiency, spatial resolution, energy resolution, and timing resolution. Recent advancements in PET detectors to improve these properties are also discussed, including the use of silicon photomultiplier technology, advancements in depth-of-interaction and time-of-flight PET detectors, and the use of artificial intelligence for detector development. The article provides an overview of PET detector technology and its recent advancements, which can significantly enhance PET image quality.
21. Iwao Y, Akamatsu G, Tashima H, Takahashi M, Yamaya T. Pre-acquired CT-based attenuation correction with automated headrest removal for a brain-dedicated PET system. Radiol Phys Technol 2023;16:552-559. [PMID: 37819445] [DOI: 10.1007/s12194-023-00744-z]
Abstract
Attenuation correction (AC) is essential for quantitative positron emission tomography (PET) images. Attenuation coefficient maps (μ-maps) are usually generated from computed tomography (CT) images when combined PET-CT systems are used. If CT has been performed prior to PET imaging, the pre-acquired CT can be used for brain PET AC, because the human head is almost rigid. This pre-acquired CT-based AC approach is suitable for stand-alone brain-dedicated PET systems, such as VRAIN (ATOX Co. Ltd., Tokyo, Japan). However, the headrest of the PET system differs from the headrest in the pre-acquired CT images, which may degrade PET image quality. In this study, we prepared three different types of μ-maps: (1) based on the pre-acquired CT, in which the headrest differs from that of the PET system (μ-map-diffHr); (2) with the headrest manually removed from the pre-acquired CT (μ-map-noHr); and (3) with the headrest region artificially replaced by the headrest of the PET system (μ-map-sameHr). Phantom images acquired by VRAIN using each μ-map were evaluated for uniformity, noise, and quantitative accuracy. Only the uniformity of the images using μ-map-diffHr fell outside the acceptance criteria. We then proposed an automated method for removing the headrest from pre-acquired CT images. In comparisons of standardized uptake values in nine major brain regions from 18F-fluoro-2-deoxy-D-glucose PET of 10 healthy volunteers, no significant differences were found between the μ-map-noHr and the μ-map-sameHr. In conclusion, pre-acquired CT-based AC with automated headrest removal is useful for brain-dedicated PET systems such as VRAIN.
22. Shiri I, Salimi Y, Maghsudi M, Jenabi E, Harsini S, Razeghi B, Mostafaei S, Hajianfar G, Sanaat A, Jafari E, Samimi R, Khateri M, Sheikhzadeh P, Geramifar P, Dadgar H, Bitrafan Rajabi A, Assadi M, Bénard F, Vafaei Sadr A, Voloshynovskiy S, Mainta I, Uribe C, Rahmim A, Zaidi H. Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement. Eur J Nucl Med Mol Imaging 2023;51:40-53. [PMID: 37682303] [PMCID: PMC10684636] [DOI: 10.1007/s00259-023-06418-7]
Abstract
PURPOSE Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients and financial costs. Mismatch and halo artefacts occur frequently in gallium-68 (68Ga)-labelled compounds whole-body PET/CT imaging. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues for building centre-specific models that detect and correct artefacts present in PET images. METHODS Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 3 countries, including 8 different centres, were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images to utilize centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL frameworks. Quantitative analysis was performed in 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (CI 95%: 0.38 to 0.47), 0.32 ± 0.23 (CI 95%: 0.27 to 0.37) and 0.28 ± 0.15 (CI 95%: 0.25 to 0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) in the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions in 68Ga-PET imaging. CONCLUSION The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated in the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
23. Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023;197:106984. [PMID: 37940064] [DOI: 10.1016/j.phrs.2023.106984]
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. First, we briefly analyse how these algorithms have evolved and which are most widely applied in this domain. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease progression evaluation systems; these applications are possible because ML and DL algorithms can analyse complex patterns and relationships within imaging data and extract quantitative, objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
24. Chen X, Liu C. Deep-learning-based methods of attenuation correction for SPECT and PET. J Nucl Cardiol 2023;30:1859-1878. [PMID: 35680755] [DOI: 10.1007/s12350-022-03007-3]
Abstract
Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis of single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (μ-maps) for AC of hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated since MRI does not contain direct information of photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data, but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise level. The recently evolving deep-learning-based methods have shown promising results in AC of SPECT and PET, which can be generally divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic μ-maps or CT images which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate steps of generating μ-maps or CT images and predict AC SPECT or PET images from non-attenuation-correction (NAC) SPECT or PET images directly. These deep-learning-based AC methods show comparable and even superior performance to non-deep-learning methods. In this article, we first discussed the principles and limitations of non-deep-learning AC methods, and then reviewed the status and prospects of deep-learning-based methods for AC of SPECT and PET.
25. Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023;62:306-313. [PMID: 37802058] [DOI: 10.1055/a-2157-6670]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, discuss the specific challenges involved, and outline the steps needed to make ML a diagnostic and clinical tool in the future. KEY POINTS ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
26. Sun H, Wang F, Yang Y, Hong X, Xu W, Wang S, Mok GSP, Lu L. Transfer learning-based attenuation correction for static and dynamic cardiac PET using a generative adversarial network. Eur J Nucl Med Mol Imaging 2023;50:3630-3646. [PMID: 37474736] [DOI: 10.1007/s00259-023-06343-9]
Abstract
PURPOSE The goal of this work is to demonstrate the feasibility of directly generating attenuation-corrected PET images from non-attenuation-corrected (NAC) PET images for both rest and stress-state static or dynamic [13N]ammonia MP PET based on a generative adversarial network. METHODS We recruited 60 subjects for rest-only scans and 14 subjects for rest-stress scans, all of whom underwent [13N]ammonia cardiac PET/CT examinations to acquire static and dynamic frames with both 3D NAC and CT-based AC (CTAC) PET images. We developed a 3D pix2pix deep learning AC (DLAC) framework via a U-net + ResNet-based generator and a convolutional neural network-based discriminator. Paired static or dynamic NAC and CTAC PET images from 60 rest-only subjects were used as network inputs and labels for static (S-DLAC) and dynamic (D-DLAC) training, respectively. The pre-trained S-DLAC network was then fine-tuned by paired dynamic NAC and CTAC PET frames of 60 rest-only subjects to derive an improved D-DLAC-FT for dynamic PET images. The 14 rest-stress subjects were used as an internal testing dataset and separately tested on different network models without training. The proposed methods were evaluated using visual quality and quantitative metrics. RESULTS The proposed S-DLAC, D-DLAC, and D-DLAC-FT methods were consistent with clinical CTAC in terms of various images and quantitative metrics. The S-DLAC (slope = 0.9423, R2 = 0.947) showed a higher correlation with the reference static CTAC as compared to static NAC (slope = 0.0992, R2 = 0.654). D-DLAC-FT yielded lower myocardial blood flow (MBF) errors in the whole left ventricular myocardium than D-DLAC, but with no significant difference, both for the 60 rest-state subjects (6.63 ± 5.05% vs. 7.00 ± 6.84%, p = 0.7593) and the 14 stress-state subjects (1.97 ± 2.28% vs. 3.21 ± 3.89%, p = 0.8595). CONCLUSION The proposed S-DLAC, D-DLAC, and D-DLAC-FT methods achieve comparable performance with clinical CTAC. Transfer learning shows promising potential for dynamic MP PET.
27. Abrahamsen BS, Knudtsen IS, Eikenes L, Bathen TF, Elschot M. Pelvic PET/MR attenuation correction in the image space using deep learning. Front Oncol 2023;13:1220009. [PMID: 37692851] [PMCID: PMC10484800] [DOI: 10.3389/fonc.2023.1220009]
Abstract
Introduction The five-class Dixon-based PET/MR attenuation correction (AC) model, which adds bone information to the four-class model by registering major bones from a bone atlas, has been shown to be error-prone. In this study, we introduce a novel method of accounting for bone in pelvic PET/MR AC by directly predicting the errors in the PET image space caused by the lack of bone in four-class Dixon-based attenuation correction. Methods A convolutional neural network was trained to predict the four-class AC error map relative to CT-based attenuation correction. Dixon MR images and the four-class attenuation correction µ-map were used as input to the models. CT and PET/MR examinations for 22 patients ([18F]FDG) were used for training and validation, and 17 patients were used for testing (6 [18F]PSMA-1007 and 11 [68Ga]Ga-PSMA-11). A quantitative analysis of PSMA uptake using voxel- and lesion-based error metrics was used to assess performance. Results In the voxel-based analysis, the proposed model reduced the median root mean squared percentage error from 12.1% and 8.6% for the four- and five-class Dixon-based AC methods, respectively, to 6.2%. The median absolute percentage error in the maximum standardized uptake value (SUVmax) in bone lesions improved from 20.0% and 7.0% for four- and five-class Dixon-based AC methods to 3.8%. Conclusion The proposed method reduces the voxel-based error and SUVmax errors in bone lesions when compared to the four- and five-class Dixon-based AC models.
Affiliation(s)
- Bendik Skarre Abrahamsen: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Ingerid Skjei Knudtsen: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Live Eikenes: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Tone Frost Bathen: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Mattijs Elschot: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway

28
Park J, Kang SK, Hwang D, Choi H, Ha S, Seo JM, Eo JS, Lee JS. Automatic Lung Cancer Segmentation in [ 18F]FDG PET/CT Using a Two-Stage Deep Learning Approach. Nucl Med Mol Imaging 2023; 57:86-93. [PMID: 36998591 PMCID: PMC10043063 DOI: 10.1007/s13139-022-00745-7] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 03/10/2022] [Accepted: 03/12/2022] [Indexed: 10/18/2022] Open
Abstract
Purpose Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [18F]FDG PET/CT. Methods The whole-body [18F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets: of the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 served as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, the global U-Net receives the 3D PET/CT volume as input and extracts the preliminary tumor area, generating a 3D binary volume as output. In Stage 2, the regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output. Results The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage model successfully predicted the detailed margins of the tumors, whose ground truth was determined by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net. Conclusion The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.
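The Dice similarity coefficient used in the quantitative analysis above is straightforward to compute. A minimal illustrative version with hypothetical binary masks (not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Hypothetical 1D example: 3 overlapping voxels, 4 predicted and 4 true.
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
print(dice_coefficient(pred, truth))  # 0.75
```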
Affiliation(s)
- Junyoung Park: Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul, 08826 Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Seung Kwan Kang: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080 Korea; Artificial Intelligence Institute, Seoul National University, Seoul, 08826 Korea; Brightonix Imaging Inc., Seoul, 03080 Korea
- Donghwi Hwang: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080 Korea; Artificial Intelligence Institute, Seoul National University, Seoul, 08826 Korea
- Hongyoon Choi: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Seunggyun Ha: Division of Nuclear Medicine, Department of Radiology, Seoul St Mary’s Hospital, The Catholic University of Korea, Seoul, 06591 Korea
- Jong Mo Seo: Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul, 08826 Korea
- Jae Seon Eo: Department of Nuclear Medicine, Korea University Guro Hospital, 148 Gurodong-ro, Guro-gu, Seoul, 08308 Korea
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080 Korea; Artificial Intelligence Institute, Seoul National University, Seoul, 08826 Korea; Brightonix Imaging Inc., Seoul, 03080 Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080 Korea

29
Laurent B, Bousse A, Merlin T, Nekolla S, Visvikis D. PET scatter estimation using deep learning U-Net architecture. Phys Med Biol 2023; 68. [PMID: 36240745 DOI: 10.1088/1361-6560/ac9a97] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Accepted: 10/13/2022] [Indexed: 03/11/2023]
Abstract
Objective. Positron emission tomography (PET) image reconstruction needs to be corrected for scatter in order to produce quantitatively accurate images. Scatter correction is traditionally achieved by incorporating an estimated scatter sinogram into the forward model during image reconstruction. Existing scatter estimation methods compromise between accuracy and computing time. Nowadays scatter estimation is routinely performed using single scatter simulation (SSS), which does not accurately model multiple scatter or scatter from outside the field of view, reducing the qualitative and quantitative accuracy of reconstructed PET images. Monte-Carlo (MC) methods, on the other hand, provide high precision but are computationally expensive and time-consuming, even with recent progress in MC acceleration. Approach. In this work we explore the potential of deep learning (DL) for accurate scatter correction in PET imaging, accounting for all scatter coincidences. We propose a network based on a U-Net convolutional neural network architecture with 5 convolutional layers. The network takes as input the emission and computed tomography (CT)-derived attenuation factor (AF) sinograms and returns the estimated scatter sinogram. The network training was performed using MC-simulated PET datasets. Multiple anthropomorphic extended cardiac-torso phantoms of two different regions (lung and pelvis) were created, considering three different body sizes and different levels of statistics. In addition, two patient datasets were used to assess the performance of the method in clinical practice. Main results. Our experiments showed that the accuracy of our method, namely DL-based scatter estimation (DLSE), was independent of the anatomical region (lungs or pelvis). They also showed that the DLSE-corrected images were similar to those reconstructed from scatter-free data and more accurate than SSS-corrected images. Significance. The proposed method is able to estimate scatter sinograms from emission and attenuation data. It is more accurate than SSS while being faster than MC scatter estimation methods.
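The attenuation factor (AF) sinogram that the network above takes as input follows the Beer-Lambert law: AF = exp(-∫µ dl) along each line of response. A minimal sketch of that computation, simplified to a single projection angle and a hypothetical water-like phantom (a full sinogram would repeat this over rotated copies of the µ-map):

```python
import numpy as np

def attenuation_factor_sinogram(mu_map, voxel_size_cm=0.4):
    """Attenuation-factor (AF) values for horizontal lines of response.

    Beer-Lambert: AF = exp(-sum of mu along the LOR). Only one projection
    angle (the rows of the image) is shown for brevity.
    """
    line_integrals = mu_map.sum(axis=1) * voxel_size_cm  # cm^-1 times cm
    return np.exp(-line_integrals)

# Hypothetical phantom: water-like mu = 0.096 cm^-1 over a 50-voxel path.
mu = np.full((8, 50), 0.096)
af = attenuation_factor_sinogram(mu)
print(round(float(af[0]), 3))  # exp(-0.096 * 50 * 0.4) = exp(-1.92), ~0.147
```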
Affiliation(s)
- Stephan Nekolla: Department of Nuclear Medicine, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany

30
Raymond C, Jurkiewicz MT, Orunmuyi A, Liu L, Dada MO, Ladefoged CN, Teuho J, Anazodo UC. The performance of machine learning approaches for attenuation correction of PET in neuroimaging: A meta-analysis. J Neuroradiol 2023; 50:315-326. [PMID: 36738990 DOI: 10.1016/j.neurad.2023.01.157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 01/28/2023] [Indexed: 02/05/2023]
Abstract
PURPOSE This systematic review provides a consensus on the clinical feasibility of machine learning (ML) methods for brain PET attenuation correction (AC). The performance of ML-AC was compared to clinical standards. METHODS Two hundred and eighty studies were identified through electronic searches of brain PET studies published between January 1, 2008, and August 1, 2022. Reported outcomes for image quality, tissue classification performance, and regional and global bias were extracted to evaluate ML-AC performance. The methodological quality of included studies and the quality of evidence of analysed outcomes were assessed using QUADAS-2 and GRADE, respectively. RESULTS A total of 19 studies (2371 participants) met the inclusion criteria. Overall, the global bias of ML methods was 0.76 ± 1.2%. For image quality, the relative mean square error (RMSE) was 0.20 ± 0.4, while for tissue classification the Dice similarity coefficients (DSC) for bone/soft tissue/air were 0.82 ± 0.1 / 0.95 ± 0.03 / 0.85 ± 0.14. CONCLUSIONS In general, ML-AC performance is within acceptable limits for clinical PET imaging. The sparse information on ML-AC robustness and its limited qualitative clinical evaluation may hinder clinical implementation in neuroimaging, especially for PET/MRI or emerging brain PET systems where standard AC approaches are not readily available.
Affiliation(s)
- Confidence Raymond: Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada
- Michael T Jurkiewicz: Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Department of Medical Imaging, Western University, London, ON, Canada
- Akintunde Orunmuyi: Kenyatta University Teaching, Research and Referral Hospital, Nairobi, Kenya
- Linshan Liu: Lawson Health Research Institute, London, ON, Canada
- Claes N Ladefoged: Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Jarmo Teuho: Turku PET Centre, Turku University, Turku, Finland; Turku University Hospital, Turku, Finland
- Udunna C Anazodo: Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Montreal Neurological Institute, 3801 Rue University, Montreal, QC H3A 2B4, Canada

31
Shi L, Zhang J, Toyonaga T, Shao D, Onofrey JA, Lu Y. Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application. Phys Med Biol 2023; 68. [PMID: 36584395 DOI: 10.1088/1361-6560/acaf49] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Accepted: 12/30/2022] [Indexed: 12/31/2022]
Abstract
Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (μ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error as compared to reconstruction using the gold-standard CT-based attenuation map (μ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting μ-DL from λ-MLAA and μ-MLAA using an image-domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network on predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms, and number of angles in the projection-domain loss term. The optimization was performed based on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies with only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit metric evaluation validated the promising performance of our proposed method. For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.
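The projection-domain loss idea above can be sketched in a few lines: compare the predicted and CT-derived µ-maps not only voxel-wise but also through their line integrals, which is where attenuation actually enters the PET forward model. This toy version uses only the two axis-aligned projections and a hypothetical weight, whereas the paper optimizes the number of angles and the loss weights:

```python
import numpy as np

def combined_ac_loss(mu_pred, mu_ct, weight=0.5):
    """Illustrative image-domain + projection-domain loss for µ-map training.

    Image term: mean absolute error between attenuation maps. Projection
    term: MAE between line integrals of the maps, here restricted to the
    two axis-aligned 'angles' for brevity.
    """
    im_loss = np.mean(np.abs(mu_pred - mu_ct))
    proj_loss = 0.5 * (
        np.mean(np.abs(mu_pred.sum(axis=0) - mu_ct.sum(axis=0)))
        + np.mean(np.abs(mu_pred.sum(axis=1) - mu_ct.sum(axis=1)))
    )
    return im_loss + weight * proj_loss

# Hypothetical maps: the prediction is uniformly 0.01 cm^-1 high on 8x8.
mu_ct = np.full((8, 8), 0.096)
mu_pred = mu_ct + 0.01
print(round(combined_ac_loss(mu_pred, mu_ct), 3))  # 0.05
```

A uniform bias is penalized much harder by the projection term than by the image term, which is one intuition for why a physics-based loss tightens quantification.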
Affiliation(s)
- Luyao Shi: Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Dan Shao: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, People's Republic of China
- John A Onofrey: Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Department of Urology, Yale University, New Haven, CT, United States of America
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America

32
Recent Advances in Cardiovascular Diseases Research Using Animal Models and PET Radioisotope Tracers. Int J Mol Sci 2022; 24:ijms24010353. [PMID: 36613797 PMCID: PMC9820417 DOI: 10.3390/ijms24010353] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 12/21/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022] Open
Abstract
Cardiovascular disease (CVD) is a collective term describing a range of conditions that affect the heart and blood vessels. Due to the varied nature of these disorders, distinguishing between their causes and monitoring their progress is crucial for finding an effective treatment. Molecular imaging enables non-invasive visualisation and quantification of biological pathways, even at the molecular and subcellular levels, which is essential for understanding the causes and development of CVD. Positron emission tomography imaging is currently recognized as the best method for in vivo studies of CVD-related phenomena. The imaging is based on radioisotope-labelled markers, which have been used successfully in both pre-clinical research and clinical studies. Current research on CVD using such radioconjugates steadily increases our knowledge and understanding of the causes, and brings us closer to effective monitoring and treatment. This review outlines recent advances in the use of the radioisotope markers available so far in research on cardiovascular diseases in rodent models, points out the problems, and provides a perspective for future applications of PET imaging in CVD studies.
33
Attenuation Correction Using Template PET Registration for Brain PET: A Proof-of-Concept Study. J Imaging 2022; 9:jimaging9010002. [PMID: 36662100 PMCID: PMC9867435 DOI: 10.3390/jimaging9010002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 12/13/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
NeuroLF is a dedicated brain PET system with an octagonal prism shape housed in a scanner head that can be positioned around a patient's head. Because it does not have MR or CT capabilities, attenuation correction based on an estimation of the attenuation map is a crucial feature. In this article, we demonstrate this method on [18F]FDG PET brain scans performed with a low-resolution proof-of-concept prototype of NeuroLF called BPET. We perform an affine registration of a template PET scan to the uncorrected emission image, and then apply the resulting transform to the corresponding template attenuation map. Using a whole-body PET/CT system as reference, we quantitatively show that this method yields image quality (0.893 average correlation to the reference scan) comparable to using the reference µ-map obtained from the CT scan of the imaged patient (0.908 average correlation). We conclude from this initial study that attenuation correction using template registration instead of a patient CT delivers similar results and is an option for patients undergoing brain PET.
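The average correlation figures quoted above are correlations between reconstructed volumes; a minimal illustrative Pearson-correlation computation on hypothetical images (not the authors' pipeline):

```python
import numpy as np

def image_correlation(img_a, img_b):
    """Pearson correlation between two image volumes, flattened to 1D."""
    a = img_a.ravel().astype(float)
    b = img_b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical example: a scaled and offset copy correlates perfectly,
# so correlation measures agreement up to a global calibration factor.
ref = np.arange(27.0).reshape(3, 3, 3)
scaled = 1.1 * ref + 5.0
print(round(image_correlation(ref, scaled), 6))  # 1.0
```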
34
Torkaman M, Yang J, Shi L, Wang R, Miller EJ, Sinusas AJ, Liu C, Gullberg GT, Seo Y. Data Management and Network Architecture Effect on Performance Variability in Direct Attenuation Correction via Deep Learning for Cardiac SPECT: A Feasibility Study. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022; 6:755-765. [PMID: 36059429 PMCID: PMC9438341 DOI: 10.1109/trpms.2021.3138372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Attenuation correction (AC) is important for accurate interpretation of SPECT myocardial perfusion imaging (MPI). However, it is challenging to perform AC in dedicated cardiac systems not equipped with transmission imaging capability. Previously, we demonstrated the feasibility of generating attenuation-corrected SPECT images using a deep learning technique (SPECTDL) directly from non-corrected images (SPECTNC). However, we observed performance variability across patients, which is an important factor for clinical translation of the technique. In this study, we investigate the feasibility of overcoming this variability for direct AC in SPECT MPI by developing an advanced network and a data management strategy. Specifically, we compared the accuracy of SPECTDL for the conventional U-Net and the Wasserstein cycle GAN (WCycleGAN) networks. To manage the training data, clustering was applied to a representation of the data in a lower-dimensional space, and training data were chosen based on their similarity in this space. Quantitative analysis demonstrated that a DL model with an advanced network improves global performance on the AC task with limited data; the regional results, however, were not improved. The proposed data management strategy demonstrated that clustered training has potential benefits for effective training.
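The data management idea above (embed training cases in a lower-dimensional space, then train on the cases most similar to the target) can be sketched with a PCA embedding and nearest-neighbour selection. The feature matrix, component count, and selection rule here are hypothetical stand-ins for whatever representation the authors actually clustered:

```python
import numpy as np

def select_similar_training_cases(features, query, n_components=2, k=3):
    """Pick the k training cases closest to a query in a PCA subspace."""
    mean = features.mean(axis=0)
    x = features - mean
    # PCA via SVD; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    basis = vt[:n_components].T           # (n_features, n_components)
    embedded = x @ basis                  # all cases in the subspace
    q = (query - mean) @ basis            # query in the same subspace
    dists = np.linalg.norm(embedded - q, axis=1)
    return np.argsort(dists)[:k]          # indices of the k nearest cases

# Hypothetical 10 cases x 5 features; a query equal to case 0 must rank it first.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 5))
idx = select_similar_training_cases(feats, feats[0])
print(int(idx[0]))  # 0
```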
Affiliation(s)
- Mahsa Torkaman: Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Jaewon Yang: Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Luyao Shi: Biomedical Engineering Department, Yale University, New Haven, CT, USA
- Rui Wang: Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Edward J Miller: Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Albert J Sinusas: Biomedical Engineering Department, Yale University, New Haven, CT, USA; Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Chi Liu: Biomedical Engineering Department, Yale University, New Haven, CT, USA; Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Grant T Gullberg: Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Youngho Seo: Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA

35
Ahangari S, Beck Olin A, Kinggård Federspiel M, Jakoby B, Andersen TL, Hansen AE, Fischer BM, Littrup Andersen F. A deep learning-based whole-body solution for PET/MRI attenuation correction. EJNMMI Phys 2022; 9:55. [PMID: 35978211 PMCID: PMC9385907 DOI: 10.1186/s40658-022-00486-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 08/08/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Deep convolutional neural networks have demonstrated robust and reliable PET attenuation correction (AC) as an alternative to conventional AC methods in integrated PET/MRI systems. However, its whole-body implementation is still challenging due to anatomical variations and the limited MRI field of view. The aim of this study is to investigate a deep learning (DL) method to generate voxel-based synthetic CT (sCT) from Dixon MRI and use it as a whole-body solution for PET AC in a PET/MRI system. MATERIALS AND METHODS Fifteen patients underwent PET/CT followed by PET/MRI with whole-body coverage from skull to feet. We performed MRI truncation correction and employed co-registered MRI and CT images for training and leave-one-out cross-validation. The network was pretrained with region-specific images. The accuracy of the AC maps and reconstructed PET images were assessed by performing a voxel-wise analysis and calculating the quantification error in SUV obtained using DL-based sCT (PETsCT) and a vendor-provided atlas-based method (PETAtlas), with the CT-based reconstruction (PETCT) serving as the reference. In addition, region-specific analysis was performed to compare the performances of the methods in brain, lung, liver, spine, pelvic bone, and aorta. RESULTS Our DL-based method resulted in better estimates of AC maps with a mean absolute error of 62 HU, compared to 109 HU for the atlas-based method. We found an excellent voxel-by-voxel correlation between PETCT and PETsCT (R2 = 0.98). The absolute percentage difference in PET quantification for the entire image was 6.1% for PETsCT and 11.2% for PETAtlas. The regional analysis showed that the average errors and the variability for PETsCT were lower than PETAtlas in all regions. The largest errors were observed in the lung, while the smallest biases were observed in the brain and liver. 
CONCLUSIONS Experimental results demonstrated that a DL approach for whole-body PET AC in PET/MRI is feasible and allows for more accurate results compared with conventional methods. Further evaluation using a larger training cohort is required for more accurate and robust performance.
Affiliation(s)
- Sahar Ahangari: Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Anders Beck Olin: Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Thomas Lund Andersen: Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Adam Espe Hansen: Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark; Department of Diagnostic Radiology, Rigshospitalet, Copenhagen, Denmark
- Barbara Malene Fischer: Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Flemming Littrup Andersen: Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark

36
Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, Lassmann M, Rischpler C, Shi K, Pruim J. Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 2022; 49:4452-4463. [PMID: 35809090 PMCID: PMC9606092 DOI: 10.1007/s00259-022-05891-w] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 06/25/2022] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI) will change the face of nuclear medicine and molecular imaging, as it will everyday life. In this review, we focus on the potential applications of AI in the field, both from a physical (radiomics, underlying statistics, image reconstruction and data analysis) and a clinical (neurology, cardiology, oncology) perspective. Challenges for transferability from research to clinical practice are discussed, as is the concept of explainable AI. Finally, we focus on the areas where challenges must be addressed to introduce AI into nuclear medicine and molecular imaging in a reliable manner.
Affiliation(s)
- Philippe Lambin: The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands
- Kim Beuschau Mauridsen: Center of Functionally Integrative Neuroscience and MindLab, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Roland Hustinx: GIGA-CRC in Vivo Imaging, University of Liège, GIGA, Avenue de l'Hôpital 11, 4000, Liege, Belgium
- Michael Lassmann: Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Würzburg, Würzburg, Germany
- Christoph Rischpler: Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Kuangyu Shi: Department of Nuclear Medicine, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
- Jan Pruim: Medical Imaging Center, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

37
Toyonaga T, Shao D, Shi L, Zhang J, Revilla EM, Menard D, Ankrah J, Hirata K, Chen MK, Onofrey JA, Lu Y. Deep learning-based attenuation correction for whole-body PET - a multi-tracer study with 18F-FDG, 68 Ga-DOTATATE, and 18F-Fluciclovine. Eur J Nucl Med Mol Imaging 2022; 49:3086-3097. [PMID: 35277742 PMCID: PMC10725742 DOI: 10.1007/s00259-022-05748-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 02/25/2022] [Indexed: 11/04/2022]
Abstract
A novel deep learning (DL)-based attenuation correction (AC) framework was applied to clinical whole-body oncology studies using 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. The framework used activity (λ-MLAA) and attenuation (µ-MLAA) maps estimated by the maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a modified U-Net neural network with a novel imaging-physics-based loss function to learn a CT-derived attenuation map (µ-CT). METHODS Clinical whole-body PET/CT datasets of 18F-FDG (N = 113), 68Ga-DOTATATE (N = 76), and 18F-Fluciclovine (N = 90) were used to train and test tracer-specific neural networks. For each tracer, forty subjects were used to train the neural network to predict attenuation maps (µ-DL). µ-DL and µ-MLAA were compared to the gold-standard µ-CT. PET images reconstructed using the OSEM algorithm with µ-DL (OSEMDL) and µ-MLAA (OSEMMLAA) were compared to the CT-based reconstruction (OSEMCT). Tumor regions of interest were segmented by two radiologists, and tumor SUV and volume measures were reported, along with evaluation using conventional image analysis metrics. RESULTS µ-DL yielded high resolution and fine detail recovery of the attenuation map, superior in quality to µ-MLAA in all metrics for all tracers. Using OSEMCT as the gold standard, OSEMDL provided more accurate tumor quantification than OSEMMLAA for all three tracers, e.g., error in SUVmax for OSEMMLAA vs. OSEMDL: -3.6 ± 4.4% vs. -1.7 ± 4.5% for 18F-FDG (N = 152), -4.3 ± 5.1% vs. 0.4 ± 2.8% for 68Ga-DOTATATE (N = 70), and -7.3 ± 2.9% vs. -2.8 ± 2.3% for 18F-Fluciclovine (N = 44). OSEMDL also yielded more accurate tumor volume measures than OSEMMLAA, i.e., -8.4 ± 14.5% (OSEMMLAA) vs. -3.0 ± 15.0% for 18F-FDG, -14.1 ± 19.7% vs. 1.8 ± 11.6% for 68Ga-DOTATATE, and -15.9 ± 9.1% vs. -6.4 ± 6.4% for 18F-Fluciclovine. CONCLUSIONS The proposed framework provides accurate and robust attenuation correction for whole-body 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine in tumor SUV measures as well as tumor volume estimation. The proposed method provides quality clinically equivalent to CT-based attenuation correction for the three tracers.
Affiliation(s)
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Dan Shao: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Guangdong Provincial People's Hospital, Guangzhou, Guangdong, China
- Luyao Shi: Department of Biomedical Engineering, Yale University, New Haven, CT, 06520, USA
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Kenji Hirata: Department of Diagnostic Imaging, School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Yale New Haven Hospital, New Haven, CT, USA
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, 06520, USA; Department of Urology, Yale University, New Haven, CT, USA
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA

38
Leynes AP, Ahn S, Wangerin KA, Kaushik SS, Wiesinger F, Hope TA, Larson PEZ. Attenuation Coefficient Estimation for PET/MRI With Bayesian Deep Learning Pseudo-CT and Maximum-Likelihood Estimation of Activity and Attenuation. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022; 6:678-689. [PMID: 38223528 PMCID: PMC10785227 DOI: 10.1109/trpms.2021.3118325] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2024]
Abstract
A major remaining challenge for magnetic resonance-based attenuation correction (MRAC) methods is their susceptibility to sources of magnetic resonance imaging (MRI) artifacts (e.g., implants and motion) and to uncertainties arising from the limitations of MRI contrast (e.g., accurate bone delineation and density, and separation of air and bone). We propose a Bayesian deep convolutional neural network that, in addition to generating an initial pseudo-CT from MR data, also produces uncertainty estimates of the pseudo-CT to quantify the limitations of the MR data. These outputs are combined with the maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction, which uses the PET emission data to improve the attenuation maps. With the proposed approach, uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA), we demonstrate accurate estimation of PET uptake in pelvic lesions and show recovery of metal implants. In patients without implants, UpCT-MLAA had acceptable but slightly higher root-mean-squared error (RMSE) than zero-echo-time and Dixon deep pseudo-CT when compared with CTAC. In patients with metal implants, MLAA recovered the metal implant; however, anatomy outside the implant region was obscured by noise and crosstalk artifacts. Attenuation coefficients from the pseudo-CT derived from Dixon MRI were accurate in normal anatomy; however, the metal implant region was estimated to have the attenuation coefficients of air. UpCT-MLAA estimated the attenuation coefficients of metal implants alongside accurate anatomic depiction outside the implant regions.
Affiliation(s)
- Andrew P Leynes: Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158, USA; UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720, USA
- Sangtae Ahn: Biology and Physics Department, GE Research, Niskayuna, NY 12309, USA
- Sandeep S Kaushik: MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany; Department of Computer Science, Technical University of Munich, 80333 Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, 8057 Zurich, Switzerland
- Florian Wiesinger: MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Thomas A Hope: Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA, USA; Department of Radiology, San Francisco VA Medical Center, San Francisco, CA 94121, USA
- Peder E Z Larson: Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158, USA; UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720, USA
39
Presotto L, Bettinardi V, Bagnalasta M, Scifo P, Savi A, Vanoli EG, Fallanca F, Picchio M, Perani D, Gianolli L, De Bernardi E. Evaluation of a 2D UNet-Based Attenuation Correction Methodology for PET/MR Brain Studies. J Digit Imaging 2022; 35:432-445. [PMID: 35091873] [PMCID: PMC9156597] [DOI: 10.1007/s10278-021-00551-1]
Abstract
Deep learning (DL) strategies applied to magnetic resonance (MR) images in positron emission tomography (PET)/MR can provide synthetic attenuation correction (AC) maps, and consequently PET images, that are more accurate than those from segmentation or atlas-registration strategies. As a first objective, we investigate the best MR image to use and the best point of the AC pipeline at which to insert the synthetic map. Sixteen patients underwent an 18F-fluorodeoxyglucose (FDG) PET/computed tomography (CT) study and a PET/MR brain study on the same day. PET/CT images were reconstructed with attenuation maps obtained: (1) from CT (reference), (2) from MR with an atlas-based and a segmentation-based method, and (3) with a 2D UNet trained on MR image/attenuation map pairs. As for MR, T1-weighted and zero echo time (ZTE) images were considered; as for attenuation maps, CTs and 511 keV low-resolution attenuation maps were assessed. As a second objective, we assessed the ability of DL strategies to provide proper AC maps in the presence of cranial anatomy alterations due to surgery. Three 11C-methionine (METH) PET/MR studies were considered. PET images were reconstructed with attenuation maps obtained: (1) from diagnostic coregistered CT (reference), (2) from MR with an atlas-based and a segmentation-based method, and (3) with 2D UNets trained on the sixteen anatomically normal FDG patients. Only UNets taking ZTE images as input were considered. FDG and METH PET images were quantitatively evaluated. For anatomically normal FDG patients, UNet AC models generally provide an uptake estimate with lower bias than atlas-based or segmentation-based methods. The intersubject average bias on images corrected with UNet AC maps is always smaller than 1.5%, except for AC maps generated on grids that are too coarse. The intersubject bias variability is lowest (always below 2%) for UNet AC maps derived from ZTE images, and larger for the other methods. UNet models working on MR ZTE images and generating synthetic CT or 511 keV low-resolution attenuation maps therefore provide the best results in terms of both accuracy and variability. For the METH patients with anatomical alterations, DL properly reconstructs the alterations, and quantitative results on the PET images confirm those found in the anatomically normal FDG patients.
Affiliation(s)
- Luca Presotto: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Valentino Bettinardi: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Matteo Bagnalasta: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Paola Scifo: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Annarita Savi: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Federico Fallanca: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Maria Picchio: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy; Vita-Salute San Raffaele University, Milan, Italy
- Daniela Perani: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy; Vita-Salute San Raffaele University, Milan, Italy
- Luigi Gianolli: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Elisabetta De Bernardi: School of Medicine and Surgery, University of Milano-Bicocca, via Cadore 48, 20900 Monza, Italy; Bicocca Bioinformatics Biostatistics and Bioimaging Centre - B4, University of Milano-Bicocca, Monza, Italy
40
Adler SS, Seidel J, Choyke PL. Advances in Preclinical PET. Semin Nucl Med 2022; 52:382-402. [PMID: 35307164] [PMCID: PMC9038721] [DOI: 10.1053/j.semnuclmed.2022.02.002]
Abstract
The classical intent of PET imaging is to obtain the most accurate estimate of the amount of positron-emitting radiotracer in the smallest possible volume element located anywhere in the imaging subject at any time using the least amount of radioactivity. Reaching this goal, however, is confounded by an enormous array of interlinked technical issues that limit imaging system performance. As a result, advances in PET, human or animal, are the result of cumulative innovations across each of the component elements of PET, from data acquisition to image analysis. In the report that follows, we trace several of these advances across the imaging process with a focus on small animal PET.
Affiliation(s)
- Stephen S Adler: Frederick National Laboratory for Cancer Research, Frederick, MD; Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
- Jurgen Seidel: Contractor to Frederick National Laboratory for Cancer Research, Leidos Biomedical Research, Inc., Frederick, MD; Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
- Peter L Choyke: Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
41
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031] [PMCID: PMC9250483] [DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising the diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature on this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is presented first. We then review methods that integrate deep learning into the image reconstruction framework, either as deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement, and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed, and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Data Science and AI, Monash University, Melbourne, Australia
42
Renner A, Rausch I, Cal Gonzalez J, Laistler E, Moser E, Jochimsen T, Sattler T, Sabri O, Beyer T, Figl M, Birkfellner W, Sattler B. Technical Note: A PET/MR coil with an integrated, orbiting 511 keV transmission source for PET/MR imaging validated in an animal study. Med Phys 2022; 49:2366-2372. [PMID: 35224747] [PMCID: PMC9310742] [DOI: 10.1002/mp.15586]
Affiliation(s)
- Andreas Renner: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria; Department of Radiation Oncology, Medical University of Vienna, Austria
- Ivo Rausch: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Jacobo Cal Gonzalez: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Elmar Laistler: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Ewald Moser: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Thies Jochimsen: Department of Nuclear Medicine, University Hospital Leipzig, Germany
- Tatjana Sattler: Clinic for Ruminants and Swine, University of Leipzig, Germany
- Osama Sabri: Department of Nuclear Medicine, University Hospital Leipzig, Germany
- Thomas Beyer: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Michael Figl: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Wolfgang Birkfellner: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Bernhard Sattler: Department of Nuclear Medicine, University Hospital Leipzig, Germany
43
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Rofo 2022; 194:605-612. [PMID: 35211929] [DOI: 10.1055/a-1718-4128]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making through a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges involved and the steps needed to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Affiliation(s)
- Thomas Küstner: Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Tobias Hepp: Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Ferdinand Seith: Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
44
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818] [DOI: 10.1007/s12149-021-01710-8]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural network (CNN) and generative adversarial network (GAN) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation on PET. We categorized the studies for PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
Affiliation(s)
- Keisuke Matsubara: Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki: Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto: Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe: Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura: Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
45
Hwang D, Kang SK, Kim KY, Choi H, Lee JS. Comparison of deep learning-based emission-only attenuation correction methods for positron emission tomography. Eur J Nucl Med Mol Imaging 2021; 49:1833-1842. [PMID: 34882262] [DOI: 10.1007/s00259-021-05637-0]
Abstract
PURPOSE This study aims to compare two approaches using only emission PET data and a convolutional neural network (CNN) to correct the attenuation (μ) of the annihilation photons in PET. METHODS One of the approaches uses a CNN to generate μ-maps from the non-attenuation-corrected (NAC) PET images (μ-CNNNAC). In the other method, a CNN is used to improve the accuracy of μ-maps generated using maximum likelihood estimation of activity and attenuation (MLAA) reconstruction (μ-CNNMLAA). We investigated the improvement in the CNN performance by combining the two methods (μ-CNNMLAA+NAC) and the suitability of μ-CNNNAC for providing the scatter distribution required for MLAA reconstruction. Image data from 18F-FDG (n = 100) or 68Ga-DOTATOC (n = 50) PET/CT scans were used for neural network training and testing. RESULTS The error of the attenuation correction factors estimated using μ-CT and μ-CNNNAC was over 7%, but that of the scatter estimates was only 2.5%, indicating the validity of the scatter estimation from μ-CNNNAC. However, CNNNAC provided less accurate bone structures in the μ-maps, while the best results in recovering the fine bone structures were obtained by applying CNNMLAA+NAC. Additionally, the μ-values in the lungs were overestimated by CNNNAC. Activity images (λ) corrected for attenuation using μ-CNNMLAA and μ-CNNMLAA+NAC were superior to those corrected using μ-CNNNAC, in terms of their similarity to λ-CT. However, the improvement in the similarity with λ-CT by combining the CNNNAC and CNNMLAA approaches was insignificant (percent error for lung cancer lesions, λ-CNNNAC = 5.45% ± 7.88%; λ-CNNMLAA = 1.21% ± 5.74%; λ-CNNMLAA+NAC = 1.91% ± 4.78%; percent error for bone cancer lesions, λ-CNNNAC = 1.37% ± 5.16%; λ-CNNMLAA = 0.23% ± 3.81%; λ-CNNMLAA+NAC = 0.05% ± 3.49%). CONCLUSION The use of CNNNAC was feasible for scatter estimation to address the chicken-egg dilemma in MLAA reconstruction, but CNNMLAA outperformed CNNNAC.
Affiliation(s)
- Donghwi Hwang: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, South Korea; Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Seung Kwan Kang: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, South Korea; Artificial Intelligence Institute, Seoul National University, Seoul, South Korea; Brightonix Imaging Inc., Seoul, South Korea
- Kyeong Yun Kim: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, South Korea; Brightonix Imaging Inc., Seoul, South Korea
- Hongyoon Choi: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, South Korea
- Jae Sung Lee: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, South Korea; Artificial Intelligence Institute, Seoul National University, Seoul, South Korea; Brightonix Imaging Inc., Seoul, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
46
Lee JS, Kim KM, Choi Y, Kim HJ. A Brief History of Nuclear Medicine Physics, Instrumentation, and Data Sciences in Korea. Nucl Med Mol Imaging 2021; 55:265-284. [PMID: 34868376] [DOI: 10.1007/s13139-021-00721-7]
Abstract
We review the history of nuclear medicine physics, instrumentation, and data sciences in Korea to commemorate the 60th anniversary of the Korean Society of Nuclear Medicine. In the 1970s and 1980s, the development of SPECT, nuclear stethoscope, and bone densitometry systems, as well as kidney and cardiac image analysis technology, marked the beginning of nuclear medicine physics and engineering in Korea. With the introduction of PET and cyclotron in Korea in 1994, nuclear medicine imaging research was further activated. With the support of large-scale government projects, the development of gamma camera, SPECT, and PET systems was carried out. Exploiting the use of PET scanners in conjunction with cyclotrons, extensive studies on myocardial blood flow quantification and brain image analysis were also actively pursued. In 2005, Korea's first domestic cyclotron succeeded in producing radioactive isotopes, and the cyclotron was provided to six universities and university hospitals, thereby facilitating the nationwide supply of PET radiopharmaceuticals. Since the late 2000s, research on PET/MRI has been actively conducted, and the advanced research results of Korean scientists in the fields of silicon photomultiplier PET and simultaneous PET/MRI have attracted significant attention from the academic community. Currently, Korean researchers are actively involved in endeavors to solve a variety of complex problems in nuclear medicine using artificial intelligence and deep learning technologies.
Affiliation(s)
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Korea
- Kyeong Min Kim: Department of Isotopic Drug Development, Korea Radioisotope Center for Pharmaceuticals, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Yong Choi: Department of Electronic Engineering, Sogang University, Seoul, Korea
- Hee-Joung Kim: Department of Radiological Science, Yonsei University, Wonju, Korea
47
Amirrashedi M, Sarkar S, Mamizadeh H, Ghadiri H, Ghafarian P, Zaidi H, Ay MR. Leveraging deep neural networks to improve numerical and perceptual image quality in low-dose preclinical PET imaging. Comput Med Imaging Graph 2021; 94:102010. [PMID: 34784505] [DOI: 10.1016/j.compmedimag.2021.102010]
Abstract
The amount of radiotracer injected into laboratory animals is still the most daunting challenge facing translational PET studies. Since low-dose imaging is characterized by a higher level of noise, the quality of the reconstructed images leaves much to be desired. As the most ubiquitous techniques in denoising applications, edge-aware denoising filters and reconstruction-based techniques have drawn significant attention in low-count applications. For the last few years, however, much of the credit has gone to deep-learning (DL) methods, which provide more robust solutions across various conditions. Although extensively explored in clinical studies, to the best of our knowledge, there is a lack of studies exploring the feasibility of DL-based image denoising in low-count small-animal PET imaging. Therefore, herein, we investigated different DL frameworks to map low-dose small-animal PET images to their full-dose equivalents with quality and visual similarity on a par with those of standard acquisition. The performance of the DL model was also compared to other well-established filters, including Gaussian smoothing, nonlocal means, and anisotropic diffusion. Visual inspection and quantitative assessment based on quality metrics proved the superior performance of the DL methods in low-count small-animal PET studies, paving the way for a more detailed exploration of DL-assisted algorithms in this domain.
Affiliation(s)
- Mahsa Amirrashedi: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Saeed Sarkar: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hojjat Mamizadeh: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Ghadiri: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Pardis Ghafarian: Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran; PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohammad Reza Ay: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
48
McMillan AB, Bradshaw TJ. Artificial Intelligence-Based Data Corrections for Attenuation and Scatter in Positron Emission Tomography and Single-Photon Emission Computed Tomography. PET Clin 2021; 16:543-552. [PMID: 34364816] [PMCID: PMC10562009] [DOI: 10.1016/j.cpet.2021.06.010]
Abstract
Recent developments in artificial intelligence (AI) technology have enabled new methods that can improve attenuation and scatter correction in PET and single-photon emission computed tomography (SPECT). These technologies will enable accurate and quantitative imaging without the need to acquire a computed tomography image, greatly expanding the capability of PET/MR imaging, PET-only, and SPECT-only scanners. The use of AI to aid in scatter correction will improve image reconstruction speed and patient throughput. This article outlines the use of these new tools, surveys contemporary implementations, and discusses their limitations.
Affiliation(s)
- Alan B McMillan: Department of Radiology, University of Wisconsin, 3252 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792, USA
- Tyler J Bradshaw: Department of Radiology, University of Wisconsin, 3252 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792, USA
49
Lee S, Lee JS. Inter-crystal scattering recovery of light-sharing PET detectors using convolutional neural networks. Phys Med Biol 2021; 66. [PMID: 34438380] [DOI: 10.1088/1361-6560/ac215d]
Abstract
Inter-crystal scattering (ICS) is Compton scattering of an annihilation photon from one crystal into adjacent crystals, which causes inaccurate assignment of the photon interaction position in positron emission tomography (PET). Because ICS frequently occurs in highly light-shared PET detectors, its recovery is crucial for improving spatial resolution. In this study, we propose two convolutional neural networks (CNNs) for ICS recovery, exploiting the strong pattern-recognition ability of CNNs. Using the signal distribution of a photosensor array as input, one network (ICS-eNet) estimates the energy deposition in each crystal, and the other (ICS-cNet) identifies the first-interacted crystal. We performed GATE Monte Carlo simulations with optical photon tracking to test PET detectors comprising different crystal arrays (8 × 8 to 21 × 21) with lengths of 20 mm and the same photosensor array (3 mm 8 × 8 array) covering an area of 25.8 × 25.8 mm². For each detector design, we trained ICS-eNet and ICS-cNet and evaluated their respective performance. ICS-eNet accurately identified whether events were ICS (accuracy > 90%) and selected the interacted crystals (accuracy > 60%) with appropriate energy estimation performance (R2 > 0.7) in the 8 × 8, 12 × 12, and 16 × 16 arrays. ICS-cNet also performed satisfactorily, with less dependence on the crystal-to-sensor ratio, achieving an accuracy improvement exceeding 10% in selecting the first-interacted crystal and reduced error distances compared with no recovery. Both ICS-eNet and ICS-cNet performed consistently under various optical property settings of the crystals. For spatial resolution measurements in PET rings, both networks achieved significant enhancements, particularly for highly pixelated arrays. We also discuss approaches for training the networks in an actual experimental setup.
This proof-of-concept study demonstrated the feasibility of CNNs for ICS recovery across light-sharing designs, offering an efficient way to improve the spatial resolution of PET in a range of applications.
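The inputs and labels described above can be illustrated with a toy forward model: energy deposited in a crystal array spreads light onto a coarser photosensor array, producing the signal distribution the networks take as input, and an event is labeled ICS when more than one crystal receives energy above a threshold. This is a minimal sketch under assumed geometry and light-spread parameters, not the paper's simulation code.

```python
import numpy as np

# Assumed geometry: a 12 x 12 crystal array (one of the simulated designs)
# read out by an 8 x 8 photosensor array; the Gaussian light-spread width
# is an illustrative assumption.
N_CRYSTAL = 12
N_SENSOR = 8
SPREAD_SIGMA = 1.5  # light-spread width in sensor pixels (assumed)

def sensor_signal(deposits):
    """Project per-crystal energy deposits (keV) onto the photosensor array.

    Returns an (N_SENSOR, N_SENSOR) signal distribution, the quantity
    used as network input in the abstract above.
    """
    sig = np.zeros((N_SENSOR, N_SENSOR))
    ys, xs = np.mgrid[0:N_SENSOR, 0:N_SENSOR]
    for (cy, cx), e in np.ndenumerate(deposits):
        if e == 0.0:
            continue
        # map the crystal centre into sensor coordinates
        sy = (cy + 0.5) * N_SENSOR / N_CRYSTAL - 0.5
        sx = (cx + 0.5) * N_SENSOR / N_CRYSTAL - 0.5
        g = np.exp(-((ys - sy) ** 2 + (xs - sx) ** 2) / (2 * SPREAD_SIGMA ** 2))
        sig += e * g / g.sum()  # each deposit contributes its full energy
    return sig

def is_ics(deposits, threshold=50.0):
    """Event-level ICS label: more than one crystal above `threshold` keV."""
    return int(np.count_nonzero(deposits > threshold) > 1)

# A 511 keV photon that Compton-scatters from crystal (5, 5) into (5, 6):
event = np.zeros((N_CRYSTAL, N_CRYSTAL))
event[5, 5], event[5, 6] = 340.0, 171.0
signal = sensor_signal(event)
```

Because neighbouring crystals illuminate overlapping sensor pixels, distinguishing an ICS event from a single photopeak deposit from the signal distribution alone is nontrivial, which is what motivates the learned recovery.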
Affiliation(s)
- Seungeun Lee
- Department of Nuclear Medicine, Seoul National University, Seoul, 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University, Seoul, 03080, Republic of Korea
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University, Seoul, 03080, Republic of Korea; Brightonix Imaging Inc., Seoul, 04782, Republic of Korea
|
50
|
Yin T, Obi T. Generation of attenuation correction factors from time-of-flight PET emission data using high-resolution residual U-net. Biomed Phys Eng Express 2021; 7. [PMID: 34438372 DOI: 10.1088/2057-1976/ac21aa] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 08/26/2021] [Indexed: 11/12/2022]
Abstract
Attenuation correction of annihilation photons is essential in PET image reconstruction for providing accurate quantitative activity maps. In the absence of an aligned CT device to obtain attenuation information, we propose the high-resolution residual U-net (HRU-Net) to extract attenuation correction factors (ACF) directly from time-of-flight (TOF) PET emission data. HRU-Net is built on the U-Net encoding-decoding architecture and uses four modified residual blocks in each stage. In each residual block, the input and output feature maps are concatenated. In addition, flexible and efficient convolutional neural network (CNN) elements, namely dilated convolutions, pre-activation ordering (a batch normalization (BN) layer, a rectified linear unit (ReLU) layer, then a convolution layer), and residual connections, are used to extract high-resolution features. To illustrate the effectiveness of the proposed method, HRU-Net-estimated ACFs, attenuation maps, and activity maps are compared with those of the maximum-likelihood ACF (MLACF) algorithm, U-Net, and HC-Net. An ablation study is conducted using non-TOF and TOF sinograms as network inputs. The experimental results show that HRU-Net with TOF projections as inputs achieves a normalized root-mean-square error (NRMSE) of 4.84% ± 1.58%, outperforming MLACF, U-Net, and HC-Net with NRMSE of 47.82% ± 13.62%, 6.92% ± 1.94%, and 7.99% ± 2.49%, respectively.
Affiliation(s)
- Tuo Yin
- Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama 226-8503, Japan
- Takashi Obi
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan
|