1
Wakankar R, Khangembam BC. Potential Role of Generative Adversarial Network (GAN)-based Models in PSMA PET/MRI for the Evaluation of Prostate Cancer. Clin Nucl Med 2025:00003072-990000000-01678. PMID: 40279670. DOI: 10.1097/rlu.0000000000005925.
Affiliation(s)
- Ritwik Wakankar
- Department of Nuclear Medicine, All India Institute of Medical Sciences, New Delhi, India
2
Zhu G, Jiang B, Chen H, Heit JJ, Etter M, Hishaw GA, Faizy TD, Steinberg G, Wintermark M. Using generative adversarial deep learning networks to synthesize cerebrovascular reactivity imaging from pre-acetazolamide arterial spin labeling in moyamoya disease. Neuroradiology 2025. PMID: 40183965. DOI: 10.1007/s00234-025-03605-1.
Abstract
BACKGROUND Cerebrovascular reactivity (CVR) assesses vascular health in various brain conditions, but CVR measurement requires a challenge to cerebral perfusion, such as the administration of acetazolamide (ACZ), which limits its widespread use. We determined whether generative adversarial networks (GANs) can create CVR images from baseline pre-ACZ arterial spin labeling (ASL) MRI. METHODS This study included 203 moyamoya cases with a total of 3248 pre- and post-ACZ ASL cerebral blood flow (CBF) images. Reference CVR maps were generated from these CBF slices. From this set, 2640 slices were used to train a Pixel-to-Pixel GAN consisting of a generator and a discriminator network, with the remaining 608 slices reserved as a testing set. After training, the pre-ACZ CBF images of the testing set were fed to the trained model to generate synthesized CVR. The quality of the synthesized CVR was evaluated against the reference CVR using the structural similarity index (SSI), the spatial correlation coefficient (SCC), and the root mean squared error (RMSE). Segmentations of the low-CVR regions were compared using the Dice similarity coefficient (DSC). Reference and synthesized CVRs were reviewed in single-slice and individual-hemisphere settings to assess CVR status, with Cohen's kappa measuring consistency. RESULTS The mean SSIs of the CVR for the training and testing sets were 0.943 ± 0.019 and 0.943 ± 0.020, the mean SCCs were 0.988 ± 0.009 and 0.987 ± 0.011, and the mean RMSEs were 0.077 ± 0.015 and 0.079 ± 0.018. The mean DSC of the low-CVR area in the testing set was 0.593 ± 0.128. Visual interpretation yielded Cohen's kappa values of 0.896 and 0.813 for the training and testing sets in the single-slice setting, and 0.781 and 0.730 in the individual-hemisphere setting. CONCLUSIONS CVR synthesized by GANs from baseline ASL without a vasodilatory challenge may be a useful alternative for detecting vascular deficits in clinical applications when an ACZ challenge is not feasible.
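The abstract above reports SSI, SCC, RMSE, and DSC as its image-quality measures. The following minimal Python sketch illustrates how such metrics could be computed for one reference/synthesized CVR slice pair; the function name, array inputs, and the simple threshold used to define the low-CVR region are assumptions, not the authors' code.

```python
# Minimal sketch (not the published implementation) of the reported metrics.
import numpy as np
from skimage.metrics import structural_similarity

def cvr_metrics(reference: np.ndarray, synthesized: np.ndarray, low_cvr_threshold: float = 0.0):
    """Return SSI, SCC, RMSE, and DSC of the low-CVR region for one 2D slice pair."""
    # Structural similarity index (SSI/SSIM) over the slice.
    ssi = structural_similarity(reference, synthesized,
                                data_range=reference.max() - reference.min())
    # Spatial correlation coefficient: Pearson correlation of voxel values.
    scc = np.corrcoef(reference.ravel(), synthesized.ravel())[0, 1]
    # Root mean squared error.
    rmse = np.sqrt(np.mean((reference - synthesized) ** 2))
    # Dice similarity coefficient of the low-CVR regions; a simple threshold
    # stands in here for the segmentation step described in the paper (assumption).
    ref_low = reference < low_cvr_threshold
    syn_low = synthesized < low_cvr_threshold
    dsc = 2.0 * np.logical_and(ref_low, syn_low).sum() / (ref_low.sum() + syn_low.sum() + 1e-8)
    return ssi, scc, rmse, dsc
```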
Affiliation(s)
- Guangming Zhu
- Department of Neurology, University of Arizona, Tucson, AZ, USA
- Bin Jiang
- Department of Radiology, Neuroradiology Section, Stanford University, Stanford, CA, USA
- Hui Chen
- Department of Neuroradiology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jeremy J Heit
- Department of Radiology, Neuroradiology Section, Stanford University, Stanford, CA, USA
- Micah Etter
- Department of Neurology, University of Arizona, Tucson, AZ, USA
- G Alex Hishaw
- Department of Neurology, University of Arizona, Tucson, AZ, USA
- Tobias D Faizy
- Department of Neuroradiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gary Steinberg
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Max Wintermark
- Department of Neuroradiology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
3
Tang H, Huang Z, Li W, Wu Y, Yuan J, Yang Y, Zhang Y, Qin J, Zheng H, Liang D, Wang M, Hu Z. Automatic Brain Segmentation for PET/MR Dual-Modal Images Through a Cross-Fusion Mechanism. IEEE J Biomed Health Inform 2025; 29:1982-1994. PMID: 40030515. DOI: 10.1109/jbhi.2024.3516012.
Abstract
The precise segmentation of different brain regions and tissues is usually a prerequisite for the detection and diagnosis of various neurological disorders in neuroscience. Considering the abundance of functional and structural dual-modality information in positron emission tomography/magnetic resonance (PET/MR) images, we propose a novel 3D whole-brain segmentation network with a cross-fusion mechanism introduced to obtain 45 brain regions. Specifically, the network processes PET and MR images simultaneously, employing UX-Net and a cross-fusion block for feature extraction and fusion in the encoder. We test our method by comparing it with other deep learning-based methods, including 3DUXNET, SwinUNETR, UNETR, nnFormer, UNet3D, NestedUNet, ResUNet, and VNet. The experimental results demonstrate that the proposed method achieves better segmentation performance in terms of both visual and quantitative evaluation metrics and achieves more precise segmentation in three views while preserving fine details. In particular, the proposed method achieves superior quantitative results, with a Dice coefficient of 85.73% ± 0.01%, a Jaccard index of 76.68% ± 0.02%, a sensitivity of 85.00% ± 0.01%, a precision of 83.26% ± 0.03%, and a Hausdorff distance (HD) of 4.4885 ± 14.85%. Moreover, the distribution and correlation of the standardized uptake value (SUV) in the volume of interest (VOI) are also evaluated (Pearson correlation coefficient > 0.9), indicating consistency with the ground truth and the superiority of the proposed method. In future work, we will apply our whole-brain segmentation method in clinical practice to assist doctors in accurately diagnosing and treating brain diseases.
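The core idea above, fusing PET and MR encoder features through a cross-fusion block, can be illustrated with a short sketch. The block below is a hedged, generic interpretation in PyTorch (gating each modality's features with the other before merging); the class name, gating design, and channel counts are assumptions and do not reproduce the published architecture.

```python
# Generic two-modality cross-fusion block (an assumption, not the paper's code).
import torch
import torch.nn as nn

class CrossFusionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Gates learned from the opposite modality modulate each feature map.
        self.pet_gate = nn.Sequential(nn.Conv3d(channels, channels, kernel_size=1), nn.Sigmoid())
        self.mr_gate = nn.Sequential(nn.Conv3d(channels, channels, kernel_size=1), nn.Sigmoid())
        self.merge = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, pet_feat: torch.Tensor, mr_feat: torch.Tensor) -> torch.Tensor:
        pet_attended = pet_feat * self.mr_gate(mr_feat)   # MR features guide PET features
        mr_attended = mr_feat * self.pet_gate(pet_feat)   # PET features guide MR features
        return self.merge(torch.cat([pet_attended, mr_attended], dim=1))

# Usage: fused = CrossFusionBlock(32)(pet_features, mr_features)
# where both inputs are (batch, 32, D, H, W) tensors from the two encoder branches.
```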
4
Lee S, Jung JH, Choi Y, Seok E, Jung J, Lim H, Kim D, Yun M. Cross-Modality Image Translation From Brain 18F-FDG PET/CT Images to Fluid-Attenuated Inversion Recovery Images Using the CypixGAN Framework. Clin Nucl Med 2024; 49:e557-e565. PMID: 39325494. DOI: 10.1097/rlu.0000000000005441.
Abstract
PURPOSE PET/CT and MRI can accurately diagnose dementia but are expensive and inconvenient for patients. Therefore, we aimed to generate synthetic fluid-attenuated inversion recovery (FLAIR) images from 18F-FDG PET and CT images of the human brain using a generative adversarial network (GAN)-based deep learning framework called CypixGAN, which combines the CycleGAN framework with the L1 loss function of pix2pix. PATIENTS AND METHODS Data from 143 patients who underwent PET/CT and MRI were used for training (n = 79), validation (n = 20), and testing (n = 44) of the deep learning frameworks. Synthetic FLAIR images were generated using pix2pix, CycleGAN, and CypixGAN, and white matter hyperintensities (WMHs) were then segmented. The performance of CypixGAN was compared with that of the other frameworks. RESULTS CypixGAN outperformed pix2pix and CycleGAN in generating synthetic FLAIR images with superior visual quality. The peak signal-to-noise ratio and structural similarity index (mean ± standard deviation) estimated using CypixGAN (20.23 ± 1.31 and 0.80 ± 0.02, respectively) were significantly higher than those estimated using pix2pix (19.35 ± 1.43 and 0.79 ± 0.02, respectively) and CycleGAN (18.74 ± 1.49 and 0.78 ± 0.02, respectively) (P < 0.001). WMHs in synthetic FLAIR images generated using CypixGAN closely resembled those in the ground-truth images, as indicated by low absolute percentage volume differences and high Dice similarity coefficients. CONCLUSIONS CypixGAN generated high-quality FLAIR images owing to the preservation of spatial information despite using unpaired images. This framework may help improve the diagnostic performance and cost-effectiveness of PET/CT when an MRI scan is unavailable.
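As described above, CypixGAN combines the CycleGAN framework with a pix2pix-style L1 term. The snippet below is a hedged sketch of what such a combined generator objective could look like in PyTorch; the function name, the least-squares adversarial form, and the weighting factors are assumptions rather than the published implementation.

```python
# Sketch of a CycleGAN-plus-L1 generator objective (assumed form, not the authors' code).
import torch
import torch.nn.functional as F

def cypixgan_generator_loss(disc_fake_flair, fake_flair, real_flair,
                            rec_pet, real_pet, rec_flair,
                            lambda_cycle: float = 10.0, lambda_l1: float = 100.0):
    """Generator-side loss: adversarial + cycle-consistency + paired L1 term."""
    # Adversarial term: fool the FLAIR discriminator (least-squares GAN form assumed).
    adv = F.mse_loss(disc_fake_flair, torch.ones_like(disc_fake_flair))
    # CycleGAN cycle-consistency terms (PET -> FLAIR -> PET and FLAIR -> PET -> FLAIR).
    cycle = F.l1_loss(rec_pet, real_pet) + F.l1_loss(rec_flair, real_flair)
    # pix2pix-style L1 between synthesized and ground-truth FLAIR.
    l1 = F.l1_loss(fake_flair, real_flair)
    return adv + lambda_cycle * cycle + lambda_l1 * l1
```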
Affiliation(s)
- Sangwon Lee
- Department of Electronic Engineering, Sogang University, Seoul, Republic of Korea
- Jin Ho Jung
- Department of Electronic Engineering, Sogang University, Seoul, Republic of Korea
- Yong Choi
- Department of Electronic Engineering, Sogang University, Seoul, Republic of Korea
- Eunyeong Seok
- Department of Electronic Engineering, Sogang University, Seoul, Republic of Korea
- Jiwoong Jung
- Department of Electronic Engineering, Sogang University, Seoul, Republic of Korea
- Hyunkeong Lim
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Mijin Yun
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
5
Hussain D, Al-Masni MA, Aslam M, Sadeghi-Niaraki A, Hussain J, Gu YH, Naqvi RA. Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: Methods, applications and limitations. J Xray Sci Technol 2024; 32:857-911. PMID: 38701131. DOI: 10.3233/xst-230429.
Abstract
BACKGROUND The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Muhammad Aslam
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Abolghasem Sadeghi-Niaraki
- Department of Computer Science & Engineering and Convergence Engineering for Intelligent Drone, XR Research Center, Sejong University, Seoul, Korea
- Jamil Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Korea
6
Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. PMID: 35451611. DOI: 10.1007/s00259-022-05805-w.
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in experimental settings over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. Relevant publications were identified via approved publication indexing websites and repositories; Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The search identified one hundred articles that address PET imaging applications such as attenuation correction, de-noising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented together with the corresponding research works. CONCLUSION GANs are being rapidly adopted for PET imaging tasks. However, specific limitations must be addressed before they can reach their full potential and gain the medical community's trust in everyday clinical practice.
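All of the applications catalogued in this review rest on the same adversarial training principle: a generator and a discriminator are updated in alternation. The sketch below is a generic, minimal illustration of that loop in PyTorch; the models, optimizers, and non-saturating BCE losses are placeholder assumptions, not taken from any of the surveyed papers.

```python
# Generic GAN alternating-update step (illustrative only).
import torch
import torch.nn.functional as F

def gan_training_step(generator, discriminator, g_opt, d_opt, source, target):
    # 1) Discriminator update: real target images vs. generated images (detached).
    with torch.no_grad():
        fake = generator(source)
    real_logits = discriminator(target)
    fake_logits = discriminator(fake)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Generator update: push the discriminator to label generated images as real.
    fake_logits = discriminator(generator(source))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```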
7
Yue Y, Li N, Shahid H, Bi D, Liu X, Song S, Ta D. Gross Tumor Volume Definition and Comparative Assessment for Esophageal Squamous Cell Carcinoma From 3D 18F-FDG PET/CT by Deep Learning-Based Method. Front Oncol 2022; 12:799207. PMID: 35372054. PMCID: PMC8967962. DOI: 10.3389/fonc.2022.799207.
Abstract
Background The accurate definition of the gross tumor volume (GTV) of esophageal squamous cell carcinoma (ESCC) can support precise determination of the irradiation field and thereby improve the curative effect of radiotherapy. This retrospective study assesses the applicability of a deep learning-based method for automatically defining the GTV from 3D 18F-FDG PET/CT images of patients diagnosed with ESCC. Methods We perform experiments on a clinical cohort of 164 18F-FDG PET/CT scans. A state-of-the-art esophageal GTV segmentation deep neural network is first employed to delineate the lesion area on the PET/CT images. We then propose a novel equivalent truncated elliptical cone integral method (ETECIM) to estimate the GTV value. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) are used to evaluate segmentation performance. The conformity index (CI), degree of inclusion (DI), and motion vector (MV) are used to assess differences between the predicted and ground-truth tumors. Statistical differences in GTV, DI, and position are also determined. Results We perform 4-fold cross-validation for evaluation, obtaining DSC, HD, and MSD values of 0.72 ± 0.02, 11.87 ± 4.20 mm, and 2.43 ± 0.60 mm (mean ± standard deviation), respectively. Pearson correlations (R2) reach 0.8434, 0.8004, 0.9239, and 0.7119 for the four cross-validation folds, and there is no significant difference (t = 1.193, p = 0.235) between the predicted and ground-truth GTVs. For DI, a significant difference is found (t = −2.263, p = 0.009). For position, there is no significant difference between the predicted and ground-truth GTVs (left-right, x direction: t = 0.102, p = 0.919; anterior-posterior, y direction: t = 0.221, p = 0.826; cranial-caudal, z direction: t = 0.569, p = 0.570). The median CI is 0.63, and the obtained MV is small. Conclusions The predicted tumors correspond well with the manual ground truth. The proposed GTV estimation approach, ETECIM, is more precise than the commonly used voxel-volume summation method. The ground-truth GTVs can be recovered from the predicted results owing to the good linear correlation between them. The deep learning-based method shows promise for GTV definition and clinical radiotherapy application.
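The abstract contrasts the proposed ETECIM volume estimate with plain voxel summation but does not spell out the formula. The sketch below shows both estimates in Python under the assumption that each inter-slice segment is treated as a truncated-cone (frustum) volume built from consecutive cross-sectional areas; the function names and that specific formula are assumptions, not the authors' published definition.

```python
# Hedged sketch of GTV estimation from a stack of segmented PET/CT slices.
import numpy as np

def gtv_voxel_summation(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Baseline: number of segmented voxels times the voxel volume."""
    return float(mask.sum()) * voxel_volume_mm3

def gtv_frustum_integration(mask: np.ndarray, pixel_area_mm2: float, slice_thickness_mm: float) -> float:
    """Assumed ETECIM-like estimate: sum truncated-cone volumes between consecutive slices."""
    # Per-slice cross-sectional areas of the segmented tumor; mask is (slices, H, W), binary.
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1) * pixel_area_mm2
    volume = 0.0
    for a1, a2 in zip(areas[:-1], areas[1:]):
        # Frustum volume between two parallel cross-sections (assumed formula).
        volume += slice_thickness_mm / 3.0 * (a1 + a2 + np.sqrt(a1 * a2))
    return float(volume)
```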
Affiliation(s)
- Yaoting Yue
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Nan Li
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Husnain Shahid
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Dongsheng Bi
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Correspondence: Xin Liu; Shaoli Song
- Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Correspondence: Xin Liu; Shaoli Song
- Dean Ta
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Academy for Engineering and Technology, Fudan University, Shanghai, China
8
18F-FDG-PET correlates of aging and disease course in ALS as revealed by distinct PVC approaches. Eur J Radiol Open 2022; 9:100394. PMID: 35059473. PMCID: PMC8760536. DOI: 10.1016/j.ejro.2022.100394.