1. Shih CT, Lin KH, Yang BH, Li CY, Lin TL, Mok GSP, Wu TH. Deriving tissue physical densities based on Dixon magnetic resonance images and tissue composition prior knowledge for voxel-based internal dosimetry. EJNMMI Phys 2025;12:36. PMID: 40192870; PMCID: PMC11977065; DOI: 10.1186/s40658-025-00737-4.
Abstract
BACKGROUND Magnetic resonance (MR) images have been applied in diagnostic and therapeutic nuclear medicine to improve the visualization and characterization of soft tissues and tumors. However, the physical density (ρ) and elemental composition of human tissues required for dosimetric calculation cannot be directly converted from MR images, obstructing MR-based personalized internal dosimetry. In this study, we proposed a method to derive physical densities from Dixon MR images for voxel-based internal dose calculation. METHODS The proposed method modeled human tissues as mixtures of four basic tissues. The physical densities were calculated from the standard compositions of the basic tissues and the volume-fraction maps computed from the Dixon images. The derived ρ map was then used to calculate whole-body internal dose with a multiple voxel S-value (MSV) approach. The accuracy of the proposed method in deriving ρ and in calculating the internal dose of 18F-FDG PET imaging was evaluated against results obtained from computed tomography (CT) images of the same patients, and was also compared with results obtained using generative adversarial networks (GANs). RESULTS The proposed method was superior to the GANs both in deriving ρ from Dixon MR images and in the subsequent internal dose calculation. Averaged over the validation set, the mean absolute percent errors (MAPEs) of the whole-body ρ derivation and internal dose calculation were 14.28 ± 11.11% and 3.31 ± 0.69%, respectively. The MAPEs were reduced to 5.97 ± 2.51% and 2.75 ± 0.69%, respectively, after excluding intestinal gas located differently in the Dixon MR and CT images. CONCLUSIONS The proposed method could be applied for accurate and efficient personalized internal dosimetry in MR-integrated nuclear medicine clinical applications.
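The density derivation described here reduces to a voxel-wise weighted sum over basic-tissue volume fractions. A minimal numpy sketch of that step, assuming four illustrative tissue classes and reference densities (not the paper's exact composition tables), with fraction maps that sum to one per voxel:

```python
import numpy as np

# Reference physical densities (g/cm^3) for four basic tissues.
# Illustrative ICRU-like values, not the paper's composition table.
RHO = {"air": 0.0012, "fat": 0.95, "soft_tissue": 1.05, "bone": 1.92}

def density_from_fractions(fractions):
    """Voxel-wise physical density as a fraction-weighted sum of basic tissues.

    `fractions` maps tissue name -> volume-fraction map (same shape, sums to ~1).
    """
    names = list(RHO)
    total = sum(fractions[n] for n in names)
    if not np.allclose(total, 1.0, atol=1e-3):
        raise ValueError("volume fractions must sum to 1 per voxel")
    return sum(RHO[n] * fractions[n] for n in names)

# Toy example: a 2x2 slice that is half fat, half soft tissue everywhere.
f = np.full((2, 2), 0.5)
z = np.zeros((2, 2))
print(density_from_fractions({"air": z, "fat": f, "soft_tissue": f, "bone": z}))
```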
Affiliation(s)
- Cheng-Ting Shih: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- Ko-Han Lin: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Bang-Hung Yang: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Chien-Ying Li: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Tzu-Lin Lin: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- Greta S P Mok: Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Tung-Hsin Wu: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
2. Cao B, Qi G, Zhao J, Zhu P, Hu Q, Gao X. RTF: Recursive TransFusion for Multi-Modal Image Synthesis. IEEE Trans Image Process 2025;34:1573-1587. PMID: 40031796; DOI: 10.1109/tip.2025.3541877.
Abstract
Multi-modal image synthesis is crucial for obtaining complete modalities given the imaging restrictions encountered in practice. Current methods, primarily CNN-based models, find it challenging to extract global representations because of their local inductive bias, leading to structural deformation or color distortion in the synthesized images. Although transformers have a significant ability to capture long-range dependencies in global representations, their huge parameter counts require considerable training data. Multi-modal synthesis based solely on one of the two structures makes it hard to extract comprehensive information from each modality with limited data. To tackle this dilemma, we propose a simple yet effective Recursive TransFusion (RTF) framework for multi-modal image synthesis. Specifically, we develop a TransFusion unit that integrates local knowledge extracted from the individual modalities by connecting a CNN-based local representation block (LRB) and a transformer-based global fusion block (GFB) via a feature translating gate (FTG). Considering the numerous parameters introduced by the transformer, we further unfold the TransFusion unit repeatedly under a recursive constraint, forming the recursive TransFusion (RTF), which progressively extracts multi-modal information at different depths. RTF remarkably reduces network parameters while maintaining superior performance. Extensive experiments validate our superiority against competing methods on multiple benchmarks. The source code will be available at https://github.com/guoliangq/RTF.
3. Lee C, Yoon YH, Sung J, Kim JW, Cho Y, Kim J, Chun J, Kim JS. Abdominal synthetic CT generation for MR-only radiotherapy using structure-conserving loss and transformer-based cycle-GAN. Front Oncol 2025;14:1478148. PMID: 39830649; PMCID: PMC11739088; DOI: 10.3389/fonc.2024.1478148.
Abstract
Purpose Recent deep-learning-based synthetic computed tomography (sCT) generation from magnetic resonance (MR) images has shown promising results. However, generating sCT for the abdominal region poses challenges due to patient motion, including respiration and peristalsis. To address these challenges, this study investigated an unsupervised learning approach using a transformer-based cycle-GAN with a structure-preserving loss for abdominal cancer patients. Method A total of 120 T2 MR images scanned on a 1.5 T Unity MR-Linac and their corresponding CT images for abdominal cancer patients were collected. Patient data were aligned using rigid registration. The study employed a cycle-GAN architecture incorporating a modified Swin-UNETR as the generator. A modality-independent neighborhood descriptor (MIND) loss was used for geometric consistency. Image quality was compared between sCT and planning CT using metrics including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Kullback-Leibler (KL) divergence. Dosimetric accuracy was evaluated between sCT and planning CT using gamma analysis and relative dose-volume histogram differences for each organ-at-risk, utilizing the treatment plan. A comparison study was conducted between the original cycle-GAN, Swin-UNETR-only, MIND-only, and proposed models. Results The MAE, PSNR, SSIM, and KL divergence of the original cycle-GAN and the proposed method were 86.1 HU, 26.48 dB, 0.828, 0.448 and 79.52 HU, 27.05 dB, 0.845, 0.230, respectively. The differences in MAE and PSNR were statistically significant. The global gamma passing rates of the proposed method at 1%/1 mm, 2%/2 mm, and 3%/3 mm were 86.1 ± 5.9%, 97.1 ± 2.7%, and 98.9 ± 1.0%, respectively. Conclusion The proposed method significantly improves the image metrics of sCT for abdominal patients compared with the original cycle-GAN, and the local gamma passing rate was slightly higher. This study showed that sCT can be improved using a transformer and a structure-preserving loss even with the complex anatomy of the abdomen.
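Several entries in this list report MAE (in HU), PSNR, and SSIM between synthetic and planning CT. A sketch of these standard metrics using scikit-image, where the 2000 HU dynamic range is an assumed normalization, not the paper's setting:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sct_image_metrics(ct, sct, hu_range=2000.0):
    """MAE (HU), PSNR (dB) and SSIM between a planning CT and a synthetic CT.

    `hu_range` is the dynamic range handed to PSNR/SSIM; 2000 HU is an
    illustrative choice (e.g. -1000..1000), not the paper's setting.
    """
    mae = float(np.mean(np.abs(ct - sct)))
    psnr = peak_signal_noise_ratio(ct, sct, data_range=hu_range)
    ssim = structural_similarity(ct, sct, data_range=hu_range)
    return {"MAE_HU": mae, "PSNR_dB": psnr, "SSIM": ssim}

rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 1000, size=(64, 64))
sct = ct + rng.normal(0, 40, size=ct.shape)   # synthetic CT with ~40 HU noise
print(sct_image_metrics(ct, sct))
```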
Affiliation(s)
- Chanwoong Lee: Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Young Hun Yoon: Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea; Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States
- Jiwon Sung: Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jun Won Kim: Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Yeona Cho: Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jihun Kim: Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jin Sung Kim: Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea; Oncosoft Inc., Seoul, Republic of Korea
4. Roh J, Ryu D, Lee J. CT synthesis with deep learning for MR-only radiotherapy planning: a review. Biomed Eng Lett 2024;14:1259-1278. PMID: 39465111; PMCID: PMC11502731; DOI: 10.1007/s13534-024-00430-y.
Abstract
MR-only radiotherapy planning is beneficial in terms of both time and safety, since it uses synthetic CT instead of real CT scans for radiotherapy dose calculation. To raise the accuracy of treatment planning and bring the results into practice, various methods have been adopted, among which deep learning models for image-to-image translation have shown good performance by retaining domain-invariant structures while changing domain-specific details. In this paper, we present an overview of diverse deep learning approaches to MR-to-CT synthesis, divided into four classes: convolutional neural networks, generative adversarial networks, transformer models, and diffusion models. By comparing each model and analyzing the general approaches applied to this task, the potential of these models and ways to improve the current methods can be evaluated.
Affiliation(s)
- Junghyun Roh: Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology, 50, Unist-gil, Ulsan, 44919, Republic of Korea
- Dongmin Ryu: Program in Biomedical Radiation Sciences, Seoul National University, 71, Ihwajang-gil, Seoul, 03087, Republic of Korea
- Jimin Lee: Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology, 50, Unist-gil, Ulsan, 44919, Republic of Korea; Department of Nuclear Engineering, Ulsan National Institute of Science and Technology, 50, Unist-gil, Ulsan, 44919, Republic of Korea; Graduate School of Health Science and Technology, Ulsan National Institute of Science and Technology, 50, Unist-gil, Ulsan, 44919, Republic of Korea
5. Zhou L, Li G. Reliable multi-modal medical image-to-image translation independent of pixel-wise aligned data. Med Phys 2024;51:8283-8301. PMID: 39153225; DOI: 10.1002/mp.17362.
Abstract
BACKGROUND The current mainstream multi-modal medical image-to-image translation methods face a contradiction. Supervised methods with outstanding performance rely on pixel-wise aligned training data to constrain the model optimization. However, obtaining pixel-wise aligned multi-modal medical image datasets is challenging. Unsupervised methods can be trained without paired data, but their reliability cannot be guaranteed. At present, there is no ideal multi-modal medical image-to-image translation method that can generate reliable translation results without pixel-wise aligned data. PURPOSE This work aims to develop a novel medical image-to-image translation model that is independent of pixel-wise aligned data (MITIA), enabling reliable multi-modal medical image-to-image translation under misaligned training data. METHODS The proposed MITIA model utilizes a prior extraction network composed of a multi-modal medical image registration module and a multi-modal misalignment error detection module to extract pixel-level prior information, to the largest extent, from training data containing misalignment errors. The extracted prior information is then used to construct a regularization term to constrain the optimization of the unsupervised cycle-consistent generative adversarial network, restricting its solution space and thereby improving the performance and reliability of the generator. We trained the MITIA model using six datasets containing different misalignment errors and two well-aligned datasets. Subsequently, we conducted quantitative analysis using peak signal-to-noise ratio and structural similarity as metrics. Moreover, we compared the proposed method with six other state-of-the-art image-to-image translation methods. RESULTS The results of both quantitative analysis and qualitative visual inspection indicate that MITIA achieves superior performance compared with the competing state-of-the-art methods, on both misaligned and aligned data. Furthermore, MITIA shows more stability in the presence of misalignment errors in the training data, regardless of their severity or type. CONCLUSIONS The proposed method achieves outstanding performance in multi-modal medical image-to-image translation tasks without aligned training data. Given the difficulty of obtaining pixel-wise aligned data for medical image translation tasks, MITIA is expected to offer significant application value in this scenario compared with existing methods.
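The core idea, pixel-level prior information used as a regularization term on top of an unsupervised cycle objective, can be sketched as below. The mask construction, loss weights, and inputs are placeholder assumptions, not MITIA's actual components:

```python
import torch

def prior_regularized_loss(fake_tgt, cycled_src, src, prior_tgt, valid_mask,
                           adv_loss, lambda_cyc=10.0, lambda_prior=5.0):
    """Cycle-GAN objective plus a pixel-level prior term.

    `prior_tgt` is a registered target image and `valid_mask` marks voxels a
    misalignment detector judged well aligned (1) vs. misaligned (0). Weights
    and the mask construction are illustrative assumptions.
    """
    cyc = torch.nn.functional.l1_loss(cycled_src, src)
    prior = (valid_mask * (fake_tgt - prior_tgt).abs()).sum() / valid_mask.sum().clamp(min=1)
    return adv_loss + lambda_cyc * cyc + lambda_prior * prior

# Toy call: perfect cycle and perfect prior agreement leave only the adv term.
x = torch.randn(1, 1, 8, 8)
print(prior_regularized_loss(x, x, x, x, torch.ones_like(x), adv_loss=torch.tensor(0.5)))
```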
Affiliation(s)
- Langrui Zhou: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Guang Li: Jiangsu Key Laboratory for Biomaterials and Devices, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
6. Wang R, Heimann AF, Tannast M, Zheng G. CycleSGAN: A cycle-consistent and semantics-preserving generative adversarial network for unpaired MR-to-CT image synthesis. Comput Med Imaging Graph 2024;117:102431. PMID: 39243464; DOI: 10.1016/j.compmedimag.2024.102431.
Abstract
CycleGAN has been leveraged to synthesize a CT image from an available MR image after being trained on unpaired data. Due to the lack of direct constraints between the synthetic and the input images, CycleGAN cannot guarantee structural consistency and often generates inaccurate mappings that shift the anatomy, which is highly undesirable for downstream clinical applications such as MRI-guided radiotherapy treatment planning and PET/MRI attenuation correction. In this paper, we propose a cycle-consistent and semantics-preserving generative adversarial network, referred to as CycleSGAN, for unpaired MR-to-CT image synthesis. Our design features a novel and generic way to incorporate semantic information into CycleGAN. This is done by designing a pair of three-player games within the CycleGAN framework, where each three-player game consists of one generator and two discriminators that formulate two distinct types of adversarial learning: appearance adversarial learning and structure adversarial learning. These two types of adversarial learning are alternately trained to ensure both realistic image synthesis and semantic structure preservation. Results on unpaired hip MR-to-CT image synthesis show that our method produces better synthetic CT images in both accuracy and visual quality compared with other state-of-the-art (SOTA) unpaired MR-to-CT image synthesis methods.
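A rough PyTorch sketch of the generator side of one three-player game, with one discriminator on raw appearance and one on semantic structure. The hinge-style losses, the frozen-segmenter stand-in `seg_fn`, and the weights are illustrative assumptions, not CycleSGAN's exact formulation:

```python
import torch

def generator_step(G, D_app, D_str, mr, seg_fn, w_app=1.0, w_str=1.0):
    """One generator update combining appearance and structure adversarial terms.

    `seg_fn` stands in for whatever produces a semantic map of the synthetic CT
    (e.g. a frozen segmenter); all players and weights are illustrative.
    """
    fake_ct = G(mr)
    loss_app = -D_app(fake_ct).mean()          # appearance game: raw realism
    loss_str = -D_str(seg_fn(fake_ct)).mean()  # structure game: layout realism
    loss = w_app * loss_app + w_str * loss_str
    loss.backward()
    return loss.item()

# Toy modules standing in for the real generator/discriminators.
G = torch.nn.Conv2d(1, 1, 3, padding=1)
D = torch.nn.Sequential(torch.nn.Conv2d(1, 1, 3, padding=1), torch.nn.Flatten())
print(generator_step(G, D, D, torch.randn(2, 1, 32, 32), seg_fn=torch.sigmoid))
```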
Affiliation(s)
- Runze Wang: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
- Alexander F Heimann: Department of Orthopaedic Surgery, HFR Cantonal Hospital, University of Fribourg, Fribourg, Switzerland
- Moritz Tannast: Department of Orthopaedic Surgery, HFR Cantonal Hospital, University of Fribourg, Fribourg, Switzerland
- Guoyan Zheng: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
7. Li W, Huang Z, Chen Z, Jiang Y, Zhou C, Zhang X, Fan W, Zhao Y, Zhang L, Wan L, Yang Y, Zheng H, Liang D, Hu Z. Learning CT-free attenuation-corrected total-body PET images through deep learning. Eur Radiol 2024;34:5578-5587. PMID: 38355987; DOI: 10.1007/s00330-024-10647-1.
Abstract
OBJECTIVES Total-body PET/CT scanners with long axial fields of view have enabled unprecedented image quality and quantitative accuracy. However, the ionizing radiation from CT is a major issue in PET imaging, which becomes more evident with reduced radiopharmaceutical doses in total-body PET/CT. Therefore, we attempted to generate CT-free attenuation-corrected (CTF-AC) total-body PET images through deep learning. METHODS Based on total-body PET data from 122 subjects (29 females and 93 males), a well-established cycle-consistent generative adversarial network (Cycle-GAN) was employed to generate CTF-AC total-body PET images directly, introducing site structures as prior information. Statistical analyses, including the Pearson correlation coefficient (PCC) and t-tests, were utilized for the correlation measurements. RESULTS The generated CTF-AC total-body PET images closely resembled real AC PET images, showing reduced noise and good contrast in different tissue structures. The obtained peak signal-to-noise ratio and structural similarity index measure values were 36.92 ± 5.49 dB (p < 0.01) and 0.980 ± 0.041 (p < 0.01), respectively. Furthermore, the standardized uptake value (SUV) distribution was consistent with that of real AC PET images. CONCLUSION Our approach can directly generate CTF-AC total-body PET images, greatly reducing the radiation risk to patients from redundant anatomical examinations. Moreover, the model was validated on a multi-dose-level NAC-AC PET dataset, demonstrating the potential of our method for low-dose PET attenuation correction. In future work, we will attempt to validate the proposed method with total-body PET/CT systems in more clinical settings. CLINICAL RELEVANCE STATEMENT The ionizing radiation from CT is a major issue in PET imaging, which becomes more evident with reduced radiopharmaceutical doses in total-body PET/CT. Our CT-free PET attenuation correction method would benefit a wide range of patient populations, especially pediatric examinations and patients who need multiple scans or long-term follow-up. KEY POINTS • CT is the main source of radiation in PET/CT imaging, especially for total-body PET/CT devices, and reduced radiopharmaceutical doses make the radiation burden from CT more obvious. • A CT-free PET attenuation correction method would benefit patients who need multiple scans or long-term follow-up by removing the additional radiation of redundant anatomical examinations. • The proposed method can directly generate CT-free attenuation-corrected (CTF-AC) total-body PET images, which is beneficial for PET/MRI or PET-only systems that lack CT images.
Affiliation(s)
- Wenbo Li: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, 101408, China
- Zhenxing Huang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zixiang Chen: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, 101408, China
- Yongluo Jiang: Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Chao Zhou: Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Xu Zhang: Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Wei Fan: Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Yumo Zhao: Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Lulu Zhang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Liwen Wan: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yongfeng Yang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
8. Wang Z, Yang Y, Chen Y, Yuan T, Sermesant M, Delingette H, Wu O. Mutual Information Guided Diffusion for Zero-Shot Cross-Modality Medical Image Translation. IEEE Trans Med Imaging 2024;43:2825-2838. PMID: 38551825; PMCID: PMC11580158; DOI: 10.1109/tmi.2024.3382043.
Abstract
Cross-modality data translation has attracted great interest in medical image computing. Deep generative models show performance improvements in addressing the related challenges. Nevertheless, as a fundamental challenge in image translation, the problem of zero-shot cross-modality image translation with fidelity remains unsolved. To bridge this gap, we propose a novel unsupervised zero-shot learning method called the Mutual Information guided Diffusion model (MIDiffusion), which learns to translate an unseen source image to the target modality by leveraging the inherent statistical consistency of mutual information between different modalities. To overcome the prohibitively high-dimensional mutual information calculation, we propose a differentiable local-wise mutual information layer for conditioning the iterative denoising process. The local-wise mutual information layer captures identical cross-modality features in the statistical domain, offering diffusion guidance without relying on direct mappings between the source and target domains. This advantage allows our method to adapt to changing source domains without the need for retraining, making it highly practical when sufficient labeled source-domain data are not available. We demonstrate the superior performance of MIDiffusion in zero-shot cross-modality translation tasks through empirical comparisons with other generative models, including adversarial-based and diffusion-based models. Finally, we showcase the real-world application of MIDiffusion in 3D zero-shot learning-based cross-modality image segmentation tasks.
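For intuition, a plain histogram estimate of mutual information between two modality patches is sketched below; the paper's contribution is a differentiable, local-wise version of this quantity, which this global numpy estimator is not:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information between two image patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                     # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
t1 = rng.normal(size=(64, 64))
t2 = 0.7 * t1 + 0.3 * rng.normal(size=(64, 64))  # correlated "other modality"
print(mutual_information(t1, t2))                 # > MI of independent noise
```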
9. Touati R, Trung Le W, Kadoury S. Multi-planar dual adversarial network based on dynamic 3D features for MRI-CT head and neck image synthesis. Phys Med Biol 2024;69:155012. PMID: 38981593; DOI: 10.1088/1361-6560/ad611a.
Abstract
Objective. Head and neck radiotherapy planning requires electron densities of different tissues for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem, since this modality does not provide information about electron density. Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features that are relevant for improving multimodal image synthesis, and thus the quality of the generated CT images. More precisely, we propose a dual-branch generator based on the U-Net architecture and on an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe the dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network. Results. The proposed model achieves a mean absolute error (MAE) of 18.76 ± 5.167 in the target Hounsfield unit (HU) space on sagittal head and neck acquisitions, with a mean structural similarity (MSSIM) of 0.95 ± 0.09 and a Fréchet inception distance (FID) of 145.60 ± 8.38. The model yields an MAE of 26.83 ± 8.27 when generating specific primary tumor regions on axial acquisitions, with a Dice score of 0.73 ± 0.06 and an FID of 122.58 ± 7.55. The improvement of our model over other state-of-the-art GAN approaches is 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model yields the best peak signal-to-noise ratios of 27.89 ± 2.22 and 26.08 ± 2.95 when synthesizing MRI from CT input. Significance. The proposed model synthesizes both sagittal and axial CT tumor images used for radiotherapy treatment planning in head and neck cancer cases. The performance analysis across different imaging metrics and under different evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared with other state-of-the-art approaches. Our model could improve clinical tumor analysis, although further clinical validation remains to be explored.
Affiliation(s)
- Redha Touati: MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- William Trung Le: MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- Samuel Kadoury: MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada; CHUM Research Center, Montreal, QC, Canada
10. Dalmaz O, Mirza MU, Elmas G, Ozbey M, Dar SUH, Ceyani E, Oguz KK, Avestimehr S, Çukur T. One model to unite them all: Personalized federated learning of multi-contrast MRI synthesis. Med Image Anal 2024;94:103121. PMID: 38402791; DOI: 10.1016/j.media.2024.103121.
Abstract
Curation of large, diverse MRI datasets via multi-institutional collaborations can help improve learning of generalizable synthesis models that reliably translate source- onto target-contrast images. To facilitate collaborations, federated learning (FL) adopts decentralized model training while mitigating privacy concerns by avoiding sharing of imaging data. However, conventional FL methods can be impaired by the inherent heterogeneity in the data distribution, with domain shifts evident within and across imaging sites. Here we introduce the first personalized FL method for MRI Synthesis (pFLSynth) that improves reliability against data heterogeneity via model specialization to individual sites and synthesis tasks (i.e., source-target contrasts). To do this, pFLSynth leverages an adversarial model equipped with novel personalization blocks that control the statistics of generated feature maps across the spatial/channel dimensions, given latent variables specific to sites and tasks. To further promote communication efficiency and site specialization, partial network aggregation is employed over later generator stages while earlier generator stages and the discriminator are trained locally. As such, pFLSynth enables multi-task training of multi-site synthesis models with high generalization performance across sites and tasks. Comprehensive experiments demonstrate the superior performance and reliability of pFLSynth in MRI synthesis against prior federated methods.
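The partial network aggregation described above can be sketched as federated averaging restricted to parameter names in the later generator stages; the `up_blocks` prefix and the unweighted mean are illustrative assumptions, not pFLSynth's exact scheme:

```python
import torch

def partial_fedavg(site_models, shared_prefix="up_blocks"):
    """Average only parameters whose names start with `shared_prefix`.

    Earlier generator stages (and discriminators) stay local; the prefix and
    the plain unweighted average are illustrative assumptions.
    """
    states = [m.state_dict() for m in site_models]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0] if k.startswith(shared_prefix)}
    for m in site_models:
        m.load_state_dict(avg, strict=False)  # overwrite shared layers only
    return site_models

# Toy usage: three "sites", sharing the only layer of a one-layer model.
sites = [torch.nn.Sequential(torch.nn.Linear(4, 4)) for _ in range(3)]
partial_fedavg(sites, shared_prefix="0")
print(torch.allclose(sites[0][0].weight, sites[1][0].weight))  # True
```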
Affiliation(s)
- Onat Dalmaz: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Muhammad U Mirza: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Gokberk Elmas: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Muzaffer Ozbey: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Salman U H Dar: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Emir Ceyani: Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA
- Kader K Oguz: Department of Radiology, University of California, Davis Medical Center, Sacramento, CA 95817, USA
- Salman Avestimehr: Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA
- Tolga Çukur: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Program, Bilkent University, Ankara 06800, Turkey
11. Kim H, Yoo SK, Kim JS, Kim YT, Lee JW, Kim C, Hong CS, Lee H, Han MC, Kim DW, Kim SY, Kim TM, Kim WH, Kong J, Kim YB. Clinical feasibility of deep learning-based synthetic CT images from T2-weighted MR images for cervical cancer patients compared to MRCAT. Sci Rep 2024;14:8504. PMID: 38605094; PMCID: PMC11009270; DOI: 10.1038/s41598-024-59014-6.
Abstract
This work aims to investigate the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them to MR for Calculating ATtenuation (MRCAT) images. A patient cohort with 50 pairs of T2-weighted MR and CT images from cervical cancer patients was split into 40 for the training and 10 for the testing phase. We conducted deformable image registration and Nyul intensity normalization of the MR images to maximize the similarity between MR and CT images as a preprocessing step. The processed images were fed into a deep learning model, a generative adversarial network. To prove clinical feasibility, we assessed the accuracy of the synthetic CT images in terms of image similarity, using the structural similarity index (SSIM) and mean absolute error (MAE), and dosimetric similarity, using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. Synthetic CT images generated by deep learning outperformed MRCAT images in image similarity, by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, which were 0.9% and 5.1% higher than those of MRCAT images.
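A brute-force sketch of the global gamma passing rate used for such dosimetric comparisons; clinical QA tools interpolate, work in 3D, and handle local normalization, none of which this toy 2D version does:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dd=0.01, dta_mm=1.0, cutoff=0.1):
    """Naive global 2D gamma passing rate (dose difference dd, DTA in mm).

    Brute-force search in a small window around each point; `cutoff` masks
    points below a fraction of the reference maximum. Illustrative only.
    """
    dmax = ref.max()
    half = int(np.ceil(2 * dta_mm / spacing_mm))          # search radius in voxels
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    dist2 = ((ys * spacing_mm) ** 2 + (xs * spacing_mm) ** 2) / dta_mm ** 2
    pad = np.pad(ev, half, mode="edge")
    passed, total = 0, 0
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            if ref[i, j] < cutoff * dmax:
                continue
            window = pad[i:i + 2 * half + 1, j:j + 2 * half + 1]
            ddiff2 = ((window - ref[i, j]) / (dd * dmax)) ** 2  # global normalization
            passed += (ddiff2 + dist2).min() <= 1.0
            total += 1
    return passed / total

dose = np.fromfunction(lambda i, j: np.exp(-((i - 32)**2 + (j - 32)**2) / 200), (64, 64))
print(gamma_pass_rate(dose, dose * 1.005, spacing_mm=1.0))   # 0.5% offset: passes 1%/1mm
```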
Affiliation(s)
- Hojin Kim, Sang Kyun Yoo, Jin Sung Kim, Yong Tae Kim, Jai Wo Lee, Changhwan Kim, Chae-Seon Hong, Ho Lee, Min Cheol Han, Dong Wook Kim, Se Young Kim, Tae Min Kim, Woo Hyoung Kim, Jayoung Kong, Yong Bae Kim: Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
12. Wei K, Kong W, Liu L, Wang J, Li B, Zhao B, Li Z, Zhu J, Yu G. CT synthesis from MR images using frequency attention conditional generative adversarial network. Comput Biol Med 2024;170:107983. PMID: 38286104; DOI: 10.1016/j.compbiomed.2024.107983.
Abstract
Magnetic resonance (MR) image-guided radiotherapy is widely used in the treatment planning of malignant tumors, and MR-only radiotherapy, a representative of this technique, requires synthetic computed tomography (sCT) images for effective radiotherapy planning. Convolutional neural networks (CNNs) have shown remarkable performance in generating sCT images. However, CNN-based models tend to synthesize more low-frequency components, and the pixel-wise loss function usually used to optimize them can result in blurred images. To address these problems, a frequency attention conditional generative adversarial network (FACGAN) is proposed in this paper. Specifically, a frequency cycle generative model (FCGM) is designed to enhance the mutual mapping between MR and CT and to extract richer tissue structure information. Additionally, a residual frequency channel attention (RFCA) module is proposed and incorporated into the generator to enhance its ability to perceive high-frequency image features. Finally, a high-frequency loss (HFL) and a cycle-consistency high-frequency loss (CHFL) are added to the objective function to optimize model training. The effectiveness of the proposed model is validated on pelvic and brain datasets and compared with state-of-the-art deep learning models. The results show that FACGAN produces higher-quality sCT images while retaining clearer and richer high-frequency texture information.
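The high-frequency loss idea, penalizing differences only in the upper part of the spectrum, can be sketched with an FFT mask in PyTorch; the circular mask and radius are assumptions, not FACGAN's exact HFL definition:

```python
import torch

def high_frequency_loss(pred, target, radius_frac=0.1):
    """L1 loss restricted to high spatial frequencies via an FFT mask.

    `radius_frac` sets the excluded low-frequency disc as a fraction of the
    spectrum size; mask shape and weighting are illustrative assumptions.
    """
    fp = torch.fft.fftshift(torch.fft.fft2(pred), dim=(-2, -1))
    ft = torch.fft.fftshift(torch.fft.fft2(target), dim=(-2, -1))
    h, w = pred.shape[-2:]
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    r2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    mask = (r2 > (radius_frac * min(h, w)) ** 2).to(pred.dtype)  # keep high freqs
    return (mask * (fp - ft).abs()).mean()

x = torch.randn(1, 1, 64, 64)
print(high_frequency_loss(x, x + 0.05 * torch.randn_like(x)).item())
```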
Affiliation(s)
- Kexin Wei: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Weipeng Kong: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Liheng Liu: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Jian Wang: Department of Radiology, Central Hospital Affiliated to Shandong First Medical University, Jinan, China
- Baosheng Li: Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Bo Zhao: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zhenjiang Li: Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Jian Zhu: Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Gang Yu: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
13. Sun M, Zhu Y, Li H, Ye J, Li N. ACnerf: enhancement of neural radiance field by alignment and correction of pose to reconstruct new views from a single x-ray. Phys Med Biol 2024;69:045016. PMID: 38211316; DOI: 10.1088/1361-6560/ad1d6c.
Abstract
Objective. Computed tomography (CT) is widely used in medical research and clinical diagnosis. However, acquiring CT data requires patients to be exposed to considerable ionizing radiation, leading to physical harm. Recent studies have considered using neural radiance field (NeRF) techniques to infer full-view CT projections from a single-view x-ray projection, thus aiding physician judgment and reducing radiation hazards. This paper enhances this technique in two directions: (1) accurate generalization capabilities for the model, and (2) consideration of different ranges of viewpoints. Approach. Building upon generative radiance fields (GRAF), we propose a method called ACnerf to enhance the generalization of the NeRF through alignment and pose correction. ACnerf aligns with a reference single x-ray by utilizing a combination of positional encoding with Gaussian random noise (latent code) obtained from GRAF training. This approach avoids compromising the 3D structure caused by altering the generator. During inference, a pose-judgment network is employed to correct the pose and optimize the rendered viewpoint. Additionally, when generating a narrow range of views, ACnerf employs frequency-domain regularization to fine-tune the generator and achieve precise projections. Main results. The proposed ACnerf method surpasses the state-of-the-art NeRF technique in terms of rendering quality for knee and chest data with varying contrasts. It achieved an average improvement of 2.496 dB in PSNR and 41% in LPIPS for 0°-360° projections. Additionally, for -15° to 15° projections, ACnerf achieved an average improvement of 0.691 dB in PSNR and 25.8% in LPIPS. Significance. With adjustments in alignment, inference, and rendering range, our experiments and evaluations on knee and chest data of different contrasts show that ACnerf effectively reduces artifacts and aberrations in the new views. ACnerf's ability to recover more accurate 3D structures from single x-rays has excellent potential for reducing the damage from ionizing radiation in clinical diagnostics.
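ACnerf builds on positional encoding, the standard NeRF ingredient mentioned above. A minimal sketch of that encoding, with the frequency count as an illustrative default:

```python
import torch

def positional_encoding(x, n_freqs=10):
    """NeRF-style encoding: gamma(x) = (sin(2^k pi x), cos(2^k pi x)), k < n_freqs.

    `x` holds coordinates in [-1, 1] with shape (..., d); the output has shape
    (..., 2 * n_freqs * d). The frequency count is an illustrative default.
    """
    freqs = 2.0 ** torch.arange(n_freqs) * torch.pi          # (n_freqs,)
    ang = x.unsqueeze(-1) * freqs                            # (..., d, n_freqs)
    enc = torch.cat([ang.sin(), ang.cos()], dim=-1)          # (..., d, 2*n_freqs)
    return enc.flatten(-2)                                   # (..., 2*n_freqs*d)

pts = torch.rand(4, 3) * 2 - 1          # four 3-D sample points
print(positional_encoding(pts).shape)   # torch.Size([4, 60])
```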
Affiliation(s)
- Mengcheng Sun: School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, People's Republic of China
- Yu Zhu: School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, People's Republic of China
- Hangyu Li: School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, People's Republic of China
- Jiongyao Ye: School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, People's Republic of China
- Nan Li: Department of Orthopedics, 96603 Military Hospital of PLA, Huaihua 418000, People's Republic of China
14. Masad IS, Abu-Qasmieh IF, Al-Quran HH, Alawneh KZ, Abdalla KM, Al-Qudah AM. CT-based generation of synthetic-pseudo MR images with different weightings for human knee. Comput Biol Med 2024;169:107842. PMID: 38096761; DOI: 10.1016/j.compbiomed.2023.107842.
Abstract
Synthetic MR images are generated for their high soft-tissue contrast while avoiding the discomfort caused by long acquisition times and by placing claustrophobic patients in the MR scanner's confined space. The aim of this study is to generate synthetic pseudo-MR images from a real CT image of the knee region in vivo. 19 healthy subjects were scanned for model training, while 13 other healthy subjects were imaged for testing. The approach used in this work is novel in that registration was performed between the MR and CT images, and the femur bone, patella, and surrounding soft tissue were segmented on the CT image. Each tissue type was mapped to the mean and standard deviation of the CT numbers within a window moving over each pixel of the reconstructed CT images, which enabled remapping the tissue to its intrinsic MRI parameters: T1, T2, and proton density (ρ). To generate the synthetic MR image of a knee slice, a classic spin-echo sequence was simulated using proper intrinsic and contrast parameters. Results showed that the synthetic MR images were comparable to real images acquired with the same TE and TR values; the average slope between them (for all knee segments) was 0.98, while the average percentage root mean square difference (PRD) was 25.7%. In conclusion, this study has shown the feasibility and validity of accurately generating synthetic MR images of the knee region in vivo with different weightings from a single real CT image.
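The simulated sequence is the classic spin-echo signal equation, S = ρ(1 − e^(−TR/T1)) e^(−TE/T2). A toy sketch, with ballpark 1.5 T tissue parameters and textbook TR/TE pairs that are assumptions rather than the study's protocol:

```python
import numpy as np

def spin_echo_signal(rho, t1_ms, t2_ms, tr_ms, te_ms):
    """Classic spin-echo signal: S = rho * (1 - exp(-TR/T1)) * exp(-TE/T2).

    Voxel-wise maps in, synthetic weighted image out; the TR/TE pairs below
    are typical textbook choices, not necessarily the paper's protocol.
    """
    return rho * (1 - np.exp(-tr_ms / t1_ms)) * np.exp(-te_ms / t2_ms)

# Toy two-tissue slice: muscle-like vs. fat-like parameters (1.5 T ballpark).
rho = np.array([[1.0, 1.0]])
t1 = np.array([[900.0, 260.0]])
t2 = np.array([[50.0, 80.0]])
print("T1w:", spin_echo_signal(rho, t1, t2, tr_ms=500, te_ms=15))   # fat bright
print("T2w:", spin_echo_signal(rho, t1, t2, tr_ms=3000, te_ms=90))
```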
Affiliation(s)
- Ihssan S Masad: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
- Isam F Abu-Qasmieh: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
- Hiam H Al-Quran: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
- Khaled Z Alawneh: Department of Diagnostic Radiology, Faculty of Medicine, Jordan University of Science and Technology, Irbid, 22110, Jordan; King Abdullah University Hospital, Irbid, 22110, Jordan
- Khalid M Abdalla: Department of Diagnostic Radiology, Faculty of Medicine, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Ali M Al-Qudah: Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
15. Kim S, Yuan L, Kim S, Suh TS. Generation of tissues outside the field of view (FOV) of radiation therapy simulation imaging based on machine learning and patient body outline (PBO). Radiat Oncol 2024;19:15. PMID: 38273278; PMCID: PMC10811833; DOI: 10.1186/s13014-023-02384-4.
Abstract
BACKGROUND It is not unusual for some parts of tissue to be excluded from the field of view of CT simulation images. A typical mitigation is to avoid beams entering the missing body parts, at the cost of sub-optimal planning. METHODS This study solves the problem by developing three methods: (1) a deep learning (DL) mechanism for missing-tissue generation, (2) using the patient body outline (PBO) based on surface imaging, and (3) a hybrid method combining DL and PBO. The DL model was built upon Globally and Locally Consistent Image Completion, learning features through convolutional neural network-based inpainting within a generative adversarial network. The database comprised 10,005 CT training slices of 322 lung cancer patients and 166 CT evaluation test slices of 15 patients. CT images came from the publicly available database of the Cancer Imaging Archive. Since existing data were used, PBOs were acquired from the CT images. For evaluation, the structural similarity index metric (SSIM), root mean square error (RMSE), and peak signal-to-noise ratio (PSNR) were computed. For dosimetric validation, dynamic conformal arc plans were made on the ground-truth images and on images generated by the proposed method. Gamma analysis was conducted at the relatively strict criteria of 1%/1 mm (dose difference/distance to agreement) and 2%/2 mm under three dose thresholds of 1%, 10%, and 50% of the maximum dose in the plans made on the ground-truth image sets. RESULTS The average SSIM in the generated region only was 0.06 at epoch 100 but reached 0.86 at epoch 1500. Accordingly, the average SSIM over the whole image also improved from 0.86 to 0.97. At epoch 1500, the average RMSE and PSNR over the whole image were 7.4 and 30.9, respectively. Gamma analysis showed excellent agreement with the hybrid method (equal to or higher than 96.6% mean pass rate for all scenarios). CONCLUSIONS It was demonstrated for the first time that missing tissues in simulation imaging can be generated with high similarity, and that this dosimetric limitation can be overcome. The benefit of this study grows significantly when MR-only simulation is considered.
Affiliation(s)
- Sunmi Kim: Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea; Department of Radiation Oncology, Yonsei Cancer Center, Seoul, 03722, Republic of Korea
- Lulin Yuan: Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, VA, 23284, USA
- Siyong Kim: Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, VA, 23284, USA
- Tae Suk Suh: Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
16. Wang Z, Fang M, Zhang J, Tang L, Zhong L, Li H, Cao R, Zhao X, Liu S, Zhang R, Xie X, Mai H, Qiu S, Tian J, Dong D. Radiomics and Deep Learning in Nasopharyngeal Carcinoma: A Review. IEEE Rev Biomed Eng 2024;17:118-135. PMID: 37097799; DOI: 10.1109/rbme.2023.3269776.
Abstract
Nasopharyngeal carcinoma is a common head and neck malignancy with distinct clinical management compared to other types of cancer. Precision risk stratification and tailored therapeutic interventions are crucial to improving the survival outcomes. Artificial intelligence, including radiomics and deep learning, has exhibited considerable efficacy in various clinical tasks for nasopharyngeal carcinoma. These techniques leverage medical images and other clinical data to optimize clinical workflow and ultimately benefit patients. In this review, we provide an overview of the technical aspects and basic workflow of radiomics and deep learning in medical image analysis. We then conduct a detailed review of their applications to seven typical tasks in the clinical diagnosis and treatment of nasopharyngeal carcinoma, covering various aspects of image synthesis, lesion segmentation, diagnosis, and prognosis. The innovation and application effects of cutting-edge research are summarized. Recognizing the heterogeneity of the research field and the existing gap between research and clinical translation, potential avenues for improvement are discussed. We propose that these issues can be gradually addressed by establishing standardized large datasets, exploring the biological characteristics of features, and technological upgrades.
17. Gao X, Shi F, Shen D, Liu M. Multimodal transformer network for incomplete image generation and diagnosis of Alzheimer's disease. Comput Med Imaging Graph 2023;110:102303. PMID: 37832503; DOI: 10.1016/j.compmedimag.2023.102303.
Abstract
Multimodal images such as magnetic resonance imaging (MRI) and positron emission tomography (PET) can provide complementary information about the brain and have been widely investigated for the diagnosis of neurodegenerative disorders such as Alzheimer's disease (AD). However, multimodal brain images are often incomplete in clinical practice, and it remains challenging to exploit multimodal information for disease diagnosis when data are missing. In this paper, we propose a deep learning framework with a multi-level guided generative adversarial network (MLG-GAN) and a multimodal transformer (Mul-T) for incomplete image generation and disease classification, respectively. First, MLG-GAN is proposed to generate the missing data, guided by multi-level information from voxels, features, and tasks. In addition to voxel-level supervision and a task-level constraint, a feature-level auto-regression branch is proposed to embed the features of target images for accurate generation. With the complete multimodal images, we propose the Mul-T network for disease diagnosis, which can not only combine global and local features but also model the latent interactions and correlations from one modality to another with a cross-modal attention mechanism. Comprehensive experiments on three independent datasets (i.e., ADNI-1, ADNI-2, and OASIS-3) show that the proposed method achieves superior performance in the tasks of image generation and disease diagnosis compared to state-of-the-art methods.
Affiliation(s)
- Xingyu Gao: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China
- Dinggang Shen: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China; School of Biomedical Engineering, ShanghaiTech University, China
- Manhua Liu: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; MoE Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
18. Ozbey M, Dalmaz O, Dar SUH, Bedel HA, Ozturk S, Gungor A, Cukur T. Unsupervised Medical Image Translation With Adversarial Diffusion Models. IEEE Trans Med Imaging 2023;42:3524-3539. PMID: 37379177; DOI: 10.1109/tmi.2023.3290149.
Abstract
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.
|
19
|
Tian L, Lühr A. Proton range uncertainty caused by synthetic computed tomography generated with deep learning from pelvic magnetic resonance imaging. Acta Oncol 2023; 62:1461-1469. [PMID: 37703314 DOI: 10.1080/0284186x.2023.2256967] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Accepted: 09/04/2023] [Indexed: 09/15/2023]
Abstract
BACKGROUND In proton therapy, it is disputed whether synthetic computed tomography (sCT), derived from magnetic resonance imaging (MRI), permits accurate dose calculations. On the one hand, an MRI-only workflow could eliminate errors caused by, e.g., MRI-CT registration. On the other hand, an extra error would be induced by the sCT generation model itself. This work investigated the systematic and random model error induced by sCT generation with a widely discussed deep learning model, pix2pix. MATERIAL AND METHODS An open-source image dataset of 19 patients with cancer in the pelvis was employed and split into 10, 5, and 4 patients for training, testing, and validation of the model, respectively. Proton pencil beams (200 MeV) were simulated on the real CT and the generated sCT using the tool for particle simulation (TOPAS). Monte Carlo (MC) dropout was used for error estimation (50 random sCT samples). Systematic and random model errors were investigated for sCT generation and dose calculation on sCT. RESULTS For sCT generation, the random model error near the edge of the body (∼200 HU) was higher than that within the body (∼100 HU near the bone edge and <10 HU in soft tissue). The mean absolute error (MAE) was 49 ± 5, 191 ± 23, and 503 ± 70 HU for the whole body, bone, and air in the patient, respectively. Random model errors of the proton range were small (<0.2 mm) for all spots and evenly distributed throughout the proton fields. Systematic errors of the proton range were -1.0 (±2.2) mm and 0.4 (±0.9)% and were unevenly distributed within the proton fields. For 4.5% of the spots, large errors (>5 mm) were found, which may relate to MRI-CT mismatch due to, e.g., registration, MRI distortion, anatomical changes, etc. CONCLUSION The sCT model was shown to be robust, i.e., it had a low random model error. However, further investigation to reduce, and even predict and manage, the systematic error is still needed for future MRI-only proton therapy.
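The MC dropout error estimation used here follows a standard recipe: keep dropout stochastic at test time and summarize repeated predictions. A minimal PyTorch sketch, assuming a trained `model` containing dropout layers (the stand-in network below is only for illustration):

```python
import torch

def mc_dropout_sct(model, mr, n_samples=50):
    """Monte Carlo dropout: keep dropout stochastic at inference, draw repeated
    sCT predictions, and report their mean and per-voxel standard deviation
    (the random model error)."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                     # re-enable only the dropout layers
    with torch.no_grad():
        samples = torch.stack([model(mr) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

# Toy usage with a stand-in network containing dropout:
net = torch.nn.Sequential(torch.nn.Conv2d(1, 1, 3, padding=1),
                          torch.nn.Dropout(0.5))
mean_sct, random_error = mc_dropout_sct(net, torch.rand(1, 1, 32, 32), n_samples=8)
```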
Affiliation(s)
- Liheng Tian
  - Department of Physics, TU Dortmund University, Dortmund, Germany
- Armin Lühr
  - Department of Physics, TU Dortmund University, Dortmund, Germany
|
20
|
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384 PMCID: PMC10495310 DOI: 10.1186/s40658-023-00569-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 08/07/2023] [Indexed: 09/12/2023] Open
Abstract
Despite thirteen years having passed since the installation of the first PET-MR system, these scanners constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community in a continuous effort to develop a robust and accurate alternative. These can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim at utilising the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given an MR image of a new patient, by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that can predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features. This up-to-date review goes through the literature of attenuation correction approaches in PET-MR after categorising them. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches, along with a comparison of the four outlined categories.
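For the first (segmentation-based) category, the core operation is a lookup from tissue label to a predefined attenuation coefficient. A minimal NumPy sketch; the labels and 511 keV coefficients below are assumed example values, not a vendor's table:

```python
import numpy as np

# Illustrative linear attenuation coefficients at 511 keV (cm^-1); real tables
# differ between vendors and publications -- these are assumed example values.
MU_511_KEV = {0: 0.0,      # air
              1: 0.0224,   # lung
              2: 0.0975,   # soft tissue
              3: 0.1510}   # cortical bone

def mu_map_from_segmentation(seg):
    """Turn an integer tissue-label volume into a PET attenuation (mu) map."""
    mu = np.zeros(seg.shape, dtype=np.float32)
    for label, value in MU_511_KEV.items():
        mu[seg == label] = value
    return mu

seg = np.random.randint(0, 4, size=(8, 8, 8))   # toy label volume
print(mu_map_from_segmentation(seg).mean())
```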
Affiliation(s)
- Georgios Krokos
  - School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Jane MacKewn
  - School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
  - School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
  - School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
|
21
|
Wang Z, Nawaz M, Khan S, Xia P, Irfan M, Wong EC, Chan R, Cao P. Cross modality generative learning framework for anatomical transitive Magnetic Resonance Imaging (MRI) from Electrical Impedance Tomography (EIT) image. Comput Med Imaging Graph 2023; 108:102272. [PMID: 37515968 DOI: 10.1016/j.compmedimag.2023.102272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 07/04/2023] [Accepted: 07/08/2023] [Indexed: 07/31/2023]
Abstract
This paper presents a cross-modality generative learning framework for transitive magnetic resonance imaging (MRI) from electrical impedance tomography (EIT). The proposed framework is aimed at converting low-resolution EIT images to high-resolution wrist MRI images using a cascaded cycle generative adversarial network (CycleGAN) model. This model comprises three main components: the collection of initial EIT from the medical device, the generation of a high-resolution transitive EIT image from the corresponding MRI image for domain adaptation, and the coalescence of two CycleGAN models for cross-modality generation. The initial EIT image was generated at three different frequencies (70 kHz, 140 kHz, and 200 kHz) using a 16-electrode belt. Wrist T1-weighted images were acquired on a 1.5T MRI. A total of 19 normal volunteers were imaged using both EIT and MRI, which resulted in 713 paired EIT and MRI images. The cascaded CycleGAN, end-to-end CycleGAN, and Pix2Pix models were trained and tested on the same cohort. The proposed method achieved the highest accuracy in bone detection, with 0.97 for the proposed cascaded CycleGAN, 0.68 for end-to-end CycleGAN, and 0.70 for the Pix2Pix model. Visual inspection showed that the proposed method reduced bone-related errors in the MRI-style anatomical reference compared with end-to-end CycleGAN and Pix2Pix. Multifrequency EIT inputs reduced the testing normalized root mean squared error of MRI-style anatomical reference from 67.9% ± 12.7% to 61.4% ± 8.8% compared with that of single-frequency EIT. The mean conductivity values of fat and bone from regularized EIT were 0.0435 ± 0.0379 S/m and 0.0183 ± 0.0154 S/m, respectively, when the anatomical prior was employed. These results demonstrate that the proposed framework is able to generate MRI-style anatomical references from EIT images with a good degree of accuracy.
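The cascade itself is just a composition of two trained CycleGAN forward generators. A minimal PyTorch sketch, with placeholder generator names (assumptions, not the authors' code):

```python
import torch

def cascaded_translation(g_eit2teit, g_teit2mri, eit_multifreq):
    """Stage 1 maps low-resolution EIT to a high-resolution 'transitive' EIT
    (domain adaptation); stage 2 maps that to an MRI-style anatomical
    reference. Each g_* is the forward generator of one trained CycleGAN."""
    with torch.no_grad():
        transitive_eit = g_eit2teit(eit_multifreq)  # e.g., 3-channel multifrequency input
        return g_teit2mri(transitive_eit)

# Identity stand-ins make the sketch runnable; real use plugs in trained nets.
identity = torch.nn.Identity()
out = cascaded_translation(identity, identity, torch.randn(1, 3, 64, 64))
```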
Affiliation(s)
- Zuojun Wang
  - The Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong
- Mehmood Nawaz
  - The Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong
- Sheheryar Khan
  - School of Professional Education and Executive Development, The Hong Kong Polytechnic University, Hong Kong
- Peng Xia
  - The Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong
- Muhammad Irfan
  - Faculty of Electrical Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Pakistan
- Peng Cao
  - The Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong
|
22
|
Zhou X, Cai W, Cai J, Xiao F, Qi M, Liu J, Zhou L, Li Y, Song T. Multimodality MRI synchronous construction based deep learning framework for MRI-guided radiotherapy synthetic CT generation. Comput Biol Med 2023; 162:107054. [PMID: 37290389 DOI: 10.1016/j.compbiomed.2023.107054] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 04/24/2023] [Accepted: 05/20/2023] [Indexed: 06/10/2023]
Abstract
Synthesizing computed tomography (CT) images from magnetic resonance imaging (MRI) data can provide the necessary electron density information for accurate dose calculation in the treatment planning of MRI-guided radiation therapy (MRIgRT). Inputting multimodality MRI data can provide sufficient information for accurate CT synthesis; however, obtaining the necessary number of MRI modalities is clinically expensive and time-consuming. In this study, we propose a multimodality MRI synchronous construction based deep learning framework that generates MRIgRT synthetic CT (sCT) images from a single T1-weighted (T1) image. The network is mainly based on a generative adversarial network with sequential subtasks of intermediately generating synthetic MRIs and jointly generating the sCT image from the single T1 MRI. It contains a multitask generator and a multibranch discriminator, where the generator consists of a shared encoder and a split multibranch decoder. Specific attention modules are designed within the generator for feasible high-dimensional feature representation and fusion. Fifty patients with nasopharyngeal carcinoma who had undergone radiotherapy and had CT and sufficient MRI modalities scanned (5550 image slices for each modality) were used in the experiment. Results showed that our proposed network outperforms state-of-the-art sCT generation methods, with the lowest MAE and NRMSE and comparable PSNR and SSIM index measures. Our proposed network exhibits comparable or even superior performance to the multimodality MRI-based generation method although it takes only a single T1 MRI image as input, thereby providing a more effective and economic solution for the laborious and high-cost generation of sCT images in clinical applications.
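A shared encoder feeding several decoder branches is the structural core of such a multitask generator. A deliberately tiny PyTorch sketch with illustrative branch names (the real network is far deeper and includes attention modules):

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class MultiTaskGenerator(nn.Module):
    """One shared encoder; one decoder branch per intermediate synthetic MRI
    contrast plus a branch for the sCT, all driven by a single T1 input."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(block(1, 32), block(32, 64))
        self.dec_mri_a = nn.Sequential(block(64, 32), nn.Conv2d(32, 1, 1))
        self.dec_mri_b = nn.Sequential(block(64, 32), nn.Conv2d(32, 1, 1))
        self.dec_sct = nn.Sequential(block(64, 32), nn.Conv2d(32, 1, 1))

    def forward(self, t1):
        z = self.encoder(t1)                 # shared representation
        return self.dec_mri_a(z), self.dec_mri_b(z), self.dec_sct(z)

outs = MultiTaskGenerator()(torch.randn(1, 1, 64, 64))
```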
Affiliation(s)
- Xuanru Zhou
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Wenwen Cai
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Jiajun Cai
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Fan Xiao
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Mengke Qi
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Jiawen Liu
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Linghong Zhou
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Yongbao Li
  - Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
- Ting Song
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
|
23
|
Zhao Y, Wang H, Yu C, Court LE, Wang X, Wang Q, Pan T, Ding Y, Phan J, Yang J. Compensation cycle consistent generative adversarial networks (Comp-GAN) for synthetic CT generation from MR scans with truncated anatomy. Med Phys 2023; 50:4399-4414. [PMID: 36698291 PMCID: PMC10356747 DOI: 10.1002/mp.16246] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 12/26/2022] [Accepted: 12/27/2022] [Indexed: 01/27/2023] Open
Abstract
BACKGROUND MR scans used in radiotherapy can be partially truncated due to the limited field of view (FOV), affecting dose calculation accuracy in MR-based radiation treatment planning. PURPOSE We proposed a novel Compensation-cycleGAN (Comp-cycleGAN) by modifying the cycle-consistent generative adversarial network (cycleGAN), to simultaneously create synthetic CT (sCT) images and compensate for the missing anatomy in the truncated MR images. METHODS Computed tomography (CT) and T1 MR images with complete anatomy of 79 head-and-neck patients were used for this study. The original MR images were manually cropped 10-25 mm off at the posterior head to simulate clinically truncated MR images. Fifteen patients were randomly chosen for testing and the rest of the patients were used for model training and validation. Both the truncated and original MR images were used in the Comp-cycleGAN training stage, which enables the model to compensate for the missing anatomy by learning the relationship between the truncation and known structures. After the model was trained, sCT images with complete anatomy could be generated by feeding only the truncated MR images into the model. In addition, the external body contours acquired from the CT images with full anatomy could be an optional input for the proposed method to leverage the additional information of the actual body shape for each test patient. The mean absolute error (MAE) of Hounsfield units (HU), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between sCT and real CT images to quantify the overall sCT performance. To further evaluate the shape accuracy, we generated the external body contours for sCT and original MR images with full anatomy. The Dice similarity coefficient (DSC) and mean surface distance (MSD) were calculated between the body contours of sCT and original MR images for the truncation region to assess the anatomy compensation accuracy. RESULTS The average MAE, PSNR, and SSIM calculated over test patients were 93.1 HU/91.3 HU, 26.5 dB/27.4 dB, and 0.94/0.94 for the proposed Comp-cycleGAN models trained without/with body-contour information, respectively. These results were comparable with those obtained from the cycleGAN model that was trained and tested on full-anatomy MR images, indicating the high quality of the sCT generated from truncated MR images by the proposed method. Within the truncated region, the mean DSC and MSD were 0.85/0.89 and 1.3/0.7 mm for the proposed Comp-cycleGAN models trained without/with body-contour information, demonstrating good performance in compensating for the truncated anatomy. CONCLUSIONS We developed a novel Comp-cycleGAN model that can effectively create sCT with complete anatomy compensation from truncated MR images, which could potentially benefit MRI-based treatment planning.
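Conditioning the generator on an optional body-contour channel can be sketched as simple channel concatenation with a zero placeholder when the contour is absent. The PyTorch module below is an illustrative reduction of that idea, not the Comp-cycleGAN architecture:

```python
import torch
import torch.nn as nn

class ContourConditionedGenerator(nn.Module):
    """Generator input = truncated MR concatenated with an optional
    body-contour mask channel; a zero channel keeps the input shape fixed
    when no contour is available."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, truncated_mr, body_contour=None):
        if body_contour is None:
            body_contour = torch.zeros_like(truncated_mr)
        return self.net(torch.cat([truncated_mr, body_contour], dim=1))

g = ContourConditionedGenerator()
sct = g(torch.rand(1, 1, 64, 64))                            # without contour
sct = g(torch.rand(1, 1, 64, 64), torch.ones(1, 1, 64, 64))  # with contour
```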
Affiliation(s)
- Yao Zhao
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- He Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Cenji Yu
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Laurence E. Court
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Xin Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Qianxia Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tinsu Pan
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Yao Ding
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jack Phan
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jinzhong Yang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
|
24
|
Wang R, Bashyam V, Yang Z, Yu F, Tassopoulou V, Chintapalli SS, Skampardoni I, Sreepada LP, Sahoo D, Nikita K, Abdulkadir A, Wen J, Davatzikos C. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. Neuroimage 2023; 269:119898. [PMID: 36702211 PMCID: PMC9992336 DOI: 10.1016/j.neuroimage.2023.119898] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Revised: 12/16/2022] [Accepted: 01/21/2023] [Indexed: 01/25/2023] Open
Abstract
Generative adversarial networks (GANs) are a powerful type of deep learning model that has been successfully utilized in numerous fields. They belong to the broader family of generative methods, which learn to generate realistic data with a probabilistic model by learning distributions from real samples. In the clinical context, GANs have shown enhanced capabilities in capturing spatially complex, nonlinear, and potentially subtle disease effects compared to traditional generative methods. This review critically appraises the existing literature on the applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of various GAN methods for each application and further discuss the main challenges, open questions, and promising future directions of leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can be leveraged to support clinical decision making and contribute to a better understanding of the structural and functional patterns of brain diseases.
Affiliation(s)
- Rongguang Wang
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vishnu Bashyam
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Zhijian Yang
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Fanyang Yu
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vasiliki Tassopoulou
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sai Spandana Chintapalli
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Ioanna Skampardoni
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Lasya P Sreepada
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Dushyant Sahoo
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Konstantina Nikita
  - School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Ahmed Abdulkadir
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Junhao Wen
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos
  - Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
|
25
|
Li Y, Xu S, Chen H, Sun Y, Bian J, Guo S, Lu Y, Qi Z. CT synthesis from multi-sequence MRI using adaptive fusion network. Comput Biol Med 2023; 157:106738. [PMID: 36924728 DOI: 10.1016/j.compbiomed.2023.106738] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 02/09/2023] [Accepted: 03/01/2023] [Indexed: 03/13/2023]
Abstract
OBJECTIVE To investigate a method using multi-sequence magnetic resonance imaging (MRI) to synthesize computed tomography (CT) for MRI-only radiation therapy. APPROACH We proposed an adaptive multi-sequence fusion network (AMSF-Net) to exploit both voxel- and context-wise cross-sequence correlations from multiple MRI sequences to synthesize CT using element- and patch-wise fusions, respectively. The element- and patch-wise fusion feature spaces were combined, and the most representative features were selected for modeling. Finally, a densely connected convolutional decoder was applied to utilize the selected features to produce synthetic CT images. MAIN RESULTS This study included T1-weighted MRI, T2-weighted MRI, and CT data from a total of 90 patients. The AMSF-Net reduced the average mean absolute error (MAE) from 52.88-57.23 to 49.15 HU, increased the peak signal-to-noise ratio (PSNR) from 24.82-25.32 to 25.63 dB, increased the structural similarity index measure (SSIM) from 0.857-0.869 to 0.878, and increased the Dice coefficient of bone from 0.886-0.896 to 0.903 compared to the other three existing multi-sequence learning models. The improvements were statistically significant according to a two-tailed paired t-test. In addition, AMSF-Net reduced the intensity difference with real CT in five organs at risk, four types of normal tissue, and tumor compared with the baseline models. The MAE decreases in the parotid and spinal cord were over 8% and 16% with reference to the mean intensity value of the corresponding organ, respectively. Further, the qualitative evaluations confirmed that AMSF-Net exhibited superior structural image quality of synthesized bone and small organs such as the eye lens. SIGNIFICANCE The proposed method can improve the intensity and structural image quality of synthetic CT and has potential for use in clinical applications.
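Element- and patch-wise fusion can be caricatured as a per-voxel learned gate plus a pooled patch-context term. The PyTorch sketch below is a minimal reading of that idea under assumed feature shapes, not the AMSF-Net implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelFusion(nn.Module):
    """Fuse two MRI-sequence feature maps at two granularities: a learned
    per-voxel (element-wise) gate plus a patch-wise context term."""
    def __init__(self, c=32, patch=8):
        super().__init__()
        self.gate = nn.Conv2d(2 * c, c, 1)   # element-wise mixing weights
        self.ctx = nn.Conv2d(2 * c, c, 1)    # patch-level context features
        self.patch = patch

    def forward(self, f_t1, f_t2):
        both = torch.cat([f_t1, f_t2], dim=1)
        w = torch.sigmoid(self.gate(both))
        element = w * f_t1 + (1 - w) * f_t2              # voxel-wise fusion
        ctx = self.ctx(F.avg_pool2d(both, self.patch))   # patch summaries
        return element + F.interpolate(ctx, scale_factor=self.patch)

fused = TwoLevelFusion()(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64))
```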
Affiliation(s)
- Yan Li
  - School of Data and Computer Engineering, Sun Yat-sen University, Guangzhou, PR China
- Sisi Xu
  - Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, PR China
- Ying Sun
  - Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, PR China
- Jing Bian
  - School of Data and Computer Engineering, Sun Yat-sen University, Guangzhou, PR China
- Shuanshuan Guo
  - The Fifth Affiliated Hospital of Sun Yat-sen University, Cancer Center, Guangzhou, PR China
- Yao Lu
  - School of Computer Science and Engineering, Sun Yat-sen University, Guangdong Province Key Laboratory of Computational Science, Guangzhou, PR China
- Zhenyu Qi
  - Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, PR China
|
26
|
Lu Y, Li X, Xin L, Song H, Wang X. Mapping the terraces on the Loess Plateau based on a deep learning-based model at 1.89 m resolution. Sci Data 2023; 10:115. [PMID: 36864066 PMCID: PMC9981555 DOI: 10.1038/s41597-023-02005-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Accepted: 02/06/2023] [Indexed: 03/04/2023] Open
Abstract
Terraces on the Loess Plateau play essential roles in soil conservation, as well as in agricultural productivity, in this region. However, due to the unavailability of high-resolution (<10 m) maps of terrace distribution for this area, current research on these terraces is limited to specific regions. We developed a deep learning-based terrace extraction model (DLTEM) using texture features of the terraces, which have not previously been applied regionally. The model utilizes the UNet++ deep learning network as its framework, with high-resolution satellite images as the interpreted data source and a digital elevation model and GlobeLand30 as the topography and vegetation correction data sources, respectively, and incorporates manual correction to produce a 1.89 m spatial resolution terrace distribution map for the Loess Plateau (TDMLP). The accuracy of the TDMLP was evaluated using 11,420 test samples and 815 field validation points, yielding classification accuracies of 98.39% and 96.93%, respectively. The TDMLP provides an important basis for further research on the economic and ecological value of terraces, facilitating the sustainable development of the Loess Plateau.
Affiliation(s)
- Yahan Lu
  - Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, PR China
  - University of Chinese Academy of Sciences, Beijing, 100049, PR China
- Xiubin Li
  - Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, PR China
  - University of Chinese Academy of Sciences, Beijing, 100049, PR China
- Liangjie Xin
  - Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, PR China
- Hengfei Song
  - Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, PR China
  - University of Chinese Academy of Sciences, Beijing, 100049, PR China
- Xue Wang
  - Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, PR China
  - University of Chinese Academy of Sciences, Beijing, 100049, PR China
|
27
|
Raymond C, Jurkiewicz MT, Orunmuyi A, Liu L, Dada MO, Ladefoged CN, Teuho J, Anazodo UC. The performance of machine learning approaches for attenuation correction of PET in neuroimaging: A meta-analysis. J Neuroradiol 2023; 50:315-326. [PMID: 36738990 DOI: 10.1016/j.neurad.2023.01.157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 01/28/2023] [Indexed: 02/05/2023]
Abstract
PURPOSE This systematic review provides a consensus on the clinical feasibility of machine learning (ML) methods for brain PET attenuation correction (AC). Performance of ML-AC were compared to clinical standards. METHODS Two hundred and eighty studies were identified through electronic searches of brain PET studies published between January 1, 2008, and August 1, 2022. Reported outcomes for image quality, tissue classification performance, regional and global bias were extracted to evaluate ML-AC performance. Methodological quality of included studies and the quality of evidence of analysed outcomes were assessed using QUADAS-2 and GRADE, respectively. RESULTS A total of 19 studies (2371 participants) met the inclusion criteria. Overall, the global bias of ML methods was 0.76 ± 1.2%. For image quality, the relative mean square error (RMSE) was 0.20 ± 0.4 while for tissues classification, the Dice similarity coefficient (DSC) for bone/soft tissue/air were 0.82 ± 0.1 / 0.95 ± 0.03 / 0.85 ± 0.14. CONCLUSIONS In general, ML-AC performance is within acceptable limits for clinical PET imaging. The sparse information on ML-AC robustness and its limited qualitative clinical evaluation may hinder clinical implementation in neuroimaging, especially for PET/MRI or emerging brain PET systems where standard AC approaches are not readily available.
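The Dice similarity coefficient pooled for the tissue-classification outcome above is straightforward to compute. A small NumPy sketch on toy binary masks:

```python
import numpy as np

def dice(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks, as pooled for the
    bone/soft-tissue/air classification outcomes in this meta-analysis."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    denom = pred.sum() + true.sum()
    return 2.0 * np.logical_and(pred, true).sum() / denom if denom else 1.0

bone_pred = np.random.rand(64, 64, 64) > 0.5   # toy volumes
bone_true = np.random.rand(64, 64, 64) > 0.5
print(f"DSC(bone) = {dice(bone_pred, bone_true):.2f}")
```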
Affiliation(s)
- Confidence Raymond
  - Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada
- Michael T Jurkiewicz
  - Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Department of Medical Imaging, Western University, London, ON, Canada
- Akintunde Orunmuyi
  - Kenyatta University Teaching, Research and Referral Hospital, Nairobi, Kenya
- Linshan Liu
  - Lawson Health Research Institute, London, ON, Canada
- Claes N Ladefoged
  - Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Jarmo Teuho
  - Turku PET Centre, Turku University, Turku, Finland; Turku University Hospital, Turku, Finland
- Udunna C Anazodo
  - Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Montreal Neurological Institute, 3801 Rue University, Montreal, QC H3A 2B4, Canada
|
28
|
Poonkodi S, Kanchana M. 3D-MedTranCSGAN: 3D Medical Image Transformation using CSGAN. Comput Biol Med 2023; 153:106541. [PMID: 36652868 DOI: 10.1016/j.compbiomed.2023.106541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 11/30/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
Computer vision techniques for transforming medical images are a rapidly growing area with various specific medical applications. This paper proposes an end-to-end 3D medical image transformation model using CSGAN, named 3D-MedTranCSGAN. The 3D-MedTranCSGAN model integrates non-adversarial loss components with the Cyclic Synthesized Generative Adversarial Network. The proposed model utilizes PatchGAN's discriminator network to penalize the difference between the synthesized image and the original image. The model also computes non-adversarial loss functions such as content, perception, and style transfer losses. 3DCascadeNet, a new generator architecture introduced in the paper, is used to enhance the perceptiveness of the transformed medical image through encoding-decoding pairs. We use the 3D-MedTranCSGAN model for various tasks without modification for each specific application: PET to CT image transformation; reconstruction of CT to PET; correction of movement artefacts in MR images; and removal of noise in PET images. We found that 3D-MedTranCSGAN outperformed other transformation methods in our experiments. For the first task, the proposed model yields an SSIM of 0.914, PSNR of 26.12, MSE of 255.5, VIF of 0.4862, UQI of 0.9067, and LPIPS of 0.2284. For the second task, the model yields 0.9197, 25.7, 257.56, 0.4962, 0.9027, 0.2262. For the third task, the model yields 0.8862, 24.94, 0.4071, 0.6410, 0.2196. For the final task, the model yields 0.9521, 33.67, 33.57, 0.6091, 0.9255, 0.0244. Based on this analysis, the proposed model outperforms the other techniques.
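A PatchGAN discriminator, as used here, scores local patches rather than emitting a single real/fake scalar. A minimal 2D PyTorch sketch (the paper operates on 3D volumes; sizes and widths are illustrative):

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: returns a score map, penalizing each
    local patch of the synthesized image independently."""
    def __init__(self, c_in=1, width=64):
        super().__init__()
        layers, c = [], c_in
        for w in (width, width * 2, width * 4):
            layers += [nn.Conv2d(c, w, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
            c = w
        layers += [nn.Conv2d(c, 1, 4, stride=1, padding=1)]  # patch score map
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

scores = PatchDiscriminator()(torch.randn(1, 1, 256, 256))
print(scores.shape)   # one real/fake logit per receptive-field patch, e.g. 31x31
```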
Affiliation(s)
- S Poonkodi
  - Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
- M Kanchana
  - Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
|
29
|
Zhong L, Chen Z, Shu H, Zheng Y, Zhang Y, Wu Y, Feng Q, Li Y, Yang W. QACL: Quartet attention aware closed-loop learning for abdominal MR-to-CT synthesis via simultaneous registration. Med Image Anal 2023; 83:102692. [PMID: 36442293 DOI: 10.1016/j.media.2022.102692] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 10/27/2022] [Accepted: 11/09/2022] [Indexed: 11/18/2022]
Abstract
Synthesis of computed tomography (CT) images from magnetic resonance (MR) images is an important task to overcome the lack of electron density information in MR-only radiotherapy treatment planning (RTP). Some innovative methods have been proposed for abdominal MR-to-CT synthesis. However, it is still challenging due to the large misalignment between preprocessed abdominal MR and CT images and the insufficient feature information learned by models. Although several studies have used the MR-to-CT synthesis to alleviate the difficulty of multi-modal registration, this misalignment remains unsolved when training the MR-to-CT synthesis model. In this paper, we propose an end-to-end quartet attention aware closed-loop learning (QACL) framework for MR-to-CT synthesis via simultaneous registration. Specifically, the proposed quartet attention generator and mono-modal registration network form a closed-loop to improve the performance of MR-to-CT synthesis via simultaneous registration. In particular, a quartet-attention mechanism is developed to enlarge the receptive fields in networks to extract the long-range and cross-dimension spatial dependencies. Experimental results on two independent abdominal datasets demonstrate that our QACL achieves impressive results with MAE of 55.30±10.59 HU, PSNR of 22.85±1.43 dB, and SSIM of 0.83±0.04 for synthesis, and with Dice of 0.799±0.129 for registration. The proposed QACL outperforms the state-of-the-art MR-to-CT synthesis and multi-modal registration methods.
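The closed loop couples a synthesis generator with a mono-modal registration network: the synthesized CT is warped toward the real CT, and the warped result supervises the generator. The PyTorch sketch below shows the warping machinery under assumed 2D shapes; it illustrates the idea, not the QACL code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRegNet(nn.Module):
    """Predicts a dense 2D displacement field from a moving/fixed image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))   # 2 channels: (dx, dy)

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(image, flow):
    """Warp `image` with a displacement field given in normalized [-1, 1]
    coordinates, via grid_sample."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

img = torch.randn(1, 1, 32, 32)
flow = TinyRegNet()(img, img)
warped = warp(img, 0.1 * torch.tanh(flow))   # small normalized displacements
# Closed loop (conceptually): sct = G(mr); flow = TinyRegNet()(sct, ct);
# loss = F.l1_loss(warp(sct, flow), ct) -- synthesis and registration co-train.
```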
Affiliation(s)
- Liming Zhong
  - School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Zeli Chen
  - School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Hai Shu
  - Department of Biostatistics, School of Global Public Health, New York University, New York, NY, 10003, United States
- Yikai Zheng
  - School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Yiwen Zhang
  - School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Yuankui Wu
  - Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Qianjin Feng
  - School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Yin Li
  - Department of Information, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510655, China
- Wei Yang
  - School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
|
30
|
Zhao B, Cheng T, Zhang X, Wang J, Zhu H, Zhao R, Li D, Zhang Z, Yu G. CT synthesis from MR in the pelvic area using Residual Transformer Conditional GAN. Comput Med Imaging Graph 2023; 103:102150. [PMID: 36493595 DOI: 10.1016/j.compmedimag.2022.102150] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Revised: 11/15/2022] [Accepted: 11/27/2022] [Indexed: 12/03/2022]
Abstract
Magnetic resonance (MR) image-guided radiation therapy, a hot topic in current radiation therapy research, relies on MR to generate synthetic computed tomography (sCT) images for radiation therapy. Convolution-based generative adversarial networks (GAN) have achieved promising results in synthesizing CT from MR since the introduction of deep learning techniques. However, due to the local limitations of the pure convolutional neural network (CNN) structure and the local mismatch between paired MR and CT images, particularly in pelvic soft tissue, the performance of GANs in synthesizing CT from MR requires further improvement. In this paper, we propose a new GAN called Residual Transformer Conditional GAN (RTCGAN), which exploits the advantages of CNNs in local texture details and Transformers in global correlation to extract multi-level features from MR and CT images. Furthermore, a feature reconstruction loss is used to further constrain the latent image features, reducing over-smoothing and local distortion of the sCT. The experiments show that RTCGAN is visually closer to the reference CT (RCT) image and achieves desirable results on locally mismatched tissues. In the quantitative evaluation, the MAE, SSIM, and PSNR of RTCGAN are 45.05 HU, 0.9105, and 28.31 dB, respectively. All of them outperform the other comparison methods, such as deep convolutional neural networks (DCNN), Pix2Pix, Attention-UNet, WPD-DAGAN, and HDL.
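A residual block with parallel convolution (local texture) and self-attention (global correlation) branches captures the basic CNN-plus-Transformer idea. An illustrative PyTorch sketch, not the RTCGAN block definition:

```python
import torch
import torch.nn as nn

class ResidualTransformerBlock(nn.Module):
    """Parallel convolution (local texture) and self-attention (global
    context) branches summed into a residual output."""
    def __init__(self, c=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(c, c, 3, padding=1))
        self.attn = nn.MultiheadAttention(c, heads, batch_first=True)
        self.norm = nn.LayerNorm(c)

    def forward(self, x):
        b, c, h, w = x.shape
        local = self.conv(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        globl, _ = self.attn(tokens, tokens, tokens)
        globl = globl.transpose(1, 2).reshape(b, c, h, w)
        return x + local + globl        # residual fusion of both branches

y = ResidualTransformerBlock()(torch.randn(1, 64, 16, 16))
```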
Affiliation(s)
- Bo Zhao
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Tingting Cheng
  - Department of General Practice, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Xueren Zhang
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Jingjing Wang
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Hong Zhu
  - Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Rongchang Zhao
  - School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Dengwang Li
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Zijian Zhang
  - Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Gang Yu
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
|
31
|
Guerini AE, Nici S, Magrini SM, Riga S, Toraci C, Pegurri L, Facheris G, Cozzaglio C, Farina D, Liserre R, Gasparotti R, Ravanelli M, Rondi P, Spiazzi L, Buglione M. Adoption of Hybrid MRI-Linac Systems for the Treatment of Brain Tumors: A Systematic Review of the Current Literature Regarding Clinical and Technical Features. Technol Cancer Res Treat 2023; 22:15330338231199286. [PMID: 37774771 PMCID: PMC10542234 DOI: 10.1177/15330338231199286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Revised: 07/24/2023] [Accepted: 08/08/2023] [Indexed: 10/01/2023] Open
Abstract
BACKGROUND Possible advantages of magnetic resonance (MR)-guided radiation therapy (MRgRT) for the treatment of brain tumors include improved definition of treatment volumes and organs at risk (OARs) that could allow margin reductions, resulting in limited dose to the OARs and/or dose escalation to target volumes. Recently, hybrid systems integrating a linear accelerator and a magnetic resonance imaging (MRI) scanner (MRI-linacs, MRL) have been introduced, and these could potentially lead to a fully MRI-based treatment workflow. METHODS We performed a systematic review of the published literature regarding the adoption of MRL for the treatment of primary or secondary brain tumors (last update November 3, 2022), retrieving a total of 2487 records; after a selection based on titles and abstracts, the full text of 74 articles was analyzed, finally resulting in the 52 papers included in this review. RESULTS AND DISCUSSION Several solutions have been implemented to achieve a paradigm shift from CT-based radiotherapy to MRgRT, such as the management of geometric integrity and the definition of synthetic CT models that estimate electron density. Multiple sequences have been optimized to acquire images of adequate quality with the on-board MR scanner in limited time. Various sophisticated algorithms have been developed to compensate for the impact of the magnetic field on dose distribution and to calculate daily adaptive plans in a few minutes with satisfactory dosimetric parameters for the treatment of primary brain tumors and cerebral metastases. Dosimetric studies and preliminary clinical experiences demonstrated the feasibility of treating brain lesions with MRL. CONCLUSIONS The adoption of an MRI-only workflow is feasible and could offer several advantages for the treatment of brain tumors, including superior image quality for lesions and OARs and the possibility to adapt the treatment plan on the basis of daily MRI. The growing body of clinical data will clarify the potential benefit in terms of toxicity and response to treatment.
Affiliation(s)
- Andrea Emanuele Guerini
  - Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
  - Co-first author
- Stefania Nici
  - Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy
  - Co-first author
- Stefano Maria Magrini
  - Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
- Stefano Riga
  - Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy
- Cristian Toraci
  - Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy
- Ludovica Pegurri
  - Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
- Giorgio Facheris
  - Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
- Claudia Cozzaglio
  - Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
  - Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy
- Davide Farina
  - Radiology Unit, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Roberto Liserre
  - Department of Radiology, Neuroradiology Unit, ASST Spedali Civili University Hospital, Brescia, Italy
- Roberto Gasparotti
  - Neuroradiology Unit, Department of Medical-Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Marco Ravanelli
  - Radiology Unit, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Paolo Rondi
  - Radiology Unit, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Luigi Spiazzi
  - Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy
  - Co-last author
- Michela Buglione
  - Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
  - Co-last author
|
32
|
Boroojeni PE, Chen Y, Commean PK, Eldeniz C, Skolnick GB, Merrill C, Patel KB, An H. Deep-learning synthesized pseudo-CT for MR high-resolution pediatric cranial bone imaging (MR-HiPCB). Magn Reson Med 2022; 88:2285-2297. [PMID: 35713359 PMCID: PMC9420780 DOI: 10.1002/mrm.29356] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 05/06/2022] [Accepted: 05/23/2022] [Indexed: 11/12/2022]
Abstract
PURPOSE CT is routinely used to detect cranial abnormalities in pediatric patients with head trauma or craniosynostosis. This study aimed to develop a deep learning method to synthesize pseudo-CT (pCT) images for MR high-resolution pediatric cranial bone imaging, eliminating the ionizing radiation of CT. METHODS 3D golden-angle stack-of-stars MRI scans were obtained from 44 pediatric participants. Two patch-based residual UNets were trained using paired MR and CT patches randomly selected from the whole head (NetWH) or in the vicinity of bone, fractures/sutures, or air (NetBA) to synthesize pCT. A third residual UNet was trained to generate a binary brain mask using only MRI. The pCT images from NetWH (pCTNetWH) in the brain area and NetBA (pCTNetBA) in the nonbrain area were combined to generate pCTCom. A manual processing method using inverted MR images was also employed for comparison. RESULTS pCTCom (68.01 ± 14.83 HU) had significantly smaller mean absolute errors (MAEs) than pCTNetWH (82.58 ± 16.98 HU, P < 0.0001) and pCTNetBA (91.32 ± 17.2 HU, P < 0.0001) in the whole head. Within cranial bone, the MAE of pCTCom (227.92 ± 46.88 HU) was significantly lower than that of pCTNetWH (287.85 ± 59.46 HU, P < 0.0001) but similar to that of pCTNetBA (230.20 ± 46.17 HU). The Dice similarity coefficient of the segmented bone was significantly higher in pCTCom (0.90 ± 0.02) than in pCTNetWH (0.86 ± 0.04, P < 0.0001), pCTNetBA (0.88 ± 0.03, P < 0.0001), and inverted MR (0.71 ± 0.09, P < 0.0001). The Dice similarity coefficient from pCTCom also demonstrated significantly reduced age dependence compared with inverted MRI. Furthermore, pCTCom provided excellent suture and fracture visibility comparable to CT. CONCLUSION MR high-resolution pediatric cranial bone imaging may facilitate the clinical translation of a radiation-free MR cranial bone imaging method for pediatric patients.
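The composition of the two network outputs via the predicted brain mask reduces to a masked selection. A one-function NumPy sketch with assumed array names:

```python
import numpy as np

def combine_pct(pct_wh, pct_ba, brain_mask):
    """Composite pseudo-CT: whole-head network output inside the brain mask,
    bone/fracture/air-focused network output everywhere else."""
    return np.where(brain_mask.astype(bool), pct_wh, pct_ba)

# Toy volumes in HU; the mask would come from the third (brain-mask) UNet.
pct_com = combine_pct(np.zeros((4, 4, 4)), np.full((4, 4, 4), 1000.0),
                      np.random.rand(4, 4, 4) > 0.5)
```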
Affiliation(s)
- Parna Eshraghi Boroojeni
  - Dept. of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Yasheng Chen
  - Dept. of Neurology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Paul K. Commean
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Cihat Eldeniz
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Gary B. Skolnick
  - Division of Plastic and Reconstructive Surgery, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Corinne Merrill
  - Division of Plastic and Reconstructive Surgery, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Kamlesh B. Patel
  - Division of Plastic and Reconstructive Surgery, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Hongyu An
  - Dept. of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63110, USA
  - Dept. of Neurology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
|
33
|
Amini Amirkolaee H, Amini Amirkolaee H. Medical image translation using an edge-guided generative adversarial network with global-to-local feature fusion. J Biomed Res 2022; 36:409-422. [PMID: 35821004 PMCID: PMC9724158 DOI: 10.7555/jbr.36.20220037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
In this paper, we propose a deep learning-based framework for medical image translation using paired and unpaired training data. Initially, a deep neural network with an encoder-decoder structure is proposed for image-to-image translation using paired training data. A multi-scale context aggregation approach is then used to extract various features from different levels of encoding, which are used during the corresponding network decoding stage. We further propose an edge-guided generative adversarial network for image-to-image translation based on unpaired training data. An edge constraint loss function is used to improve network performance in tissue boundaries. To analyze framework performance, we conducted five different medical image translation tasks. The assessment demonstrates that the proposed deep learning framework brings significant improvement beyond the state of the art.
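An edge constraint loss can be realized by comparing gradient-magnitude maps of the translated and target images. A minimal single-channel PyTorch sketch using Sobel kernels (an assumed concrete form; the paper does not publish this exact code):

```python
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def edge_map(img):
    """Gradient magnitude of a single-channel image batch (B, 1, H, W)."""
    gx = F.conv2d(img, SOBEL_X, padding=1)
    gy = F.conv2d(img, SOBEL_Y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_constraint_loss(fake, real):
    """L1 penalty between edge maps, emphasizing tissue-boundary agreement."""
    return F.l1_loss(edge_map(fake), edge_map(real))

loss = edge_constraint_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```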
Affiliation(s)
- Hamed Amini Amirkolaee
  - School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 1417935840, Iran
- Hamid Amini Amirkolaee
  - Civil and Geomatics Engineering Faculty, Tafresh State University, Tafresh 7961139518, Iran
|
34
|
Dalmaz O, Yurt M, Cukur T. ResViT: Residual Vision Transformers for Multimodal Medical Image Synthesis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2598-2614. [PMID: 35436184 DOI: 10.1109/tmi.2022.3167808] [Citation(s) in RCA: 122] [Impact Index Per Article: 40.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and the realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI. Our results indicate the superiority of ResViT over competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.
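The weight-sharing strategy among ART blocks can be caricatured as one attention module reused across a stack of residual blocks. An illustrative PyTorch sketch under assumed shapes, not the ResViT definition:

```python
import torch
import torch.nn as nn

class SharedBottleneck(nn.Module):
    """Bottleneck of N residual blocks that reuse one transformer module,
    mitigating the parameter cost of stacking transformers."""
    def __init__(self, c=64, n_blocks=4, heads=4):
        super().__init__()
        self.shared_attn = nn.MultiheadAttention(c, heads, batch_first=True)
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=1) for _ in range(n_blocks))

    def forward(self, x):
        b, c, h, w = x.shape
        for conv in self.convs:                     # distinct conv per block...
            tokens = x.flatten(2).transpose(1, 2)
            ctx, _ = self.shared_attn(tokens, tokens, tokens)  # ...shared attention
            x = x + conv(x) + ctx.transpose(1, 2).reshape(b, c, h, w)
        return x

y = SharedBottleneck()(torch.randn(1, 64, 16, 16))
```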
|
35
|
Pan Y, Liu M, Xia Y, Shen D. Disease-Image-Specific Learning for Diagnosis-Oriented Neuroimage Synthesis With Incomplete Multi-Modality Data. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:6839-6853. [PMID: 34156939 PMCID: PMC9297233 DOI: 10.1109/tpami.2021.3091214] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
The incomplete data problem commonly exists in classification tasks with multi-source data, particularly disease diagnosis with multi-modality neuroimages. To tackle this, some methods have been proposed to utilize all available subjects by imputing missing neuroimages. However, these methods usually treat image synthesis and disease diagnosis as two standalone tasks, thus ignoring the specificity conveyed in different modalities, i.e., different modalities may highlight different disease-relevant regions in the brain. To this end, we propose a disease-image-specific deep learning (DSDL) framework for joint neuroimage synthesis and disease diagnosis using incomplete multi-modality neuroimages. Specifically, with each whole-brain scan as input, we first design a Disease-image-Specific Network (DSNet) with a spatial cosine module to implicitly model the disease-image specificity. We then develop a Feature-consistency Generative Adversarial Network (FGAN) to impute missing neuroimages, where feature maps (generated by DSNet) of a synthetic image and its respective real image are encouraged to be consistent while preserving the disease-image-specific information. Since our FGAN is correlated with DSNet, missing neuroimages can be synthesized in a diagnosis-oriented manner. Experimental results on three datasets suggest that our method can not only generate reasonable neuroimages, but also achieve state-of-the-art performance in both tasks of Alzheimer's disease identification and mild cognitive impairment conversion prediction.
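The feature-consistency idea reduces to matching intermediate features of synthetic and real images under a frozen, diagnosis-oriented extractor. A minimal PyTorch sketch with assumed callables:

```python
import torch
import torch.nn.functional as F

def feature_consistency_loss(feat_extractor, synthetic, real):
    """Match intermediate features of a synthetic image and its paired real
    image under a frozen, diagnosis-oriented extractor, so synthesis keeps
    disease-relevant information."""
    with torch.no_grad():
        target = feat_extractor(real)          # frozen target features
    return F.l1_loss(feat_extractor(synthetic), target)

extractor = torch.nn.Conv2d(1, 8, 3, padding=1)    # stand-in feature network
loss = feature_consistency_loss(extractor,
                                torch.rand(1, 1, 32, 32),
                                torch.rand(1, 1, 32, 32))
```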
|
36
|
Zhan B, Zhou L, Li Z, Wu X, Pu Y, Zhou J, Wang Y, Shen D. D2FE-GAN: Decoupled dual feature extraction based GAN for MRI image synthesis. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
37
|
Bi-MGAN: Bidirectional T1-to-T2 MRI images prediction using multi-generative multi-adversarial nets. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103994] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
38
|
Din M, Gurbuz S, Akbal E, Dogan S, Durak M, Yildirim I, Tuncer T. Exemplar deep and hand-modeled features based automated and accurate cerebral hemorrhage classification method. Med Eng Phys 2022; 105:103819. [DOI: 10.1016/j.medengphy.2022.103819] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 05/11/2022] [Accepted: 05/11/2022] [Indexed: 11/17/2022]
|
39
|
Corona-Figueroa A, Frawley J, Taylor SB, Bethapudi S, Shum HPH, Willcocks CG. MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:3843-3848. [PMID: 36085823 DOI: 10.1109/embc48229.2022.9871757] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Computed tomography (CT) is an effective medical imaging modality, widely used in the field of clinical medicine for the diagnosis of various pathologies. Advances in multidetector CT imaging technology have enabled additional functionalities, including generation of thin-slice multiplanar cross-sectional body imaging and 3D reconstructions. However, this involves patients being exposed to a considerable dose of ionising radiation, and excessive ionising radiation can have deterministic and harmful effects on the body. This paper proposes a deep learning model that learns to reconstruct CT projections from a few or even a single-view X-ray. It is based on a novel architecture that builds on neural radiance fields, which learns a continuous representation of CT scans by disentangling the shape and volumetric depth of surface and internal anatomical structures from 2D images. Our model is trained on chest and knee datasets, and we demonstrate qualitative and quantitative high-fidelity renderings and compare our approach to other recent radiance field-based methods. Our code and a link to our datasets are available at https://github.com/abrilcf/mednerf Clinical relevance: Our model is able to infer the anatomical 3D structure from a few or a single-view X-ray, showing future potential for reduced ionising radiation exposure during the imaging process.
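Radiance-field models of this kind typically pass each spatial coordinate through a sinusoidal positional encoding before an MLP predicts density or attenuation. Below is a generic sketch of that encoding, not code from the MedNeRF repository; the number of frequency bands is an assumption.

```python
# Generic NeRF-style positional encoding for 3D sample points.
import torch

def positional_encoding(x, num_bands=6):
    """Map coordinates in [-1, 1] to [x, sin(2^k*pi*x), cos(2^k*pi*x), ...]."""
    out = [x]
    for k in range(num_bands):
        freq = (2.0 ** k) * torch.pi
        out.append(torch.sin(freq * x))
        out.append(torch.cos(freq * x))
    return torch.cat(out, dim=-1)

pts = torch.rand(1024, 3) * 2 - 1      # toy 3D sample points
enc = positional_encoding(pts)          # shape: (1024, 3 + 3*2*6) = (1024, 39)
print(enc.shape)
```

The encoding lets a small MLP represent the high-frequency attenuation variations that a raw-coordinate input would smooth away.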
|
40
|
Establishment of a diagnostic model of coronary heart disease in elderly patients with diabetes mellitus based on machine learning algorithms. J Geriatr Cardiol 2022; 19:445-455. [PMID: 35845157 PMCID: PMC9248279 DOI: 10.11909/j.issn.1671-5411.2022.06.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
OBJECTIVE To establish a prediction model of coronary heart disease (CHD) in elderly patients with diabetes mellitus (DM) based on machine learning (ML) algorithms. METHODS Based on the Medical Big Data Research Centre of Chinese PLA General Hospital in Beijing, China, we identified a cohort of elderly inpatients (≥ 60 years), including 10,533 patients with DM complicated with CHD and 12,634 patients with DM without CHD, from January 2008 to December 2017. We collected demographic characteristics and clinical data. After selecting the important features, we established five ML models: extreme gradient boosting (XGBoost), random forest (RF), decision tree (DT), adaptive boosting (Adaboost) and logistic regression (LR). We compared the receiver operating characteristic curves, area under the curve (AUC) and other relevant parameters of the different models and determined the optimal classification model. The model was then applied to 7447 elderly patients with DM admitted from January 2018 to December 2019 to further validate its performance. RESULTS Fifteen features were selected and included in the ML models. The classification precision in the test set of the XGBoost, RF, DT, Adaboost and LR models was 0.778, 0.789, 0.753, 0.750 and 0.689, respectively, and the corresponding AUCs were 0.851, 0.845, 0.823, 0.833 and 0.731. Applying the XGBoost model, which showed the best overall performance, to the newly recruited validation dataset yielded a diagnostic sensitivity, specificity, precision, and AUC of 0.792, 0.808, 0.748 and 0.880, respectively. CONCLUSIONS The XGBoost model established in the present study showed predictive value for CHD in elderly patients with DM.
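A minimal sketch of the modelling recipe described above: train several tabular classifiers, compare test-set AUCs, and keep the best. Synthetic data stands in for the non-public hospital cohort, and the hyperparameters are illustrative assumptions.

```python
# Compare tabular classifiers by test-set AUC (toy data, illustrative settings).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "XGBoost": XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```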
|
41
|
Ranjan A, Lalwani D, Misra R. GAN for synthesizing CT from T2-weighted MRI data towards MR-guided radiation treatment. MAGMA (NEW YORK, N.Y.) 2022; 35:449-457. [PMID: 34741702 DOI: 10.1007/s10334-021-00974-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 10/12/2021] [Accepted: 10/25/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE In the medical domain, cross-modality image synthesis suffers from multiple issues, such as context misalignment, image distortion, image blurriness, and loss of details. The fundamental objective of this study is to address these issues in estimating synthetic computed tomography (sCT) scans from T2-weighted magnetic resonance imaging (MRI) scans to achieve MRI-guided radiation treatment (RT). MATERIALS AND METHODS We proposed a conditional generative adversarial network (cGAN) with multiple residual blocks to estimate sCT from T2-weighted MRI scans, using a dataset of 367 paired brain MR-CT images. Several state-of-the-art deep learning models, including Pix2Pix, U-Net, and an autoencoder, were also implemented to generate sCT, and their results were compared. RESULTS Results on the paired MR-CT image dataset demonstrate that the proposed model with nine residual blocks in the generator architecture yields the smallest mean absolute error (MAE) value of [Formula: see text] and mean squared error (MSE) value of [Formula: see text], and produces the largest Pearson correlation coefficient (PCC) value of [Formula: see text], SSIM value of [Formula: see text] and peak signal-to-noise ratio (PSNR) value of [Formula: see text]. We qualitatively evaluated our results by visual comparison of the generated sCT with the original CT of the respective MRI input. DISCUSSION The quantitative and qualitative comparisons in this work demonstrate that a deep learning-based cGAN model can be used to estimate an sCT scan from a reference T2-weighted MRI scan, and the overall accuracy of our proposed model outperforms that of the other state-of-the-art deep learning-based models.
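The reported metrics are straightforward to reproduce. Below is a minimal NumPy sketch of MAE, MSE, PSNR, and the Pearson correlation coefficient for a CT/sCT pair (SSIM, which needs local windowed statistics, is omitted); the HU data range assumed for PSNR is illustrative.

```python
# Image-quality metrics commonly used to score synthetic CT against real CT.
import numpy as np

def sct_metrics(ct, sct, data_range=2000.0):
    ct, sct = ct.astype(np.float64), sct.astype(np.float64)
    mae = np.mean(np.abs(ct - sct))                     # mean absolute error
    mse = np.mean((ct - sct) ** 2)                      # mean squared error
    psnr = 10.0 * np.log10(data_range ** 2 / mse)       # peak signal-to-noise
    pcc = np.corrcoef(ct.ravel(), sct.ravel())[0, 1]    # Pearson correlation
    return mae, mse, psnr, pcc

rng = np.random.default_rng(0)
ct = rng.normal(0, 300, size=(64, 64))                  # toy HU-like slice
sct = ct + rng.normal(0, 30, size=(64, 64))
print("MAE %.1f  MSE %.1f  PSNR %.1f dB  PCC %.3f" % sct_metrics(ct, sct))
```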
Affiliation(s)
- Amit Ranjan
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India.
| | - Debanshu Lalwani
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India
| | - Rajiv Misra
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India
| |
|
42
|
Liu M, Zou W, Wang W, Jin CB, Chen J, Piao C. Multi-Conditional Constraint Generative Adversarial Network-Based MR Imaging from CT Scan Data. SENSORS 2022; 22:s22114043. [PMID: 35684665 PMCID: PMC9185366 DOI: 10.3390/s22114043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 05/19/2022] [Accepted: 05/24/2022] [Indexed: 11/20/2022]
Abstract
Magnetic resonance (MR) imaging is an important computer-aided diagnosis technique with rich pathological information. However, physical and physiological constraints seriously limit its applicability, whereas computed tomography (CT)-based radiotherapy is more popular on account of its rapid imaging and simpler operating environment. It is therefore of great theoretical and practical significance to design a method that can construct an MR image from the corresponding CT image. In this paper, we treat MR imaging as a machine vision problem and propose a multi-conditional constraint generative adversarial network (GAN) for MR imaging from CT scan data. Considering the reversibility of GANs, both a generator and a reverse generator are designed, for MR and CT imaging respectively, so that they constrain each other and improve the consistency between the features of the CT and MR images. In addition, we treat the discrimination between real and generated MR images as an object re-identification task; a cosine error fused with the original GAN loss is designed to enhance the verisimilitude and textural features of the MR image. Experimental results on a challenging public CT-MR image dataset show a distinct performance improvement over other GANs utilized in medical imaging and demonstrate the effectiveness of our method for medical image modality transformation.
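A minimal sketch of the loss design described above: a standard adversarial term fused with a cosine re-identification error that pulls an embedding of the generated MR image toward its paired real image. The embedding network and fusion weight here are illustrative assumptions.

```python
# Adversarial loss fused with a cosine "re-identification" term.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))  # toy embedder

def generator_loss(d_fake_logits, real_mr, fake_mr, lam=1.0):
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))   # fool the discriminator
    cos = 1.0 - F.cosine_similarity(
        embed(real_mr), embed(fake_mr), dim=1).mean()    # re-ID style term
    return adv + lam * cos

real = torch.randn(4, 1, 64, 64)     # toy paired real MR slices
fake = torch.randn(4, 1, 64, 64)     # toy generator outputs
logits = torch.randn(4, 1)           # toy discriminator outputs on fakes
print(float(generator_loss(logits, real, fake)))
```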
Affiliation(s)
- Mingjie Liu
- Automation School, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; (M.L.); (W.Z.); (W.W.); (J.C.)
| | - Wei Zou
- Automation School, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; (M.L.); (W.Z.); (W.W.); (J.C.)
| | - Wentao Wang
- Automation School, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; (M.L.); (W.Z.); (W.W.); (J.C.)
| | | | - Junsheng Chen
- Automation School, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; (M.L.); (W.Z.); (W.W.); (J.C.)
| | - Changhao Piao
- Automation School, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; (M.L.); (W.Z.); (W.W.); (J.C.)
- Correspondence: ; Tel.: +86-138-8399-7871
| |
|
43
|
Deng L, Hu J, Wang J, Huang S, Yang X. Synthetic CT generation based on CBCT using respath-cycleGAN. Med Phys 2022; 49:5317-5329. [PMID: 35488299 DOI: 10.1002/mp.15684] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 04/08/2022] [Accepted: 04/13/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Cone-beam computed tomography (CBCT) plays an important role in radiotherapy, but the presence of a large number of artifacts limits its application. The purpose of this study was to use respath-cycleGAN to synthesize CT (sCT) similar to planning CT (pCT) from CBCT for future clinical practice. METHODS The method integrates the respath concept into the original cycleGAN, called respath-cycleGAN, to map CBCT to pCT. Thirty patients were used for training and 15 for testing. RESULTS The mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and spatial non-uniformity (SNU) were calculated to assess the quality of sCT generated from CBCT. Compared with CBCT images, the MAE improved from 197.72 to 140.7, the RMSE from 339.17 to 266.51, and the PSNR from 22.07 to 24.44, while the SSIM increased from 0.948 to 0.964. Both visually and quantitatively, sCT with respath was superior to sCT without respath. We also performed a generalization test of the head-and-neck (H&N) model on a pelvic dataset; the results again showed that our model was superior. CONCLUSION We developed a respath-cycleGAN method to synthesize CT of good quality from CBCT. In future clinical practice, this method may be used to develop radiotherapy plans.
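The "respath" concept is presumably the ResPath of MultiResUNet-style architectures, in which a plain skip connection is replaced by a short chain of residually connected convolutions. Below is a minimal PyTorch sketch under that assumption; the depth and widths are illustrative, not the paper's configuration.

```python
# ResPath-style skip connection: a chain of residual 3x3 + 1x1 conv stages.
import torch
import torch.nn as nn

class ResPathSketch(nn.Module):
    def __init__(self, channels=32, length=4):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.ModuleDict({
                "conv3": nn.Conv2d(channels, channels, 3, padding=1),
                "conv1": nn.Conv2d(channels, channels, 1),   # residual 1x1 path
            }) for _ in range(length)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        for stage in self.stages:
            x = self.act(stage["conv3"](x) + stage["conv1"](x))
        return x

skip = torch.randn(1, 32, 128, 128)            # toy encoder feature map
print(ResPathSketch()(skip).shape)             # torch.Size([1, 32, 128, 128])
```

The extra processing narrows the semantic gap between encoder and decoder features before they are concatenated.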
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
| | - Jie Hu
- School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
| | - Jing Wang
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, Guangdong, 510520, China
| | - Sijuan Huang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
| | - Xin Yang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
| |
|
44
|
Jabbarpour A, Mahdavi SR, Vafaei Sadr A, Esmaili G, Shiri I, Zaidi H. Unsupervised pseudo CT generation using heterogenous multicentric CT/MR images and CycleGAN: Dosimetric assessment for 3D conformal radiotherapy. Comput Biol Med 2022; 143:105277. [PMID: 35123139 DOI: 10.1016/j.compbiomed.2022.105277] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2021] [Revised: 01/09/2022] [Accepted: 01/27/2022] [Indexed: 11/23/2022]
Abstract
PURPOSE Absorbed dose calculation in magnetic resonance-guided radiation therapy (MRgRT) is commonly based on pseudo CT (pCT) images. This study investigated the feasibility of unsupervised pCT generation from MRI using a cycle generative adversarial network (CycleGAN) and a heterogeneous multicentric dataset. A dosimetric analysis in three-dimensional conformal radiotherapy (3DCRT) planning was also performed. MATERIAL AND METHODS Overall, 87 T1-weighted and 102 T2-weighted MR images, along with their corresponding computed tomography (CT) images, of brain cancer patients from multiple centers were used. Initially, the images underwent a number of preprocessing steps, including rigid registration, a novel CT Masker, N4 bias field correction, resampling, resizing, and rescaling. To overcome the vanishing-gradient problem, residual blocks were utilized in the generator and a mean squared error (MSE) loss function in both networks (generator and discriminator). The CycleGAN was trained and validated using 70 T1 and 80 T2 randomly selected patients in an unsupervised manner. The remaining patients were used as a holdout test set to report the final evaluation metrics. The generated pCTs were validated in the context of 3DCRT. RESULTS The CycleGAN model using masked T2 images achieved the best performance, with a mean absolute error (MAE) of 61.87 ± 22.58 HU, a peak signal-to-noise ratio (PSNR) of 27.05 ± 2.25 dB, and a structural similarity index metric (SSIM) of 0.84 ± 0.05 on the test dataset. The dosimetric assessment on T1-weighted MR images revealed gamma index passing rates of 98.96 ± 1.1%, 95 ± 3.68%, and 90.1 ± 6.05% for acceptance criteria of 3%/3 mm, 2%/2 mm, and 1%/1 mm, respectively. The DVH differences between CTs and pCTs were within 2%. CONCLUSIONS A promising pCT generation model capable of handling heterogeneous multicentric datasets was proposed. All MR sequences performed competitively, with no significant difference in pCT generation. The proposed CT Masker proved promising in improving the model accuracy and robustness. There was no significant difference between using T1-weighted and T2-weighted MR images for pCT generation.
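Of the preprocessing steps listed, N4 bias field correction is the most library-specific. Below is a minimal SimpleITK sketch of that single step, following the library's standard recipe (Otsu foreground mask, multi-level fitting); the file names and iteration counts are assumptions, and this is not the paper's full pipeline.

```python
# N4 bias field correction of an MR volume with SimpleITK.
import SimpleITK as sitk

# Hypothetical input path; N4 requires a floating-point image.
mr = sitk.Cast(sitk.ReadImage("t2_brain.nii.gz"), sitk.sitkFloat32)
head_mask = sitk.OtsuThreshold(mr, 0, 1, 200)      # crude foreground mask

corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrector.SetMaximumNumberOfIterations([50] * 4)   # 50 iterations x 4 levels
corrected = corrector.Execute(mr, head_mask)       # bias-corrected volume

sitk.WriteImage(corrected, "t2_brain_n4.nii.gz")
```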
Affiliation(s)
- Amir Jabbarpour
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
| | - Seied Rabi Mahdavi
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, Iran; Radiation Biology Research Center, Iran University of Medical Sciences, Tehran, Iran.
| | - Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Theoretical Physics and Center for Astroparticle Physics, Geneva University, Geneva, Switzerland
| | | | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
| |
|
45
|
Wang X, Jian W, Zhang B, Zhu L, He Q, Jin H, Yang G, Cai C, Meng H, Tan X, Li F, Dai Z. Synthetic CT generation from cone-beam CT using deep-learning for breast adaptive radiotherapy. JOURNAL OF RADIATION RESEARCH AND APPLIED SCIENCES 2022. [DOI: 10.1016/j.jrras.2022.03.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
|
46
|
Wang C, Uh J, Patni T, Merchant T, Li Y, Hua CH, Acharya S. Toward MR-only proton therapy planning for pediatric brain tumors: synthesis of relative proton stopping power images with multiple sequence MRI and development of an online quality assurance tool. Med Phys 2022; 49:1559-1570. [PMID: 35075670 DOI: 10.1002/mp.15479] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 12/23/2021] [Accepted: 01/11/2022] [Indexed: 11/11/2022] Open
Abstract
PURPOSE To generate synthetic relative proton-stopping-power (sRPSP) images from MRI sequence(s) and develop an online quality assurance (QA) tool for sRPSP to facilitate safe integration of MR-only proton planning into clinical practice. MATERIALS AND METHODS Planning CT and MR images of 195 pediatric brain tumor patients were utilized (training: 150, testing: 45). Seventeen consistent-cycle generative adversarial network (ccGAN) models were trained separately using paired CT-converted RPSP and MRI datasets to transform a subject's MRI into sRPSP. T1-weighted (T1W), T2-weighted (T2W), and FLAIR MRI were permuted to form 17 combinations, with or without preprocessing, to determine the optimal training sequence(s). For evaluation, sRPSP images were converted to synthetic CT (sCT) and compared to the real CT in terms of mean absolute error (MAE) in HU. For QA, the sCT was deformed and compared to a reference template built from the training dataset to produce a flag map highlighting pixels that deviate by >100 HU and fall outside the mean ± standard deviation reference intensity. Gamma analysis (10%/3 mm) of the intensity difference between the deformed sCT and the QA template was investigated as a surrogate of sCT accuracy. RESULTS The sRPSP images generated from a single T1W or T2W sequence outperformed those generated from multiple MRI sequences in terms of MAE (all P < 0.05). Preprocessing with N4 bias correction and histogram matching reduced the MAE of T2W MRI-based sCT (54 ± 21 HU vs. 42 ± 13 HU, P = 0.002). The gamma analysis of sCT against the QA template was highly correlated with the MAE of sCT against the real CT in the testing cohort (r = -0.89 for T1W sCT; r = -0.93 for T2W sCT). CONCLUSION Accurate sRPSP images can be generated from T1W/T2W MRI for proton planning. A QA tool highlights regions of inaccuracy, flagging problematic cases unsuitable for clinical use.
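The QA flag map described above reduces to a simple voxel-wise test. A minimal NumPy sketch, assuming the deformed sCT and the template statistics are already spatially aligned arrays (the deformable registration step is omitted); array names are assumptions.

```python
# Flag voxels deviating >100 HU from the template mean and outside mean +/- SD.
import numpy as np

def qa_flag_map(deformed_sct, template_mean, template_std, hu_tol=100.0):
    diff = np.abs(deformed_sct - template_mean)
    outside_band = diff > template_std      # beyond the mean +/- SD reference
    return (diff > hu_tol) & outside_band   # boolean flag map

rng = np.random.default_rng(0)
mean = rng.normal(0, 200, size=(32, 32, 32))     # toy template statistics
std = np.full_like(mean, 60.0)
sct = mean + rng.normal(0, 80, size=mean.shape)  # toy deformed sCT
flags = qa_flag_map(sct, mean, std)
print(f"{flags.mean():.1%} of voxels flagged")
```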
Affiliation(s)
- Chuang Wang
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States Of America
| | - Jinsoo Uh
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States Of America
| | - Tushar Patni
- Department of Biostatistics, St. Jude Children's Research Hospital, Memphis, TN, United States Of America
| | - Thomas Merchant
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States Of America
| | - Yimei Li
- Department of Biostatistics, St. Jude Children's Research Hospital, Memphis, TN, United States Of America
| | - Chia-Ho Hua
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States Of America
| | - Sahaja Acharya
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States Of America.,Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins Medicine, Baltimore, MD, United States Of America
| |
|
47
|
Glioma segmentation of optimized 3D U-net and prediction of multi-modal survival time. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06351-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
48
|
Hu S, Lei B, Wang S, Wang Y, Feng Z, Shen Y. Bidirectional Mapping Generative Adversarial Networks for Brain MR to PET Synthesis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:145-157. [PMID: 34428138 DOI: 10.1109/tmi.2021.3107013] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Fusing multi-modality medical images, such as magnetic resonance (MR) imaging and positron emission tomography (PET), can provide various anatomical and functional information about the human body. However, PET data are not always available, owing to factors such as high cost, radiation hazard, and other limitations. This paper proposes a 3D end-to-end synthesis network called Bidirectional Mapping Generative Adversarial Networks (BMGAN), in which image contexts and latent vectors are effectively used for brain MR-to-PET synthesis. Specifically, a bidirectional mapping mechanism is designed to embed the semantic information of PET images into the high-dimensional latent space. Moreover, a 3D Dense-UNet generator architecture and hybrid loss functions are further constructed to improve the visual quality of the cross-modality synthetic images. Notably, the proposed method can synthesize perceptually realistic PET images while preserving the diverse brain structures of different subjects. Experimental results demonstrate that the proposed method outperforms other competitive methods in terms of quantitative measures, qualitative displays, and evaluation metrics for classification.
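As a rough illustration of the bidirectional-mapping idea, the sketch below embeds real PET volumes into a latent space with an encoder and conditions a toy generator on the MR input plus that latent code, so the two mappings can be tied together during training. All modules and shapes are toy assumptions; this is not the published BMGAN architecture.

```python
# Toy bidirectional mapping: PET -> latent code, (MR, latent) -> synthetic PET.
import torch
import torch.nn as nn

latent_dim = 16
encoder = nn.Sequential(nn.Flatten(), nn.Linear(8 ** 3, latent_dim))   # PET -> z
generator = nn.Sequential(nn.Linear(8 ** 3 + latent_dim, 8 ** 3))      # (MR, z) -> PET

mr = torch.randn(2, 1, 8, 8, 8)      # toy MR volumes
pet = torch.randn(2, 1, 8, 8, 8)     # toy paired PET volumes
z = encoder(pet)                                         # embed PET semantics
fake_pet = generator(torch.cat([mr.flatten(1), z], 1)).view_as(pet)
print(fake_pet.shape)  # torch.Size([2, 1, 8, 8, 8])
```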
|
49
|
Ding M, Pan SY, Huang J, Yuan C, Zhang Q, Zhu XL, Cai Y. Optical coherence tomography for identification of malignant pulmonary nodules based on random forest machine learning algorithm. PLoS One 2021; 16:e0260600. [PMID: 34971557 PMCID: PMC8719667 DOI: 10.1371/journal.pone.0260600] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 11/14/2021] [Indexed: 11/24/2022] Open
Abstract
OBJECTIVE To explore the feasibility of using a random forest (RF) machine learning algorithm to distinguish normal from malignant peripheral pulmonary nodules based on in vivo endobronchial optical coherence tomography (EB-OCT). METHODS A total of 31 patients with pulmonary nodules were admitted to the Department of Respiratory Medicine, Zhongda Hospital, Southeast University, and underwent chest CT, EB-OCT and biopsy. The attenuation coefficient and up to 56 image features were extracted from the A-lines and B-scans of 1703 EB-OCT images. The attenuation coefficient and the 29 image features with significant p-values were used to analyze the differences between normal and malignant samples. An RF classifier was trained using 70% of the images as the training set, while the remaining 30% formed the testing set. The accuracy of the automated classification was validated against clinically proven pathological results. RESULTS The attenuation coefficient and 29 image features differed significantly between normal and malignant EB-OCT images. The RF algorithm classified the malignant pulmonary nodules with a sensitivity, specificity, and accuracy of 90.41%, 77.87% and 83.51%, respectively. CONCLUSION It is clinically practical to distinguish the nature of pulmonary nodules by integrating EB-OCT imaging with an automated machine learning algorithm. Diagnosing malignant pulmonary nodules by analyzing quantitative features from EB-OCT images could be a powerful way to detect lung cancer early.
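A minimal scikit-learn sketch of the classification protocol: a random forest on per-image feature vectors with a 70/30 split, scored by sensitivity, specificity, and accuracy. Synthetic features stand in for the EB-OCT attenuation coefficient and texture features, and the forest size is an illustrative assumption.

```python
# Random forest with a 70/30 split, scored by sensitivity/specificity/accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1703, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}  specificity={tn / (tn + fp):.3f}  "
      f"accuracy={(tp + tn) / (tp + tn + fp + fn):.3f}")
```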
Affiliation(s)
- Ming Ding
- Department of Respiratory Medicine, Southeast University Zhongda Hospital, Nanjing, Jiangsu, China
| | - Shi-yu Pan
- School of Biological Sciences and Medical Engineering, Southeast University, Nanjing, Jiangsu, China
| | - Jing Huang
- Department of Respiratory Medicine, Southeast University Zhongda Hospital, Nanjing, Jiangsu, China
| | - Cheng Yuan
- Department of Respiratory Medicine, Southeast University Zhongda Hospital, Nanjing, Jiangsu, China
| | - Qiang Zhang
- Department of Respiratory Medicine, Southeast University Zhongda Hospital, Nanjing, Jiangsu, China
| | - Xiao-li Zhu
- Department of Respiratory Medicine, Southeast University Zhongda Hospital, Nanjing, Jiangsu, China
| | - Yan Cai
- School of Biological Sciences and Medical Engineering, Southeast University, Nanjing, Jiangsu, China
| |
|
50
|
Lei Y, Wang T, Dong X, Tian S, Liu Y, Mao H, Curran WJ, Shu HK, Liu T, Yang X. MRI classification using semantic random forest with auto-context model. Quant Imaging Med Surg 2021; 11:4753-4766. [PMID: 34888187 PMCID: PMC8611460 DOI: 10.21037/qims-20-1114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Accepted: 04/28/2021] [Indexed: 11/06/2022]
Abstract
BACKGROUND It is challenging to differentiate air and bone on MR images acquired with conventional sequences due to their low contrast. We propose to incorporate semantic feature extraction, in an auto-context manner, into a random forest to improve the reliability of MRI segmentation for MRI-based radiotherapy treatment planning or PET attenuation correction. METHODS We applied a semantic classification random forest (SCRF) method that consists of a training stage and a segmentation stage. In the training stage, patch-based MRI features were extracted from registered MRI-CT training images, and the most informative elements were selected via feature selection to train an initial random forest. The remaining random forests in the sequence were trained on a combination of MRI features and semantic features in an auto-context manner. During segmentation, the MRI patches were first fed into these random forests to derive a patch-based segmentation; by patch fusion, the final end-to-end segmentation was obtained. RESULTS The Dice similarity coefficients (DSC) for the air, bone and soft tissue classes obtained via the proposed method were 0.976 ± 0.007, 0.819 ± 0.050 and 0.932 ± 0.031, compared to 0.916 ± 0.099, 0.673 ± 0.151 and 0.830 ± 0.083 with random forest (RF), and 0.942 ± 0.086, 0.791 ± 0.046 and 0.917 ± 0.033 with U-Net. SCRF also outperformed the competing methods in sensitivity and specificity for all three structure types. CONCLUSIONS The proposed method accurately segmented bone, air and soft tissue. It is promising for facilitating advanced MR applications in diagnosis and therapy.
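A minimal scikit-learn sketch of the auto-context cascade, assuming flattened patch features: each forest after the first is trained on the original features concatenated with the previous forest's class probabilities (the "semantic" features). Patch extraction and fusion are omitted, and the data are synthetic.

```python
# Auto-context cascade of random forests on toy three-class data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)

feats = X
for stage in range(3):                      # initial forest + 2 context stages
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats, y)
    print(f"stage {stage} training accuracy: {rf.score(feats, y):.3f}")
    proba = rf.predict_proba(feats)         # semantic (context) features
    feats = np.hstack([X, proba])           # original + context for next stage
```

In the actual method these probability maps are computed per voxel patch, so later forests can exploit the spatial label context around each patch.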
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Hui-Kuo Shu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| |
|