1. Fatima N, Mento F, Afrakhteh S, Perrone T, Smargiassi A, Inchingolo R, Demi L. Synthetic Lung Ultrasound Data Generation Using Autoencoder With Generative Adversarial Network. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2025;72:624-635. [PMID: 40146656] [DOI: 10.1109/tuffc.2025.3555447]
Abstract
Class imbalance is a significant challenge in medical image analysis, particularly in lung ultrasound (LUS), where severe patterns are often underrepresented. Traditional oversampling techniques, which simply duplicate original data, have limited effectiveness in addressing this issue. To overcome these limitations, this study introduces a novel supervised autoencoder generative adversarial network (SA-GAN) for data augmentation, leveraging advanced generative artificial intelligence (AI) to create high-quality synthetic samples for minority classes. In addition, traditional data augmentation is used for comparison. The SA-GAN incorporates an autoencoder to develop a conditional latent space, effectively addressing weight-clipping issues and ensuring higher-quality synthetic data. The generated samples are evaluated using similarity metrics and expert analysis to validate their utility. Furthermore, state-of-the-art neural networks are used for multiclass classification, and their performance is compared when trained with GAN-based augmentation versus traditional data augmentation techniques. These contributions enhance the robustness and reliability of AI models in mitigating class imbalance in LUS analysis.
2. Dong Y, Wang P, Geng H, Liu Y, Wang E. Ultrasound and advanced imaging techniques in prostate cancer diagnosis: A comparative study of mpMRI, TRUS, and PET/CT. Journal of X-Ray Science and Technology 2025;33:436-447. [PMID: 39973788] [DOI: 10.1177/08953996241304988]
Abstract
Objective: This study aims to assess and compare the diagnostic performance of three advanced imaging modalities, multiparametric magnetic resonance imaging (mpMRI), transrectal ultrasound (TRUS), and positron emission tomography/computed tomography (PET/CT), in detecting prostate cancer in patients with elevated PSA levels and abnormal DRE findings. Methods: A retrospective analysis was conducted on 150 male patients aged 50-75 years with elevated PSA and abnormal DRE. The diagnostic accuracy of each modality was assessed through sensitivity, specificity, and the area under the curve (AUC) to compare performance in detecting clinically significant prostate cancer (Gleason score ≥ 7). Results: mpMRI demonstrated the highest diagnostic performance, with a sensitivity of 90%, specificity of 85%, and AUC of 0.92, outperforming both TRUS (sensitivity 76%, specificity 78%, AUC 0.77) and PET/CT (sensitivity 82%, specificity 80%, AUC 0.81). mpMRI detected clinically significant tumors in 80% of cases. Although TRUS and PET/CT had similar detection rates for significant tumors, their overall accuracy was lower. Minor adverse events occurred in 5% of patients undergoing TRUS, while no significant complications were associated with mpMRI or PET/CT. Conclusion: These findings suggest that mpMRI is the most reliable imaging modality for early detection of clinically significant prostate cancer. It reduces the need for unnecessary biopsies and optimizes patient management.
Affiliation(s)
- Ying Dong, Department of Radiology, Beijing Renhe Hospital, Beijing, China
- Peng Wang, Department of Imaging Diagnostic, Binzhou Hospital of Traditional Chinese Medicine, Binzhou City, China
- Hua Geng, Department of Oncology, Binzhou Hospital of Traditional Chinese Medicine, Binzhou City, China
- Yankun Liu, Department of Medical Imaging Center, Central Hospital Affiliated to Shandong First Medical University, Jinan City, China
- Enguo Wang, Department of Medical Imaging Center, Central Hospital Affiliated to Shandong First Medical University, Jinan City, China
3. Pedersen S, Jain S, Chavez M, Ladehoff V, de Freitas BN, Pauwels R. Pano-GAN: A Deep Generative Model for Panoramic Dental Radiographs. J Imaging 2025;11:41. [PMID: 39997543] [PMCID: PMC11856485] [DOI: 10.3390/jimaging11020041]
Abstract
This paper presents the development of a generative adversarial network (GAN) for the generation of synthetic dental panoramic radiographs. While this is an exploratory study, the ultimate aim is to address the scarcity of data in dental research and education. A deep convolutional GAN (DCGAN) with the Wasserstein loss and a gradient penalty (WGAN-GP) was trained on a dataset of 2322 radiographs of varying quality. The focus of this study was on the dentoalveolar part of the radiographs; other structures were cropped out. Significant data cleaning and preprocessing were conducted to standardize the input formats while maintaining anatomical variability. Four candidate models were identified by varying the number of critic iterations, the number of features, and the use of denoising prior to training. To assess the quality of the generated images, a clinical expert evaluated a set of synthetic radiographs using a rating system based on visibility and realism, with scores ranging from 1 (very poor) to 5 (excellent). It was found that most generated radiographs showed moderate depictions of dentoalveolar anatomical structures, although they were considerably impaired by artifacts. The mean evaluation scores revealed a trade-off between the model trained on non-denoised data, which showed the highest subjective quality for finer structures such as the mandibular canal and trabecular bone, and one of the models trained on denoised data, which offered better overall image quality, especially in terms of clarity, sharpness, and realism. These outcomes serve as a foundation for further research into GAN architectures for dental imaging applications.
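The WGAN-GP objective mentioned above replaces weight clipping with a gradient penalty on the critic. Below is a minimal PyTorch sketch of that standard penalty term, not code from the paper; the toy critic, the 64×64 image size, and the penalty weight of 10 are illustrative assumptions.

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP term: penalize deviation of the critic's gradient norm from 1
    on random interpolates between real and generated radiographs."""
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolates = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interpolates)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

if __name__ == "__main__":
    # Toy critic and random stand-ins for 64x64 radiograph crops.
    critic = torch.nn.Sequential(
        torch.nn.Conv2d(1, 8, 4, 2, 1), torch.nn.LeakyReLU(0.2),
        torch.nn.Flatten(), torch.nn.LazyLinear(1))
    real, fake = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
    gp = gradient_penalty(critic, real, fake)
    # Critic loss with the commonly used penalty weight of 10 (an assumption).
    d_loss = critic(fake).mean() - critic(real).mean() + 10.0 * gp
    print(float(d_loss))
```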
Affiliation(s)
- Søren Pedersen, Bachelor’s Degree Programme in Data Science, Aarhus University, Nordre Ringgade 1, 8000 Aarhus, Denmark
- Sanyam Jain, Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard 9, 8000 Aarhus, Denmark
- Mikkel Chavez, Bachelor’s Degree Programme in Data Science, Aarhus University, Nordre Ringgade 1, 8000 Aarhus, Denmark
- Viktor Ladehoff, Bachelor’s Degree Programme in Data Science, Aarhus University, Nordre Ringgade 1, 8000 Aarhus, Denmark
- Bruna Neves de Freitas, Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard 9, 8000 Aarhus, Denmark; Aarhus Institute of Advanced Studies, Aarhus University, Høegh-Guldbergs Gade 6B, 8000 Aarhus, Denmark
- Ruben Pauwels, Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard 9, 8000 Aarhus, Denmark
4. Fountzilas E, Pearce T, Baysal MA, Chakraborty A, Tsimberidou AM. Convergence of evolving artificial intelligence and machine learning techniques in precision oncology. NPJ Digit Med 2025;8:75. [PMID: 39890986] [PMCID: PMC11785769] [DOI: 10.1038/s41746-025-01471-y]
Abstract
The confluence of new technologies with artificial intelligence (AI) and machine learning (ML) analytical techniques is rapidly advancing the field of precision oncology, promising to improve diagnostic approaches and therapeutic strategies for patients with cancer. By analyzing multi-dimensional, multiomic, spatial pathology, and radiomic data, these technologies enable a deeper understanding of the intricate molecular pathways, aiding in the identification of critical nodes within the tumor's biology to optimize treatment selection. The applications of AI/ML in precision oncology are extensive and include the generation of synthetic data, e.g., digital twins, in order to provide the necessary information to design or expedite the conduct of clinical trials. Currently, many operational and technical challenges exist related to data technology, engineering, and storage; algorithm development and structures; quality and quantity of the data and the analytical pipeline; data sharing and generalizability; and the incorporation of these technologies into the current clinical workflow and reimbursement models.
Affiliation(s)
- Elena Fountzilas, Department of Medical Oncology, St Luke's Clinic, Panorama, Thessaloniki, Greece
- Mehmet A Baysal, Department of Investigational Cancer Therapeutics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Houston, TX, USA
- Abhijit Chakraborty, Department of Investigational Cancer Therapeutics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Houston, TX, USA
- Apostolia M Tsimberidou, Department of Investigational Cancer Therapeutics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Houston, TX, USA
5. Vellini L, Quaranta F, Menna S, Pilloni E, Catucci F, Lenkowicz J, Votta C, Aquilano M, D’Aviero A, Iezzi M, Preziosi F, Re A, Boschetti A, Piccari D, Piras A, Di Dio C, Bombini A, Mattiucci GC, Cusumano D. A deep learning algorithm to generate synthetic computed tomography images for brain treatments from 0.35 T magnetic resonance imaging. Phys Imaging Radiat Oncol 2025;33:100708. [PMID: 39958708] [PMCID: PMC11830347] [DOI: 10.1016/j.phro.2025.100708]
Abstract
Background and Purpose: The development of magnetic resonance imaging (MRI)-only radiotherapy (RT) represents a significant advancement in the field. This study introduces a deep learning (DL) algorithm designed to quickly generate synthetic CT (sCT) images from low-field MR images in the brain, an area not yet explored. Methods: Fifty-six patients were divided into training (32), validation (8), and test (16) groups. A conditional generative adversarial network (cGAN) was trained on pre-processed axial paired images. sCTs were validated using the mean absolute error (MAE) and mean error (ME) calculated within the patient body. Intensity-modulated radiation therapy (IMRT) plans were optimised on the simulation MRI and calculated using the sCT and the original CT as the electron density (ED) map. Dose distributions from sCT and CT were compared using global gamma analysis at different tolerance criteria (2%/2mm and 3%/3mm) and by evaluating differences in dose-volume histogram (DVH) parameters for the target and organs at risk (OARs). Results: The network generated the sCT of each patient in less than two minutes (mean time = 103 ± 41 s). For test patients, the MAE was 62.1 ± 17.7 HU, and the ME was -7.3 ± 13.4 HU. Dose parameters on sCTs were within 0.5 Gy of those on the original CTs. Gamma passing rates for the 2%/2mm and 3%/3mm criteria were 99.5% ± 0.5% and 99.7% ± 0.3%, respectively. Conclusion: The proposed DL algorithm generates accurate brain sCT images in less than 2 min for online adaptive radiotherapy, potentially eliminating the need for CT simulation in MR-only workflows for brain treatments.
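As a point of reference for the MAE and ME figures quoted above, the sketch below shows one plausible way to compute these two error measures between a synthetic and a reference CT inside a body mask; the array shapes and random test data are illustrative only.

```python
import numpy as np

def sct_hu_errors(sct_hu, ct_hu, body_mask):
    """Mean absolute error and mean error (in HU) between synthetic and
    reference CT, restricted to voxels inside the patient body mask."""
    diff = sct_hu[body_mask] - ct_hu[body_mask]
    return float(np.abs(diff).mean()), float(diff.mean())

# Toy volumes standing in for co-registered CT / sCT arrays loaded from DICOM or NIfTI.
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 200.0, size=(32, 64, 64))
sct = ct + rng.normal(0.0, 60.0, size=ct.shape)   # synthetic CT with residual error
mask = np.ones(ct.shape, dtype=bool)              # a real body contour would go here
mae, me = sct_hu_errors(sct, ct, mask)
print(f"MAE = {mae:.1f} HU, ME = {me:.1f} HU")
```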
Affiliation(s)
- Jacopo Lenkowicz, Fondazione Policlinico Gemelli Agostino Gemelli IRCCS, Roma, Italy
- Claudio Votta, Fondazione Policlinico Gemelli Agostino Gemelli IRCCS, Roma, Italy
- Andrea D’Aviero, Department of Medical, Oral and Biotechnological Sciences, “Gabriele D’Annunzio” Università di Chieti, Italy; Department of Radiation Oncology, “S.S. Annunziata”, Chieti Hospital, Italy
- Alessia Re, Mater Olbia Hospital, Olbia, Sassari, Italy
- Antonio Piras, UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Alessandro Bombini, Istituto Nazionale di Fisica Nucleare (INFN), Sesto Fiorentino (FI), Italy; ICSC - Centro Nazionale di Ricerca in High Performance Computing, Big Data & Quantum Computing, Casalecchio di Reno (BO), Italy
- Gian Carlo Mattiucci, Mater Olbia Hospital, Olbia, Sassari, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
6. Luo Y, Yang Q, Fan Y, Qi H, Xia M. Measurement Guidance in Diffusion Models: Insight from Medical Image Synthesis. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024;46:7983-7997. [PMID: 38743550] [DOI: 10.1109/tpami.2024.3399098]
Abstract
In the field of healthcare, the acquisition of samples is usually restricted by multiple considerations, including cost, labor-intensive annotation, privacy concerns, and radiation hazards; synthesizing images of interest is therefore an important tool for data augmentation. Diffusion models have recently attained state-of-the-art results in various synthesis tasks, and embedding energy functions has been shown to effectively guide a pre-trained model to synthesize target samples. However, we notice that current method development and validation are still limited to improving indicators such as the Fréchet Inception Distance (FID) and Inception Score (IS), and have not provided deeper investigations of downstream tasks such as disease grading and diagnosis. Moreover, existing classifier guidance, which can be regarded as a special case of an energy function, can only have a singular effect on altering the distribution of the synthetic dataset. This may yield in-distribution synthetic samples that are of limited help to downstream model optimization. All these limitations indicate that we still have a long way to go to achieve controllable generation. In this work, we first conducted an analysis of previous guidance, as well as its contribution to further applications, from the perspective of data distribution. To synthesize samples that can help downstream applications, we then introduce uncertainty guidance in each sampling step and design an uncertainty-guided diffusion model. Extensive experiments on four medical datasets, with ten classic networks trained on the augmented sample sets, provide a comprehensive evaluation of the practical contributions of our methodology. Furthermore, we provide a theoretical guarantee for general gradient guidance in diffusion models, which would benefit future research on investigating other forms of measurement guidance for specific generative tasks.
7. Bicer M, Phillips ATM, Melis A, McGregor AH, Modenese L. Generative adversarial networks to create synthetic motion capture datasets including subject and gait characteristics. J Biomech 2024;177:112358. [PMID: 39509807] [DOI: 10.1016/j.jbiomech.2024.112358]
Abstract
Resource-intensive motion capture (mocap) systems challenge predictive deep learning applications, requiring large and diverse datasets. We tackled this by modifying generative adversarial networks (GANs) into conditional GANs (cGANs) that can generate diverse mocap data, including 15 marker trajectories, lower limb joint angles, and 3D ground reaction forces (GRFs), based on specified subject and gait characteristics. The cGAN comprised 1) an encoder compressing mocap data to a latent vector, 2) a decoder reconstructing the mocap data from the latent vector with specific conditions, and 3) a discriminator distinguishing random vectors with conditions from encoded latent vectors with conditions. Single-conditional models were trained separately for age, sex, leg length, mass, and walking speed, while an additional model (Multi-cGAN) combined all conditions simultaneously to generate synthetic data. All models closely replicated the training dataset (<8.1% of the gait cycle differed between experimental and synthetic kinematics and GRFs), while a subset with narrow condition ranges was best replicated by the Multi-cGAN, producing similar kinematics (<1°) and GRFs (<0.02 body weight) averaged by walking speeds. The Multi-cGAN also generated synthetic datasets and results for three previous studies using the reported mean and standard deviation of subject and gait characteristics. Additionally, unseen test data were best predicted by the walking-speed-conditional model, showcasing synthetic data diversity. The same model also matched the dynamical consistency of the experimental data (32% average difference throughout the gait cycle), meaning that transforming the gait cycle data to the original time domain yielded accurate derivative calculations. Importantly, synthetic data pose no privacy concerns, potentially facilitating data sharing.
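The encoder-decoder-discriminator arrangement described above resembles a conditional adversarial autoencoder. The PyTorch sketch below illustrates that general structure under assumed sizes; the layer widths, latent dimension, feature layout, and example condition values are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

LATENT, COND, FEAT = 32, 5, 90   # assumed latent size, number of conditions, flattened gait features

class Encoder(nn.Module):
    """Compress one (flattened) mocap gait cycle to a latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT, 128), nn.ReLU(), nn.Linear(128, LATENT))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct the gait cycle from a latent vector plus the conditions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT + COND, 128), nn.ReLU(), nn.Linear(128, FEAT))
    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    """Score whether a (vector, conditions) pair looks like a random prior sample or an encoded one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT + COND, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

# Synthesis after training: sample a random latent and decode it with the desired characteristics.
decoder = Decoder()
z = torch.randn(1, LATENT)
cond = torch.tensor([[60.0, 1.0, 0.9, 75.0, 1.3]])  # hypothetical age, sex, leg length (m), mass (kg), speed (m/s)
synthetic_cycle = decoder(z, cond)
print(synthetic_cycle.shape)
```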
Affiliation(s)
- Metin Bicer, Department of Civil and Environmental Engineering, Imperial College London, London, UK; Faculty of Sport Sciences, Hacettepe University, Ankara, Türkiye; Translational and Clinical Research Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
- Andrew T M Phillips, Department of Civil and Environmental Engineering, Imperial College London, London, UK
- Alison H McGregor, Department of Surgery and Cancer, Imperial College London, London, UK
- Luca Modenese, Department of Civil and Environmental Engineering, Imperial College London, London, UK; Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
8. Zhang R, Du X, Li H. Application and performance enhancement of FAIMS spectral data for deep learning analysis using generative adversarial network reinforcement. Anal Biochem 2024;694:115627. [PMID: 39033946] [DOI: 10.1016/j.ab.2024.115627]
Abstract
When using high-field asymmetric ion mobility spectrometry (FAIMS) to process complex mixtures for deep learning analysis, recognition performance is often poor due to the lack of high-quality data and low sample diversity. In this paper, a generative adversarial network (GAN) method is introduced to simulate and generate highly realistic and diverse spectra to expand the dataset, using real mixture spectral data from 15 classes collected by FAIMS. The mixed datasets were fed into VGG and ResNeXt for testing, and the experimental results showed that the best recognition performance was achieved when the ratio of real to generated data was 1:4: accuracy improved by 24.19% and 6.43%, precision by 23.71% and 6.97%, recall by 21.08% and 7.09%, and F1-score by 24.50% and 8.23%, respectively. These results strongly demonstrate that GANs can effectively expand the data volume and increase sample diversity without additional experimental cost, significantly enhancing FAIMS spectral analysis of complex mixtures.
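To make the 1:4 real-to-generated mixing ratio concrete, the following NumPy sketch assembles such a training set from a pool of real and GAN-generated spectra; the array shapes and class counts are placeholders, not values from the paper.

```python
import numpy as np

def mix_real_and_synthetic(real_x, real_y, synth_x, synth_y, ratio=4, seed=0):
    """Assemble a training set with `ratio` GAN-generated spectra per real
    spectrum (the 1:4 real-to-generated mix reported as best above)."""
    rng = np.random.default_rng(seed)
    n_synth = min(len(synth_x), ratio * len(real_x))
    idx = rng.choice(len(synth_x), size=n_synth, replace=False)
    x = np.concatenate([real_x, synth_x[idx]])
    y = np.concatenate([real_y, synth_y[idx]])
    order = rng.permutation(len(x))
    return x[order], y[order]

# Placeholder arrays: 150 real FAIMS spectra over 15 classes plus a pool of 900 GAN outputs.
real_x, real_y = np.random.rand(150, 64, 64), np.repeat(np.arange(15), 10)
synth_x, synth_y = np.random.rand(900, 64, 64), np.random.randint(0, 15, 900)
x_train, y_train = mix_real_and_synthetic(real_x, real_y, synth_x, synth_y)
print(x_train.shape, y_train.shape)   # (750, 64, 64) with a 1:4 real-to-synthetic mix
```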
Affiliation(s)
- Ruilong Zhang, School of Life and Environmental Sciences, GuiLin University of Electronic Technology, GuiLin, 541004, China
- Xiaoxia Du, School of Life and Environmental Sciences, GuiLin University of Electronic Technology, GuiLin, 541004, China
- Hua Li, School of Life and Environmental Sciences, GuiLin University of Electronic Technology, GuiLin, 541004, China
9. Roh J, Ryu D, Lee J. CT synthesis with deep learning for MR-only radiotherapy planning: a review. Biomed Eng Lett 2024;14:1259-1278. [PMID: 39465111] [PMCID: PMC11502731] [DOI: 10.1007/s13534-024-00430-y]
Abstract
MR-only radiotherapy planning is beneficial from the perspective of both time and safety, since it uses synthetic CT for radiotherapy dose calculation instead of real CT scans. To elevate the accuracy of treatment planning and apply the results in practice, various methods have been adopted, among which deep learning models for image-to-image translation have shown good performance by retaining domain-invariant structures while changing domain-specific details. In this paper, we present an overview of diverse deep learning approaches to MR-to-CT synthesis, divided into four classes: convolutional neural networks, generative adversarial networks, transformer models, and diffusion models. By comparing each model and analyzing the general approaches applied to this task, the potential of these models and ways to improve the current methods can be evaluated.
Affiliation(s)
- Junghyun Roh, Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology, 50 Unist-gil, Ulsan 44919, Republic of Korea
- Dongmin Ryu, Program in Biomedical Radiation Sciences, Seoul National University, 71 Ihwajang-gil, Seoul 03087, Republic of Korea
- Jimin Lee, Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology, 50 Unist-gil, Ulsan 44919, Republic of Korea; Department of Nuclear Engineering, Ulsan National Institute of Science and Technology, 50 Unist-gil, Ulsan 44919, Republic of Korea; Graduate School of Health Science and Technology, Ulsan National Institute of Science and Technology, 50 Unist-gil, Ulsan 44919, Republic of Korea
10. Bhati D, Neha F, Amiruzzaman M. A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging. J Imaging 2024;10:239. [PMID: 39452402] [PMCID: PMC11508748] [DOI: 10.3390/jimaging10100239]
Abstract
The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis.
Affiliation(s)
- Deepshikha Bhati, Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Fnu Neha, Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Md Amiruzzaman, Department of Computer Science, West Chester University, West Chester, PA 19383, USA
11. Li F, Xu Y, Lemus OD, Wang TJC, Sisti MB, Wuu CS. Synthetic CT for gamma knife radiosurgery dose calculation: A feasibility study. Phys Med 2024;125:104504. [PMID: 39197262] [DOI: 10.1016/j.ejmp.2024.104504]
Abstract
Purpose: To determine if MRI-based synthetic CTs (sCT), generated with no predefined pulse sequence, can be used for inhomogeneity correction in routine gamma knife radiosurgery (GKRS) treatment planning dose calculation. Methods: Two sets of sCTs were generated from T1post and T2 images using cycleGAN. Twenty-eight patients (18 training, 10 validation) were retrospectively selected. The image quality of the generated sCTs was compared with the original CT (oCT) regarding HU value preservation, using histogram comparison, RMSE and MAE, and structural integrity. Dosimetric comparisons were also made among GKRS plans from three calculation approaches, TMR10 (oCT) and convolution (oCT and sCT), at four locations: original disease site, bone/tissue interface, air/tissue interface, and mid-brain. Results: The study showed that the sCT and oCT HU values were similar, with the T2-sCT performing better. TMR10 significantly underdosed the target by a mean of 5.4% compared to the convolution algorithm. There was no significant difference in convolution-algorithm shot time between the oCT and the sCT generated with T2. The highest and lowest dosimetric differences between the two CTs were observed at the bone and air interfaces, respectively. Dosimetric differences of 3.3% were observed in sCT predicted from MRI with stereotactic frames, which was not included in the training sets. Conclusions: MRI-based sCT can be utilized for GKRS convolution dose calculation without the unnecessary radiation dose, and sCT without metal artifacts could be generated in framed cases. Larger datasets inclusive of all pulse sequences can improve the training set. Further investigation and validation studies are needed before clinical implementation.
Affiliation(s)
- Fiona Li, Department of Radiation Oncology, Columbia University, New York, NY, USA
- Yuanguang Xu, Department of Radiation Oncology, Columbia University, New York, NY, USA
- Olga D Lemus, Department of Radiation Oncology, Columbia University, New York, NY, USA
- Tony J C Wang, Department of Radiation Oncology, Columbia University, New York, NY, USA
- Michael B Sisti, Department of Neurological Surgery, Columbia University, New York, NY, USA
- Cheng-Shie Wuu, Department of Radiation Oncology, Columbia University, New York, NY, USA
12. Xie T, Cao C, Cui ZX, Guo Y, Wu C, Wang X, Li Q, Hu Z, Sun T, Sang Z, Zhou Y, Zhu Y, Liang D, Jin Q, Zeng H, Chen G, Wang H. Synthesizing PET images from high-field and ultra-high-field MR images using joint diffusion attention model. Med Phys 2024;51:5250-5269. [PMID: 38874206] [DOI: 10.1002/mp.17254]
Abstract
Background: Magnetic resonance imaging (MRI) and positron emission tomography (PET) stand as pivotal diagnostic tools for brain disorders, offering the potential for mutually enriching disease diagnostic perspectives. However, the costs associated with PET scans and the inherent radioactivity have limited the widespread application of PET. Furthermore, it is worth highlighting the promising potential of high-field and ultra-high-field neuroimaging in cognitive neuroscience research and clinical practice. With the enhancement of MRI resolution, a related question arises: can high-resolution MRI improve the quality of PET images? Purpose: This study aims to enhance the quality of synthesized PET images by leveraging the superior resolution capabilities provided by high-field and ultra-high-field MRI. Methods: From a statistical perspective, the joint probability distribution is the most direct and fundamental approach for representing the correlation between PET and MRI. In this study, we proposed a novel model, the joint diffusion attention model (JDAM), which primarily focuses on learning the joint probability distribution. JDAM consists of two primary processes: a diffusion process and a sampling process. During the diffusion process, PET gradually transforms into a Gaussian noise distribution through the addition of Gaussian noise, while the MRI remains fixed. The central objective of the diffusion process is to learn the gradient of the logarithm of the joint probability distribution between the MRI and the noise-perturbed PET. The sampling process operates as a predictor-corrector: the predictor initiates a reverse diffusion process, and the corrector applies Langevin dynamics. Results: Experimental results on the publicly available Alzheimer's Disease Neuroimaging Initiative dataset highlight the effectiveness of the proposed model compared to state-of-the-art (SOTA) models such as Pix2pix and CycleGAN. Significantly, synthetic PET images guided by ultra-high-field MRI exhibit marked improvements in signal-to-noise characteristics compared with those generated from high-field MRI data. These results have been endorsed by medical experts, who consider the PET images synthesized by JDAM to possess scientific merit, based on their symmetrical features and precise representation of regions displaying hypometabolism, a hallmark of Alzheimer's disease. Conclusions: This study establishes the feasibility of generating PET images from MRI. Synthesis of PET by JDAM significantly enhances image quality compared to SOTA models.
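The predictor-corrector sampling described above follows the general recipe of score-based diffusion models: a reverse-diffusion predictor step followed by a Langevin-dynamics corrector. The sketch below shows that generic scheme conditioned on a fixed MRI; it is not the authors' exact update rule, and the noise schedule, step sizes, and stand-in score function are assumptions.

```python
import torch

def langevin_corrector(x_pet, mri, score_fn, step_size=1e-4, n_steps=2):
    """Corrector: a few Langevin-dynamics updates that nudge the current PET
    estimate toward higher (joint) probability while the MRI stays fixed."""
    for _ in range(n_steps):
        grad = score_fn(x_pet, mri)                  # learned gradient of the log joint density
        x_pet = x_pet + step_size * grad + (2.0 * step_size) ** 0.5 * torch.randn_like(x_pet)
    return x_pet

def predictor_corrector_sampler(score_fn, mri, sigmas):
    """Generic predictor-corrector sampling: start from noise and denoise over a
    decreasing noise schedule, conditioned on the fixed MRI."""
    x = torch.randn_like(mri) * sigmas[0]
    for i in range(len(sigmas) - 1):
        step = sigmas[i] ** 2 - sigmas[i + 1] ** 2
        # Predictor: one reverse-diffusion (Euler-Maruyama) step.
        x = x + step * score_fn(x, mri) + step.sqrt() * torch.randn_like(x)
        # Corrector: Langevin refinement at the new noise level.
        x = langevin_corrector(x, mri, score_fn)
    return x

# Toy run with a stand-in score function (a trained joint score network would replace it).
score_fn = lambda pet, mri: -(pet - mri)
mri = torch.zeros(1, 1, 32, 32)
sigmas = torch.linspace(1.0, 0.01, steps=20)
pet = predictor_corrector_sampler(score_fn, mri, sigmas)
print(pet.shape)
```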
Affiliation(s)
- Taofeng Xie, School of Mathematical Sciences, Inner Mongolia University, Hohhot, China; School of Computer and Information Science, Inner Mongolia Medical University, Hohhot, China; Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Chentao Cao, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Zhuo-Xu Cui, Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yu Guo, School of Mathematical Sciences, Inner Mongolia University, Hohhot, China
- Caiying Wu, School of Mathematical Sciences, Inner Mongolia University, Hohhot, China
- Xuemei Wang, Department of Nuclear Medicine, Inner Mongolia Medical University Affiliated Hospital, Hohhot, China
- Qingneng Li, Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhanli Hu, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, China; Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Tao Sun, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Ziru Sang, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Yihang Zhou, Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, China; Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Qiyu Jin, School of Mathematical Sciences, Inner Mongolia University, Hohhot, China
- Hongwu Zeng, Department of Radiology, Shenzhen Children's Hospital, Shenzhen, China
- Guoqing Chen, School of Mathematical Sciences, Inner Mongolia University, Hohhot, China
- Haifeng Wang, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
13. Chaudhary MFA, Gerard SE, Christensen GE, Cooper CB, Schroeder JD, Hoffman EA, Reinhardt JM. LungViT: Ensembling Cascade of Texture Sensitive Hierarchical Vision Transformers for Cross-Volume Chest CT Image-to-Image Translation. IEEE Transactions on Medical Imaging 2024;43:2448-2465. [PMID: 38373126] [PMCID: PMC11227912] [DOI: 10.1109/tmi.2024.3367321]
Abstract
Chest computed tomography (CT) at inspiration is often complemented by an expiratory CT to identify peripheral airways disease. Additionally, co-registered inspiratory-expiratory volumes can be used to derive various markers of lung function. Expiratory CT scans, however, may not be acquired due to dose or scan time considerations, or may be inadequate due to motion or insufficient exhale, leading to a missed opportunity to evaluate underlying small airways disease. Here, we propose LungViT, a generative adversarial learning approach using hierarchical vision transformers for translating inspiratory CT intensities to corresponding expiratory CT intensities. LungViT addresses several limitations of traditional generative models, including slicewise discontinuities, limited size of generated volumes, and their inability to model texture transfer at the volumetric level. We propose a shifted-window hierarchical vision transformer architecture with squeeze-and-excitation decoder blocks for modeling dependencies between features. We also propose a multiview texture similarity distance metric for texture and style transfer in 3D. To incorporate global information into the training process and refine the output of our model, we use ensemble cascading. LungViT is able to generate large 3D volumes of size 320×320×320. We train and validate our model using a diverse cohort of 1500 subjects with varying disease severity. To assess model generalizability beyond the development set biases, we evaluate our model on an out-of-distribution external validation set of 200 subjects. Clinical validation on internal and external testing sets shows that synthetic volumes could be reliably adopted for deriving clinical endpoints of chronic obstructive pulmonary disease.
14. Luo Y, Yang Q, Liu Z, Shi Z, Huang W, Zheng G, Cheng J. Target-Guided Diffusion Models for Unpaired Cross-Modality Medical Image Translation. IEEE J Biomed Health Inform 2024;28:4062-4071. [PMID: 38662561] [DOI: 10.1109/jbhi.2024.3393870]
Abstract
In a clinical setting, the acquisition of certain medical image modalities is often unavailable due to various considerations such as cost and radiation. Therefore, unpaired cross-modality translation techniques, which involve training on unpaired data and synthesizing the target modality with the guidance of the acquired source modality, are of great interest. Previous methods for synthesizing target medical images establish a one-shot mapping through generative adversarial networks (GANs). As promising alternatives to GANs, diffusion models have recently received wide interest in generative tasks. In this paper, we propose a target-guided diffusion model (TGDM) for unpaired cross-modality medical image translation. For training, to encourage our diffusion model to learn more visual concepts, we adopted a perception-prioritized weighting scheme (P2W) in the training objectives. For sampling, a pre-trained classifier is adopted in the reverse process to relieve modality-specific remnants from the source data. Experiments on both brain MRI-CT and prostate MRI-US datasets demonstrate that the proposed method achieves visually realistic results that mimic vivid anatomical sections of the target organ. In addition, we have also conducted a subjective assessment based on the synthesized samples to further validate the clinical value of TGDM.
15. Jalloh M, Kankam SB. Harnessing generative artificial intelligence for meningioma prediction: a correspondence. Neurosurg Rev 2024;47:180. [PMID: 38649559] [DOI: 10.1007/s10143-024-02404-1]
Affiliation(s)
- Mohamed Jalloh, Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Samuel Berchi Kankam, Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Harvard T.H. Chan School of Public Health, Harvard University, Cambridge, MA, USA
16. Kim H, Yoo SK, Kim JS, Kim YT, Lee JW, Kim C, Hong CS, Lee H, Han MC, Kim DW, Kim SY, Kim TM, Kim WH, Kong J, Kim YB. Clinical feasibility of deep learning-based synthetic CT images from T2-weighted MR images for cervical cancer patients compared to MRCAT. Sci Rep 2024;14:8504. [PMID: 38605094] [PMCID: PMC11009270] [DOI: 10.1038/s41598-024-59014-6]
Abstract
This work aims to investigate the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them to MR for calculating attenuation (MRCAT). A patient cohort of 50 pairs of T2-weighted MR and CT images from cervical cancer patients was split into 40 for the training and 10 for the testing phase. We conducted deformable image registration and Nyul intensity normalization for the MR images to maximize the similarity between MR and CT images as a preprocessing step. The processed images were fed into a deep learning model, a generative adversarial network. To prove clinical feasibility, we assessed the accuracy of the synthetic CT images in terms of image similarity, using the structural similarity index (SSIM) and mean absolute error (MAE), and dosimetric similarity, using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. Synthetic CT images generated by deep learning outperformed MRCAT images in image similarity, by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved 98.71% and 96.39% GPR at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, which were 0.9% and 5.1% higher than those of the MRCAT images.
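For readers who want to reproduce the two image-similarity measures reported above, the sketch below computes SSIM and MAE between a synthetic CT and a reference CT using scikit-image and NumPy; the HU range and toy data are assumptions, not the study's settings.

```python
import numpy as np
from skimage.metrics import structural_similarity

def sct_image_similarity(sct_hu, ct_hu, hu_range=(-1000.0, 2000.0)):
    """SSIM and MAE between a synthetic CT and the reference CT (both in HU)."""
    ssim = structural_similarity(ct_hu, sct_hu, data_range=hu_range[1] - hu_range[0])
    mae = float(np.abs(ct_hu - sct_hu).mean())
    return ssim, mae

# Toy 2D slices standing in for co-registered CT and synthetic CT in HU.
rng = np.random.default_rng(1)
ct = rng.normal(0.0, 150.0, size=(64, 64))
sct = ct + rng.normal(0.0, 20.0, size=ct.shape)
print(sct_image_similarity(sct, ct))
```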
Affiliation(s)
- Hojin Kim, Sang Kyun Yoo, Jin Sung Kim, Yong Tae Kim, Jai Wo Lee, Changhwan Kim, Chae-Seon Hong, Ho Lee, Min Cheol Han, Dong Wook Kim, Se Young Kim, Tae Min Kim, Woo Hyoung Kim, Jayoung Kong, and Yong Bae Kim: Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul 03722, Korea
17. Baldeon-Calisto M, Lai-Yuen SK, Puente-Mejia B. StAC-DA: Structure aware cross-modality domain adaptation framework with image and feature-level adaptation for medical image segmentation. Digit Health 2024;10:20552076241277440. [PMID: 39229464] [PMCID: PMC11369866] [DOI: 10.1177/20552076241277440]
Abstract
Objective: Convolutional neural networks (CNNs) have achieved state-of-the-art results in various medical image segmentation tasks. However, CNNs often assume that the source and target dataset follow the same probability distribution, and when this assumption is not satisfied their performance degrades significantly. This poses a limitation in medical image analysis, where including information from different imaging modalities can bring large clinical benefits. In this work, we present an unsupervised Structure Aware Cross-modality Domain Adaptation (StAC-DA) framework for medical image segmentation. Methods: StAC-DA implements image- and feature-level adaptation in a sequential two-step approach. The first step performs an image-level alignment, where images from the source domain are translated to the target domain in pixel space by implementing a CycleGAN-based model. The latter model includes a structure-aware network that preserves the shape of the anatomical structure during translation. The second step consists of a feature-level alignment. A U-Net network with deep supervision is trained with the transformed source domain images and target domain images in an adversarial manner to produce probable segmentations for the target domain. Results: The framework is evaluated on bidirectional cardiac substructure segmentation. StAC-DA outperforms leading unsupervised domain adaptation approaches, being ranked first in the segmentation of the ascending aorta when adapting from the magnetic resonance imaging (MRI) to the computed tomography (CT) domain and from the CT to the MRI domain. Conclusions: The presented framework overcomes the limitations posed by differing distributions in training and testing datasets. Moreover, the experimental results highlight its potential to improve the accuracy of medical image segmentation across diverse imaging modalities.
Affiliation(s)
- Maria Baldeon-Calisto, Departamento de Ingeniería Industrial, Colegio de Ciencias e Ingeniería, Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador
- Susana K. Lai-Yuen, Department of Industrial and Management Systems, University of South Florida, Tampa, FL, USA
- Bernardo Puente-Mejia, Departamento de Ingeniería Industrial, Colegio de Ciencias e Ingeniería, Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador
18. Manoj Doss KK, Chen JC. Utilizing deep learning techniques to improve image quality and noise reduction in preclinical low-dose PET images in the sinogram domain. Med Phys 2024;51:209-223. [PMID: 37966121] [DOI: 10.1002/mp.16830]
Abstract
Background: Low-dose positron emission tomography (LD-PET) imaging is commonly employed in preclinical research to minimize radiation exposure to animal subjects. However, LD-PET images often exhibit poor quality and high noise levels due to the low signal-to-noise ratio. Deep learning (DL) techniques such as generative adversarial networks (GANs) and convolutional neural networks (CNNs) have the capability to enhance the quality of images derived from noisy or low-quality PET data, which encode critical information about the radioactivity distribution in the body. Purpose: Our objective was to optimize image quality and reduce noise in preclinical PET images by utilizing the sinogram domain as input for DL models, resulting in improved image quality compared to LD-PET images. Methods: A GAN and a CNN model were utilized to predict high-dose (HD) preclinical PET sinograms from the corresponding LD preclinical PET sinograms. To generate the datasets, experiments were conducted on micro-phantoms, animal subjects (rats), and virtual simulations. The quality of the DL-generated images was assessed using the following quantitative measures: structural similarity index measure (SSIM), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Additionally, the spatial resolution of the DL input and output was characterized by the full width at half maximum (FWHM) and full width at tenth maximum (FWTM). DL outcomes were then compared with conventional denoising algorithms such as non-local means (NLM) and block-matching and 3D filtering (BM3D). Results: The DL models effectively learned image features and produced high-quality images, as reflected in the quantitative metrics. Notably, the FWHM and FWTM values of the DL PET images were significantly more accurate than those of the LD, NLM, and BM3D PET images, and as precise as those of the HD PET images. The MSE loss underscored the strong performance of the models. To further improve the training, the generator loss (G loss) was increased to a value higher than the discriminator loss (D loss), thereby achieving convergence in the GAN model. Conclusions: The sinograms generated by the GAN network closely resembled real HD preclinical PET sinograms and were more realistic than the LD sinograms. There was a noticeable improvement in image quality and noise in the predicted HD images. Importantly, the DL networks did not fully compromise the spatial resolution of the images.
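Spatial resolution in this study is summarized by FWHM and FWTM. A minimal NumPy sketch of estimating such widths from a 1-D profile by linear interpolation is given below; the pixel spacing and Gaussian test profile are illustrative, not values from the paper.

```python
import numpy as np

def width_at_fraction(profile, spacing_mm, fraction):
    """Width of a peaked profile at `fraction` of its maximum (0.5 -> FWHM,
    0.1 -> FWTM), using linear interpolation of the two crossing points."""
    prof = np.asarray(profile, dtype=float)
    level = fraction * prof.max()
    above = np.where(prof >= level)[0]
    left, right = above[0], above[-1]

    def crossing(i_out, i_in):
        # interpolate between an outside (below-level) and inside (above-level) sample
        t = (level - prof[i_out]) / (prof[i_in] - prof[i_out])
        return i_out + t * (i_in - i_out)

    x_left = crossing(left - 1, left) if left > 0 else float(left)
    x_right = crossing(right + 1, right) if right < len(prof) - 1 else float(right)
    return (x_right - x_left) * spacing_mm

# Toy Gaussian line profile with 0.2 mm pixels: expected FWHM is about 2.355 * sigma.
x = np.arange(-50, 51)
sigma_px = 6.0
profile = np.exp(-x**2 / (2.0 * sigma_px**2))
print(width_at_fraction(profile, 0.2, 0.5), 2.355 * sigma_px * 0.2)   # FWHM (mm)
print(width_at_fraction(profile, 0.2, 0.1))                           # FWTM (mm)
```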
Affiliation(s)
- Jyh-Cheng Chen, Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Medical Imaging and Radiological Sciences, China Medical University, Taichung, Taiwan; School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
19. Li Y, Zhou T, He K, Zhou Y, Shen D. Multi-Scale Transformer Network With Edge-Aware Pre-Training for Cross-Modality MR Image Synthesis. IEEE Transactions on Medical Imaging 2023;42:3395-3407. [PMID: 37339020] [DOI: 10.1109/tmi.2023.3288001]
Abstract
Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones. Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model. However, it is often challenging to obtain sufficient paired data for supervised training. In reality, we often have a small number of paired data while a large number of unpaired data. To take advantage of both paired and unpaired data, in this paper, we propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis. Specifically, an Edge-preserving Masked AutoEncoder (Edge-MAE) is first pre-trained in a self-supervised manner to simultaneously perform 1) image imputation for randomly masked patches in each image and 2) whole edge map estimation, which effectively learns both contextual and structural information. Besides, a novel patch-wise loss is proposed to enhance the performance of Edge-MAE by treating different masked patches differently according to the difficulties of their respective imputations. Based on this proposed pre-training, in the subsequent fine-tuning stage, a Dual-scale Selective Fusion (DSF) module is designed (in our MT-Net) to synthesize missing-modality images by integrating multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Furthermore, this pre-trained encoder is also employed to extract high-level features from the synthesized image and corresponding ground-truth image, which are required to be similar (consistent) in the training. Experimental results show that our MT-Net achieves comparable performance to the competing methods even using 70% of all available paired data. Our code will be released at https://github.com/lyhkevin/MT-Net.
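The Edge-MAE pre-training above imputes randomly masked patches. The following PyTorch sketch shows a generic random patch-masking step of the kind used in masked autoencoder pre-training; the patch size and 75% mask ratio are common defaults assumed here, not the paper's settings.

```python
import torch

def random_patch_mask(images, patch=16, mask_ratio=0.75, seed=0):
    """Randomly mask a fraction of non-overlapping patches, as in masked
    autoencoder pre-training; returns the masked images and the boolean mask."""
    g = torch.Generator().manual_seed(seed)
    b, _, h, w = images.shape
    gh, gw = h // patch, w // patch
    scores = torch.rand(b, gh * gw, generator=g)
    n_mask = int(mask_ratio * gh * gw)
    mask_flat = torch.zeros(b, gh * gw, dtype=torch.bool)
    mask_flat.scatter_(1, scores.argsort(dim=1)[:, :n_mask], True)
    mask = (mask_flat.view(b, 1, gh, gw)
            .repeat_interleave(patch, dim=2)
            .repeat_interleave(patch, dim=3))
    return images.masked_fill(mask, 0.0), mask

imgs = torch.randn(2, 1, 64, 64)        # stand-ins for MR slices
masked, mask = random_patch_mask(imgs)
print(mask.float().mean())              # ~0.75 of the pixels are masked
```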
20. Kim K, Byun BH, Lim I, Lim SM, Woo SK. Deep Learning-Based Delayed PET Image Synthesis from Corresponding Early Scanned PET for Dosimetry Uptake Estimation. Diagnostics (Basel) 2023;13:3045. [PMID: 37835788] [PMCID: PMC10572561] [DOI: 10.3390/diagnostics13193045]
Abstract
The acquisition of in vivo radiopharmaceutical distribution through imaging is time-consuming due to dosimetry, which requires the subject to be scanned at several time points post-injection. This study aimed to generate delayed positron emission tomography images from early images using a deep-learning-based image generation model to mitigate the time cost and inconvenience. Eighteen healthy participants were recruited and injected with [18F]fluorodeoxyglucose. A paired image-to-image translation model, based on a generative adversarial network (GAN), was used as the generation model. The standardized uptake value (SUV) mean of each organ in the generated image was compared with that of the ground truth. The combination of a least-squares GAN and perceptual loss displayed the best performance. As the uptake time of the early image became closer to that of the ground-truth image, the translation performance improved. The SUV mean values of the nominated organs were estimated reasonably accurately for the muscle, heart, liver, and spleen. The results demonstrate that the image-to-image translation deep learning model is applicable for the generation of one functional image from another functional image acquired from normal subjects, including predictions of organ-wise activity for specific normal organs.
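The best-performing objective above combines a least-squares GAN loss with a perceptual loss. The sketch below shows one standard formulation of these two terms in PyTorch; the VGG16 feature layer cut-off and the loss weighting are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

mse = nn.MSELoss()

def lsgan_d_loss(real_scores, fake_scores):
    """Least-squares GAN discriminator term: real -> 1, generated -> 0."""
    return 0.5 * (mse(real_scores, torch.ones_like(real_scores)) +
                  mse(fake_scores, torch.zeros_like(fake_scores)))

def lsgan_g_loss(fake_scores):
    """Least-squares GAN generator term: push generated (delayed-PET) scores toward 1."""
    return mse(fake_scores, torch.ones_like(fake_scores))

class PerceptualLoss(nn.Module):
    """Feature-space L2 distance over early, frozen VGG16 layers (layer cut-off is assumed)."""
    def __init__(self, layers=9):
        super().__init__()
        self.features = vgg16(weights=VGG16_Weights.DEFAULT).features[:layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, fake, real):
        # Single-channel PET slices are repeated to three channels for VGG input.
        fake3, real3 = fake.repeat(1, 3, 1, 1), real.repeat(1, 3, 1, 1)
        return mse(self.features(fake3), self.features(real3))

# Combined generator objective (lambda_p is a placeholder weight):
#   g_total = lsgan_g_loss(disc(fake)) + lambda_p * PerceptualLoss()(fake, real)
```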
Affiliation(s)
- Kangsan Kim, Division of Applied RI, Korea Institute of Radiological and Medical Sciences, Seoul 01812, Republic of Korea
- Byung Hyun Byun, Department of Nuclear Medicine, Korea Institute of Radiological and Medical Sciences, Seoul 01812, Republic of Korea
- Ilhan Lim, Department of Nuclear Medicine, Korea Institute of Radiological and Medical Sciences, Seoul 01812, Republic of Korea
- Sang Moo Lim, Department of Nuclear Medicine, Korea Institute of Radiological and Medical Sciences, Seoul 01812, Republic of Korea
- Sang-Keun Woo, Division of Applied RI, Korea Institute of Radiological and Medical Sciences, Seoul 01812, Republic of Korea
21. Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023;10:52. [PMID: 37695384] [PMCID: PMC10495310] [DOI: 10.1186/s40658-023-00569-0]
Abstract
Although it has been thirteen years since the installation of the first PET-MR system, these scanners constitute a very small proportion of the total number of hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community in a continuous effort to develop a robust and accurate alternative. These can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, the last of which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given an MR image of a new patient, using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that can predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to more traditional machine learning, which uses structured data to build a model, deep learning makes direct use of the acquired images to identify underlying features. This up-to-date review goes through the literature on attenuation correction approaches in PET-MR after categorising them. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches, along with a comparison of the four outlined categories.
Collapse
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK.
| | - Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
| | - Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
| | - Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
| |
Collapse
|
22
|
Gerard SE, Chaudhary MFA, Herrmann J, Christensen GE, Estépar RSJ, Reinhardt JM, Hoffman EA. Direct estimation of regional lung volume change from paired and single CT images using residual regression neural network. Med Phys 2023; 50:5698-5714. [PMID: 36929883 PMCID: PMC10743098 DOI: 10.1002/mp.16365] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 02/11/2023] [Accepted: 03/01/2023] [Indexed: 03/18/2023] Open
Abstract
BACKGROUND Chest computed tomography (CT) enables characterization of pulmonary diseases by producing high-resolution and high-contrast images of the intricate lung structures. Deformable image registration is used to align chest CT scans at different lung volumes, yielding estimates of local tissue expansion and contraction. PURPOSE We investigated the utility of deep generative models for directly predicting local tissue volume change from lung CT images, bypassing computationally expensive iterative image registration and providing a method that can be utilized in scenarios where either one or two CT scans are available. METHODS A residual regression convolutional neural network, called Reg3DNet+, is proposed for directly regressing high-resolution images of local tissue volume change (i.e., Jacobian) from CT images. Image registration was performed between lung volumes at total lung capacity (TLC) and functional residual capacity (FRC) using a tissue mass- and structure-preserving registration algorithm. The Jacobian image was calculated from the registration-derived displacement field and used as the ground truth for local tissue volume change. Four separate Reg3DNet+ models were trained to predict Jacobian images using a multifactorial study design to compare the effects of network input (i.e., single image vs. paired images) and output space (i.e., FRC vs. TLC). The models were trained and evaluated on image datasets from the COPDGene study. Models were evaluated against the registration-derived Jacobian images using local, regional, and global evaluation metrics. RESULTS Statistical analysis revealed that both factors - network input and output space - were significant determinants for change in evaluation metrics. Paired-input models performed better than single-input models, and model performance was better in the output space of FRC rather than TLC. Mean structural similarity index for paired-input models was 0.959 and 0.956 for FRC and TLC output spaces, respectively, and for single-input models was 0.951 and 0.937. Global evaluation metrics demonstrated correlation between registration-derived Jacobian mean and predicted Jacobian mean: coefficient of determination (r2 ) for paired-input models was 0.974 and 0.938 for FRC and TLC output spaces, respectively, and for single-input models was 0.598 and 0.346. After correcting for effort, registration-derived lobar volume change was strongly correlated with the predicted lobar volume change: for paired-input models r2 was 0.899 for both FRC and TLC output spaces, and for single-input models r2 was 0.803 and 0.862, respectively. CONCLUSIONS Convolutional neural networks can be used to directly predict local tissue mechanics, eliminating the need for computationally expensive image registration. Networks that use paired CT images acquired at TLC and FRC allow for more accurate prediction of local tissue expansion compared to networks that use a single image. Networks that only require a single input image still show promising results, particularly after correcting for effort, and allow for local tissue expansion estimation in cases where multiple CT scans are not available. For single-input networks, the FRC image is more predictive of local tissue volume change compared to the TLC image.
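The registration-derived ground truth referred to above is the Jacobian determinant of the displacement field, which measures local volume change (values above 1 indicate expansion, below 1 contraction). A minimal NumPy sketch, assuming a dense displacement field expressed in voxel units, is shown below.

```python
import numpy as np

def jacobian_determinant(disp):
    """Jacobian determinant of a dense 3-D displacement field.

    disp has shape (3, Z, Y, X) in voxel units; the deformation is
    phi(x) = x + disp(x), so the Jacobian is det(I + grad(disp)).
    """
    grads = [np.gradient(disp[i]) for i in range(3)]  # grads[i][j] = d(disp_i)/d(axis_j)
    jac = np.empty(disp.shape[1:] + (3, 3), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)

identity_disp = np.zeros((3, 16, 16, 16))                      # no deformation
print(np.allclose(jacobian_determinant(identity_disp), 1.0))   # True: no local volume change
```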
Collapse
Affiliation(s)
- Sarah E. Gerard
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
| | | | - Jacob Herrmann
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
| | - Gary E. Christensen
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiation Oncology, University of Iowa, Iowa City, Iowa, USA
| | | | - Joseph M. Reinhardt
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
| | - Eric A. Hoffman
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
| |
Collapse
|
23
|
Rezaeijo SM, Chegeni N, Baghaei Naeini F, Makris D, Bakas S. Within-Modality Synthesis and Novel Radiomic Evaluation of Brain MRI Scans. Cancers (Basel) 2023; 15:3565. [PMID: 37509228 PMCID: PMC10377568 DOI: 10.3390/cancers15143565] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Revised: 06/27/2023] [Accepted: 07/05/2023] [Indexed: 07/30/2023] Open
Abstract
One of the most common challenges in brain MRI is the need to acquire different MRI sequences depending on the type and properties of the tissues of interest. In this paper, we propose a generative method to translate T2-weighted (T2W) Magnetic Resonance Imaging (MRI) volumes from T2-weighted Fluid-Attenuated Inversion Recovery (FLAIR) volumes, and vice versa, using Generative Adversarial Networks (GANs). To evaluate the proposed method, we introduce a novel evaluation scheme for generative and synthetic approaches based on radiomic features. For evaluation, we consider 510 paired slices from 102 patients to train two different GAN-based architectures, CycleGAN and the Dual Cycle-Consistent Adversarial Network (DC2Anet). The results indicate that generative methods can produce results similar to the original sequence without significant changes in the radiomic features. Therefore, such a method can assist clinicians in making decisions based on the generated image when certain sequences are unavailable or there is not enough time to repeat the MRI scans.
Collapse
Affiliation(s)
- Seyed Masoud Rezaeijo
- Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran; (S.M.R.)
| | - Nahid Chegeni
- Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran; (S.M.R.)
| | - Fariborz Baghaei Naeini
- Faculty of Engineering, Computing and the Environment, Kingston University, Penrhyn Road Campus, Kingston upon Thames, London KT1 2EE, UK; (F.B.N.); (D.M.)
| | - Dimitrios Makris
- Faculty of Engineering, Computing and the Environment, Kingston University, Penrhyn Road Campus, Kingston upon Thames, London KT1 2EE, UK; (F.B.N.); (D.M.)
| | - Spyridon Bakas
- Faculty of Engineering, Computing and the Environment, Kingston University, Penrhyn Road Campus, Kingston upon Thames, London KT1 2EE, UK; (F.B.N.); (D.M.)
- Richards Medical Research Laboratories, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Floor 7, 3700 Hamilton Walk, Philadelphia, PA 19104, USA
| |
Collapse
|
24
|
Zhang XY, Wei Q, Wu GG, Tang Q, Pan XF, Chen GQ, Zhang D, Dietrich CF, Cui XW. Artificial intelligence - based ultrasound elastography for disease evaluation - a narrative review. Front Oncol 2023; 13:1197447. [PMID: 37333814 PMCID: PMC10272784 DOI: 10.3389/fonc.2023.1197447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 05/22/2023] [Indexed: 06/20/2023] Open
Abstract
Ultrasound elastography (USE) provides information on tissue stiffness and elasticity that is complementary to conventional ultrasound imaging. It is noninvasive and free of radiation, and has become a valuable tool for improving diagnostic performance alongside conventional ultrasound imaging. However, diagnostic accuracy can be reduced by high operator dependence and intra- and inter-observer variability in radiologists' visual assessments. Artificial intelligence (AI) has great potential to perform automatic medical image analysis tasks and provide a more objective, accurate and intelligent diagnosis. More recently, the enhanced diagnostic performance of AI applied to USE has been demonstrated for various disease evaluations. This review provides an overview of the basic concepts of USE and AI techniques for clinical radiologists and then introduces the applications of AI in USE imaging, focusing on the following anatomical sites: liver, breast, thyroid and other organs, for lesion detection and segmentation, machine learning (ML)-assisted classification and prognosis prediction. In addition, the existing challenges and future trends of AI in USE are also discussed.
Collapse
Affiliation(s)
- Xian-Ya Zhang
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Qi Wei
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Ge-Ge Wu
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Qi Tang
- Department of Ultrasonography, The First Hospital of Changsha, Changsha, China
| | - Xiao-Fang Pan
- Health Medical Department, Dalian Municipal Central Hospital, Dalian, China
| | - Gong-Quan Chen
- Department of Medical Ultrasound, Minda Hospital of Hubei Minzu University, Enshi, China
| | - Di Zhang
- Department of Medical Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | | | - Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
25
|
Generative adversarial feature learning for glomerulopathy histological classification. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
|
26
|
Joseph J, Biji I, Babu N, Pournami PN, Jayaraj PB, Puzhakkal N, Sabu C, Patel V. Fan beam CT image synthesis from cone beam CT image using nested residual UNet based conditional generative adversarial network. Phys Eng Sci Med 2023; 46:703-717. [PMID: 36943626 DOI: 10.1007/s13246-023-01244-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 03/09/2023] [Indexed: 03/23/2023]
Abstract
A radiotherapy technique called Image-Guided Radiation Therapy adopts frequent imaging throughout a treatment session. Fan Beam Computed Tomography (FBCT)-based planning followed by Cone Beam Computed Tomography (CBCT)-based radiation delivery has drastically improved treatment accuracy. Further gains in radiation exposure and cost could be achieved if FBCT could be replaced with CBCT. This paper proposes a Conditional Generative Adversarial Network (CGAN) for CBCT-to-FBCT synthesis. Specifically, a new architecture called Nested Residual UNet (NR-UNet) is introduced as the generator of the CGAN. A composite loss function, which comprises adversarial loss, Mean Squared Error (MSE), and Gradient Difference Loss (GDL), is used with the generator. The CGAN exploits the inter-slice dependency in the input by taking three consecutive CBCT slices to generate an FBCT slice. The model is trained using Head-and-Neck (H&N) FBCT-CBCT images of 53 cancer patients. The synthetic images exhibited a Peak Signal-to-Noise Ratio of 34.04±0.93 dB, a Structural Similarity Index Measure of 0.9751±0.001 and a Mean Absolute Error of 14.81±4.70 HU. On average, the proposed model improves the Contrast-to-Noise Ratio fourfold relative to the input CBCT images. The model also minimised the MSE and alleviated blurriness. Compared to the CBCT-based plan, the synthetic image results in a treatment plan closer to the FBCT-based plan. The three-slice to single-slice translation captures the three-dimensional contextual information in the input while avoiding the computational complexity associated with a three-dimensional image synthesis model. Furthermore, the results demonstrate that the proposed model is superior to state-of-the-art methods.
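The composite generator loss mentioned above combines an adversarial term with MSE and a Gradient Difference Loss (GDL). A hedged PyTorch sketch of GDL for 2-D slices and one possible way to combine the three terms follows; the loss weights and the binary cross-entropy adversarial term are assumptions for illustration, not the exact formulation of the cited work.

```python
import torch
import torch.nn.functional as F

def gradient_difference_loss(pred, target):
    """Gradient Difference Loss between images shaped (N, C, H, W)."""
    dy_p = pred[:, :, 1:, :] - pred[:, :, :-1, :]
    dx_p = pred[:, :, :, 1:] - pred[:, :, :, :-1]
    dy_t = target[:, :, 1:, :] - target[:, :, :-1, :]
    dx_t = target[:, :, :, 1:] - target[:, :, :, :-1]
    return ((dy_p.abs() - dy_t.abs()) ** 2).mean() + ((dx_p.abs() - dx_t.abs()) ** 2).mean()

def generator_loss(d_fake_logits, fake, real, lam_mse=1.0, lam_gdl=1.0):
    """Adversarial + MSE + GDL composite loss; the weights here are illustrative."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    return adv + lam_mse * F.mse_loss(fake, real) + lam_gdl * gradient_difference_loss(fake, real)
```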
Collapse
Affiliation(s)
- Jiffy Joseph
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India.
| | - Ivan Biji
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Naveen Babu
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - P N Pournami
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - P B Jayaraj
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Niyas Puzhakkal
- Department of Medical Physics, MVR Cancer Centre & Research Institute, Poolacode, Calicut, Kerala, 673601, India
| | - Christy Sabu
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Vedkumar Patel
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| |
Collapse
|
27
|
Boorboor S, Mathew S, Ananth M, Talmage D, Role LW, Kaufman AE. NeuRegenerate: A Framework for Visualizing Neurodegeneration. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2023; 29:1625-1637. [PMID: 34757909 PMCID: PMC10070008 DOI: 10.1109/tvcg.2021.3127132] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Recent advances in high-resolution microscopy have allowed scientists to better understand the underlying brain connectivity. However, due to the limitation that biological specimens can only be imaged at a single timepoint, studying changes to neural projections over time is limited to observations gathered using population analysis. In this article, we introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject across specified age-timepoints. To predict projections, we present neuReGANerator, a deep-learning network based on cycle-consistent generative adversarial network (GAN) that translates features of neuronal structures across age-timepoints for large brain microscopy volumes. We improve the reconstruction quality of the predicted neuronal structures by implementing a density multiplier and a new loss function, called the hallucination loss. Moreover, to alleviate artifacts that occur due to tiling of large input volumes, we introduce a spatial-consistency module in the training pipeline of neuReGANerator. Finally, to visualize the change in projections, predicted using neuReGANerator, NeuRegenerate offers two modes: (i) neuroCompare to simultaneously visualize the difference in the structures of the neuronal projections, from two age domains (using structural view and bounded view), and (ii) neuroMorph, a vesselness-based morphing technique to interactively visualize the transformation of the structures from one age-timepoint to the other. Our framework is designed specifically for volumes acquired using wide-field microscopy. We demonstrate our framework by visualizing the structural changes within the cholinergic system of the mouse brain between a young and old specimen.
Collapse
|
28
|
Chen C, Raymond C, Speier W, Jin X, Cloughesy TF, Enzmann D, Ellingson BM, Arnold CW. Synthesizing MR Image Contrast Enhancement Using 3D High-Resolution ConvNets. IEEE Trans Biomed Eng 2023; 70:401-412. [PMID: 35853075 PMCID: PMC9928432 DOI: 10.1109/tbme.2022.3192309] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Gadolinium-based contrast agents (GBCAs) have been widely used to better visualize disease in brain magnetic resonance imaging (MRI). However, gadolinium deposition within the brain and body has raised safety concerns about the use of GBCAs. Therefore, the development of novel approaches that can decrease or even eliminate GBCA exposure while providing similar contrast information would be of significant use clinically. METHODS In this work, we present a deep learning based approach for contrast-enhanced T1 synthesis on brain tumor patients. A 3D high-resolution fully convolutional network (FCN), which maintains high resolution information through processing and aggregates multi-scale information in parallel, is designed to map pre-contrast MRI sequences to contrast-enhanced MRI sequences. Specifically, three pre-contrast MRI sequences, T1, T2 and apparent diffusion coefficient map (ADC), are utilized as inputs and the post-contrast T1 sequences are utilized as target output. To alleviate the data imbalance problem between normal tissues and the tumor regions, we introduce a local loss to improve the contribution of the tumor regions, which leads to better enhancement results on tumors. RESULTS Extensive quantitative and visual assessments are performed, with our proposed model achieving a PSNR of 28.24 dB in the brain and 21.2 dB in tumor regions. CONCLUSION AND SIGNIFICANCE Our results suggest the potential of substituting GBCAs with synthetic contrast images generated via deep learning.
Collapse
|
29
|
Double U-Net CycleGAN for 3D MR to CT image synthesis. Int J Comput Assist Radiol Surg 2023; 18:149-156. [PMID: 35984606 DOI: 10.1007/s11548-022-02732-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 07/29/2022] [Indexed: 02/01/2023]
Abstract
PURPOSE CycleGAN and its variants are widely used in medical image synthesis because they can use unpaired data. The most common approach is to use a Generative Adversarial Network (GAN) model to process 2D slices and then concatenate these slices into 3D medical images. Nevertheless, such methods tend to introduce spatial inconsistencies between contiguous slices. We propose a new model based on CycleGAN to address this problem, achieving high-quality conversion from magnetic resonance (MR) to computed tomography (CT) images. METHODS To achieve spatial consistency in 3D medical images and avoid memory-heavy 3D convolutions, we reorganize three adjacent slices into a 2.5D slice as the input image. Further, we propose a U-Net discriminator network to improve accuracy, which can perceive input objects both locally and globally. The model also uses Content-Aware ReAssembly of FEatures (CARAFE) upsampling, which offers a large field of view and content awareness in place of a fixed kernel for all samples. RESULTS The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) for 3D images generated by the double U-Net CycleGAN are 74.56±10.02, 27.12±0.71 and 0.84±0.03, respectively. Our method achieves better results than state-of-the-art methods. CONCLUSION The experimental results indicate that our method can convert MR to CT images using unpaired data and achieves better results than state-of-the-art methods. Compared with 3D CycleGAN, it can synthesize better 3D CT images with less computation and memory.
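The 2.5D input described in the METHODS section stacks three adjacent slices as channels so that a 2-D network sees through-plane context without full 3-D convolutions. A minimal NumPy sketch of this reorganization (array shapes are assumptions for illustration) is shown below.

```python
import numpy as np

def to_25d_inputs(volume):
    """Turn a (Z, H, W) volume into (Z - 2, 3, H, W) stacks of three adjacent slices.

    Each sample holds slices [z - 1, z, z + 1] as channels, giving a 2-D network
    local through-plane context without memory-heavy 3-D convolutions.
    """
    return np.stack([volume[z - 1:z + 2] for z in range(1, volume.shape[0] - 1)], axis=0)

volume = np.random.default_rng(2).normal(size=(32, 64, 64)).astype(np.float32)
print(to_25d_inputs(volume).shape)  # (30, 3, 64, 64)
```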
Collapse
|
30
|
Zhao S, Geng C, Guo C, Tian F, Tang X. SARU: A self-attention ResUNet to generate synthetic CT images for MR-only BNCT treatment planning. Med Phys 2023; 50:117-127. [PMID: 36129452 DOI: 10.1002/mp.15986] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 09/01/2022] [Accepted: 09/07/2022] [Indexed: 01/25/2023] Open
Abstract
PURPOSE Despite the significant physical differences between magnetic resonance imaging (MRI) and computed tomography (CT), the high entropy of MRI data indicates the existence of a surjective transformation from MRI to CT image. However, there is no specific optimization of the network itself in previous MRI/CT translation works, resulting in mistakes in details such as the skull margin and cavity edge. These errors might have moderate effect on conventional radiotherapy, but for boron neutron capture therapy (BNCT), the skin dose will be a critical part of the dose composition. Thus, the purpose of this work is to create a self-attention network that could directly transfer MRI to synthetical computerized tomography (sCT) images with lower inaccuracy at the skin edge and examine the viability of magnetic resonance (MR)-guided BNCT. METHODS A retrospective analysis was undertaken on 104 patients with brain malignancies who had both CT and MRI as part of their radiation treatment plan. The CT images were deformably registered to the MRI. In the U-shaped generation network, we introduced spatial and channel attention modules, as well as a versatile "Attentional ResBlock," which reduce the parameters while maintaining high performance. We employed five-fold cross-validation to test all patients, compared the proposed network to those used in earlier studies, and used Monte Carlo software to simulate the BNCT process for dosimetric evaluation in test set. RESULTS Compared with UNet, Pix2Pix, and ResNet, the mean absolute error (MAE) of self-attention ResUNet (SARU) is reduced by 12.91, 17.48, and 9.50 HU, respectively. The "two one-sided tests" show no significant difference in dose-volume histogram (DVH) results. And for all tested cases, the average 2%/2 mm gamma index of UNet, ResNet, Pix2Pix, and SARU were 0.96 ± 0.03, 0.96 ± 0.03, 0.95 ± 0.03, and 0.98 ± 0.01, respectively. The error of skin dose from SARU is much less than the results from other methods. CONCLUSIONS We have developed a residual U-shape network with an attention mechanism to generate sCT images from MRI for BNCT treatment planning with lower MAE in six organs. There is no significant difference between the dose distribution calculated by sCT and real CT. This solution may greatly simplify the BNCT treatment planning process, lower the BNCT treatment dose, and minimize image feature mismatch.
Collapse
Affiliation(s)
- Sheng Zhao
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
| | - Changran Geng
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Key Laboratory of Nuclear Technology Application and Radiation Protection in Astronautics (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, People's Republic of China
| | - Chang Guo
- Department of Radiation Oncology, Jiangsu Cancer Hospital, Nanjing, People's Republic of China
| | - Feng Tian
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
| | - Xiaobin Tang
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Key Laboratory of Nuclear Technology Application and Radiation Protection in Astronautics (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, People's Republic of China
| |
Collapse
|
31
|
Liang Z, Huang JX, Antani S. Image Translation by Ad CycleGAN for COVID-19 X-Ray Images: A New Approach for Controllable GAN. SENSORS (BASEL, SWITZERLAND) 2022; 22:9628. [PMID: 36559994 PMCID: PMC9785652 DOI: 10.3390/s22249628] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 12/01/2022] [Accepted: 12/05/2022] [Indexed: 06/17/2023]
Abstract
We propose a new generative model named adaptive cycle-consistent generative adversarial network, or Ad CycleGAN to perform image translation between normal and COVID-19 positive chest X-ray images. An independent pre-trained criterion is added to the conventional Cycle GAN architecture to exert adaptive control on image translation. The performance of Ad CycleGAN is compared with the Cycle GAN without the external criterion. The quality of the synthetic images is evaluated by quantitative metrics including Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Universal Image Quality Index (UIQI), visual information fidelity (VIF), Frechet Inception Distance (FID), and translation accuracy. The experimental results indicate that the synthetic images generated either by the Cycle GAN or by the Ad CycleGAN have lower MSE and RMSE, and higher scores in PSNR, UIQI, and VIF in homogenous image translation (i.e., Y → Y) compared to the heterogenous image translation process (i.e., X → Y). The synthetic images by Ad CycleGAN through the heterogeneous image translation have significantly higher FID score compared to Cycle GAN (p < 0.01). The image translation accuracy of Ad CycleGAN is higher than that of Cycle GAN when normal images are converted to COVID-19 positive images (p < 0.01). Therefore, we conclude that the Ad CycleGAN with the independent criterion can improve the accuracy of GAN image translation. The new architecture has more control on image synthesis and can help address the common class imbalance issue in machine learning methods and artificial intelligence applications with medical images.
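Several of the fidelity metrics listed above (MSE, RMSE, PSNR) reduce to a few lines of NumPy. The sketch below is a generic illustration of how such metrics are typically computed on intensity-normalized images, not the exact evaluation code of the cited study.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def rmse(a, b):
    return float(np.sqrt(mse(a, b)))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the maximum possible intensity."""
    err = mse(a, b)
    return float("inf") if err == 0 else float(10.0 * np.log10(data_range ** 2 / err))

# Toy usage with a synthetic image perturbed by noise.
rng = np.random.default_rng(3)
real = rng.random((256, 256))
synthetic = np.clip(real + rng.normal(0.0, 0.05, real.shape), 0.0, 1.0)
print(rmse(real, synthetic), psnr(real, synthetic, data_range=1.0))
```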
Collapse
Affiliation(s)
- Zhaohui Liang
- Information Retrieval and Knowledge Management Laboratory, York University, Toronto, ON M3J 1P3, Canada
| | - Jimmy Xiangji Huang
- Information Retrieval and Knowledge Management Laboratory, York University, Toronto, ON M3J 1P3, Canada
| | - Sameer Antani
- National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
| |
Collapse
|
32
|
A Systematic Review of Artificial Intelligence Applications in Plastic Surgery: Looking to the Future. Plast Reconstr Surg Glob Open 2022; 10:e4608. [PMID: 36479133 PMCID: PMC9722565 DOI: 10.1097/gox.0000000000004608] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 08/24/2022] [Indexed: 01/25/2023]
Abstract
UNLABELLED Artificial intelligence (AI) is presently employed in several medical specialties, particularly those that rely on large quantities of standardized data. The integration of AI in surgical subspecialties is under preclinical investigation but is yet to be widely implemented. Plastic surgeons collect standardized data in various settings and could benefit from AI. This systematic review investigates the current clinical applications of AI in plastic and reconstructive surgery. METHODS A comprehensive literature search of the Medline, EMBASE, Cochrane, and PubMed databases was conducted for AI studies with multiple search terms. Articles that progressed beyond the title and abstract screening were then subcategorized based on the plastic surgery subspecialty and AI application. RESULTS The systematic search yielded a total of 1820 articles. Forty-four studies met inclusion criteria warranting further analysis. Subcategorization of articles by plastic surgery subspecialties revealed that most studies fell into aesthetic and breast surgery (27%), craniofacial surgery (23%), or microsurgery (14%). Analysis of the research study phase of included articles indicated that the current research is primarily in phase 0 (discovery and invention; 43.2%), phase 1 (technical performance and safety; 27.3%), or phase 2 (efficacy, quality improvement, and algorithm performance in a medical setting; 27.3%). Only one study demonstrated translation to clinical practice. CONCLUSIONS The potential of AI to optimize clinical efficiency is being investigated in every subfield of plastic surgery, but much of the research to date remains in the preclinical status. Future implementation of AI into everyday clinical practice will require collaborative efforts.
Collapse
|
33
|
Kim E, Cho HH, Kwon J, Oh YT, Ko ES, Park H. Tumor-Attentive Segmentation-Guided GAN for Synthesizing Breast Contrast-Enhanced MRI Without Contrast Agents. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 11:32-43. [PMID: 36478773 PMCID: PMC9721354 DOI: 10.1109/jtehm.2022.3221918] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/25/2022] [Accepted: 11/10/2022] [Indexed: 11/16/2022]
Abstract
OBJECTIVE Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a sensitive imaging technique critical for breast cancer diagnosis. However, the administration of contrast agents poses a potential risk. This can be avoided if contrast-enhanced MRI can be obtained without using contrast agents. Thus, we aimed to generate T1-weighted contrast-enhanced MRI (ceT1) images from pre-contrast T1 weighted MRI (preT1) images in the breast. METHODS We proposed a generative adversarial network to synthesize ceT1 from preT1 breast images that adopted a local discriminator and segmentation task network to focus specifically on the tumor region in addition to the whole breast. The segmentation network performed a related task of segmentation of the tumor region, which allowed important tumor-related information to be enhanced. In addition, edge maps were included to provide explicit shape and structural information. Our approach was evaluated and compared with other methods in the local (n = 306) and external validation (n = 140) cohorts. Four evaluation metrics of normalized mean squared error (NRMSE), Pearson cross-correlation coefficients (CC), peak signal-to-noise ratio (PSNR), and structural similarity index map (SSIM) for the whole breast and tumor region were measured. An ablation study was performed to evaluate the incremental benefits of various components in our approach. RESULTS Our approach performed the best with an NRMSE 25.65, PSNR 54.80 dB, SSIM 0.91, and CC 0.88 on average, in the local test set. CONCLUSION Performance gains were replicated in the validation cohort. SIGNIFICANCE We hope that our method will help patients avoid potentially harmful contrast agents. Clinical and Translational Impact Statement-Contrast agents are necessary to obtain DCE-MRI which is essential in breast cancer diagnosis. However, administration of contrast agents may cause side effects such as nephrogenic systemic fibrosis and risk of toxic residue deposits. Our approach can generate DCE-MRI without contrast agents using a generative deep neural network. Thus, our approach could help patients avoid potentially harmful contrast agents resulting in an improved diagnosis and treatment workflow for breast cancer.
Collapse
Affiliation(s)
- Eunjin Kim
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
| | - Hwan-Ho Cho
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Department of Medical Artificial Intelligence, Konyang University, Daejeon 35365, South Korea
| | - Junmo Kwon
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
| | - Young-Tack Oh
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
| | - Eun Sook Ko
- Samsung Medical Center, Department of Radiology, School of Medicine, Sungkyunkwan University, Seoul 06351, South Korea
| | - Hyunjin Park
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon 16419, South Korea
| |
Collapse
|
34
|
Rana A, Dumka A, Singh R, Panda MK, Priyadarshi N. A Computerized Analysis with Machine Learning Techniques for the Diagnosis of Parkinson's Disease: Past Studies and Future Perspectives. Diagnostics (Basel) 2022; 12:2708. [PMID: 36359550 PMCID: PMC9689408 DOI: 10.3390/diagnostics12112708] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Revised: 10/30/2022] [Accepted: 11/02/2022] [Indexed: 08/03/2023] Open
Abstract
According to the World Health Organization (WHO), Parkinson's disease (PD) is a neurodegenerative disease of the brain that causes motor symptoms including slower movement, rigidity, tremor, and imbalance in addition to other problems like Alzheimer's disease (AD), psychiatric problems, insomnia, anxiety, and sensory abnormalities. Techniques including artificial intelligence (AI), machine learning (ML), and deep learning (DL) have been established for the classification of PD and normal controls (NC) with similar therapeutic appearances in order to address these problems and improve the diagnostic procedure for PD. In this article, we examine a literature survey of research articles published up to September 2022 in order to present an in-depth analysis of the use of datasets, various modalities, experimental setups, and architectures that have been applied in the diagnosis of subjective disease. This analysis includes a total of 217 research publications with a list of the various datasets, methodologies, and features. These findings suggest that ML/DL methods and novel biomarkers hold promising results for application in medical decision-making, leading to a more methodical and thorough detection of PD. Finally, we highlight the challenges and provide appropriate recommendations on selecting approaches that might be used for subgrouping and connection analysis with structural magnetic resonance imaging (sMRI), DaTSCAN, and single-photon emission computerized tomography (SPECT) data for future Parkinson's research.
Collapse
Affiliation(s)
- Arti Rana
- Computer Science & Engineering, Veer Madho Singh Bhandari Uttarakhand Technical University, Dehradun 248007, Uttarakhand, India
| | - Ankur Dumka
- Department of Computer Science and Engineering, Women Institute of Technology, Dehradun 248007, Uttarakhand, India
- Department of Computer Science & Engineering, Graphic Era Deemed to be University, Dehradun 248001, Uttarakhand, India
| | - Rajesh Singh
- Division of Research and Innovation, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun 248007, Uttarakhand, India
- Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
| | - Manoj Kumar Panda
- Department of Electrical Engineering, G.B. Pant Institute of Engineering and Technology, Pauri 246194, Uttarakhand, India
| | - Neeraj Priyadarshi
- Department of Electrical Engineering, JIS College of Engineering, Kolkata 741235, West Bengal, India
| |
Collapse
|
35
|
Li J, Qu Z, Yang Y, Zhang F, Li M, Hu S. TCGAN: a transformer-enhanced GAN for PET synthetic CT. BIOMEDICAL OPTICS EXPRESS 2022; 13:6003-6018. [PMID: 36733758 PMCID: PMC9872870 DOI: 10.1364/boe.467683] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 08/06/2022] [Accepted: 10/05/2022] [Indexed: 06/18/2023]
Abstract
Multimodal medical images can be used in a multifaceted approach to resolve a wide range of medical diagnostic problems. However, these images are generally difficult to obtain due to various limitations, such as the cost of acquisition and patient safety. Medical image synthesis is used in various tasks to obtain better results. Recently, various studies have attempted to use generative adversarial networks for missing-modality image synthesis, making good progress. In this study, we propose a generator based on a combination of a transformer network and a convolutional neural network (CNN). The proposed method combines the advantages of transformers and CNNs to better preserve fine detail. The network is designed for positron emission tomography (PET)-to-computed tomography (CT) synthesis, which can be used for PET attenuation correction. We also experimented on two datasets for magnetic resonance T1- to T2-weighted image synthesis. Based on qualitative and quantitative analyses, our proposed method outperforms the existing methods.
Collapse
Affiliation(s)
- Jitao Li
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
- College of Chemistry and Chemical Engineering, Linyi University, Linyi, 276000, China
- These authors contributed equally
| | - Zongjin Qu
- College of Chemistry and Chemical Engineering, Linyi University, Linyi, 276000, China
- These authors contributed equally
| | - Yue Yang
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
| | - Fuchun Zhang
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
| | - Meng Li
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
| | - Shunbo Hu
- College of Information Science and Engineering, Linyi University, Linyi, 276000, China
| |
Collapse
|
36
|
Lyu F, Ye M, Ma AJ, Yip TCF, Wong GLH, Yuen PC. Learning From Synthetic CT Images via Test-Time Training for Liver Tumor Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2510-2520. [PMID: 35404812 DOI: 10.1109/tmi.2022.3166230] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Automatic liver tumor segmentation could offer assistance to radiologists in liver tumor diagnosis, and its performance has been significantly improved by recent deep learning based methods. These methods rely on large-scale well-annotated training datasets, but collecting such datasets is time-consuming and labor-intensive, which could hinder their performance in practical situations. Learning from synthetic data is an encouraging solution to address this problem. In our task, synthetic tumors can be injected to healthy images to form training pairs. However, directly applying the model trained using the synthetic tumor images on real test images performs poorly due to the domain shift problem. In this paper, we propose a novel approach, namely Synthetic-to-Real Test-Time Training (SR-TTT), to reduce the domain gap between synthetic training images and real test images. Specifically, we add a self-supervised auxiliary task, i.e., two-step reconstruction, which takes the output of the main segmentation task as its input to build an explicit connection between these two tasks. Moreover, we design a scheduled mixture strategy to avoid error accumulation and bias explosion in the training process. During test time, we adapt the segmentation model to each test image with self-supervision from the auxiliary task so as to improve the inference performance. The proposed method is extensively evaluated on two public datasets for liver tumor segmentation. The experimental results demonstrate that our proposed SR-TTT can effectively mitigate the synthetic-to-real domain shift problem in the liver tumor segmentation task, and is superior to existing state-of-the-art approaches.
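The test-time adaptation step described above updates the model on each test image using only the self-supervised auxiliary loss before running inference. A simplified PyTorch sketch of this loop is shown below; the auxiliary loss function, step count, and learning rate are placeholders, and the two-step reconstruction task itself is defined in the cited paper, not in this sketch.

```python
import copy
import torch

def test_time_adapt(model, aux_loss_fn, test_image, steps=5, lr=1e-4):
    """Adapt a copy of the model to one unlabeled test image via a self-supervised
    auxiliary loss, then run inference with the adapted weights."""
    adapted = copy.deepcopy(model)   # keep the original weights untouched
    adapted.train()
    optimizer = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = aux_loss_fn(adapted, test_image)  # e.g., a reconstruction objective
        loss.backward()
        optimizer.step()
    adapted.eval()
    with torch.no_grad():
        return adapted(test_image)
```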
Collapse
|
37
|
Cheng D, Chen C, Yanyan M, You P, Huang X, Gai J, Zhao F, Mao N. Self-supervised learning for modal transfer of brain imaging. Front Neurosci 2022; 16:920981. [PMID: 36117623 PMCID: PMC9477095 DOI: 10.3389/fnins.2022.920981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 08/08/2022] [Indexed: 11/19/2022] Open
Abstract
Current brain imaging modality transfer techniques translate data from one modality in one domain to another. In clinical diagnosis, data from multiple modalities can be obtained within the same scanning field, and it is more beneficial to synthesize missing modal data by exploiting the diversity of these multimodal data. Therefore, we introduce a self-supervised learning cycle-consistent generative adversarial network (BSL-GAN) for brain imaging modality transfer. The framework constructs a multi-branch input, which enables it to learn the diversity characteristics of multimodal data. In addition, supervision information is mined from large-scale unsupervised data by establishing auxiliary tasks, and the network is trained with this constructed supervision, which not only ensures similarity between the input and output modal images but also learns valuable representations for downstream tasks.
Collapse
Affiliation(s)
- Dapeng Cheng
- School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
- Shandong Co-Innovation Center of Future Intelligent Computing, Yantai, China
| | - Chao Chen
- School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
| | - Mao Yanyan
- School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
- College of Oceanography and Space Informatics, China University of Petroleum, Qingdao, China
| | - Panlu You
- School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
| | - Xingdan Huang
- School of Statistics, Shandong Business and Technology University, Yantai, China
| | - Jiale Gai
- School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
| | - Feng Zhao
- School of Computer Science and Technology, Shandong Business and Technology University, Yantai, China
- Shandong Co-Innovation Center of Future Intelligent Computing, Yantai, China
| | - Ning Mao
- Department of Radiology, Yantai Yuhuangding Hospital, Yantai, China
| |
Collapse
|
38
|
Structurally-constrained optical-flow-guided adversarial generation of synthetic CT for MR-only radiotherapy treatment planning. Sci Rep 2022; 12:14855. [PMID: 36050323 PMCID: PMC9437076 DOI: 10.1038/s41598-022-18256-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 08/08/2022] [Indexed: 11/24/2022] Open
Abstract
The rapid progress in image-to-image translation methods using deep neural networks has led to advancements in the generation of synthetic CT (sCT) in the MR-only radiotherapy workflow. Replacement of CT with MR reduces unnecessary radiation exposure and financial cost, and enables more accurate delineation of organs at risk. Previous generative adversarial networks (GANs) have been oriented towards MR to sCT generation. In this work, we have implemented multiple augmented cycle-consistent GANs. The augmentation involves a structural information constraint (StructCGAN), an optical flow consistency constraint (FlowCGAN) and the combination of both conditions (SFCGAN). The networks were trained and tested on the publicly available Gold Atlas project dataset, consisting of T2-weighted MR and CT volumes of 19 subjects from 3 different sites. The network was tested on 8 volumes acquired from the third site with a different scanner to assess the generalizability of the network on multicenter data. The results indicate that all the networks are robust to scanner variations. The best model, SFCGAN, achieved an average ME of 0.9 ± 5.9 HU, an average MAE of 40.4 ± 4.7 HU and a PSNR of 57.2 ± 1.4 dB, outperforming previous research works. Moreover, the optical flow constraint between consecutive frames preserves consistency across all views compared to 2D image-to-image translation methods. SFCGAN exploits the features of both StructCGAN and FlowCGAN by delivering structurally robust and 3D-consistent sCT images. The research work serves as a benchmark for further research in MR-only radiotherapy.
Collapse
|
39
|
Niu Y, Jackson SJ, Alqahtani N, Mostaghimi P, Armstrong RT. Paired and Unpaired Deep Learning Methods for Physically Accurate Super-Resolution Carbonate Rock Images. Transp Porous Media 2022. [DOI: 10.1007/s11242-022-01842-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
Abstract
X-ray micro-computed tomography (micro-CT) has been widely leveraged to characterise the pore-scale geometry of subsurface porous rocks. Recent developments in super-resolution (SR) methods using deep learning allow for the digital enhancement of low-resolution (LR) images over large spatial scales, creating SR images comparable to high-resolution (HR) ground truth images. This circumvents the common trade-off between resolution and field-of-view. An outstanding issue is the use of paired LR and HR data, which is often required in the training step of such methods but is difficult to obtain. In this work, we rigorously compare two state-of-the-art SR deep learning techniques, using both paired and unpaired data, with like-for-like ground truth data. The first approach requires paired images to train a convolutional neural network (CNN), while the second approach uses unpaired images to train a generative adversarial network (GAN). The two approaches are compared using a micro-CT carbonate rock sample with complicated micro-porous textures. We implemented various image-based and numerical verifications and experimental validation to quantitatively evaluate the physical accuracy and sensitivities of the two methods. Our quantitative results show that the unpaired GAN approach can reconstruct super-resolution images as precise as the paired CNN method, with comparable training times and dataset requirements. This unlocks new applications for micro-CT image enhancement using unpaired deep learning methods; image registration is no longer needed during the data processing stage. Decoupled images from data storage platforms can be exploited to train networks for SR digital rock applications. This opens up a new pathway for various applications related to multi-scale flow simulations in heterogeneous porous media.
Collapse
|
40
|
Oldfield J, Panagakis Y, Nicolaou MA. Adversarial Learning of Disentangled and Generalizable Representations of Visual Attributes. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:3498-3509. [PMID: 33531308 DOI: 10.1109/tnnls.2021.3053205] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Recently, a multitude of methods for image-to-image translation have demonstrated impressive results on problems, such as multidomain or multiattribute transfer. The vast majority of such works leverages the strengths of adversarial learning and deep convolutional autoencoders to achieve realistic results by well-capturing the target data distribution. Nevertheless, the most prominent representatives of this class of methods do not facilitate semantic structure in the latent space and usually rely on binary domain labels for test-time transfer. This leads to rigid models, unable to capture the variance of each domain label. In this light, we propose a novel adversarial learning method that: 1) facilitates the emergence of latent structure by semantically disentangling sources of variation and 2) encourages learning generalizable, continuous, and transferable latent codes that enable flexible attribute mixing. This is achieved by introducing a novel loss function that encourages representations to result in uniformly distributed class posteriors for disentangled attributes. In tandem with an algorithm for inducing generalizable properties, the resulting representations can be utilized for a variety of tasks such as intensity-preserving multiattribute image translation and synthesis, without requiring labeled test data. We demonstrate the merits of the proposed method by a set of qualitative and quantitative experiments on popular databases such as MultiPIE, RaFD, and BU-3DFE, where our method outperforms other state-of-the-art methods in tasks such as intensity-preserving multiattribute transfer and synthesis.
Collapse
|
41
|
Reaungamornrat S, Sari H, Catana C, Kamen A. Multimodal image synthesis based on disentanglement representations of anatomical and modality specific features, learned using uncooperative relativistic GAN. Med Image Anal 2022; 80:102514. [PMID: 35717874 PMCID: PMC9810205 DOI: 10.1016/j.media.2022.102514] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 05/20/2022] [Accepted: 06/10/2022] [Indexed: 01/05/2023]
Abstract
A growing number of methods for attenuation-coefficient map estimation from magnetic resonance (MR) images have recently been proposed because of the increasing interest in MR-guided radiotherapy and the introduction of positron emission tomography (PET) MR hybrid systems. We propose a deep-network ensemble incorporating stochastic-binary-anatomical encoders and imaging-modality variational autoencoders, to disentangle image-latent spaces into a space of modality-invariant anatomical features and spaces of modality attributes. The ensemble integrates modality-modulated decoders to normalize features and image intensities based on imaging modality. Besides promoting disentanglement, the architecture fosters uncooperative learning, offering the ability to maintain anatomical structure in a cross-modality reconstruction. Introduction of a modality-invariant structural consistency constraint further enforces faithful embedding of anatomy. To improve training stability and fidelity of synthesized modalities, the ensemble is trained in a relativistic generative adversarial framework incorporating multiscale discriminators. Analyses of priors and network architectures as well as performance validation were performed on computed tomography (CT) and MR pelvis datasets. The proposed method demonstrated robustness against intensity inhomogeneity, improved tissue-class differentiation, and offered synthetic CT in Hounsfield units with intensities consistent and smooth across slices compared to the state-of-the-art approaches, offering median normalized mutual information of 1.28, normalized cross correlation of 0.97, and gradient cross correlation of 0.59 over 324 images.
Collapse
Affiliation(s)
| | - Hasan Sari
- Harvard Medical School, Boston, MA 02115 USA
| | | | - Ali Kamen
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ 08540 USA
| |
Collapse
|
42
|
Chung M, Kong ST, Park B, Chung Y, Jung KH, Seo JB. Utilizing Synthetic Nodules for Improving Nodule Detection in Chest Radiographs. J Digit Imaging 2022; 35:1061-1068. [PMID: 35304676 PMCID: PMC9485384 DOI: 10.1007/s10278-022-00608-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 01/31/2022] [Accepted: 02/14/2022] [Indexed: 10/18/2022] Open
Abstract
Algorithms that automatically identify nodular patterns in chest X-ray (CXR) images could benefit radiologists by reducing reading time and improving accuracy. A promising approach is to use deep learning, where a deep neural network (DNN) is trained to classify and localize nodular patterns (including masses) in CXR images. Such algorithms, however, require enough abnormal cases to learn representations of nodular patterns arising in practical clinical settings. Obtaining large amounts of high-quality data is impractical in medical imaging where (1) acquiring labeled images is extremely expensive, (2) annotations are subject to inaccuracies due to the inherent difficulty in interpreting images, and (3) normal cases occur far more frequently than abnormal cases. In this work, we devise a framework to generate realistic nodules and demonstrate how they can be used to train a DNN to identify and localize nodular patterns in CXR images. While most previous research applying generative models to medical imaging is limited to generating visually plausible abnormalities and using these patterns for augmentation, we go a step further and show how the training algorithm can be adjusted to maximally benefit from synthetic abnormal patterns. A high-precision detection model was first developed and tested on internal and external datasets, and the proposed method was shown to enhance the model's recall while retaining a low level of false positives.
Collapse
Affiliation(s)
| | | | | | | | | | - Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| |
Collapse
|
43
|
Natarajan B, Elakkiya R. Dynamic GAN for high-quality sign language video generation from skeletal poses using generative adversarial networks. Soft comput 2022. [DOI: 10.1007/s00500-022-07014-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
44
|
Weiss R, Karimijafarbigloo S, Roggenbuck D, Rödiger S. Applications of Neural Networks in Biomedical Data Analysis. Biomedicines 2022; 10:1469. [PMID: 35884772 PMCID: PMC9313085 DOI: 10.3390/biomedicines10071469] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 06/16/2022] [Accepted: 06/17/2022] [Indexed: 12/04/2022] Open
Abstract
Neural networks for deep-learning applications, also called artificial neural networks, are important tools in science and industry. While their widespread use was limited because of inadequate hardware in the past, their popularity increased dramatically starting in the early 2000s when it became possible to train increasingly large and complex networks. Today, deep learning is widely used in biomedicine from image analysis to diagnostics. This also includes special topics, such as forensics. In this review, we discuss the latest networks and how they work, with a focus on the analysis of biomedical data, particularly biomarkers in bioimage data. We provide a summary on numerous technical aspects, such as activation functions and frameworks. We also present a data analysis of publications about neural networks to provide a quantitative insight into the use of network types and the number of journals per year to determine the usage in different scientific fields.
Collapse
Affiliation(s)
- Romano Weiss
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
- Sanaz Karimijafarbigloo
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
- Dirk Roggenbuck
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
- Faculty of Health Sciences Brandenburg, Brandenburg University of Technology Cottbus-Senftenberg, D-01968 Senftenberg, Germany
- Stefan Rödiger
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
- Faculty of Health Sciences Brandenburg, Brandenburg University of Technology Cottbus-Senftenberg, D-01968 Senftenberg, Germany
Collapse
|
45
|
Ranjan A, Lalwani D, Misra R. GAN for synthesizing CT from T2-weighted MRI data towards MR-guided radiation treatment. MAGMA (NEW YORK, N.Y.) 2022; 35:449-457. [PMID: 34741702 DOI: 10.1007/s10334-021-00974-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 10/12/2021] [Accepted: 10/25/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE In the medical domain, cross-modality image synthesis suffers from multiple issues, such as context misalignment, image distortion, blurriness, and loss of detail. The fundamental objective of this study is to address these issues when estimating synthetic computed tomography (sCT) scans from T2-weighted magnetic resonance imaging (MRI) scans for MRI-guided radiation treatment (RT). MATERIALS AND METHODS We proposed a conditional generative adversarial network (cGAN) with multiple residual blocks to estimate sCT from T2-weighted MRI scans, using a dataset of 367 paired brain MR-CT images. Several state-of-the-art deep learning models, including the Pix2Pix model, the U-Net model, and an autoencoder model, were implemented to generate sCT, and their results were compared. RESULTS Results on the paired MR-CT image dataset demonstrate that the proposed model, with nine residual blocks in the generator architecture, yields the smallest mean absolute error (MAE) value of [Formula: see text] and mean squared error (MSE) value of [Formula: see text], and produces the largest Pearson correlation coefficient (PCC) value of [Formula: see text], SSIM value of [Formula: see text], and peak signal-to-noise ratio (PSNR) value of [Formula: see text]. We qualitatively evaluated our results through visual comparison of the generated sCT against the original CT of the respective MRI input. DISCUSSION The quantitative and qualitative comparisons in this work demonstrate that a deep learning-based cGAN model can be used to estimate an sCT scan from a reference T2-weighted MRI scan. The overall accuracy of our proposed model surpasses that of different state-of-the-art deep learning-based models.
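For orientation, here is a minimal PyTorch sketch of the kind of residual-block generator described above (nine identity-shortcut blocks between an input and an output convolution); the channel widths, normalization layers, and activations are assumptions for illustration, not the authors' implementation.

```python
# Minimal PyTorch sketch (illustrative assumptions, not the authors' code) of a
# cGAN generator with nine residual blocks for mapping a T2-weighted MR slice
# to a synthetic CT slice.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)   # identity shortcut

class Generator(nn.Module):
    def __init__(self, n_blocks=9, ch=64):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 7, padding=3), nn.ReLU(inplace=True)]
        layers += [ResidualBlock(ch) for _ in range(n_blocks)]
        layers += [nn.Conv2d(ch, 1, 7, padding=3), nn.Tanh()]   # sCT intensities in [-1, 1]
        self.net = nn.Sequential(*layers)

    def forward(self, mri):
        return self.net(mri)

if __name__ == "__main__":
    g = Generator()
    sct = g(torch.randn(1, 1, 256, 256))   # one T2-weighted slice -> synthetic CT slice
    print(sct.shape)
```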
Collapse
Affiliation(s)
- Amit Ranjan
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India.
- Debanshu Lalwani
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India
- Rajiv Misra
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India
Collapse
|
46
|
Li Z, Huang X, Zhang Z, Liu L, Wang F, Li S, Gao S, Xia J. Synthesis of magnetic resonance images from computed tomography data using convolutional neural network with contextual loss function. Quant Imaging Med Surg 2022; 12:3151-3169. [PMID: 35655819 PMCID: PMC9131350 DOI: 10.21037/qims-21-846] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 02/23/2022] [Indexed: 12/26/2023]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) images synthesized from computed tomography (CT) data can provide more detailed information on pathological structures than CT data alone; thus, the synthesis of MRI has received increasing attention, especially in medical scenarios where only CT images are available. A novel convolutional neural network (CNN) combined with a contextual loss function was proposed for the synthesis of T1- and T2-weighted images (T1WI and T2WI) from CT data. METHODS A total of 5,053 and 5,081 slices of T1WI and T2WI, respectively, were selected for the dataset of CT and MRI image pairs. Affine registration, image denoising, and contrast enhancement were performed on this multi-modality medical image dataset comprising T1WI, T2WI, and CT images of the brain. A deep CNN was then proposed by modifying the ResNet structure to constitute the encoder and decoder of a U-Net, termed the double ResNet-U-Net (DRUNet). Three different loss functions were used to optimize the parameters of the proposed models: mean squared error (MSE) loss, binary cross-entropy (BCE) loss, and contextual loss. Statistical analysis using independent-sample t-tests was conducted to compare DRUNets with different loss functions and different numbers of network layers. RESULTS DRUNet-101 with contextual loss yielded higher values of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and the Tenengrad function (i.e., 34.25±2.06, 0.97±0.03, and 17.03±2.75 for T1WI and 33.50±1.08, 0.98±0.05, and 19.76±3.54 for T2WI, respectively). The results were statistically significant at P<0.001 with a narrow confidence interval of the difference, indicating the superiority of DRUNet-101 with contextual loss. In addition, both the image zooming and the difference maps presented for the final synthetic MR images visually reflected the robustness of DRUNet-101 with contextual loss. The visualization of convolution filters and feature maps showed that the proposed model can generate synthetic MR images with high-frequency information. CONCLUSIONS The results demonstrated that DRUNet-101 with the contextual loss function provided better high-frequency information in synthetic MR images than the other two loss functions. The proposed DRUNet model has a distinct advantage over previous models in terms of PSNR, SSIM, and Tenengrad score. Overall, DRUNet-101 with contextual loss is recommended for synthesizing MR images from CT scans.
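For reference, the PSNR figure reported above can be computed directly from the MSE between a real slice and its synthetic counterpart; the following generic Python sketch (not code from the study) shows this relationship on stand-in data.

```python
# Generic PSNR computation used to compare real and synthetic MR slices.
# This is a standard formulation, not code from the study; the data are stand-ins.
import numpy as np

def psnr(reference, synthetic, data_range=1.0):
    """Peak signal-to-noise ratio between a reference slice and a synthetic slice,
    both scaled to [0, data_range]."""
    mse = np.mean((reference - synthetic) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

if __name__ == "__main__":
    ref = np.random.rand(256, 256)
    syn = ref + 0.01 * np.random.randn(256, 256)   # stand-in synthetic T1WI
    print(f"PSNR: {psnr(ref, syn):.2f} dB")
```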
Collapse
Affiliation(s)
- Zhaotong Li
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Institute of Medical Humanities, Peking University, Beijing, China
- Xinrui Huang
- Department of Biochemistry and Biophysics, School of Basic Medical Sciences, Peking University, Beijing, China
- Zeru Zhang
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Institute of Medical Humanities, Peking University, Beijing, China
- Liangyou Liu
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Institute of Medical Humanities, Peking University, Beijing, China
- Fei Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing Cancer Hospital & Institute, Beijing, China
- Sha Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing Cancer Hospital & Institute, Beijing, China
- Song Gao
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Jun Xia
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People’s Hospital, Shenzhen, China
Collapse
|
47
|
Ahmadian H, Mageswaran P, Walter BA, Blakaj DM, Bourekas EC, Mendel E, Marras WS, Soghrati S. Toward an artificial intelligence-assisted framework for reconstructing the digital twin of vertebra and predicting its fracture response. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2022; 38:e3601. [PMID: 35403831 PMCID: PMC9285948 DOI: 10.1002/cnm.3601] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2021] [Revised: 02/13/2022] [Accepted: 04/04/2022] [Indexed: 06/14/2023]
Abstract
This article presents an effort toward building an artificial intelligence (AI)-assisted framework, coined ReconGAN, for creating a realistic digital twin of the human vertebra and predicting the risk of vertebral fracture (VF). ReconGAN consists of a deep convolutional generative adversarial network (DCGAN), image-processing steps, and finite element (FE)-based shape optimization to reconstruct the vertebra model. The DCGAN model is trained using a set of quantitative micro-computed tomography (micro-QCT) images of trabecular bone obtained from cadaveric samples. The quality of the synthetic trabecular models generated by the DCGAN is verified by comparing a set of their statistical microstructural descriptors with those of the imaging data. The synthesized trabecular microstructure is then infused into the vertebral cortical shell extracted from the patient's diagnostic CT scans using an FE-based shape optimization approach to achieve a smooth transition between the trabecular and cortical regions. The final geometrical model of the vertebra is converted into a high-fidelity FE model to simulate the VF response using a continuum damage model under compression and flexion loading conditions. A feasibility study is presented to demonstrate the applicability of digital twins generated using this AI-assisted framework to predict the risk of VF in a cancer patient with spinal metastasis.
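To make the DCGAN component more concrete, here is a minimal PyTorch sketch of a DCGAN-style generator that maps a latent vector to a small grayscale patch; the patch size, layer widths, and training details are illustrative assumptions and do not reproduce the study's architecture.

```python
# Minimal DCGAN-style generator sketch (illustrative only; the study's actual
# architecture, patch size, and training procedure are not reproduced here).
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Maps a latent vector to a 64x64 grayscale patch, intended (after training)
    to resemble a trabecular-bone micro-CT patch."""
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0), nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1), nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

if __name__ == "__main__":
    g = DCGANGenerator()
    patches = g(torch.randn(8, 100, 1, 1))   # eight synthetic micro-CT-like patches
    print(patches.shape)                     # torch.Size([8, 1, 64, 64])
```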
Collapse
Affiliation(s)
- Hossein Ahmadian
- Department of Integrated Systems Engineering, The Ohio State University, Columbus, Ohio, USA
- Prasath Mageswaran
- Department of Integrated Systems Engineering, The Ohio State University, Columbus, Ohio, USA
- Benjamin A. Walter
- Department of Biomedical Engineering, The Ohio State University, Columbus, Ohio, USA
- Dukagjin M. Blakaj
- Department of Radiation Oncology, The Ohio State University, Columbus, Ohio, USA
- Eric C. Bourekas
- Department of Neurological Surgery, The Ohio State University, Columbus, Ohio, USA
- Department of Radiology, The Ohio State University, Columbus, Ohio, USA
- Department of Neurology, The Ohio State University, Columbus, Ohio, USA
- Ehud Mendel
- Department of Radiation Oncology, The Ohio State University, Columbus, Ohio, USA
- Department of Neurological Surgery, The Ohio State University, Columbus, Ohio, USA
- Department of Orthopedics, The Ohio State University, Columbus, Ohio, USA
- William S. Marras
- Department of Integrated Systems Engineering, The Ohio State University, Columbus, Ohio, USA
- Soheil Soghrati
- Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, Ohio, USA
- Department of Materials Science and Engineering, The Ohio State University, Columbus, Ohio, USA
Collapse
|
48
|
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
49
|
Sreeja S, Muhammad Noorul Mubarak D. Pseudo computed tomography image generation from brain magnetic resonance image using integration of PCA & DCNN-UNET: A comparative analysis. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-213367] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
MRI-only radiation treatment (RT) avoids some of the issues associated with employing computed tomography (CT) in the RT chain, such as registration of MRI to a separate CT, excess dose administration, and the cost of repeated imaging. The fact that MRI signal intensities are unrelated to the attenuation coefficient of biological tissue, however, poses a problem: it increases workloads, creates uncertainty because of the required inter-modality image registrations, and exposes patients to unnecessary radiation. While using only MRI would be preferable, a method for estimating a pseudo-CT (pCT), or synthetic CT (sCT), is required to produce electron density maps and patient-positioning reference images. As deep learning (DL) is revolutionizing many fields, an effective and accurate model is needed for generating pCT from MRI. This paper therefore presents an efficient DL pipeline with the following stages: (a) data acquisition, in which CT and MRI images are collected; (b) preprocessing to remove anomalies and noise using techniques such as outlier elimination, data smoothing, and data normalization; (c) feature extraction and selection using principal component analysis (PCA) and a regression method; and (d) generation of pCT from MRI using a deep convolutional neural network combined with U-Net (DCNN-UNET). We compare both the feature-extraction step (PCA) and the synthesis model (DCNN-UNET) with other methods, namely the discrete wavelet transform (DWT), independent component analysis (ICA), and the Fourier transform for feature extraction, and VGG16, ResNet, AlexNet, DenseNet, and a plain convolutional neural network (CNN) for synthesis. The performance measures used to evaluate these models are the Dice coefficient (DC), structural similarity index measure (SSIM), mean absolute error (MAE), mean squared error (MSE), accuracy, and computation time, on which our proposed system outperforms the other state-of-the-art models with 0.94±0.02.
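As a rough illustration of stage (c), the following Python sketch applies PCA to flattened MR slices to obtain a compact feature representation; the slice size, component count, and the use of scikit-learn are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of a PCA feature-extraction step on flattened MR slices.
# The dataset, slice size, and number of components are stand-in assumptions.
import numpy as np
from sklearn.decomposition import PCA

# Stand-in dataset: 200 axial MR slices of 128x128 pixels, flattened to vectors.
mri_slices = np.random.rand(200, 128 * 128).astype(np.float32)

pca = PCA(n_components=50)          # keep the 50 strongest components
features = pca.fit_transform(mri_slices)

print("reduced feature matrix:", features.shape)                      # (200, 50)
print("explained variance retained:", pca.explained_variance_ratio_.sum())
```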
Collapse
Affiliation(s)
- S Sreeja
- Department of Computer Science, University of Kerala, Karyavattom Campus, Trivandrum, Kerala, India
Collapse
|
50
|
Addressing the Missing Data Challenge in Multi-Modal Datasets for the Diagnosis of Alzheimer’s Disease. J Neurosci Methods 2022; 375:109582. [DOI: 10.1016/j.jneumeth.2022.109582] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 03/22/2022] [Accepted: 03/23/2022] [Indexed: 11/18/2022]
|