51. Zhang D, Wang C, Chen T, Chen W, Shen Y. Scalable Swin Transformer network for brain tumor segmentation from incomplete MRI modalities. Artif Intell Med 2024;149:102788. [PMID: 38462288] [DOI: 10.1016/j.artmed.2024.102788]
Abstract
BACKGROUND Deep learning methods have shown great potential in processing multi-modal Magnetic Resonance Imaging (MRI) data, enabling improved accuracy in brain tumor segmentation. However, the performance of these methods can suffer when dealing with incomplete modalities, which is a common issue in clinical practice. Existing solutions, such as missing modality synthesis, knowledge distillation, and architecture-based methods, suffer from drawbacks such as long training times, high model complexity, and poor scalability. METHOD This paper proposes IMS2Trans, a novel lightweight scalable Swin Transformer network that utilizes a single encoder to extract latent feature maps from all available modalities. This unified feature extraction process enables efficient information sharing and fusion among the modalities, yielding efficiency without compromising segmentation performance even in the presence of missing modalities. RESULTS The model is evaluated against popular benchmarks on two brain tumor segmentation datasets with incomplete modalities, BraTS 2018 and BraTS 2020. On the BraTS 2018 dataset, our model achieved higher average Dice similarity coefficient (DSC) scores for the whole tumor, tumor core, and enhancing tumor regions (86.57, 75.67, and 58.28, respectively) than a state-of-the-art model, mmFormer (86.45, 75.51, and 57.79, respectively). Similarly, on the BraTS 2020 dataset, our model achieved higher DSC scores in these three brain tumor regions (87.33, 79.09, and 62.11, respectively) compared to mmFormer (86.17, 78.34, and 60.36, respectively). We also conducted a Wilcoxon test on the experimental results, and the resulting p-value confirmed that the improvement was statistically significant. Moreover, our model exhibits significantly reduced complexity with only 4.47 M parameters, 121.89 G FLOPs, and a model size of 77.13 MB, whereas mmFormer comprises 34.96 M parameters, 265.79 G FLOPs, and a model size of 559.74 MB. These results indicate that our model, despite being lightweight with significantly fewer parameters, still achieves better performance than a state-of-the-art model. CONCLUSION By leveraging a single encoder to process the available modalities, IMS2Trans offers notable scalability advantages over methods that rely on multiple encoders. This streamlined approach eliminates the need to maintain a separate encoder for each modality, resulting in a lightweight and scalable network architecture. The source code of IMS2Trans and the associated weights are both publicly available at https://github.com/hudscomdz/IMS2Trans.
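The abstract reports per-region Dice similarity coefficient (DSC) scores and a Wilcoxon test comparing IMS2Trans with mmFormer. As a rough illustration of that evaluation step (not the authors' code), per-case DSC scores and a paired Wilcoxon signed-rank test could be computed as follows; the score arrays are placeholder values, not results from the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Per-case whole-tumor DSC scores of two models on the same test cases (toy values).
dsc_model_a = np.array([0.87, 0.84, 0.90, 0.78, 0.88])
dsc_model_b = np.array([0.86, 0.81, 0.86, 0.76, 0.83])

# Paired, non-parametric test of whether the score difference is significant.
stat, p_value = wilcoxon(dsc_model_a, dsc_model_b)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```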
Affiliation(s)
- Dongsong Zhang: School of Big Data and Artificial Intelligence, Xinyang College, Xinyang, 464000, Henan, China; School of Computing and Engineering, University of Huddersfield, Huddersfield, HD1 3DH, UK
- Changjian Wang: National Key Laboratory of Parallel and Distributed Computing, Changsha, 410073, Hunan, China
- Tianhua Chen: School of Computing and Engineering, University of Huddersfield, Huddersfield, HD1 3DH, UK
- Weidao Chen: Beijing Infervision Technology Co., Ltd., Beijing, 100020, China
- Yiqing Shen: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
52. Li W, Liu J, Wang S, Feng C. MTFN: multi-temporal feature fusing network with co-attention for DCE-MRI synthesis. BMC Med Imaging 2024;24:47. [PMID: 38373915] [PMCID: PMC10875895] [DOI: 10.1186/s12880-024-01201-y]
Abstract
BACKGROUND Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) plays an important role in the diagnosis and treatment of breast cancer. However, obtaining the complete set of eight temporal images of DCE-MRI requires a long scanning time, which causes patient discomfort during the scanning process. To reduce this time, the multi-temporal feature fusing network with co-attention (MTFN) is proposed to generate the eighth temporal image of DCE-MRI, enabling DCE-MRI acquisition without the full scan. METHODS In this paper, we propose MTFN for DCE-MRI synthesis, in which the co-attention module fully fuses the features of the first and third temporal images to obtain hybrid features. The co-attention explores long-range dependencies, not just relationships between pixels, so the hybrid features are more helpful for generating the eighth temporal image. RESULTS We conduct experiments on a private breast DCE-MRI dataset from hospitals and the multi-modal Brain Tumor Segmentation Challenge 2018 dataset (BraTS 2018). Compared with existing methods, our method shows improved results and can generate more realistic images. We also use the synthetic images to classify the molecular subtypes of breast cancer: the classification accuracy is 89.53% on the original eighth time-series images and 92.46% on the generated images, an improvement of about 3%, and the classification results verify the practicability of the synthetic images. CONCLUSIONS Subjective evaluation and objective image quality metrics show the effectiveness of our method, which can obtain comprehensive and useful information. The improvement in classification accuracy proves that the images generated by our method are practical.
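The co-attention module described above fuses features of the first and third temporal images while modelling long-range dependencies rather than pixel-to-pixel relations. A minimal single-head cross-attention sketch in PyTorch, under the assumption of 2D feature maps; the class name and shapes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Cross-attention between feature maps of two temporal images:
    queries come from one time point, keys/values from the other, so every
    spatial position can attend to all positions of the other image."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, feat_a, feat_b):
        b, c, h, w = feat_a.shape
        q = self.q(feat_a).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.k(feat_b).flatten(2)                   # (B, C, HW)
        v = self.v(feat_b).flatten(2).transpose(1, 2)   # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return fused + feat_a                           # residual "hybrid" features

# Fuse features of the 1st and 3rd temporal images (toy tensors).
f1, f3 = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
hybrid = CoAttention(64)(f1, f3)
print(hybrid.shape)  # torch.Size([1, 64, 32, 32])
```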
Affiliation(s)
- Wei Li: Key Laboratory of Intelligent Computing in Medical Image (MIIC), Northeastern University, Shenyang, China
- Jiaye Liu: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shanshan Wang: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Chaolu Feng: Key Laboratory of Intelligent Computing in Medical Image (MIIC), Northeastern University, Shenyang, China
53. Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024;92:103046. [PMID: 38052145] [DOI: 10.1016/j.media.2023.103046]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe: Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang: Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
54. Kumar S, Saber H, Charron O, Freeman L, Tamir JI. Correcting synthetic MRI contrast-weighted images using deep learning. Magn Reson Imaging 2024;106:43-54. [PMID: 38092082] [DOI: 10.1016/j.mri.2023.11.015]
Abstract
Synthetic magnetic resonance imaging (MRI) offers a scanning paradigm where a fast multi-contrast sequence can be used to estimate underlying quantitative tissue parameter maps, which are then used to synthesize any desirable clinical contrast by retrospectively changing scan parameters in silico. Two benefits of this approach are the reduced exam time and the ability to generate arbitrary contrasts offline. However, synthetically generated contrasts are known to deviate from the contrast of experimental scans. The reason for contrast mismatch is the necessary exclusion of some unmodeled physical effects such as partial voluming, diffusion, flow, susceptibility, magnetization transfer, and more. The inclusion of these effects in signal encoding would improve the synthetic images, but would make the quantitative imaging protocol impractical due to long scan times. Therefore, in this work, we propose a novel deep learning approach that generates a multiplicative correction term to capture unmodeled effects and correct the synthetic contrast images to better match experimental contrasts for arbitrary scan parameters. The physics-inspired deep learning model implicitly accounts for some unmodeled physical effects occurring during the scan. As a proof of principle, we validate our approach on synthesizing arbitrary inversion recovery fast spin-echo scans using a commercially available 2D multi-contrast sequence. We observe that the proposed correction visually and numerically reduces the mismatch with experimentally collected contrasts compared to conventional synthetic MRI. Finally, we show results of a preliminary reader study and find that the proposed method yields statistically significant improvements in contrast and SNR compared to conventional synthetic MR images.
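The central idea is a learned multiplicative correction applied to the physics-based synthetic contrast image. The toy PyTorch sketch below illustrates that idea only; the CorrectionNet architecture and the exponential parameterization (used here just to keep the factor positive) are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CorrectionNet(nn.Module):
    """Toy CNN predicting a per-voxel multiplicative correction from
    quantitative maps and the physics-based synthetic image."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, qmaps, synthetic):
        # Predict a log-correction so the multiplicative factor stays positive.
        log_corr = self.net(torch.cat([qmaps, synthetic], dim=1))
        return synthetic * torch.exp(log_corr)

# qmaps: e.g. T1/T2/PD maps; synthetic: signal-equation image for the chosen scan parameters.
qmaps = torch.randn(1, 3, 128, 128)
synthetic = torch.rand(1, 1, 128, 128)
corrected = CorrectionNet(in_ch=4)(qmaps, synthetic)
```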
Affiliation(s)
- Sidharth Kumar: Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Hamidreza Saber: Dell Medical School Department of Neurology, The University of Texas at Austin, Austin, TX 78712, USA; Dell Medical School Department of Neurosurgery, The University of Texas at Austin, Austin, TX 78712, USA
- Odelin Charron: Dell Medical School Department of Neurology, The University of Texas at Austin, Austin, TX 78712, USA
- Leorah Freeman: Dell Medical School Department of Neurology, The University of Texas at Austin, Austin, TX 78712, USA; Dell Medical School Department of Diagnostic Medicine, The University of Texas at Austin, Austin, TX 78712, USA
- Jonathan I Tamir: Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA; Dell Medical School Department of Diagnostic Medicine, The University of Texas at Austin, Austin, TX 78712, USA; Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX 78712, USA
55. Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, Jose A, Roy R, Merhof D. Advances in medical image analysis with vision Transformers: A comprehensive review. Med Image Anal 2024;91:103000. [PMID: 37883822] [DOI: 10.1016/j.media.2023.103000]
Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in Computer Vision. Among other merits, Transformers have been shown to be capable of learning long-range dependencies and spatial correlations, which is a clear advantage over convolutional neural networks (CNNs), which have been the de facto standard in Computer Vision problems so far. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic overview of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, if applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss different future research directions. In addition, we have provided the cited papers with their corresponding implementations at https://github.com/mindflow-institue/Awesome-Transformer.
Affiliation(s)
- Reza Azad: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Amirhossein Kazerouni: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Moein Heidari: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Amirali Molaei: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Yiwei Jia: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Abin Jose: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Rijo Roy: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Dorit Merhof: Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
56. Wang C, Piao S, Huang Z, Gao Q, Zhang J, Li Y, Shan H. Joint learning framework of cross-modal synthesis and diagnosis for Alzheimer's disease by mining underlying shared modality information. Med Image Anal 2024;91:103032. [PMID: 37995628] [DOI: 10.1016/j.media.2023.103032]
Abstract
Alzheimer's disease (AD) is one of the most common neurodegenerative disorders presenting irreversible progression of cognitive impairment. How to identify AD as early as possible is critical for intervention with potential preventive measures. Among various neuroimaging modalities used to diagnose AD, functional positron emission tomography (PET) has higher sensitivity than structural magnetic resonance imaging (MRI), but it is also costlier and often not available in many hospitals. How to leverage massive unpaired unlabeled PET to improve the diagnosis performance of AD from MRI becomes rather important. To address this challenge, this paper proposes a novel joint learning framework of unsupervised cross-modal synthesis and AD diagnosis by mining underlying shared modality information, improving the AD diagnosis from MRI while synthesizing more discriminative PET images. We mine underlying shared modality information in two aspects: diversifying modality information through the cross-modal synthesis network and locating critical diagnosis-related patterns through the AD diagnosis network. First, to diversify the modality information, we propose a novel unsupervised cross-modal synthesis network, which implements the inter-conversion between 3D PET and MRI in a single model modulated by the AdaIN module. Second, to locate shared critical diagnosis-related patterns, we propose an interpretable diagnosis network based on fully 2D convolutions, which takes either 3D synthesized PET or original MRI as input. Extensive experimental results on the ADNI dataset show that our framework can synthesize more realistic images, outperform the state-of-the-art AD diagnosis methods, and have better generalization on external AIBL and NACC datasets.
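The synthesis network described above inter-converts 3D PET and MRI within a single model modulated by AdaIN. A minimal 2D sketch of AdaIN driven by a modality code is given below; the ModalityModulation class and its embedding-based parameterization are illustrative assumptions, not the authors' module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adain(content_feat, style_mean, style_std, eps=1e-5):
    """Adaptive Instance Normalization: re-normalize content features to
    target-modality statistics."""
    b, c = content_feat.shape[:2]
    mean = content_feat.mean(dim=(2, 3), keepdim=True)
    std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (content_feat - mean) / std
    return normalized * style_std.view(b, c, 1, 1) + style_mean.view(b, c, 1, 1)

class ModalityModulation(nn.Module):
    """Maps a modality code (e.g. 0 = MRI-to-PET, 1 = PET-to-MRI) to AdaIN parameters."""
    def __init__(self, channels, n_modalities=2):
        super().__init__()
        self.embed = nn.Embedding(n_modalities, 2 * channels)

    def forward(self, feat, modality_id):
        style_mean, style_std = self.embed(modality_id).chunk(2, dim=1)
        return adain(feat, style_mean, F.softplus(style_std))

feat = torch.randn(2, 64, 32, 32)
modulated = ModalityModulation(64)(feat, torch.tensor([0, 1]))
```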
Affiliation(s)
- Chenhui Wang: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
- Sirong Piao: Department of Radiology, Huashan Hospital, Fudan University, Shanghai 200040, China
- Zhizhong Huang: Shanghai Key Lab of Intelligent Information Processing, Fudan University, Shanghai 200433, China; School of Computer Science, Fudan University, Shanghai 200433, China
- Qi Gao: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
- Junping Zhang: Shanghai Key Lab of Intelligent Information Processing, Fudan University, Shanghai 200433, China; School of Computer Science, Fudan University, Shanghai 200433, China
- Yuxin Li: Department of Radiology, Huashan Hospital, Fudan University, Shanghai 200040, China
- Hongming Shan: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China; MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China; MOE Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; Shanghai Center for Brain Science and Brain-inspired Technology, Shanghai 201210, China
57. Gao X, Shi F, Shen D, Liu M. Multimodal transformer network for incomplete image generation and diagnosis of Alzheimer's disease. Comput Med Imaging Graph 2023;110:102303. [PMID: 37832503] [DOI: 10.1016/j.compmedimag.2023.102303]
Abstract
Multimodal images such as magnetic resonance imaging (MRI) and positron emission tomography (PET) could provide complementary information about the brain and have been widely investigated for the diagnosis of neurodegenerative disorders such as Alzheimer's disease (AD). However, multimodal brain images are often incomplete in clinical practice. It is still challenging to make use of multimodality for disease diagnosis with missing data. In this paper, we propose a deep learning framework with the multi-level guided generative adversarial network (MLG-GAN) and multimodal transformer (Mul-T) for incomplete image generation and disease classification, respectively. First, MLG-GAN is proposed to generate the missing data, guided by multi-level information from voxels, features, and tasks. In addition to voxel-level supervision and task-level constraint, a feature-level auto-regression branch is proposed to embed the features of target images for an accurate generation. With the complete multimodal images, we propose a Mul-T network for disease diagnosis, which can not only combine the global and local features but also model the latent interactions and correlations from one modality to another with the cross-modal attention mechanism. Comprehensive experiments on three independent datasets (i.e., ADNI-1, ADNI-2, and OASIS-3) show that the proposed method achieves superior performance in the tasks of image generation and disease diagnosis compared to state-of-the-art methods.
Affiliation(s)
- Xingyu Gao: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China
- Dinggang Shen: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China; School of Biomedical Engineering, ShanghaiTech University, China
- Manhua Liu: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; MoE Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
58. Ozbey M, Dalmaz O, Dar SUH, Bedel HA, Ozturk S, Gungor A, Cukur T. Unsupervised Medical Image Translation With Adversarial Diffusion Models. IEEE Trans Med Imaging 2023;42:3524-3539. [PMID: 37379177] [DOI: 10.1109/tmi.2023.3290149]
Abstract
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.
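SynDiff pairs a conditional diffusion process with adversarial projections that permit large reverse steps, plus a cycle-consistent design for unpaired training. The sketch below shows only a generic conditional, deterministic (DDIM-style) sampling loop with large steps and a placeholder denoiser; it omits the adversarial and cycle-consistency components and is not the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Placeholder network predicting the clean target image x0 from the noisy
    target x_t, the source-modality image, and the timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x_t, source, t):
        t_map = torch.full_like(x_t, float(t) / 1000.0)   # crude timestep conditioning
        return self.net(torch.cat([x_t, source, t_map], dim=1))

def sample(model, source, alphas_bar, steps):
    """Deterministic reverse sampling (DDIM, eta=0) over a few large steps."""
    x = torch.randn_like(source)
    for i, t in enumerate(steps):
        a_t = alphas_bar[t]
        x0_hat = model(x, source, t).clamp(-1, 1)
        eps_hat = (x - a_t.sqrt() * x0_hat) / (1 - a_t).sqrt()
        a_prev = alphas_bar[steps[i + 1]] if i + 1 < len(steps) else torch.tensor(1.0)
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps_hat
    return x

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1 - betas, dim=0)
steps = list(range(T - 1, -1, -250))                      # e.g. 4 large reverse steps
target = sample(TinyDenoiser(), torch.randn(1, 1, 64, 64), alphas_bar, steps)
```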
59. Yang H, Sun J, Xu Z. Learning Unified Hyper-Network for Multi-Modal MR Image Synthesis and Tumor Segmentation With Missing Modalities. IEEE Trans Med Imaging 2023;42:3678-3689. [PMID: 37540616] [DOI: 10.1109/tmi.2023.3301934]
Abstract
Accurate segmentation of brain tumors is of critical importance in clinical assessment and treatment planning, which requires multiple MR modalities providing complementary information. However, due to practical limits, one or more modalities may be missing in real scenarios. To tackle this problem, existing methods need to train multiple networks or a unified but fixed network for various possible missing modality cases, which leads to high computational burdens or sub-optimal performance. In this paper, we propose a unified and adaptive multi-modal MR image synthesis method, and further apply it to tumor segmentation with missing modalities. Based on the decomposition of multi-modal MR images into common and modality-specific features, we design a shared hyper-encoder for embedding each available modality into the feature space, a graph-attention-based fusion block to aggregate the features of available modalities to the fused features, and a shared hyper-decoder for image reconstruction. We also propose an adversarial common feature constraint to enforce the fused features to be in a common space. As for missing modality segmentation, we first conduct the feature-level and image-level completion using our synthesis method and then segment the tumors based on the completed MR images together with the extracted common features. Moreover, we design a hypernet-based modulation module to adaptively utilize the real and synthetic modalities. Experimental results suggest that our method can not only synthesize reasonable multi-modal MR images, but also achieve state-of-the-art performance on brain tumor segmentation with missing modalities.
60. Wang K, Doneva M, Meineke J, Amthor T, Karasan E, Tan F, Tamir JI, Yu SX, Lustig M. High-fidelity direct contrast synthesis from magnetic resonance fingerprinting. Magn Reson Med 2023;90:2116-2129. [PMID: 37332200] [DOI: 10.1002/mrm.29766]
Abstract
PURPOSE This work was aimed at proposing a supervised learning-based method that directly synthesizes contrast-weighted images from the Magnetic Resonance Fingerprinting (MRF) data without performing quantitative mapping and spin-dynamics simulations. METHODS To implement our direct contrast synthesis (DCS) method, we deploy a conditional generative adversarial network (GAN) framework with a multi-branch U-Net as the generator and a multilayer CNN (PatchGAN) as the discriminator. We refer to our proposed approach as N-DCSNet. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. The performance of our proposed method is demonstrated on in vivo MRF scans from healthy volunteers. Quantitative metrics, including normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID), were used to evaluate the performance of the proposed method and compare it with others. RESULTS In vivo experiments demonstrated excellent image quality compared with simulation-based contrast synthesis and previous DCS methods, both visually and according to quantitative metrics. We also demonstrate cases in which our trained model is able to mitigate the in-flow and spiral off-resonance artifacts typically seen in MRF reconstructions, and thus more faithfully represent conventional spin echo-based contrast-weighted images. CONCLUSION We present N-DCSNet to directly synthesize high-fidelity multicontrast MR images from a single MRF acquisition. This method can significantly decrease examination time. By directly training a network to generate contrast-weighted images, our method does not require any model-based simulation and therefore can avoid reconstruction errors due to dictionary matching and contrast simulation (code available at: https://github.com/mikgroup/DCSNet).
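The scalar quality metrics named above (nRMSE, PSNR, SSIM) can be computed with NumPy and scikit-image as in this minimal sketch on toy data; LPIPS and FID require learned feature extractors and are omitted here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def nrmse(reference, estimate):
    """Root-mean-square error normalized by the norm of the reference image."""
    return np.linalg.norm(estimate - reference) / np.linalg.norm(reference)

rng = np.random.default_rng(0)
reference = rng.random((128, 128)).astype(np.float32)
estimate = np.clip(reference + 0.05 * rng.standard_normal((128, 128)).astype(np.float32), 0, 1)

print("nRMSE:", nrmse(reference, estimate))
print("PSNR :", peak_signal_noise_ratio(reference, estimate, data_range=1.0))
print("SSIM :", structural_similarity(reference, estimate, data_range=1.0))
```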
Affiliation(s)
- Ke Wang: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA; International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA
- Ekin Karasan: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- Fei Tan: Bioengineering, UC Berkeley-UCSF, San Francisco, California, USA
- Jonathan I Tamir: Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Stella X Yu: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA; International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA; Computer Science and Engineering, University of Michigan, Ann Arbor, Michigan, USA
61. Roberts M, Hinton G, Wells AJ, Van Der Veken J, Bajger M, Lee G, Liu Y, Chong C, Poonnoose S, Agzarian M, To MS. Imaging evaluation of a proposed 3D generative model for MRI to CT translation in the lumbar spine. Spine J 2023;23:1602-1612. [PMID: 37479140] [DOI: 10.1016/j.spinee.2023.06.399]
Abstract
BACKGROUND CONTEXT Computed tomography (CT) and magnetic resonance imaging (MRI) are used routinely in the radiologic evaluation and surgical planning of patients with lumbar spine pathology, with the modalities being complementary. We have developed a deep learning algorithm which can produce 3D lumbar spine CT images from MRI data alone. This has the potential to reduce radiation to the patient as well as burden on the health care system. PURPOSE The purpose of this study is to evaluate the accuracy of the synthetic lumbar spine CT images produced using our deep learning model. STUDY DESIGN A training set of 400 unpaired CTs and 400 unpaired MRI scans of the lumbar spine was used to train a supervised 3D CycleGAN model. Evaluators performed a set of clinically relevant measurements on 20 matched synthetic CTs and true CTs. These measurements were then compared to assess the accuracy of the synthetic CTs. PATIENT SAMPLE The evaluation data set consisted of 20 patients who had CT and MRI scans performed within a 30-day period of each other. All patient data were deidentified. Notable exclusions included artefact from patient motion, metallic implants, or any intervention performed in the 30-day intervening period. OUTCOME MEASURES The outcome measured was the mean difference in measurements performed by the group of evaluators between real CTs and synthetic CTs, in terms of absolute and relative error. METHODS Data from the 20 MRI scans were supplied to our deep learning model, which produced 20 "synthetic CT" scans. This formed the evaluation data set. Four clinical evaluators consisting of neurosurgeons and radiologists performed a set of 24 clinically relevant measurements on matched synthetic CTs and true CTs in 20 patients. A test set of measurements was performed prior to commencing data collection to identify any significant interobserver variation in measurement technique. RESULTS The measurements performed in the sagittal plane were all within 10% relative error, with the majority within 5% relative error. The pedicle measurements performed in the axial plane were considerably less accurate, with a relative error of up to 34%. CONCLUSIONS The computer-generated synthetic CTs demonstrated a high level of accuracy for the measurements performed in-plane to the original MRIs used for synthesis. The measurements performed on the axial reconstructed images were less accurate, attributable to the images being synthesized from nonvolumetric routine sagittal T1-weighted MRI sequences. It is hypothesized that if axial sequences or volumetric data were input into the algorithm, these measurements would have improved accuracy.
Affiliation(s)
- Makenze Roberts: South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
- George Hinton: South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
- Adam J Wells: Department of Neurosurgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia; Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, South Australia, Australia
- Jorn Van Der Veken: Department of Neurosurgery, Flinders Medical Centre, Adelaide, South Australia, Australia
- Mariusz Bajger: College of Science and Engineering, Flinders University, Adelaide, South Australia, Australia
- Gobert Lee: College of Science and Engineering, Flinders University, Adelaide, South Australia, Australia
- Yifan Liu: The Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Chee Chong: South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
- Santosh Poonnoose: Department of Neurosurgery, Flinders Medical Centre, Adelaide, South Australia, Australia
- Marc Agzarian: South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
- Minh-Son To: South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia; The Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia; Flinders Health and Medical Research Institute, Flinders University, Adelaide, South Australia, Australia
62. Sun H, Wang L, Daskivich T, Qiu S, Han F, D'Agnolo A, Saouaf R, Christodoulou AG, Kim H, Li D, Xie Y. Retrospective T2 quantification from conventional weighted MRI of the prostate based on deep learning. Front Radiol 2023;3:1223377. [PMID: 37886239] [PMCID: PMC10598780] [DOI: 10.3389/fradi.2023.1223377]
Abstract
Purpose To develop a deep learning-based method to retrospectively quantify T2 from conventional T1- and T2-weighted images. Methods Twenty-five subjects were imaged using a multi-echo spin-echo sequence to estimate reference prostate T2 maps. Conventional T1- and T2-weighted images were acquired as the input images. A U-Net based neural network was developed to directly estimate T2 maps from the weighted images using a four-fold cross-validation training strategy. The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean percentage error (MPE), and Pearson correlation coefficient were calculated to evaluate the quality of network-estimated T2 maps. To explore the potential of this approach in clinical practice, a retrospective T2 quantification was performed on a high-risk prostate cancer cohort (Group 1) and a low-risk active surveillance cohort (Group 2). Tumor and non-tumor T2 values were evaluated by an experienced radiologist based on region of interest (ROI) analysis. Results The T2 maps generated by the trained network were consistent with the corresponding reference. Prostate tissue structures and contrast were well preserved, with a PSNR of 26.41 ± 1.17 dB, an SSIM of 0.85 ± 0.02, and a Pearson correlation coefficient of 0.86. Quantitative ROI analyses performed on 38 prostate cancer patients revealed estimated T2 values of 80.4 ± 14.4 ms and 106.8 ± 16.3 ms for tumor and non-tumor regions, respectively. ROI measurements showed a significant difference between tumor and non-tumor regions of the estimated T2 maps (P < 0.001). In the two-timepoints active surveillance cohort, patients defined as progressors exhibited lower estimated T2 values of the tumor ROIs at the second time point compared to the first time point. Additionally, the T2 difference between two time points for progressors was significantly greater than that for non-progressors (P = 0.010). Conclusion A deep learning method was developed to estimate prostate T2 maps retrospectively from clinically acquired T1- and T2-weighted images, which has the potential to improve prostate cancer diagnosis and characterization without requiring extra scans.
Affiliation(s)
- Haoran Sun: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Lixia Wang: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Timothy Daskivich: Minimal Invasive Urology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Shihan Qiu: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Fei Han: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Alessandro D'Agnolo: Imaging/Nuclear Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Rola Saouaf: Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Anthony G. Christodoulou: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Hyung Kim: Minimal Invasive Urology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Debiao Li: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Yibin Xie: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, United States
63. Peng W, Adeli E, Bosschieter T, Hyun Park S, Zhao Q, Pohl KM. Generating Realistic Brain MRIs via a Conditional Diffusion Probabilistic Model. Med Image Comput Comput Assist Interv 2023;14227:14-24. [PMID: 38169668] [PMCID: PMC10758344] [DOI: 10.1007/978-3-031-43993-3_2]
Abstract
As acquiring MRIs is expensive, neuroscience studies struggle to attain a sufficient number of them for properly training deep learning models. This challenge could be reduced by MRI synthesis, for which Generative Adversarial Networks (GANs) are popular. GANs, however, are commonly unstable and struggle with creating diverse and high-quality data. A more stable alternative is Diffusion Probabilistic Models (DPMs) with a fine-grained training strategy. To overcome their need for extensive computational resources, we propose a conditional DPM (cDPM) with a memory-efficient process that generates realistic-looking brain MRIs. To this end, we train a 2D cDPM to generate an MRI subvolume conditioned on another subset of slices from the same MRI. By generating slices using arbitrary combinations between condition and target slices, the model only requires limited computational resources to learn interdependencies between slices even if they are spatially far apart. After having learned these dependencies via an attention network, a new anatomy-consistent 3D brain MRI is generated by repeatedly applying the cDPM. Our experiments demonstrate that our method can generate high-quality 3D MRIs that share a similar distribution to real MRIs while still diversifying the training set. The code is available at https://github.com/xiaoiker/mask3DMRI_diffusion and also will be released as part of MONAI, at https://github.com/Project-MONAI/GenerativeModels.
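The memory-saving trick is to condition a 2D model on an arbitrary subset of slices from the same volume, so inter-slice dependencies can be learned even between spatially distant slices. Below is a small NumPy sketch of how such condition/target slice pairs might be sampled during training; the function and argument names are illustrative assumptions.

```python
import numpy as np

def sample_condition_target(volume, n_cond=4, n_target=4, rng=None):
    """Randomly split the slice indices of one MRI volume into a conditioning
    subset and a target subset (arbitrary, possibly far-apart combinations)."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.permutation(volume.shape[0])
    cond_idx = np.sort(idx[:n_cond])
    target_idx = np.sort(idx[n_cond:n_cond + n_target])
    return volume[cond_idx], volume[target_idx], cond_idx, target_idx

volume = np.random.rand(160, 192, 192).astype(np.float32)   # toy 3D brain MRI
cond, target, ci, ti = sample_condition_target(volume)
print(ci, ti)   # condition and target slices may be spatially far apart
```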
Affiliation(s)
- Wei Peng: Stanford University, Stanford, CA 94305, USA
- Ehsan Adeli: Stanford University, Stanford, CA 94305, USA
- Sang Hyun Park: Daegu Gyeongbuk Institute of Science and Technology, Daegu, South Korea
- Qingyu Zhao: Stanford University, Stanford, CA 94305, USA
64. Genc O, Morrison MA, Villanueva-Meyer J, Burns B, Hess CP, Banerjee S, Lupo JM. DeepSWI: Using Deep Learning to Enhance Susceptibility Contrast on T2*-Weighted MRI. J Magn Reson Imaging 2023;58:1200-1210. [PMID: 36733222] [PMCID: PMC10443940] [DOI: 10.1002/jmri.28622]
Abstract
BACKGROUND Although susceptibility-weighted imaging (SWI) is the gold standard for visualizing cerebral microbleeds (CMBs) in the brain, the required phase data are not always available clinically. Having a postprocessing tool for generating SWI contrast from T2*-weighted magnitude images is therefore advantageous. PURPOSE To create synthetic SWI images from clinical T2*-weighted magnitude images using deep learning and evaluate the resulting images in terms of similarity to conventional SWI images and ability to detect radiation-associated CMBs. STUDY TYPE Retrospective. POPULATION A total of 145 adults (87 males/58 females; 43.9 years old) with radiation-associated CMBs were used to train (16,093 patches/121 patients), validate (484 patches/4 patients), and test (2420 patches/20 patients) our networks. FIELD STRENGTH/SEQUENCE 3D T2*-weighted, gradient-echo acquired at 3 T. ASSESSMENT Structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), normalized mean-squared error (nMSE), CMB counts, and line profiles were compared among magnitude, original SWI, and synthetic SWI images. Three blinded raters (J.E.V.M., M.A.M., B.B., with 8, 6, and 4 years of experience, respectively) independently rated and classified test-set images. STATISTICAL TESTS Kruskal-Wallis and Wilcoxon signed-rank tests were used to compare SSIM, PSNR, nMSE, and CMB counts among magnitude, original SWI, and predicted synthetic SWI images. Intraclass correlation assessed interrater variability. P values <0.005 were considered statistically significant. RESULTS SSIM values of the predicted vs. original SWI (0.972, 0.995, 0.9864) were statistically significantly higher than those of the magnitude vs. original SWI (0.970, 0.994, 0.9861) for whole brain, vascular structures, and brain tissue regions, respectively; 67% (19/28) of CMBs detected on original SWI images were also detected on the predicted SWI, whereas only 10 (36%) were detected on magnitude images. Overall image quality was similar between the synthetic and original SWI images, with fewer artifacts on the former. CONCLUSIONS This study demonstrated that deep learning can increase the susceptibility contrast present in neurovasculature and CMBs on T2*-weighted magnitude images, without residual susceptibility-induced artifacts. This may be useful for more accurately estimating CMB burden from magnitude images alone. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
Affiliation(s)
- Ozan Genc: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA; Boğaziçi University, Istanbul, Turkey
- Melanie A. Morrison: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA
- Javier Villanueva-Meyer: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA; Department of Neurological Surgery, University of California, San Francisco, CA
- Christopher P. Hess: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA; Department of Neurology, University of California, San Francisco, CA
- Janine M. Lupo: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA; UCSF/UC Berkeley Graduate Group of Bioengineering, University of California, Berkeley and San Francisco, CA
65. Liu J, Pasumarthi S, Duffy B, Gong E, Datta K, Zaharchuk G. One Model to Synthesize Them All: Multi-Contrast Multi-Scale Transformer for Missing Data Imputation. IEEE Trans Med Imaging 2023;42:2577-2591. [PMID: 37030684] [PMCID: PMC10543020] [DOI: 10.1109/tmi.2023.3261707]
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach for tackling this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural networks (CNN) based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of inputs combined with a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks can efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable as it allows us to understand the importance of each input contrast in different regions by analyzing the in-built attention maps of Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms the state-of-the-art methods quantitatively and qualitatively.
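A model that accepts any subset of input contrasts is typically trained by randomly masking contrasts at each iteration, so every availability pattern is seen during training. A small PyTorch sketch of such availability masking; the contrast names and tensor shapes are assumptions, not the MMT code.

```python
import torch

CONTRASTS = ["T1", "T1ce", "T2", "FLAIR"]

def random_availability_mask(batch_size, n_contrasts=len(CONTRASTS)):
    """Random binary mask over contrasts; every sample keeps at least one
    available input and holds out at least one contrast as a synthesis target."""
    while True:
        mask = torch.randint(0, 2, (batch_size, n_contrasts)).bool()
        if mask.any(dim=1).all() and (~mask).any(dim=1).all():
            return mask

images = torch.randn(2, 4, 1, 64, 64)            # (batch, contrast, channel, H, W)
mask = random_availability_mask(2)
inputs = images * mask[:, :, None, None, None]   # zero out the "missing" contrasts
targets = images * (~mask)[:, :, None, None, None]
print(mask)
```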
66. Liu CF, Leigh R, Johnson B, Urrutia V, Hsu J, Xu X, Li X, Mori S, Hillis AE, Faria AV. A large public dataset of annotated clinical MRIs and metadata of patients with acute stroke. Sci Data 2023;10:548. [PMID: 37607929] [PMCID: PMC10444746] [DOI: 10.1038/s41597-023-02457-9]
Abstract
To extract meaningful and reproducible models of brain function from stroke images, for both clinical and research purposes, is a daunting task severely hindered by the great variability of lesion frequency and patterns. Large datasets are therefore imperative, as well as fully automated image post-processing tools to analyze them. The development of such tools, particularly with artificial intelligence, is highly dependent on the availability of large datasets for model training and testing. We present a public dataset of 2,888 multimodal clinical MRIs of patients with acute and early subacute stroke, with manual lesion segmentation and metadata. The dataset provides high-quality, large-scale, human-supervised knowledge to feed artificial intelligence models and enable further development of tools to automate several tasks that currently rely on human labor, such as lesion segmentation, labeling, calculation of disease-relevant scores, and lesion-based studies relating function to frequency lesion maps.
Affiliation(s)
- Chin-Fu Liu: Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Richard Leigh: Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Brenda Johnson: Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Victor Urrutia: Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Johnny Hsu: Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Xin Xu: Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Xin Li: Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Susumu Mori: Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Argye E Hillis: Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA; Department of Physical Medicine & Rehabilitation, and Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Andreia V Faria: Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
67. Tanenbaum LN, Bash SC, Zaharchuk G, Shankaranarayanan A, Chamberlain R, Wintermark M, Beaulieu C, Novick M, Wang L. Deep Learning-Generated Synthetic MR Imaging STIR Spine Images Are Superior in Image Quality and Diagnostically Equivalent to Conventional STIR: A Multicenter, Multireader Trial. AJNR Am J Neuroradiol 2023;44:987-993. [PMID: 37414452] [PMCID: PMC10411840] [DOI: 10.3174/ajnr.a7920]
Abstract
BACKGROUND AND PURPOSE Deep learning image reconstruction allows faster MR imaging acquisitions while matching or exceeding the standard of care and can create synthetic images from existing data sets. This multicenter, multireader spine study evaluated the performance of synthetically created STIR compared with acquired STIR. MATERIALS AND METHODS From a multicenter, multiscanner data base of 328 clinical cases, a nonreader neuroradiologist randomly selected 110 spine MR imaging studies in 93 patients (sagittal T1, T2, and STIR) and classified them into 5 categories of disease and healthy. A DICOM-based deep learning application generated a synthetically created STIR series from the sagittal T1 and T2 images. Five radiologists (3 neuroradiologists, 1 musculoskeletal radiologist, and 1 general radiologist) rated the STIR quality and classified disease pathology (study 1, n = 80). They then assessed the presence or absence of findings typically evaluated with STIR in patients with trauma (study 2, n = 30). The readers evaluated studies with either acquired STIR or synthetically created STIR in a blinded and randomized fashion with a 1-month washout period. The interchangeability of acquired STIR and synthetically created STIR was assessed using a noninferiority threshold of 10%. RESULTS For classification, the decrease in interreader agreement expected from randomly introducing synthetically created STIR was 3.23%. For trauma, there was an overall increase in interreader agreement of +1.9%. The lower bound of confidence for both exceeded the noninferiority threshold, indicating interchangeability of synthetically created STIR with acquired STIR. Both the Wilcoxon signed-rank and t tests showed higher image-quality scores for synthetically created STIR over acquired STIR (P < .0001). CONCLUSIONS Synthetically created STIR spine MR images were diagnostically interchangeable with acquired STIR while providing significantly higher image quality, suggesting potential for routine clinical practice.
Affiliation(s)
- S C Bash: RadNet (L.N.T., S.C.B.), New York, New York
- G Zaharchuk: Stanford University Medical Center (G.Z., C.B.), Stanford, California
- R Chamberlain: Subtle Medical (A.S., R.C., L.W.), Menlo Park, California
- M Wintermark: MD Anderson Cancer Center (M.W.), University of Texas, Houston, Texas
- C Beaulieu: Stanford University Medical Center (G.Z., C.B.), Stanford, California
- M Novick: All-American Teleradiology (M.N.), Bay Village, Ohio
- L Wang: Subtle Medical (A.S., R.C., L.W.), Menlo Park, California
68. Wu J, Guo D, Wang L, Yang S, Zheng Y, Shapey J, Vercauteren T, Bisdas S, Bradford R, Saeed S, Kitchen N, Ourselin S, Zhang S, Wang G. TISS-net: Brain tumor image synthesis and segmentation using cascaded dual-task networks and error-prediction consistency. Neurocomputing 2023;544. [PMID: 37528990] [PMCID: PMC10243514] [DOI: 10.1016/j.neucom.2023.126295]
Abstract
Accurate segmentation of brain tumors from medical images is important for diagnosis and treatment planning, and it often requires multi-modal or contrast-enhanced images. However, in practice some modalities of a patient may be absent. Synthesizing the missing modality has a potential for filling this gap and achieving high segmentation performance. Existing methods often treat the synthesis and segmentation tasks separately or consider them jointly but without effective regularization of the complex joint model, leading to limited performance. We propose a novel brain Tumor Image Synthesis and Segmentation network (TISS-Net) that obtains the synthesized target modality and segmentation of brain tumors end-to-end with high performance. First, we propose a dual-task-regularized generator that simultaneously obtains a synthesized target modality and a coarse segmentation, which leverages a tumor-aware synthesis loss with perceptibility regularization to minimize the high-level semantic domain gap between synthesized and real target modalities. Based on the synthesized image and the coarse segmentation, we further propose a dual-task segmentor that predicts a refined segmentation and error in the coarse segmentation simultaneously, where a consistency between these two predictions is introduced for regularization. Our TISS-Net was validated with two applications: synthesizing FLAIR images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images for Vestibular Schwannoma segmentation. Experimental results showed that our TISS-Net largely improved the segmentation accuracy compared with direct segmentation from the available modalities, and it outperformed state-of-the-art image synthesis-based segmentation methods.
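The dual-task segmentor predicts both a refined segmentation and the error of the coarse segmentation, with a consistency term tying the two predictions together. One plausible formulation of such a consistency loss is sketched below as a generic stand-in; it is an assumption for illustration, not the exact TISS-Net loss.

```python
import torch
import torch.nn.functional as F

def consistency_loss(coarse_seg, refined_seg, predicted_error):
    """Encourage the refined segmentation to agree with the coarse segmentation
    corrected by the predicted (signed) error map."""
    corrected = (coarse_seg + predicted_error).clamp(0, 1)
    return F.mse_loss(refined_seg, corrected)

coarse = torch.rand(1, 1, 64, 64)        # coarse tumor probability map
refined = torch.rand(1, 1, 64, 64)       # refined prediction
error = torch.rand(1, 1, 64, 64) - 0.5   # predicted signed error of the coarse map
loss = consistency_loss(coarse, refined, error)
```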
Affiliation(s)
- Jianghao Wu: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Dong Guo: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Lu Wang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shuojue Yang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yuanjie Zheng: School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Jonathan Shapey: School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Tom Vercauteren: School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
- Sotirios Bisdas: Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, UK
- Robert Bradford: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Shakeel Saeed: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Neil Kitchen: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Sebastien Ourselin: School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
- Shaoting Zhang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; SenseTime Research, Shanghai, China
- Guotai Wang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
69
|
Nykänen O, Nevalainen M, Casula V, Isosalo A, Inkinen S, Nikki M, Lattanzi R, Cloos M, Nissi MJ, Nieminen MT. Deep-Learning-Based Contrast Synthesis From MRF Parameter Maps in the Knee Joint. J Magn Reson Imaging 2023; 58:559-568. [PMID: 36562500 PMCID: PMC10287835 DOI: 10.1002/jmri.28573] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 12/07/2022] [Accepted: 12/07/2022] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND Magnetic resonance fingerprinting (MRF) is a method to speed up acquisition of quantitative MRI data. However, MRF does not usually produce the contrast-weighted images that are required by radiologists, limiting the achievable reduction in total scan time. Contrast synthesis from MRF could significantly decrease the imaging time. PURPOSE To improve the clinical utility of MRF by synthesizing contrast-weighted MR images from the quantitative data provided by MRF, using U-nets trained for the synthesis task with L1 and perceptual loss functions, and their combinations. STUDY TYPE Retrospective. POPULATION Knee joint MRI data from 184 subjects from the Northern Finland 1986 Birth Cohort (ages 33-35, gender distribution not available). FIELD STRENGTH AND SEQUENCE A 3 T, multislice-MRF, proton density (PD)-weighted 3D-SPACE (sampling perfection with application optimized contrasts using different flip angle evolution), fat-saturated T2-weighted 3D-SPACE, water-excited double echo steady state (DESS). ASSESSMENT Data were divided into training, validation, test, and radiologist's assessment sets in the following way: 136 subjects for training, 3 for validation, 3 for testing, and 42 for radiologist's assessment. The synthetic and target images were evaluated by two blinded musculoskeletal radiologists using a 5-point Likert scale and with quantitative error metrics. STATISTICAL TESTS Friedman's test accompanied by post hoc Wilcoxon signed-rank tests, and the intraclass correlation coefficient. A statistical cutoff of P < 0.05, adjusted by Bonferroni correction as necessary, was used. RESULTS The networks trained in the study could synthesize conventional images with high image quality (Likert scores 3-4 on a 5-point scale). Qualitatively, the best synthetic images were produced with the combination of L1 and perceptual loss functions or with perceptual loss alone, while L1 loss alone led to significantly poorer image quality (Likert scores below 3). The interreader and intrareader agreement were high (0.80 and 0.92, respectively) and significant. However, quantitative image quality metrics indicated the best performance for the pure L1 loss. DATA CONCLUSION Synthesizing high-quality contrast-weighted images from MRF data using deep learning is feasible. However, more studies are needed to validate the diagnostic accuracy of these synthetic images. EVIDENCE LEVEL 4. TECHNICAL EFFICACY Stage 1.
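The study contrasts U-nets trained with L1, perceptual, and combined losses. The sketch below shows how such a combined objective can be wired up in PyTorch; it is illustrative only, and the small random-weight feature extractor merely stands in for the pretrained network (e.g. a VGG) that an actual perceptual loss would typically use.

```python
# Sketch of a combined L1 + perceptual (feature-space) synthesis loss (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_net = nn.Sequential(                 # frozen stand-in for a pretrained feature extractor
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
for p in feature_net.parameters():
    p.requires_grad_(False)

def perceptual_loss(pred, target):
    return F.l1_loss(feature_net(pred), feature_net(target))

def combined_loss(pred, target, w_l1=1.0, w_perc=0.1):
    return w_l1 * F.l1_loss(pred, target) + w_perc * perceptual_loss(pred, target)

# toy "synthesised" and "target" contrast-weighted slices
pred = torch.rand(2, 1, 128, 128, requires_grad=True)
target = torch.rand(2, 1, 128, 128)
loss = combined_loss(pred, target)
loss.backward()
print(float(loss))
```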
Collapse
Affiliation(s)
- Olli Nykänen
- Department of Applied Physics, Faculty of Science and Forestry, University of Eastern Finland, Yliopistonranta 1 F, Kuopio, Finland
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
| | - Mika Nevalainen
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Medical Research Center, University of Oulu and Oulu University Hospital, Kajaanintie 50, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Kajaanintie 50, Oulu, Finland
| | - Victor Casula
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Medical Research Center, University of Oulu and Oulu University Hospital, Kajaanintie 50, Oulu, Finland
| | - Antti Isosalo
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
| | - Satu Inkinen
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Helsinki University Hospital, Helsinki, Finland
| | - Marko Nikki
- Department of Diagnostic Radiology, Oulu University Hospital, Kajaanintie 50, Oulu, Finland
| | - Riccardo Lattanzi
- Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, 550 1st Avenue, New York, NY, USA
| | - Martijn Cloos
- Centre for Advanced Imaging, University of Queensland, Building 57 of University Dr, Brisbane, Australia
| | - Mikko J. Nissi
- Department of Applied Physics, Faculty of Science and Forestry, University of Eastern Finland, Yliopistonranta 1 F, Kuopio, Finland
| | - Miika T. Nieminen
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Medical Research Center, University of Oulu and Oulu University Hospital, Kajaanintie 50, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Kajaanintie 50, Oulu, Finland
| |
Collapse
|
70
|
Jiang Y, Zhang S, Chi J. Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss. J Digit Imaging 2023; 36:1794-1807. [PMID: 36856903 PMCID: PMC10406787 DOI: 10.1007/s10278-022-00697-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 07/12/2022] [Accepted: 07/24/2022] [Indexed: 03/02/2023] Open
Abstract
Multi-modal brain magnetic resonance imaging (MRI) data have been widely applied in vision-based brain tumor segmentation methods due to the complementary diagnostic information from different modalities. Since multi-modal image data are likely to be corrupted by noise or artifacts during the practical scanning process, making it difficult to build a universal model for subsequent segmentation and diagnosis with incomplete input data, image completion has become one of the most attractive fields in medical image pre-processing. It can not only assist clinicians in observing the patient's lesion area more intuitively and comprehensively, but also save costs for patients and reduce their psychological pressure during tedious pathological examinations. Recently, many deep learning-based methods have been proposed to complete the multi-modal image data and have provided good performance. However, current methods cannot fully reflect the continuous semantic information between adjacent slices and the structural information of intra-slice features, resulting in limited completion quality and efficiency. To solve these problems, in this work we propose a novel generative adversarial network (GAN) framework, named the random generative adversarial network (RAGAN), to complete the missing T1, T1ce, and FLAIR data from the given T2 modal data in real brain MRI, which consists of the following parts: (1) For the generator, we use T2 modal images and multi-modal classification labels from the same sample for cyclically supervised training of image generation, so as to realize the restoration of arbitrary modal images. (2) For the discriminator, a multi-branch network is proposed in which the primary branch is designed to judge whether a given generated modal image is similar to the target modal image, while the auxiliary branch judges whether its essential visual features are similar to those of the target modal image. We conduct qualitative and quantitative experimental validations on the BraTS 2018 dataset, generating 10,686 MRI images for each missing modality. Real brain tumor morphology images were compared with synthetic brain tumor morphology images using PSNR and SSIM as evaluation metrics. Experiments demonstrate that the brightness, resolution, location, and morphology of brain tissue under different modalities are well reconstructed. Meanwhile, we also use a segmentation network as a further validation experiment, blending synthetic and real images as its input. Our segmentation network adopts the classic UNet, and the segmentation result is 77.58%. To further demonstrate the value of our proposed method, we use the stronger segmentation network RES_UNet with deep supervision as the segmentation model, and the segmentation accuracy is 88.76%. Although our method does not significantly outperform other algorithms, its Dice value is 2% higher than that of the current state-of-the-art data completion algorithm, TC-MGAN.
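The abstract evaluates synthesized modalities with PSNR and SSIM. The following sketch shows how these two metrics are typically computed for a registered real/synthetic slice pair (assuming scikit-image and numpy are installed); the arrays are random stand-ins, not the paper's data or code.

```python
# Sketch of a PSNR/SSIM comparison between a real slice and a synthesised one.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(1)
real = rng.random((240, 240)).astype(np.float32)        # stand-in for a real modality slice
synthetic = np.clip(real + 0.05 * rng.standard_normal(real.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(real, synthetic, data_range=1.0)
ssim = structural_similarity(real, synthetic, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```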
Collapse
Affiliation(s)
- Yang Jiang
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110167, China
| | - Shuang Zhang
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110167, China
| | - Jianning Chi
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110167, China.
- Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, 110167, China.
| |
Collapse
|
71
|
Rezaeijo SM, Chegeni N, Baghaei Naeini F, Makris D, Bakas S. Within-Modality Synthesis and Novel Radiomic Evaluation of Brain MRI Scans. Cancers (Basel) 2023; 15:3565. [PMID: 37509228 PMCID: PMC10377568 DOI: 10.3390/cancers15143565] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Revised: 06/27/2023] [Accepted: 07/05/2023] [Indexed: 07/30/2023] Open
Abstract
One of the most common challenges in brain MRI is the need to perform different MRI sequences depending on the type and properties of the tissues of interest. In this paper, we propose a generative method to translate a T2-Weighted (T2W) Magnetic Resonance Imaging (MRI) volume from a T2-weighted Fluid-Attenuated Inversion Recovery (FLAIR) volume and vice versa using Generative Adversarial Networks (GAN). To evaluate the proposed method, we propose a novel evaluation schema for generative and synthetic approaches based on radiomic features. For the evaluation, we consider 510 paired slices from 102 patients to train two different GAN-based architectures, CycleGAN and the Dual Cycle-Consistent Adversarial Network (DC2Anet). The results indicate that generative methods can produce results similar to the original sequence without significant change in the radiomic features. Therefore, such a method can assist clinicians in making decisions based on the generated images when different sequences are not available or there is not enough time to re-perform the MRI scans.
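The paper's radiomics-based evaluation compares feature values between original and synthesized sequences. The sketch below illustrates the idea with a handful of first-order statistics only; a real radiomic evaluation would use a dedicated feature library and many more features, so every feature choice and array here is a simplified assumption, not the paper's schema.

```python
# Sketch of comparing simple first-order "radiomics-like" features between
# an original slice and a GAN-synthesised one (illustrative only).
import numpy as np
from scipy import stats

def first_order_features(img):
    vals = img.ravel().astype(float)
    hist, _ = np.histogram(vals, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": vals.mean(),
        "variance": vals.var(),
        "skewness": stats.skew(vals),
        "kurtosis": stats.kurtosis(vals),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(2)
real_img = rng.random((160, 160))                       # stand-in for a real slice
synthetic_img = np.clip(real_img + 0.1 * rng.standard_normal(real_img.shape), 0, 1)

real_feats = first_order_features(real_img)
synth_feats = first_order_features(synthetic_img)
for name in real_feats:
    print(f"{name:>9}: real={real_feats[name]:.4f}  synthetic={synth_feats[name]:.4f}")
```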
Collapse
Affiliation(s)
- Seyed Masoud Rezaeijo
- Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran; (S.M.R.)
| | - Nahid Chegeni
- Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran; (S.M.R.)
| | - Fariborz Baghaei Naeini
- Faculty of Engineering, Computing and the Environment, Kingston University, Penrhyn Road Campus, Kingston upon Thames, London KT1 2EE, UK; (F.B.N.); (D.M.)
| | - Dimitrios Makris
- Faculty of Engineering, Computing and the Environment, Kingston University, Penrhyn Road Campus, Kingston upon Thames, London KT1 2EE, UK; (F.B.N.); (D.M.)
| | - Spyridon Bakas
- Faculty of Engineering, Computing and the Environment, Kingston University, Penrhyn Road Campus, Kingston upon Thames, London KT1 2EE, UK; (F.B.N.); (D.M.)
- Richards Medical Research Laboratories, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Floor 7, 3700 Hamilton Walk, Philadelphia, PA 19104, USA
| |
Collapse
|
72
|
Jiao C, Ling D, Bian S, Vassantachart A, Cheng K, Mehta S, Lock D, Zhu Z, Feng M, Thomas H, Scholey JE, Sheng K, Fan Z, Yang W. Contrast-Enhanced Liver Magnetic Resonance Image Synthesis Using Gradient Regularized Multi-Modal Multi-Discrimination Sparse Attention Fusion GAN. Cancers (Basel) 2023; 15:3544. [PMID: 37509207 PMCID: PMC10377331 DOI: 10.3390/cancers15143544] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Revised: 07/03/2023] [Accepted: 07/05/2023] [Indexed: 07/30/2023] Open
Abstract
PURPOSE To provide abdominal contrast-enhanced MR image synthesis, we developed a gradient-regularized multi-modal multi-discrimination sparse attention fusion generative adversarial network (GRMM-GAN) to avoid repeated contrast injections to patients and facilitate adaptive monitoring. METHODS With IRB approval, 165 abdominal MR studies from 61 liver cancer patients were retrospectively solicited from our institutional database. Each study included T2, T1 pre-contrast (T1pre), and T1 contrast-enhanced (T1ce) images. The GRMM-GAN synthesis pipeline consists of a sparse attention fusion network, an image gradient regularizer (GR), and a generative adversarial network with multi-discrimination. The studies were randomly divided into 115 for training, 20 for validation, and 30 for testing. The two pre-contrast MR modalities, T2 and T1pre images, were adopted as inputs in the training phase. The T1ce image at the portal venous phase was used as the output. The synthesized T1ce images were compared with the ground truth T1ce images. The evaluation metrics include peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE). A Turing test and experts' contours evaluated the image synthesis quality. RESULTS The proposed GRMM-GAN model achieved a PSNR of 28.56, an SSIM of 0.869, and an MSE of 83.27. The proposed model showed statistically significant improvements in all metrics tested, with p-values < 0.05, over the state-of-the-art model comparisons. The average Turing test score was 52.33%, which is close to random guessing, supporting the model's effectiveness for clinical application. In the tumor-specific region analysis, the average tumor contrast-to-noise ratio (CNR) of the synthesized MR images was not statistically significantly different from that of the real MR images. The average DICE from real vs. synthetic images was 0.90, compared to the inter-operator DICE of 0.91. CONCLUSION We demonstrated the function of a novel multi-modal MR image synthesis neural network, GRMM-GAN, for T1ce MR synthesis based on pre-contrast T1 and T2 MR images. GRMM-GAN shows promise for avoiding repeated contrast injections during radiation therapy treatment.
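One component named above is the image gradient regularizer (GR). A minimal sketch of such a term, encouraging the synthesized image to match the spatial gradients of the ground-truth T1ce image, is shown below; the weighting and the rest of the GRMM-GAN objective are omitted, and the tensors are toy data rather than the authors' pipeline.

```python
# Sketch of an image-gradient regularisation term added to a pixel-wise loss (illustrative only).
import torch
import torch.nn.functional as F

def image_gradients(x):
    dx = x[..., :, 1:] - x[..., :, :-1]       # horizontal finite differences
    dy = x[..., 1:, :] - x[..., :-1, :]       # vertical finite differences
    return dx, dy

def gradient_regulariser(pred, target):
    pdx, pdy = image_gradients(pred)
    tdx, tdy = image_gradients(target)
    return F.l1_loss(pdx, tdx) + F.l1_loss(pdy, tdy)

pred = torch.rand(1, 1, 96, 96, requires_grad=True)     # toy synthesised T1ce slice
target = torch.rand(1, 1, 96, 96)                       # toy ground-truth T1ce slice
loss = F.l1_loss(pred, target) + 0.5 * gradient_regulariser(pred, target)
loss.backward()
print(float(loss))
```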
Collapse
Affiliation(s)
- Changzhe Jiao
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA (A.V.); (S.M.)
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
| | - Diane Ling
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA (A.V.); (S.M.)
| | - Shelly Bian
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA (A.V.); (S.M.)
| | - April Vassantachart
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA (A.V.); (S.M.)
| | - Karen Cheng
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA (A.V.); (S.M.)
| | - Shahil Mehta
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA (A.V.); (S.M.)
| | - Derrick Lock
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA (A.V.); (S.M.)
| | - Zhenyu Zhu
- Guangzhou Institute of Technology, Xidian University, Guangzhou 510555, China;
| | - Mary Feng
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
| | - Horatio Thomas
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
| | - Jessica E. Scholey
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
| | - Ke Sheng
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
| | - Zhaoyang Fan
- Department of Radiology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
| | - Wensha Yang
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA (A.V.); (S.M.)
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
| |
Collapse
|
73
|
Hu F, Chen AA, Horng H, Bashyam V, Davatzikos C, Alexander-Bloch A, Li M, Shou H, Satterthwaite TD, Yu M, Shinohara RT. Image harmonization: A review of statistical and deep learning methods for removing batch effects and evaluation metrics for effective harmonization. Neuroimage 2023; 274:120125. [PMID: 37084926 PMCID: PMC10257347 DOI: 10.1016/j.neuroimage.2023.120125] [Citation(s) in RCA: 50] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Revised: 04/12/2023] [Accepted: 04/19/2023] [Indexed: 04/23/2023] Open
Abstract
Magnetic resonance imaging and computed tomography from multiple batches (e.g. sites, scanners, datasets, etc.) are increasingly used alongside complex downstream analyses to obtain new insights into the human brain. However, significant confounding due to batch-related technical variation, called batch effects, is present in this data; direct application of downstream analyses to the data may lead to biased results. Image harmonization methods seek to remove these batch effects and enable increased generalizability and reproducibility of downstream results. In this review, we describe and categorize current approaches in statistical and deep learning harmonization methods. We also describe current evaluation metrics used to assess harmonization methods and provide a standardized framework to evaluate newly-proposed methods for effective harmonization and preservation of biological information. Finally, we provide recommendations to end-users to advocate for more effective use of current methods and to methodologists to direct future efforts and accelerate development of the field.
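As a concrete (and deliberately simplified) example of the statistical harmonization family the review covers, the sketch below standardizes a single imaging-derived feature per batch toward a pooled reference mean and variance. Methods such as ComBat additionally preserve covariates of interest and apply empirical-Bayes shrinkage, which this toy version does not; all data are simulated.

```python
# Sketch of a location-scale (mean/variance) batch harmonisation of one feature (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
# one imaging-derived feature (e.g. a regional volume) from two scanners
site_a = 2.5 + 0.20 * rng.standard_normal(100)          # scanner A
site_b = 2.8 + 0.35 * rng.standard_normal(100)          # scanner B: shifted and noisier

features = np.concatenate([site_a, site_b])
batches = np.array([0] * 100 + [1] * 100)

grand_mean, grand_std = features.mean(), features.std()
harmonised = features.copy()
for b in np.unique(batches):
    sel = batches == b
    z = (features[sel] - features[sel].mean()) / features[sel].std()
    harmonised[sel] = z * grand_std + grand_mean        # map each batch to the pooled scale

for b in (0, 1):
    sel = batches == b
    print(f"batch {b}: raw mean={features[sel].mean():.3f}  harmonised mean={harmonised[sel].mean():.3f}")
```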
Collapse
Affiliation(s)
- Fengling Hu
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States.
| | - Andrew A Chen
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States
| | - Hannah Horng
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States
| | - Vishnu Bashyam
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, United States
| | - Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, United States
| | - Aaron Alexander-Bloch
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, United States; Penn-CHOP Lifespan Brain Institute, United States; Department of Child and Adolescent Psychiatry and Behavioral Science, Children's Hospital of Philadelphia, United States
| | - Mingyao Li
- Statistical Center for Single-Cell and Spatial Genomics, Perelman School of Medicine, University of Pennsylvania, United States
| | - Haochang Shou
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States; Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, United States
| | - Theodore D Satterthwaite
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, United States; Penn-CHOP Lifespan Brain Institute, United States; The Penn Lifespan Informatics and Neuroimaging Center, Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, United States
| | - Meichen Yu
- Indiana Alzheimer's Disease Research Center, Indiana University School of Medicine, United States
| | - Russell T Shinohara
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States; Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, United States
| |
Collapse
|
74
|
Güngör A, Dar SU, Öztürk Ş, Korkmaz Y, Bedel HA, Elmas G, Ozbey M, Çukur T. Adaptive diffusion priors for accelerated MRI reconstruction. Med Image Anal 2023; 88:102872. [PMID: 37384951 DOI: 10.1016/j.media.2023.102872] [Citation(s) in RCA: 49] [Impact Index Per Article: 24.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 04/13/2023] [Accepted: 06/12/2023] [Indexed: 07/01/2023]
Abstract
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on par within-domain performance.
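A central ingredient of the adaptation phase described above is a data-consistency constraint on the undersampled acquisition. The sketch below shows the generic single-coil, Cartesian form of such a step, with no learned prior involved; it illustrates the mechanism only and is not the AdaDiff code.

```python
# Sketch of a k-space data-consistency step for undersampled MRI (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
image = rng.random((128, 128))                          # stand-in ground-truth image
kspace_full = np.fft.fft2(image)

mask = rng.random((128, 128)) < 0.3                     # random 30% sampling mask
kspace_meas = kspace_full * mask                        # undersampled acquisition

estimate = np.abs(np.fft.ifft2(kspace_meas))            # naive zero-filled estimate

def data_consistency(estimate, kspace_meas, mask):
    k_est = np.fft.fft2(estimate)
    k_dc = np.where(mask, kspace_meas, k_est)           # keep measured samples exactly
    return np.abs(np.fft.ifft2(k_dc))

refined = data_consistency(estimate, kspace_meas, mask)
err = np.linalg.norm(refined - image) / np.linalg.norm(image)
print(f"relative error after one data-consistency step: {err:.3f}")
```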
Collapse
Affiliation(s)
- Alper Güngör
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; ASELSAN Research Center, Ankara 06200, Turkey
| | - Salman Uh Dar
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Internal Medicine III, Heidelberg University Hospital, Heidelberg 69120, Germany
| | - Şaban Öztürk
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Electrical and Electronics Engineering, Amasya University, Amasya 05100, Turkey
| | - Yilmaz Korkmaz
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
| | - Hasan A Bedel
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
| | - Gokberk Elmas
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
| | - Muzaffer Ozbey
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
| | - Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Program, Bilkent University, Ankara 06800, Turkey.
| |
Collapse
|
75
|
Gong C, Jing C, Chen X, Pun CM, Huang G, Saha A, Nieuwoudt M, Li HX, Hu Y, Wang S. Generative AI for brain image computing and brain network computing: a review. Front Neurosci 2023; 17:1203104. [PMID: 37383107 PMCID: PMC10293625 DOI: 10.3389/fnins.2023.1203104] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Accepted: 05/22/2023] [Indexed: 06/30/2023] Open
Abstract
Recent years have witnessed a significant advancement in brain imaging techniques that offer a non-invasive approach to mapping the structure and function of the brain. Concurrently, generative artificial intelligence (AI) has experienced substantial growth; it involves using existing data to create new content with an underlying pattern similar to that of real-world data. The integration of these two domains, generative AI in neuroimaging, presents a promising avenue for exploring various fields of brain imaging and brain network computing, particularly in the areas of extracting spatiotemporal brain features and reconstructing the topological connectivity of brain networks. Therefore, this study reviews the advanced models, tasks, challenges, and prospects of brain imaging and brain network computing techniques, and intends to provide a comprehensive picture of current generative AI techniques in brain imaging. This review focuses on novel methodological approaches and applications of related new methods. It discusses the fundamental theories and algorithms of four classic generative models and provides a systematic survey and categorization of tasks, including co-registration, super-resolution, enhancement, classification, segmentation, cross-modality, brain network analysis, and brain decoding. The paper also highlights the challenges and future directions of the latest work, with the expectation that future research can benefit from it.
Collapse
Affiliation(s)
- Changwei Gong
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
| | - Changhong Jing
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
| | - Xuhang Chen
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer and Information Science, University of Macau, Macau, China
| | - Chi Man Pun
- Department of Computer and Information Science, University of Macau, Macau, China
| | - Guoli Huang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Ashirbani Saha
- Department of Oncology and School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada
| | - Martin Nieuwoudt
- Institute for Biomedical Engineering, Stellenbosch University, Stellenbosch, South Africa
| | - Han-Xiong Li
- Department of Systems Engineering, City University of Hong Kong, Hong Kong, China
| | - Yong Hu
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong, China
| | - Shuqiang Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
76
|
Jin D, Zheng H, Yuan H. Exploring the Possibility of Measuring Vertebrae Bone Structure Metrics Using MDCT Images: An Unpaired Image-to-Image Translation Method. Bioengineering (Basel) 2023; 10:716. [PMID: 37370647 DOI: 10.3390/bioengineering10060716] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Revised: 06/05/2023] [Accepted: 06/08/2023] [Indexed: 06/29/2023] Open
Abstract
Bone structure metrics are vital for the evaluation of vertebral bone strength. However, the gold standard for measuring bone structure metrics, micro-Computed Tomography (micro-CT), cannot be used in vivo, which hinders the early diagnosis of fragility fractures. This paper used an unpaired image-to-image translation method to capture the mapping between clinical multidetector computed tomography (MDCT) and micro-CT images and then generated micro-CT-like images to measure bone structure metrics. MDCT and micro-CT images were scanned from 75 human lumbar spine specimens and formed training and testing sets. The generator in the model focused on learning both the structure and detailed pattern of bone trabeculae and generating micro-CT-like images, and the discriminator determined whether the generated images were micro-CT images or not. Based on similarity metrics (i.e., SSIM and FID) and bone structure metrics (i.e., bone volume fraction, trabecular separation and trabecular thickness), a set of comparisons were performed. The results show that the proposed method can perform better in terms of both similarity metrics and bone structure metrics and the improvement is statistically significant. In particular, we compared the proposed method with the paired image-to-image method and analyzed the pros and cons of the method used.
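For readers unfamiliar with the bone structure metrics named above, the sketch below shows how bone volume fraction and a crude trabecular thickness proxy could be read off a binarised trabecular mask. The mask, voxel size, and the distance-transform thickness proxy are all illustrative assumptions rather than the paper's pipeline or the standard sphere-fitting local-thickness algorithm.

```python
# Sketch of basic bone-structure metrics from a binary trabecular mask (illustrative only).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
volume = ndimage.gaussian_filter(rng.standard_normal((64, 64, 64)), sigma=2)
bone = volume > np.percentile(volume, 70)               # hypothetical trabecular mask

bv_tv = bone.mean()                                     # bone volume / total volume
dist = ndimage.distance_transform_edt(bone)             # distance to marrow, in voxels
thickness_proxy = 2.0 * dist[bone].mean()               # rough trabecular thickness proxy

voxel_mm = 0.05                                         # assumed isotropic voxel size (mm)
print(f"BV/TV: {bv_tv:.3f}")
print(f"thickness proxy: {thickness_proxy * voxel_mm:.3f} mm")
```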
Collapse
Affiliation(s)
- Dan Jin
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
| | - Han Zheng
- School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China
| | - Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
| |
Collapse
|
77
|
Xu M, Ouyang Y, Yuan Z. Deep Learning Aided Neuroimaging and Brain Regulation. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23114993. [PMID: 37299724 DOI: 10.3390/s23114993] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 05/15/2023] [Accepted: 05/22/2023] [Indexed: 06/12/2023]
Abstract
Currently, deep learning-aided medical imaging is becoming a hot spot of frontier AI applications and a future development trend of precision neuroscience. This review aims to provide comprehensive and informative insights into the recent progress of deep learning and its applications in medical imaging for brain monitoring and regulation. The article starts by providing an overview of the current methods for brain imaging, highlighting their limitations and introducing the potential benefits of using deep learning techniques to overcome these limitations. Then, we delve further into the details of deep learning, explaining the basic concepts and providing examples of how it can be used in medical imaging. One of the key strengths of this review is its thorough discussion of the different types of deep learning models that can be used in medical imaging, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), in the context of magnetic resonance imaging (MRI), positron emission tomography (PET)/computed tomography (CT), electroencephalography (EEG)/magnetoencephalography (MEG), optical imaging, and other imaging modalities. Overall, our review of deep learning-aided medical imaging for brain monitoring and regulation provides a useful reference on the intersection of deep learning-aided neuroimaging and brain regulation.
Collapse
Affiliation(s)
- Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, China
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
| | - Yuanyuan Ouyang
- Nanomicro Sino-Europe Technology Company Limited, Zhuhai 519031, China
- Jiangfeng China-Portugal Technology Co., Ltd., Macau SAR 999078, China
| | - Zhen Yuan
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
| |
Collapse
|
78
|
Qiu D, Cheng Y, Wang X. Medical image super-resolution reconstruction algorithms based on deep learning: A survey. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 238:107590. [PMID: 37201252 DOI: 10.1016/j.cmpb.2023.107590] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 03/21/2023] [Accepted: 05/05/2023] [Indexed: 05/20/2023]
Abstract
BACKGROUND AND OBJECTIVE With the high-resolution (HR) requirements of medical images in clinical practice, super-resolution (SR) reconstruction algorithms based on low-resolution (LR) medical images have become a research hotspot. This type of method can significantly improve image resolution without upgrading hardware equipment, so it is of great significance to review it. METHODS Focusing on SR reconstruction algorithms specific to medical images, organized by imaging modality (magnetic resonance (MR) images, computed tomography (CT) images, and ultrasound images), we first analyzed the research progress of SR reconstruction algorithms in depth, and summarized and compared the different types of algorithms. Secondly, we introduced the evaluation indicators corresponding to the SR reconstruction algorithms. Finally, we outlined the expected development trends of SR reconstruction technology in the medical field. RESULTS Medical image SR reconstruction technology based on deep learning can provide richer lesion information, relieve experts' diagnostic workload, and improve diagnostic efficiency and accuracy. CONCLUSION Medical image SR reconstruction technology based on deep learning helps to improve the quality of medical care, supports experts' diagnoses, and lays a solid foundation for subsequent computerized analysis and identification tasks, which is of great significance for improving experts' diagnostic efficiency and realizing intelligent medical care.
Collapse
Affiliation(s)
- Defu Qiu
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
| | - Yuhu Cheng
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
| | - Xuesong Wang
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China.
| |
Collapse
|
79
|
Yang J, Li XX, Liu F, Nie D, Lio P, Qi H, Shen D. Fast Multi-Contrast MRI Acquisition by Optimal Sampling of Information Complementary to Pre-Acquired MRI Contrast. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1363-1373. [PMID: 37015608 DOI: 10.1109/tmi.2022.3227262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Recent studies on multi-contrast MRI reconstruction have demonstrated the potential of further accelerating MRI acquisition by exploiting correlation between contrasts. Most of the state-of-the-art approaches have achieved improvement through the development of network architectures for fixed under-sampling patterns, without considering inter-contrast correlation in the under-sampling pattern design. On the other hand, sampling pattern learning methods have shown better reconstruction performance than those with fixed under-sampling patterns. However, most under-sampling pattern learning algorithms are designed for single contrast MRI without exploiting complementary information between contrasts. To this end, we propose a framework to optimize the under-sampling pattern of a target MRI contrast which complements the acquired fully-sampled reference contrast. Specifically, a novel image synthesis network is introduced to extract the redundant information contained in the reference contrast, which is exploited in the subsequent joint pattern optimization and reconstruction network. We have demonstrated superior performance of our learned under-sampling patterns on both public and in-house datasets, compared to the commonly used under-sampling patterns and state-of-the-art methods that jointly optimize the reconstruction network and the under-sampling patterns, up to 8-fold under-sampling factor.
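For context, the sketch below builds a fixed variable-density Cartesian undersampling mask of the kind such methods start from; the cited work instead learns the pattern jointly with the reconstruction and the reference-contrast synthesis network, so this is only a baseline illustration with made-up parameters.

```python
# Sketch of a fixed 1D variable-density Cartesian under-sampling mask (illustrative only).
import numpy as np

def variable_density_mask(n_lines=256, accel=4, center_frac=0.08, seed=0):
    rng = np.random.default_rng(seed)
    n_keep = n_lines // accel
    n_center = int(center_frac * n_lines)
    mask = np.zeros(n_lines, dtype=bool)
    c0 = n_lines // 2 - n_center // 2
    mask[c0:c0 + n_center] = True                       # always acquire the k-space centre
    # sampling density decays with distance from the centre
    dist = np.abs(np.arange(n_lines) - n_lines / 2)
    prob = (1.0 - dist / dist.max()) ** 3
    prob[mask] = 0
    prob /= prob.sum()
    extra = rng.choice(n_lines, size=n_keep - n_center, replace=False, p=prob)
    mask[extra] = True
    return mask

mask = variable_density_mask()
print(f"acceleration ~{mask.size / mask.sum():.1f}x, centre lines kept: {mask[120:136].all()}")
```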
Collapse
|
80
|
Al Khalil Y, Amirrajab S, Lorenz C, Weese J, Pluim J, Breeuwer M. Reducing segmentation failures in cardiac MRI via late feature fusion and GAN-based augmentation. Comput Biol Med 2023; 161:106973. [PMID: 37209615 DOI: 10.1016/j.compbiomed.2023.106973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 04/05/2023] [Accepted: 04/22/2023] [Indexed: 05/22/2023]
Abstract
Cardiac magnetic resonance (CMR) image segmentation is an integral step in the analysis of cardiac function and diagnosis of heart related diseases. While recent deep learning-based approaches in automatic segmentation have shown great promise to alleviate the need for manual segmentation, most of these are not applicable to realistic clinical scenarios. This is largely due to training on mainly homogeneous datasets, without variation in acquisition, which typically occurs in multi-vendor and multi-site settings, as well as pathological data. Such approaches frequently exhibit a degradation in prediction performance, particularly on outlier cases commonly associated with difficult pathologies, artifacts and extensive changes in tissue shape and appearance. In this work, we present a model aimed at segmenting all three cardiac structures in a multi-center, multi-disease and multi-view scenario. We propose a pipeline, addressing different challenges with segmentation of such heterogeneous data, consisting of heart region detection, augmentation through image synthesis and a late-fusion segmentation approach. Extensive experiments and analysis demonstrate the ability of the proposed approach to tackle the presence of outlier cases during both training and testing, allowing for better adaptation to unseen and difficult examples. Overall, we show that the effective reduction of segmentation failures on outlier cases has a positive impact on not only the average segmentation performance, but also on the estimation of clinical parameters, leading to a better consistency in derived metrics.
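The pipeline's final stage is a late-fusion segmentation. The sketch below shows the generic form of such a step, averaging per-pixel class probabilities from two models before the argmax; the untrained convolution layers merely stand in for real networks and are not the authors' detection, augmentation, or segmentation models.

```python
# Sketch of late fusion of two segmentation models' class probabilities (illustrative only).
import torch
import torch.nn as nn

n_classes = 4                                           # background + 3 cardiac structures
model_a = nn.Conv2d(1, n_classes, kernel_size=3, padding=1)   # stand-in "model trained on real data"
model_b = nn.Conv2d(1, n_classes, kernel_size=3, padding=1)   # stand-in "model trained with synthetic data"

image = torch.rand(1, 1, 128, 128)                      # one short-axis CMR slice
with torch.no_grad():
    prob_a = model_a(image).softmax(dim=1)
    prob_b = model_b(image).softmax(dim=1)
    fused = 0.5 * (prob_a + prob_b)                     # late fusion of the two predictions
    segmentation = fused.argmax(dim=1)

print(segmentation.shape, segmentation.unique())
```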
Collapse
Affiliation(s)
- Yasmina Al Khalil
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands.
| | - Sina Amirrajab
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
| | | | | | - Josien Pluim
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
| | - Marcel Breeuwer
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Healthcare, MR R&D - Clinical Science, Best, The Netherlands
| |
Collapse
|
81
|
van Tulder G, de Bruijne M. Unpaired, unsupervised domain adaptation assumes your domains are already similar. Med Image Anal 2023; 87:102825. [PMID: 37116296 DOI: 10.1016/j.media.2023.102825] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 03/30/2023] [Accepted: 04/17/2023] [Indexed: 04/30/2023]
Abstract
Unsupervised domain adaptation is a popular method in medical image analysis, but it can be tricky to make it work: without labels to link the domains, domains must be matched using feature distributions. If there is no additional information, this often leaves a choice between multiple possibilities to map the data that may be equally likely but not equally correct. In this paper we explore the fundamental problems that may arise in unsupervised domain adaptation, and discuss conditions that might still make it work. Focusing on medical image analysis, we argue that images from different domains may have similar class balance, similar intensities, similar spatial structure, or similar textures. We demonstrate how these implicit conditions can affect domain adaptation performance in experiments with synthetic data, MNIST digits, and medical images. We observe that practical success of unsupervised domain adaptation relies on existing similarities in the data, and is anything but guaranteed in the general case. Understanding these implicit assumptions is a key step in identifying potential problems in domain adaptation and improving the reliability of the results.
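Unsupervised methods of this kind typically align domains by minimizing a distance between feature distributions. The sketch below computes one such distance, a maximum mean discrepancy (MMD) with an RBF kernel, between two synthetic feature sets; it is a generic illustration of the matching objective the paper analyzes, not code from the paper.

```python
# Sketch of an RBF-kernel maximum mean discrepancy between two feature sets (illustrative only).
import numpy as np

def rbf_mmd(x, y, sigma=1.0):
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(6)
source = rng.standard_normal((200, 8))                  # features from domain A
target = rng.standard_normal((200, 8)) + 0.5            # shifted features from domain B

print(f"MMD before alignment: {rbf_mmd(source, target):.4f}")
print(f"MMD after mean-shift : {rbf_mmd(source, target - target.mean(0) + source.mean(0)):.4f}")
```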
Collapse
Affiliation(s)
- Gijs van Tulder
- Data Science group, Faculty of Science, Radboud University, Postbus 9010, 6500 GL Nijmegen, The Netherlands; Biomedical Imaging Group, Erasmus MC, Postbus 2040, 3000 CA Rotterdam, The Netherlands.
| | - Marleen de Bruijne
- Biomedical Imaging Group, Erasmus MC, Postbus 2040, 3000 CA Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100 Copenhagen, Denmark.
| |
Collapse
|
82
|
Xia Y, Ravikumar N, Lassila T, Frangi AF. Virtual high-resolution MR angiography from non-angiographic multi-contrast MRIs: synthetic vascular model populations for in-silico trials. Med Image Anal 2023; 87:102814. [PMID: 37196537 DOI: 10.1016/j.media.2023.102814] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Revised: 04/04/2023] [Accepted: 04/08/2023] [Indexed: 05/19/2023]
Abstract
Despite success on multi-contrast MR image synthesis, generating specific modalities remains challenging. Those include Magnetic Resonance Angiography (MRA) that highlights details of vascular anatomy using specialised imaging sequences for emphasising inflow effect. This work proposes an end-to-end generative adversarial network that can synthesise anatomically plausible, high-resolution 3D MRA images using commonly acquired multi-contrast MR images (e.g. T1/T2/PD-weighted MR images) for the same subject whilst preserving the continuity of vascular anatomy. A reliable technique for MRA synthesis would unleash the research potential of very few population databases with imaging modalities (such as MRA) that enable quantitative characterisation of whole-brain vasculature. Our work is motivated by the need to generate digital twins and virtual patients of cerebrovascular anatomy for in-silico studies and/or in-silico trials. We propose a dedicated generator and discriminator that leverage the shared and complementary features of multi-source images. We design a composite loss function for emphasising vascular properties by minimising the statistical difference between the feature representations of the target images and the synthesised outputs in both 3D volumetric and 2D projection domains. Experimental results show that the proposed method can synthesise high-quality MRA images and outperform the state-of-the-art generative models both qualitatively and quantitatively. The importance assessment reveals that T2 and PD-weighted images are better predictors of MRA images than T1; and PD-weighted images contribute to better visibility of small vessel branches towards the peripheral regions. In addition, the proposed approach can generalise to unseen data acquired at different imaging centres with different scanners, whilst synthesising MRAs and vascular geometries that maintain vessel continuity. The results show the potential for use of the proposed approach to generating digital twin cohorts of cerebrovascular anatomy at scale from structural MR images typically acquired in population imaging initiatives.
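The composite loss described above combines volumetric and 2D projection-domain terms. The sketch below illustrates the projection-domain idea with a maximum-intensity-projection (MIP) L1 term on random stand-in volumes; the weighting and the feature-space statistics actually used in the paper are not reproduced here.

```python
# Sketch of a volumetric + MIP projection-domain loss for MRA synthesis (illustrative only).
import numpy as np

def mip_l1(vol_a, vol_b):
    # L1 between maximum-intensity projections along each of the three axes
    return sum(np.abs(vol_a.max(axis=ax) - vol_b.max(axis=ax)).mean() for ax in range(3))

rng = np.random.default_rng(7)
real_mra = rng.random((64, 64, 64))                     # stand-in for a real MRA volume
synth_mra = np.clip(real_mra + 0.05 * rng.standard_normal(real_mra.shape), 0, 1)

volumetric_term = np.abs(real_mra - synth_mra).mean()
projection_term = mip_l1(real_mra, synth_mra)
print(f"composite loss (sketch): {volumetric_term + 0.5 * projection_term:.4f}")
```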
Collapse
Affiliation(s)
- Yan Xia
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK.
| | - Nishant Ravikumar
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK
| | - Toni Lassila
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK
| | - Alejandro F Frangi
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK; Leeds Institute for Cardiovascular and Metabolic Medicine (LICAMM), School of Medicine, University of Leeds, Leeds, UK; Medical Imaging Research Center (MIRC), Cardiovascular Science and Electronic Engineering Departments, KU Leuven, Leuven, Belgium; Alan Turing Institute, London, UK
| |
Collapse
|
83
|
Shimron E, Perlman O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering (Basel) 2023; 10:492. [PMID: 37106679 PMCID: PMC10135995 DOI: 10.3390/bioengineering10040492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 04/12/2023] [Accepted: 04/18/2023] [Indexed: 04/29/2023] Open
Abstract
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation [...].
Collapse
Affiliation(s)
- Efrat Shimron
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
| | - Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
| |
Collapse
|
84
|
Lin H, Figini M, D'Arco F, Ogbole G, Tanno R, Blumberg SB, Ronan L, Brown BJ, Carmichael DW, Lagunju I, Cross JH, Fernandez-Reyes D, Alexander DC. Low-field magnetic resonance image enhancement via stochastic image quality transfer. Med Image Anal 2023; 87:102807. [PMID: 37120992 DOI: 10.1016/j.media.2023.102807] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 01/18/2023] [Accepted: 03/30/2023] [Indexed: 05/02/2023]
Abstract
Low-field (<1T) magnetic resonance imaging (MRI) scanners remain in widespread use in low- and middle-income countries (LMICs) and are commonly used for some applications in higher income countries, e.g. for small child patients with obesity, claustrophobia, implants, or tattoos. However, low-field MR images commonly have lower resolution and poorer contrast than images from high field (1.5T, 3T, and above). Here, we present Image Quality Transfer (IQT) to enhance low-field structural MRI by estimating from a low-field image the image we would have obtained from the same subject at high field. Our approach uses (i) a stochastic low-field image simulator as the forward model to capture uncertainty and variation in the contrast of low-field images corresponding to a particular high-field image, and (ii) an anisotropic U-Net variant specifically designed for the IQT inverse problem. We evaluate the proposed algorithm both in simulation and using multi-contrast (T1-weighted, T2-weighted, and fluid attenuated inversion recovery (FLAIR)) clinical low-field MRI data from an LMIC hospital. We show the efficacy of IQT in improving the contrast and resolution of low-field MR images. We demonstrate that IQT-enhanced images have potential for enhancing visualisation of anatomical structures and pathological lesions of clinical relevance from the perspective of radiologists. IQT is shown to be capable of boosting the diagnostic value of low-field MRI, especially in low-resource settings.
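The forward model in (i) maps one high-field image to many plausible low-field appearances. The sketch below is a toy version of such a stochastic simulator (blur, down/up-sampling, random contrast compression, added noise); all parameters are invented for illustration and do not correspond to the paper's calibrated simulator.

```python
# Sketch of a stochastic low-field "forward model" applied to a high-field slice (illustrative only).
import numpy as np
from scipy import ndimage

def simulate_low_field(hf_img, rng):
    img = ndimage.gaussian_filter(hf_img, sigma=1.5)            # resolution loss
    img = ndimage.zoom(ndimage.zoom(img, 0.5), 2.0)             # down- then up-sample
    gamma = rng.uniform(0.6, 0.9)                               # random contrast compression
    img = np.clip(img, 0, None) ** gamma
    img = img + rng.normal(0, 0.05, img.shape)                  # lower SNR
    return img

rng = np.random.default_rng(8)
high_field = rng.random((128, 128))                             # stand-in high-field slice
low_field_sim = simulate_low_field(high_field, rng)
print(high_field.shape, low_field_sim.shape, float(low_field_sim.std()))
```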
Collapse
Affiliation(s)
- Hongxiang Lin
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, Zhejiang, China; Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom.
| | - Matteo Figini
- Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
| | - Felice D'Arco
- Department of Radiology, Great Ormond Street Hospital for Children, London WC1N 3JH, United Kingdom
| | - Godwin Ogbole
- Department of Radiology, College of Medicine, University of Ibadan, Ibadan 200284, Nigeria
| | | | - Stefano B Blumberg
- Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom; Centre for Artificial Intelligence, University College London, London WC1E 6BT, United Kingdom
| | - Lisa Ronan
- Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
| | - Biobele J Brown
- Department of Paediatrics, College of Medicine, University of Ibadan, Ibadan 200284, Nigeria
| | - David W Carmichael
- School of Biomedical Engineering & Imaging Sciences, King's College London, London NW3 3ES, United Kingdom; UCL Great Ormond Street Institute of Child Health, London WC1N 3JH, United Kingdom
| | - Ikeoluwa Lagunju
- Department of Paediatrics, College of Medicine, University of Ibadan, Ibadan 200284, Nigeria
| | - Judith Helen Cross
- UCL Great Ormond Street Institute of Child Health, London WC1N 3JH, United Kingdom
| | - Delmiro Fernandez-Reyes
- Department of Computer Science, University College London, London WC1E 6BT, United Kingdom; Department of Paediatrics, College of Medicine, University of Ibadan, Ibadan 200284, Nigeria
| | - Daniel C Alexander
- Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
| |
Collapse
|
85
|
Kebaili A, Lapuyade-Lahorgue J, Ruan S. Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review. J Imaging 2023; 9:81. [PMID: 37103232 PMCID: PMC10144738 DOI: 10.3390/jimaging9040081] [Citation(s) in RCA: 36] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 03/31/2023] [Accepted: 04/07/2023] [Indexed: 04/28/2023] Open
Abstract
Deep learning has become a popular tool for medical image analysis, but the limited availability of training data remains a major challenge, particularly in the medical field where data acquisition can be costly and subject to privacy regulations. Data augmentation techniques offer a solution by artificially increasing the number of training samples, but these techniques often produce limited and unconvincing results. To address this issue, a growing number of studies have proposed the use of deep generative models to generate more realistic and diverse data that conform to the true distribution of the data. In this review, we focus on three types of deep generative models for medical image augmentation: variational autoencoders, generative adversarial networks, and diffusion models. We provide an overview of the current state of the art in each of these models and discuss their potential for use in different downstream tasks in medical imaging, including classification, segmentation, and cross-modal translation. We also evaluate the strengths and limitations of each model and suggest directions for future research in this field. Our goal is to provide a comprehensive review about the use of deep generative models for medical image augmentation and to highlight the potential of these models for improving the performance of deep learning algorithms in medical image analysis.
Collapse
Affiliation(s)
| | | | - Su Ruan
- Université Rouen Normandie, INSA Rouen Normandie, Université Le Havre Normandie, Normandie Univ, LITIS UR 4108, F-76000 Rouen, France
| |
Collapse
|
86
|
Wang R, Bashyam V, Yang Z, Yu F, Tassopoulou V, Chintapalli SS, Skampardoni I, Sreepada LP, Sahoo D, Nikita K, Abdulkadir A, Wen J, Davatzikos C. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. Neuroimage 2023; 269:119898. [PMID: 36702211 PMCID: PMC9992336 DOI: 10.1016/j.neuroimage.2023.119898] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Revised: 12/16/2022] [Accepted: 01/21/2023] [Indexed: 01/25/2023] Open
Abstract
Generative adversarial networks (GANs) are one powerful type of deep learning models that have been successfully utilized in numerous fields. They belong to the broader family of generative methods, which learn to generate realistic data with a probabilistic model by learning distributions from real samples. In the clinical context, GANs have shown enhanced capabilities in capturing spatially complex, nonlinear, and potentially subtle disease effects compared to traditional generative methods. This review critically appraises the existing literature on the applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of various GAN methods for each application and further discuss the main challenges, open questions, and promising future directions of leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can be leveraged to support clinical decision making and contribute to a better understanding of the structural and functional patterns of brain diseases.
Collapse
Affiliation(s)
- Rongguang Wang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA.
| | - Vishnu Bashyam
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
| | - Zhijian Yang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
| | - Fanyang Yu
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
| | - Vasiliki Tassopoulou
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
| | - Sai Spandana Chintapalli
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
| | - Ioanna Skampardoni
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
| | - Lasya P Sreepada
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
| | - Dushyant Sahoo
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
| | - Konstantina Nikita
- School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
| | - Ahmed Abdulkadir
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Junhao Wen
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
| | - Christos Davatzikos
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA.
| |
Collapse
|
87
|
Pal S, Dutta S, Maitra R. Personalized synthetic MR imaging with deep learning enhancements. Magn Reson Med 2023; 89:1634-1643. [PMID: 36420834 PMCID: PMC10100029 DOI: 10.1002/mrm.29527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 10/25/2022] [Accepted: 10/27/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE Personalized synthetic MRI (syn-MRI) uses MR images of an individual subject acquired at a few design parameters (echo time, repetition time, flip angle) to obtain underlying parametric (ρ, T1, T2) maps, from which MR images of that individual at other design parameter settings are synthesized. However, classical methods that use least-squares (LS) or maximum likelihood estimators (MLE) are unsatisfactory at higher noise levels because the underlying inverse problem is ill-posed. This article provides a pipeline to enhance the synthesis of such images in three dimensions (3D) using a deep learning (DL) neural network architecture for spatial regularization in a personalized setting where having more than a few training images is impractical. METHODS Our DL enhancements employ a Deep Image Prior (DIP) with a U-net type denoising architecture that includes situations with minimal training data, such as personalized syn-MRI. We provide a general workflow for syn-MRI from three or more training images. Our workflow, called DIPsyn-MRI, uses DIP to enhance training images, then obtains parametric images using LS or MLE before synthesizing images at desired design parameter settings. DIPsyn-MRI is implemented in our publicly available Python package DeepSynMRI available at: https://github.com/StatPal/DeepSynMRI. RESULTS We demonstrate feasibility and improved performance of DIPsyn-MRI on 3D datasets acquired using the Brainweb interface for spin-echo and FLASH imaging sequences, at different noise levels. Our DL enhancements improve syn-MRI in the presence of different intensity nonuniformity levels of the magnetic field, for all but very low noise levels. CONCLUSION This article provides recipes and software to realistically facilitate DL-enhanced personalized syn-MRI.
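As a minimal, hedged sketch of the Deep Image Prior idea that DIPsyn-MRI builds on (an untrained network fitted to a single noisy image, with early stopping acting as the regularizer), the PyTorch fragment below is illustrative only; the authors' actual U-net architecture and full workflow are in their DeepSynMRI package.

```python
import torch
import torch.nn as nn

# A deliberately small stand-in network; the paper's denoiser is a U-net-type architecture.
net = nn.Sequential(
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

noisy = torch.rand(1, 1, 128, 128)      # stand-in for one noisy training image
z = torch.randn(1, 16, 128, 128)        # fixed random input, never updated
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                  # early stopping is the key regularizer:
    opt.zero_grad()                      # stop before the network starts fitting the noise
    loss = nn.functional.mse_loss(net(z), noisy)
    loss.backward()
    opt.step()

denoised = net(z).detach()               # DIP estimate of the clean image
```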
Collapse
Affiliation(s)
- Subrata Pal
- Department of Statistics, Iowa State University, Ames, Iowa, USA
| | - Somak Dutta
- Department of Statistics, Iowa State University, Ames, Iowa, USA
| | - Ranjan Maitra
- Department of Statistics, Iowa State University, Ames, Iowa, USA
| |
Collapse
|
88
|
Abdusalomov AB, Nasimov R, Nasimova N, Muminov B, Whangbo TK. Evaluating Synthetic Medical Images Using Artificial Intelligence with the GAN Algorithm. SENSORS (BASEL, SWITZERLAND) 2023; 23:3440. [PMID: 37050503 PMCID: PMC10098960 DOI: 10.3390/s23073440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 03/18/2023] [Accepted: 03/18/2023] [Indexed: 06/19/2023]
Abstract
In recent years, considerable work has been conducted on the development of synthetic medical images, but there are no satisfactory methods for evaluating their medical suitability. Existing methods mainly evaluate the quality of noise in the images and the similarity of the images to the real images used to generate them. For this purpose, they use feature maps of images extracted in different ways, or the distribution of the image set. The proximity of the synthetic images to the real set is then evaluated using different distance metrics. However, it is not possible to determine whether a single synthetic image was generated repeatedly, or whether the synthetic set exactly repeats the training set. In addition, most evaluation metrics take a long time to calculate. Taking these issues into account, we have proposed a method that can quantitatively and qualitatively evaluate synthetic images. This method is a combination of two methods, namely, FMD and CNN-based evaluation methods. The evaluation methods were compared with the FID method, and it was found that the FMD method has a great advantage in terms of speed, while the CNN method is able to estimate more accurately. To evaluate the reliability of the methods, a dataset of different real images was used as a check.
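For orientation, FID-style metrics (to which the proposed FMD is compared) reduce to the Fréchet distance between Gaussian fits of real and synthetic feature embeddings; the NumPy/SciPy sketch below computes that distance for arbitrary feature matrices. The choice of feature extractor, which is what distinguishes FMD from FID, is not reproduced here.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_synth):
    """Fréchet distance between Gaussian fits of two feature sets (rows = samples)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_synth.mean(axis=0)
    cov1 = np.cov(feats_real, rowvar=False)
    cov2 = np.cov(feats_synth, rowvar=False)
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):          # small imaginary parts can appear numerically
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# e.g. features from any fixed embedding network; random stand-ins shown here:
d = frechet_distance(np.random.randn(200, 64), np.random.randn(200, 64))
```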
Collapse
Affiliation(s)
| | - Rashid Nasimov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
| | - Nigorakhon Nasimova
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
| | - Bahodir Muminov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
| | - Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
| |
Collapse
|
89
|
Joseph J, Biji I, Babu N, Pournami PN, Jayaraj PB, Puzhakkal N, Sabu C, Patel V. Fan beam CT image synthesis from cone beam CT image using nested residual UNet based conditional generative adversarial network. Phys Eng Sci Med 2023; 46:703-717. [PMID: 36943626 DOI: 10.1007/s13246-023-01244-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 03/09/2023] [Indexed: 03/23/2023]
Abstract
A radiotherapy technique called Image-Guided Radiation Therapy adopts frequent imaging throughout a treatment session. Fan Beam Computed Tomography (FBCT) based planning followed by Cone Beam Computed Tomography (CBCT) based radiation delivery drastically improved treatment accuracy. Further gains in terms of radiation exposure and cost could be achieved if FBCT were replaced with CBCT. This paper proposes a Conditional Generative Adversarial Network (CGAN) for CBCT-to-FBCT synthesis. Specifically, a new architecture called Nested Residual UNet (NR-UNet) is introduced as the generator of the CGAN. A composite loss function, which comprises adversarial loss, Mean Squared Error (MSE), and Gradient Difference Loss (GDL), is used with the generator. The CGAN utilises the inter-slice dependency in the input by taking three consecutive CBCT slices to generate an FBCT slice. The model is trained using Head-and-Neck (H&N) FBCT-CBCT images of 53 cancer patients. The synthetic images exhibited a Peak Signal-to-Noise Ratio of 34.04±0.93 dB, a Structural Similarity Index Measure of 0.9751±0.001, and a Mean Absolute Error of 14.81±4.70 HU. On average, the proposed model yields a Contrast-to-Noise Ratio four times better than that of the input CBCT images. The model also minimised the MSE and alleviated blurriness. Compared to the CBCT-based plan, the synthetic image results in a treatment plan closer to the FBCT-based plan. The three-slice to single-slice translation captures the three-dimensional contextual information in the input while avoiding the computational complexity of a fully three-dimensional image synthesis model. Furthermore, the results demonstrate that the proposed model is superior to the state-of-the-art methods.
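A hedged sketch of a composite generator loss of the kind described (adversarial + MSE + GDL) is given below in PyTorch; the relative weights and the exact GDL formulation used in the paper are not specified in the abstract, so the values and form here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_difference_loss(pred, target):
    """2D gradient difference loss: penalizes mismatched image gradients (sharpens edges)."""
    dy_p = pred[:, :, 1:, :] - pred[:, :, :-1, :]
    dy_t = target[:, :, 1:, :] - target[:, :, :-1, :]
    dx_p = pred[:, :, :, 1:] - pred[:, :, :, :-1]
    dx_t = target[:, :, :, 1:] - target[:, :, :, :-1]
    return (dy_p.abs() - dy_t.abs()).abs().mean() + (dx_p.abs() - dx_t.abs()).abs().mean()

def generator_loss(pred_fbct, real_fbct, disc_logits_on_pred,
                   w_adv=1.0, w_mse=10.0, w_gdl=10.0):
    """Composite generator loss: adversarial + MSE + GDL (weights are illustrative)."""
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_pred, torch.ones_like(disc_logits_on_pred))
    mse = F.mse_loss(pred_fbct, real_fbct)
    gdl = gradient_difference_loss(pred_fbct, real_fbct)
    return w_adv * adv + w_mse * mse + w_gdl * gdl
```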
Collapse
Affiliation(s)
- Jiffy Joseph
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India.
| | - Ivan Biji
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Naveen Babu
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - P N Pournami
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - P B Jayaraj
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Niyas Puzhakkal
- Department of Medical Physics, MVR Cancer Centre & Research Institute, Poolacode, Calicut, Kerala, 673601, India
| | - Christy Sabu
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Vedkumar Patel
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| |
Collapse
|
90
|
Kawahara D, Yoshimura H, Matsuura T, Saito A, Nagata Y. MRI image synthesis for fluid-attenuated inversion recovery and diffusion-weighted images with deep learning. Phys Eng Sci Med 2023; 46:313-323. [PMID: 36715853 DOI: 10.1007/s13246-023-01220-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2020] [Accepted: 01/10/2023] [Indexed: 01/31/2023]
Abstract
This study aims to synthesize fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted images (DWI) with a deep conditional adversarial network from T1- and T2-weighted magnetic resonance imaging (MRI) images. A total of 1980 images of 102 patients were split into two datasets: 1470 (68 patients) in a training set and 510 (34 patients) in a test set. The prediction framework was based on a convolutional neural network with a generator and discriminator. T1-weighted, T2-weighted, and composite images were used as inputs. The digital imaging and communications in medicine (DICOM) images were converted to 8-bit red-green-blue images. The red and blue channels of the composite images were assigned to 8-bit grayscale pixel values in T1-weighted images, and the green channel was assigned to those in T2-weighted images. The predicted FLAIR and DWI images depicted the same subjects as the inputs. In the results, the prediction model with composite MRI input images showed the smallest relative mean absolute error (rMAE) and largest mutual information (MI) for the DWI image, and the largest relative mean-square error (rMSE), relative root-mean-square error (rRMSE), and peak signal-to-noise ratio (PSNR) for the FLAIR image. For the FLAIR image, the prediction model with the T2-weighted MRI input images generated more accurate synthesis results than that with the T1-weighted inputs. The proposed image synthesis framework can improve the versatility and quality of multi-contrast MRI without extra scans. The composite input MRI image contributes to synthesizing the multi-contrast MRI image efficiently.
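The channel assignment described above can be illustrated with a few lines of NumPy; the min-max rescaling used here is an assumption, since the abstract does not state how the 8-bit conversion was performed.

```python
import numpy as np

def to_uint8(img):
    """Rescale one image slice to 8-bit grayscale (simple min-max normalization)."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return (img * 255).astype(np.uint8)

def make_composite(t1_slice, t2_slice):
    """Composite RGB input: red and blue carry T1, green carries T2 (as described above)."""
    r = b = to_uint8(t1_slice)
    g = to_uint8(t2_slice)
    return np.stack([r, g, b], axis=-1)   # H x W x 3, dtype uint8

composite = make_composite(np.random.rand(256, 256), np.random.rand(256, 256))
```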
Collapse
Affiliation(s)
- Daisuke Kawahara
- Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan.
| | - Hisanori Yoshimura
- Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Department of Radiology, National Hospital Organization Kure Medical Center, Hiroshima, 737-0023, Japan
| | - Takaaki Matsuura
- Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
| | - Akito Saito
- Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
| | - Yasushi Nagata
- Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
| |
Collapse
|
91
|
Chen X, Cao Y, Zhang K, Wang Z, Xie X, Wang Y, Men K, Dai J. Technical note: A method to synthesize magnetic resonance images in different patient rotation angles with deep learning for gantry-free radiotherapy. Med Phys 2023; 50:1746-1755. [PMID: 36135718 DOI: 10.1002/mp.15981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 08/29/2022] [Accepted: 08/31/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Recently, patient rotating devices for gantry-free radiotherapy, a new approach to implement external beam radiotherapy, have been introduced. When a patient is rotated in the horizontal position, gravity causes anatomic deformation. For treatment planning, one feasible method is to acquire simulation images at different horizontal rotation angles. PURPOSE This study aimed to investigate the feasibility of synthesizing magnetic resonance (MR) images at patient rotation angles of 180° (prone position) and 90° (lateral position) from those at a rotation angle of 0° (supine position) using deep learning. METHODS This study included 23 healthy male volunteers. They underwent MR imaging (MRI) in the supine position and then in the prone (23 volunteers) and lateral (16 volunteers) positions. T1-weighted fast spin echo was performed for all positions with the same parameters. Two two-dimensional deep learning networks, pix2pix generative adversarial network (pix2pix GAN) and CycleGAN, were developed for synthesizing MR images in the prone and lateral positions from those in the supine position, respectively. For the evaluation of the models, leave-one-out cross-validation was performed. The mean absolute error (MAE), Dice similarity coefficient (DSC), and Hausdorff distance (HD) were used to determine the agreement between the prediction and ground truth for the entire body and four specific organs. RESULTS For pix2pix GAN, the synthesized images were of visibly poor quality, and no quantitative evaluation was performed. The quantitative evaluation metrics of the body outlines calculated for the synthesized prone and lateral images using CycleGAN were as follows: MAE, 35.63 ± 3.98 and 40.45 ± 5.83, respectively; DSC, 0.97 ± 0.01 and 0.94 ± 0.01, respectively; and HD (in pixels), 16.74 ± 3.55 and 31.69 ± 12.03, respectively. The quantitative metrics of the bladder and prostate were also promising for both the prone and lateral images, with mean values >0.90 in DSC (p > 0.05). The mean DSC and HD values of the bilateral femur for the prone images were 0.96 and 3.63 (in pixels), respectively, and 0.78 and 12.65 (in pixels) for the lateral images, respectively (p < 0.05). CONCLUSIONS CycleGAN could synthesize MR images in the lateral and prone positions from images acquired in the supine position, which could benefit gantry-free radiation therapy.
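The agreement metrics reported above (MAE, DSC, HD) can be computed as in the following NumPy/SciPy sketch; this is a generic illustration, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-12)

def mae(pred, truth):
    """Mean absolute intensity error between two images."""
    return np.abs(pred.astype(np.float64) - truth.astype(np.float64)).mean()

def hausdorff(mask_a, mask_b):
    """Symmetric Hausdorff distance (in pixels) between the point sets of two masks."""
    pts_a, pts_b = np.argwhere(mask_a), np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```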
Collapse
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
| | - Ying Cao
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Kaixuan Zhang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Zhen Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Xuejie Xie
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Yunxiang Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| |
Collapse
|
92
|
Fu Y, Dong S, Niu M, Xue L, Guo H, Huang Y, Xu Y, Yu T, Shi K, Yang Q, Shi Y, Zhang H, Tian M, Zhuo C. AIGAN: Attention-encoding Integrated Generative Adversarial Network for the reconstruction of low-dose CT and low-dose PET images. Med Image Anal 2023; 86:102787. [PMID: 36933386 DOI: 10.1016/j.media.2023.102787] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 11/05/2022] [Accepted: 02/22/2023] [Indexed: 03/04/2023]
Abstract
X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but usually raises concerns about the potential health risks of radiation exposure. The tension between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as their full-dose counterparts (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: the cascade generator, the dual-scale discriminator, and the multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which follows a generation-encoding-generation pipeline. The generator plays the zero-sum game with the dual-scale discriminator over two stages: the coarse and fine stages. In both stages, the generator generates estimated F-CT (F-PET) images that are as similar to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are then fed into the MSFM, which fully explores the inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.
Collapse
Affiliation(s)
- Yu Fu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China; Binjiang Institute, Zhejiang University, Hangzhou, China
| | - Shunjie Dong
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
| | - Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
| | - Le Xue
- Department of Nuclear Medicine and Medical PET Center The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
| | - Hanning Guo
- Institute of Neuroscience and Medicine, Medical Imaging Physics (INM-4), Forschungszentrum Jülich, Jülich, Germany
| | - Yanyan Huang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
| | - Yuanfan Xu
- Hangzhou Universal Medical Imaging Diagnostic Center, Hangzhou, China
| | - Tianbai Yu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
| | - Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
| | - Qianqian Yang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
| | - Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA
| | - Hong Zhang
- Binjiang Institute, Zhejiang University, Hangzhou, China; Department of Nuclear Medicine and Medical PET Center The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
| | - Mei Tian
- Human Phenome Institute, Fudan University, Shanghai, China.
| | - Cheng Zhuo
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China; Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China.
| |
Collapse
|
93
|
Iglesias JE, Billot B, Balbastre Y, Magdamo C, Arnold SE, Das S, Edlow BL, Alexander DC, Golland P, Fischl B. SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. SCIENCE ADVANCES 2023; 9:eadd3607. [PMID: 36724222 PMCID: PMC9891693 DOI: 10.1126/sciadv.add3607] [Citation(s) in RCA: 41] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 01/04/2023] [Indexed: 05/10/2023]
Abstract
Every year, millions of brain magnetic resonance imaging (MRI) scans are acquired in hospitals across the world. These have the potential to revolutionize our understanding of many neurological diseases, but their morphometric analysis has not yet been possible due to their anisotropic resolution. We present an artificial intelligence technique, "SynthSR," that takes clinical brain MRI scans with any MR contrast (T1, T2, etc.), orientation (axial/coronal/sagittal), and resolution and turns them into high-resolution T1 scans that are usable by virtually all existing human neuroimaging tools. We present results on segmentation, registration, and atlasing of >10,000 scans of controls and patients with brain tumors, strokes, and Alzheimer's disease. SynthSR yields morphometric results that are very highly correlated with what one would have obtained with high-resolution T1 scans. SynthSR allows sample sizes that have the potential to overcome the power limitations of prospective research studies and shed new light on the healthy and diseased human brain.
Collapse
Affiliation(s)
- Juan E. Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Benjamin Billot
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
| | - Yaël Balbastre
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Colin Magdamo
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Steven E. Arnold
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Sudeshna Das
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Brian L. Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, USA
| | - Daniel C. Alexander
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
| | - Polina Golland
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
94
|
Chen C, Raymond C, Speier W, Jin X, Cloughesy TF, Enzmann D, Ellingson BM, Arnold CW. Synthesizing MR Image Contrast Enhancement Using 3D High-Resolution ConvNets. IEEE Trans Biomed Eng 2023; 70:401-412. [PMID: 35853075 PMCID: PMC9928432 DOI: 10.1109/tbme.2022.3192309] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Gadolinium-based contrast agents (GBCAs) have been widely used to better visualize disease in brain magnetic resonance imaging (MRI). However, gadolinium deposition within the brain and body has raised safety concerns about the use of GBCAs. Therefore, the development of novel approaches that can decrease or even eliminate GBCA exposure while providing similar contrast information would be of significant use clinically. METHODS In this work, we present a deep learning based approach for contrast-enhanced T1 synthesis on brain tumor patients. A 3D high-resolution fully convolutional network (FCN), which maintains high resolution information through processing and aggregates multi-scale information in parallel, is designed to map pre-contrast MRI sequences to contrast-enhanced MRI sequences. Specifically, three pre-contrast MRI sequences, T1, T2 and apparent diffusion coefficient map (ADC), are utilized as inputs and the post-contrast T1 sequences are utilized as target output. To alleviate the data imbalance problem between normal tissues and the tumor regions, we introduce a local loss to improve the contribution of the tumor regions, which leads to better enhancement results on tumors. RESULTS Extensive quantitative and visual assessments are performed, with our proposed model achieving a PSNR of 28.24 dB in the brain and 21.2 dB in tumor regions. CONCLUSION AND SIGNIFICANCE Our results suggest the potential of substituting GBCAs with synthetic contrast images generated via deep learning.
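The abstract does not give the exact form of the local loss; one common way to realize the stated idea (up-weighting voxels inside the tumor mask so they are not swamped by the much larger volume of normal tissue) is sketched below in PyTorch, with an illustrative weight.

```python
import torch

def region_weighted_l1(pred_t1c, true_t1c, tumor_mask, tumor_weight=5.0):
    """L1 loss with extra weight inside the tumor mask (weight value is illustrative)."""
    weights = 1.0 + (tumor_weight - 1.0) * tumor_mask
    return (weights * (pred_t1c - true_t1c).abs()).mean()

# Example with random stand-in volumes (B x C x D x H x W) and a binary tumor mask:
pred = torch.rand(1, 1, 32, 64, 64)
true = torch.rand(1, 1, 32, 64, 64)
mask = (torch.rand(1, 1, 32, 64, 64) > 0.95).float()
loss = region_weighted_l1(pred, true, mask)
```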
Collapse
|
95
|
Poonkodi S, Kanchana M. 3D-MedTranCSGAN: 3D Medical Image Transformation using CSGAN. Comput Biol Med 2023; 153:106541. [PMID: 36652868 DOI: 10.1016/j.compbiomed.2023.106541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 11/30/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
Computer vision techniques for transforming medical images are a rapidly growing area with many specific medical applications. This paper proposes an end-to-end 3D medical image transformation model using a CSGAN, named 3D-MedTranCSGAN. The 3D-MedTranCSGAN model integrates non-adversarial loss components with the Cyclic Synthesized Generative Adversarial Network. The proposed model utilizes PatchGAN's discriminator network to penalize the difference between the synthesized image and the original image. The model also computes non-adversarial loss functions such as content, perception, and style transfer losses. 3DCascadeNet is a new generator architecture introduced in the paper, which is used to enhance the perceptiveness of the transformed medical image by encoding-decoding pairs. We use the 3D-MedTranCSGAN model for various tasks without modifying specific applications: PET-to-CT image transformation; reconstruction of CT to PET; reduction of movement artefacts in MR images; and removal of noise in PET images. We found that 3D-MedTranCSGAN outperformed other transformation methods in our experiments. For the first task, the proposed model yields an SSIM of 0.914, a PSNR of 26.12, an MSE of 255.5, a VIF of 0.4862, a UQI of 0.9067, and an LPIPS of 0.2284. For the second task, the model yields 0.9197, 25.7, 257.56, 0.4962, 0.9027, and 0.2262, respectively. For the third task, the model yields 0.8862, 24.94, 0.4071, 0.6410, and 0.2196. For the final task, the model yields 0.9521, 33.67, 33.57, 0.6091, 0.9255, and 0.0244. Based on this analysis, the proposed model outperforms the other techniques.
Collapse
Affiliation(s)
- S Poonkodi
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
| | - M Kanchana
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India.
| |
Collapse
|
96
|
Wright C, Mäkelä P, Bigot A, Anttinen M, Boström PJ, Blanco Sequeiros R. Deep learning prediction of non-perfused volume without contrast agents during prostate ablation therapy. Biomed Eng Lett 2023; 13:31-40. [PMID: 36711157 PMCID: PMC9873841 DOI: 10.1007/s13534-022-00250-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 09/29/2022] [Accepted: 10/22/2022] [Indexed: 11/09/2022] Open
Abstract
The non-perfused volume (NPV) is an important indicator of treatment success immediately after prostate ablation. However, visualization of the NPV first requires an injection of MRI contrast agents into the bloodstream, which has many downsides. The purpose of this study was to develop a deep learning model capable of predicting the NPV immediately after prostate ablation therapy without the need for MRI contrast agents. A modified 2D deep learning UNet model was developed to predict the post-treatment NPV. MRI data from 95 patients who had previously undergone prostate ablation therapy for treatment of localized prostate cancer were used to train, validate, and test the model. Model inputs were T1/T2-weighted and thermometry MRI images, which were always acquired without any MRI contrast agents and prior to the final NPV image on treatment day. Model output was the predicted NPV. Model accuracy was assessed using the Dice Similarity Coefficient (DSC) by comparing the predicted to ground truth NPV. A radiologist also performed a qualitative assessment of the NPV. The mean (SD) DSC for the predicted NPV was 85% ± 8.1% compared to ground truth. Model performance was significantly better for slices with larger prostate radii (> 24 mm) and for whole-gland rather than partial ablation slices. The predicted NPV was indistinguishable from ground truth for 31% of images. The feasibility of predicting the NPV with a UNet model, without MRI contrast agents, was clearly established. If developed further, this could improve patient treatment outcomes and could obviate the need for contrast agents altogether. Trial Registration Numbers Three studies were used to populate the data: NCT02766543, NCT03814252 and NCT03350529. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-022-00250-y.
Collapse
Affiliation(s)
- Cameron Wright
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
| | - Pietari Mäkelä
- Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
| | | | - Mikael Anttinen
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
| | - Peter J. Boström
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
| | - Roberto Blanco Sequeiros
- Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
| |
Collapse
|
97
|
Free-breathing and instantaneous abdominal T2 mapping via single-shot multiple overlapping-echo acquisition and deep learning reconstruction. Eur Radiol 2023:10.1007/s00330-023-09417-2. [PMID: 36692597 DOI: 10.1007/s00330-023-09417-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Revised: 10/12/2022] [Accepted: 01/01/2023] [Indexed: 01/25/2023]
Abstract
OBJECTIVES To develop a real-time abdominal T2 mapping method without requiring breath-holding or respiratory-gating. METHODS The single-shot multiple overlapping-echo detachment (MOLED) pulse sequence was employed to achieve free-breathing T2 mapping of the abdomen. Deep learning was used to untangle the non-linear relationship between the MOLED signal and the T2 map. A synthetic data generation flow based on Bloch simulation, modality synthesis, and randomization was proposed to overcome the inadequacy of real-world training sets. RESULTS The results from simulation and in vivo experiments demonstrated that our method could deliver high-quality T2 mapping. The average NMSE and R2 values of linear regression in the digital phantom experiments were 0.0178 and 0.9751. Pearson's correlation coefficient between our predicted T2 and reference T2 in the phantom experiments was 0.9996. In the measurements on patients, real-time capture of the T2 value changes of various abdominal organs before and after contrast agent injection was realized. A total of 33 focal liver lesions were detected in the group, and the mean and standard deviation of T2 values were 141.1 ± 50.0 ms for benign and 63.3 ± 16.0 ms for malignant lesions. The coefficients of variation in a test-retest experiment were 2.9%, 1.2%, 0.9%, 3.1%, and 1.8% for the liver, kidney, gallbladder, spleen, and skeletal muscle, respectively. CONCLUSIONS Free-breathing abdominal T2 mapping is achieved in about 100 ms on a clinical MRI scanner. The work paved the way for the development of real-time dynamic T2 mapping in the abdomen. KEY POINTS • MOLED achieves free-breathing abdominal T2 mapping in about 100 ms, enabling real-time capture of T2 value changes due to CA injection in abdominal organs. • Synthetic data generation flow mitigates the issue of lack of sizable abdominal training datasets.
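The paper's synthetic-data flow relies on full Bloch simulation of the MOLED sequence, which is not reproduced here; the sketch below only illustrates the underlying idea of training on simulated (signal, T2) pairs generated from a simple mono-exponential decay model with randomized parameter maps.

```python
import numpy as np

def synthetic_t2_example(shape=(64, 64), echo_times=(20, 40, 60, 80, 100), noise_sd=0.02):
    """One synthetic training pair: randomized S0/T2 maps -> multi-echo magnitude images.
    Signal model: S(TE) = S0 * exp(-TE / T2), plus Gaussian noise (all values illustrative)."""
    rng = np.random.default_rng()
    s0 = rng.uniform(0.5, 1.0, size=shape)
    t2 = rng.uniform(20.0, 200.0, size=shape)        # ms, randomized per pixel
    echoes = np.stack([s0 * np.exp(-te / t2) for te in echo_times])
    echoes += rng.normal(0.0, noise_sd, size=echoes.shape)
    return echoes.astype(np.float32), t2.astype(np.float32)   # (network input, target T2 map)

x, y = synthetic_t2_example()
```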
Collapse
|
98
|
Basty N, Thanaj M, Cule M, Sorokin EP, Liu Y, Thomas EL, Bell JD, Whitcher B. Artifact-free fat-water separation in Dixon MRI using deep learning. JOURNAL OF BIG DATA 2023; 10:4. [PMID: 36686622 PMCID: PMC9835035 DOI: 10.1186/s40537-022-00677-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 12/25/2022] [Indexed: 06/17/2023]
Abstract
Chemical-shift encoded MRI (CSE-MRI) is a widely used technique for the study of body composition and metabolic disorders, where derived fat and water signals enable the quantification of adipose tissue and muscle. The UK Biobank is acquiring whole-body Dixon MRI (a specific implementation of CSE-MRI) for over 100,000 participants. Current processing methods associated with large whole-body volumes are time intensive and prone to artifacts during fat-water separation performed by the scanner, making quantitative analysis challenging. The most common artifacts are fat-water swaps, where the labels are inverted at the voxel level. It is common for researchers to discard swapped data (generally around 10%), which is wasteful and may lead to unintended biases. Given the large number of whole-body Dixon MRI acquisitions in the UK Biobank, thousands of swaps are expected to be present in the fat and water volumes from image reconstruction performed on the scanner. If they go undetected, errors will propagate into processes such as organ segmentation, and dilute the results in population-based analyses. There is a clear need for a robust method to accurately separate fat and water volumes in big data collections like the UK Biobank. We formulate fat-water separation as a style transfer problem, where swap-free fat and water volumes are predicted from the acquired Dixon MRI data using a conditional generative adversarial network, and introduce a new loss function for the generator model. Our method is able to predict highly accurate fat and water volumes free from artifacts in the UK Biobank. We show that our model separates fat and water volumes using either single input (in-phase only) or dual input (in-phase and opposed-phase) data, with the latter producing superior results. Our proposed method enables faster and more accurate downstream analysis of body composition from Dixon MRI in population studies by eliminating the need for visual inspection or discarding data due to fat-water swaps. Supplementary Information The online version contains supplementary material available at 10.1186/s40537-022-00677-1.
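For context, a naive magnitude-based two-point Dixon separation is a one-liner, and its sensitivity to phase errors is precisely what produces the fat-water swaps the model corrects; the sketch below is generic and does not reflect the UK Biobank scanner reconstruction.

```python
import numpy as np

def two_point_dixon(in_phase, opposed_phase):
    """Naive two-point Dixon separation:
    water = (IP + OP) / 2, fat = (IP - OP) / 2.
    Local phase errors (e.g. from B0 inhomogeneity) can flip the sign, swapping the
    fat and water labels at the voxel level."""
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat

ip = np.random.rand(128, 128)
op = np.random.rand(128, 128)
water, fat = two_point_dixon(ip, op)
```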
Collapse
Affiliation(s)
- Nicolas Basty
- Research Centre for Optimal Health, University of Westminster, London, UK
| | - Marjola Thanaj
- Research Centre for Optimal Health, University of Westminster, London, UK
| | | | | | - Yi Liu
- Calico Life Sciences LLC, South San Francisco, USA
| | - E. Louise Thomas
- Research Centre for Optimal Health, University of Westminster, London, UK
| | - Jimmy D. Bell
- Research Centre for Optimal Health, University of Westminster, London, UK
| | - Brandon Whitcher
- Research Centre for Optimal Health, University of Westminster, London, UK
| |
Collapse
|
99
|
Abstract
This review summarizes the existing techniques and methods used to generate synthetic contrasts from magnetic resonance imaging data, focusing on musculoskeletal magnetic resonance imaging. To that end, the different approaches were categorized into three methodological groups: mathematical image transformation, physics-based, and data-driven approaches. Each group is characterized, followed by examples and a brief overview of their clinical validation, where present. Finally, we discuss the advantages, disadvantages, and caveats of synthetic contrasts, focusing on the preservation of image information, validation, and aspects of the clinical workflow.
Collapse
|
100
|
Singh A. Editorial for "Deep-Learning-Based Contrast Synthesis From MRF Parameter Maps in the Knee Joint: A Preliminary Study". J Magn Reson Imaging 2022. [PMID: 36564952 DOI: 10.1002/jmri.28575] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 11/24/2022] [Indexed: 12/25/2022] Open
Affiliation(s)
- Anup Singh
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India; Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, New Delhi, India; Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, India
| |
Collapse
|