1
Ramaniharan AK, Pednekar A, Parikh NA, Nagaraj UD, Manhard MK. A single 1-min brain MRI scan for generating multiple synthetic image contrasts in awake children from quantitative relaxometry maps. Pediatr Radiol 2025;55:312-323. PMID: 39692886. DOI: 10.1007/s00247-024-06113-1.
Abstract
BACKGROUND Diagnostically adequate contrast and spatial resolution in brain MRI require prolonged scan times, leading to motion artifacts and image degradation in awake children. Rapid multi-parametric techniques can produce diagnostic images in awake children, which could help avoid the need for sedation. OBJECTIVE To evaluate the utility of a rapid echo-planar imaging (EPI)-based multi-inversion spin and gradient echo (MI-SAGE) technique for generating multi-parametric quantitative brain maps and synthetic contrast images in awake pediatric participants. MATERIALS AND METHODS In this prospective IRB-approved study, awake research participants aged 3-10 years were scanned using MI-SAGE, MOLLI, GRASE, mGRE, and T1-, T2-, T2*-, and FLAIR-weighted sequences. The MI-SAGE T1, T2, and T2* maps and synthetic images were estimated offline. MI-SAGE parametric values were compared to those from the conventional mapping sequences (MOLLI, GRASE, and mGRE), with assessments of repeatability and reproducibility. Synthetic MI-SAGE images and conventional weighted images were reviewed by a neuroradiologist and scored on a 5-point Likert scale. Gray-to-white matter contrast ratios (GWRs) were compared between MI-SAGE synthetic and conventional weighted images. Results were analyzed using Bland-Altman analysis and the intra-class correlation coefficient (ICC). RESULTS A total of 24 healthy participants aged 3-10 years (mean ± SD, 6.5 ± 1.9 years; 12 males) completed the full imaging exam, including the 54-s MI-SAGE acquisition, and were included in the analysis. The MI-SAGE T1, T2, and T2* maps had biases of 32%, -4%, and 23% relative to the conventional MOLLI, GRASE, and mGRE mapping methods, respectively, with moderate to very strong correlations (ICC = 0.49-0.99). All MI-SAGE maps exhibited strong to very strong repeatability and reproducibility (ICC = 0.80-0.99).
The synthetic MI-SAGE images had average Likert scores of 2.1, 2.1, 2.9, and 2.0 for T1-, T2-, T2*-, and FLAIR-weighted contrasts, respectively, while conventional acquisitions scored 3.5, 3.6, 4.6, and 3.8. The MI-SAGE synthetic T1w, T2w, T2*w, and FLAIR GWRs had biases of 17%, 3%, 7%, and 1% compared to the GWRs of the corresponding conventional acquisitions. CONCLUSION The derived T1, T2, and T2* maps correlated with conventional mapping methods and showed strong repeatability and reproducibility. While synthetic MI-SAGE images had greater susceptibility artifacts and lower Likert scores than conventional images, the MI-SAGE technique produced synthetic weighted images with contrasts similar to conventional weighted images and achieved a ten-fold reduction in scan time.
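For readers reproducing this kind of comparison, the gray-to-white matter contrast ratio and a Bland-Altman-style percent bias are straightforward to compute. A minimal sketch follows; the ROI intensity values are hypothetical illustrations, not values from the study.

```python
# Hypothetical ROI mean intensities (arbitrary units), NOT values from the study.
gm_synthetic, wm_synthetic = 520.0, 430.0
gm_conventional, wm_conventional = 610.0, 480.0

def gwr(gm_mean: float, wm_mean: float) -> float:
    """Gray-to-white matter contrast ratio (GWR)."""
    return gm_mean / wm_mean

def percent_bias(a: float, b: float) -> float:
    """Bland-Altman-style percent bias: difference relative to the pair mean."""
    return 100.0 * (a - b) / ((a + b) / 2.0)

gwr_syn = gwr(gm_synthetic, wm_synthetic)
gwr_conv = gwr(gm_conventional, wm_conventional)
bias = percent_bias(gwr_syn, gwr_conv)  # negative: synthetic GWR below conventional
```

In a full Bland-Altman analysis this per-pair difference would be computed for every participant and summarized by its mean and limits of agreement.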
Affiliation(s)
- Amol Pednekar
- Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, Cincinnati, OH, 45229, USA.
- University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Nehal A Parikh
- Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, Cincinnati, OH, 45229, USA.
- University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Usha D Nagaraj
- Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, Cincinnati, OH, 45229, USA.
- University of Cincinnati College of Medicine, Cincinnati, OH, USA.
- Mary Kate Manhard
- Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, Cincinnati, OH, 45229, USA.
- University of Cincinnati College of Medicine, Cincinnati, OH, USA.
2
Kim S, Park H, Park SH. A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett 2024;14:1221-1242. PMID: 39465106. PMCID: PMC11502678. DOI: 10.1007/s13534-024-00425-9.
Abstract
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time. Acceleration is achieved by acquiring fewer data points in k-space, which introduces artifacts in the image domain. Conventional reconstruction methods resolve these artifacts by exploiting multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration factors. Advances in hardware and the development of specialized network architectures have made these achievements possible. Moreover, MRI signals contain several kinds of redundant information, including multi-coil, multi-contrast, and spatiotemporal redundancy. Exploiting this redundancy in combination with deep learning allows not only higher acceleration but also well-preserved detail in the reconstructed images. This review therefore introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction, followed by a review of recent deep learning-based reconstruction methods that exploit these redundancies. Lastly, it discusses the challenges, limitations, and potential directions of future developments.
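The undersampling-plus-reconstruction setting this review builds on can be sketched in a few lines of NumPy. The phantom, mask pattern, and acceleration factor below are illustrative choices, not taken from any particular method in the review.

```python
import numpy as np

# Toy image standing in for anatomy.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0

# Fully sampled k-space via 2D FFT (low frequencies shifted to the center).
kspace = np.fft.fftshift(np.fft.fft2(img))

# Cartesian undersampling: keep every 2nd phase-encode line plus a fully
# sampled center block (as used for auto-calibration in parallel imaging).
mask = np.zeros((64, 64), dtype=bool)
mask[::2, :] = True
mask[28:36, :] = True

# Zero-filled reconstruction: unacquired samples stay at zero, producing the
# aliasing artifacts that learned reconstructions are trained to remove.
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
```

Deep learning methods replace the zero-filling step with a network (or unrolled optimization) that maps the artifact-corrupted input back to an artifact-free image.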
Affiliation(s)
- Seonghyuk Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sung-Hong Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141 Republic of Korea
3
Diniz E, Santini T, Helmet K, Aizenstein HJ, Ibrahim TS. Cross-modality image translation of 3 Tesla Magnetic Resonance Imaging to 7 Tesla using Generative Adversarial Networks. medRxiv [Preprint] 2024:2024.10.16.24315609. PMID: 39484249. PMCID: PMC11527090. DOI: 10.1101/2024.10.16.24315609.
Abstract
The rapid advancement of magnetic resonance imaging (MRI) technology has precipitated a new paradigm in which cross-modality data translation across diverse imaging platforms, field strengths, and sites is increasingly challenging. This issue is particularly accentuated when transitioning from 3 Tesla (3T) to 7 Tesla (7T) MRI systems. This study proposes a solution to these challenges using generative adversarial networks (GANs), specifically the CycleGAN architecture, to create synthetic 7T images from 3T data. Employing a dataset of 1112 and 490 unpaired 3T and 7T MR images, respectively, we trained a 2-dimensional (2D) CycleGAN model and evaluated its performance on a paired dataset of 22 participants scanned at both 3T and 7T. Independent testing on 22 distinct participants affirmed the model's proficiency in accurately predicting various tissue types, encompassing cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM). Our approach provides a reliable and efficient methodology for synthesizing 7T images, achieving median Dice scores of 6.82%, 7.63%, and 4.85% for CSF, GM, and WM, respectively, in the testing dataset, thereby significantly aiding in harmonizing heterogeneous datasets. Furthermore, it delineates the potential of GANs to amplify the contrast-to-noise ratio (CNR) relative to 3T, potentially enhancing the diagnostic capability of the images. While acknowledging the risk of model overfitting, our research underscores a promising progression towards harnessing the benefits of 7T MR systems in research investigations while preserving compatibility with existing 3T MR data. This work was previously presented at the ISMRM 2021 conference (Diniz, Helmet, Santini, Aizenstein, & Ibrahim, 2021).
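For reference, the Dice similarity coefficient used in segmentation-based evaluations like the one above can be computed directly from binary masks. The toy masks below are illustrative, not study data.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy "tissue" masks: 2 of 3 labeled voxels overlap -> Dice = 2*2 / (3+3)
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
score = dice(pred, truth)
```

In practice the masks would come from segmenting the synthetic and reference images with the same tool, so that Dice reflects how faithfully the translation preserves tissue boundaries.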
Affiliation(s)
- Eduardo Diniz
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pennsylvania, United States
- Tales Santini
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Karim Helmet
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Department of Psychiatry, University of Pittsburgh, Pennsylvania, United States
- Howard J. Aizenstein
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Department of Psychiatry, University of Pittsburgh, Pennsylvania, United States
- Tamer S. Ibrahim
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
4
Liao C, Cao X, Iyer SS, Schauman S, Zhou Z, Yan X, Chen Q, Li Z, Wang N, Gong T, Wu Z, He H, Zhong J, Yang Y, Kerr A, Grill-Spector K, Setsompop K. High-resolution myelin-water fraction and quantitative relaxation mapping using 3D ViSTa-MR fingerprinting. Magn Reson Med 2024;91:2278-2293. PMID: 38156945. PMCID: PMC10997479. DOI: 10.1002/mrm.29990.
Abstract
PURPOSE This study aims to develop a high-resolution whole-brain multi-parametric quantitative MRI approach for simultaneous mapping of myelin-water fraction (MWF), T1, T2, and proton density (PD), all within a clinically feasible scan time. METHODS We developed 3D visualization of short transverse relaxation time component (ViSTa)-MRF, which combines the ViSTa technique with MR fingerprinting (MRF) to achieve high-fidelity whole-brain MWF and T1/T2/PD mapping on a clinical 3T scanner. To achieve fast acquisition and memory-efficient reconstruction, the ViSTa-MRF sequence leverages an optimized 3D tiny-golden-angle-shuffling spiral-projection acquisition and a joint spatial-temporal subspace reconstruction with an optimized preconditioning algorithm. With the proposed ViSTa-MRF approach, high-fidelity direct MWF mapping was achieved without the need for multicompartment fitting, which could introduce bias and/or noise from additional assumptions or priors. RESULTS The in vivo results demonstrate the effectiveness of the proposed acquisition and reconstruction framework in providing fast multi-parametric mapping with high SNR and good quality. The in vivo results of 1 mm- and 0.66 mm-isotropic resolution datasets indicate that the MWF values measured by the proposed method are consistent with standard ViSTa results that are 30× slower with lower SNR. Furthermore, we applied the proposed method to enable 5-min whole-brain 1 mm-isotropic assessment of MWF and T1/T2/PD maps for infant brain development and for post-mortem brain samples. CONCLUSIONS In this work, we developed a 3D ViSTa-MRF technique that enables the acquisition of whole-brain MWF and quantitative T1, T2, and PD maps at 1 mm and 0.66 mm isotropic resolution in 5 and 15 min, respectively. This advancement allows for quantitative investigation of myelination changes in the brain.
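As background for fingerprinting methods such as ViSTa-MRF (which adds a subspace reconstruction on top of the fingerprinting framework), classic MRF estimates parameters by matching each voxel's measured signal evolution against a precomputed dictionary. In the sketch below, random unit vectors stand in for Bloch-simulated signal evolutions, and the (T1, T2) grid is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dictionary: one row per (T1, T2) candidate. Real MRF would
# Bloch-simulate these signal evolutions; random unit vectors stand in here.
n_entries, n_timepoints = 100, 50
dictionary = rng.standard_normal((n_entries, n_timepoints))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
params = [(t1, t2) for t1 in (600, 900, 1200, 1500, 1800)
                   for t2 in range(40, 140, 5)]  # 5 * 20 = 100 candidates (ms)

# Noisy measured signal for one voxel, generated from dictionary entry 42.
measured = dictionary[42] + 0.05 * rng.standard_normal(n_timepoints)

# Inner-product matching: the best-correlated dictionary atom wins, and its
# (T1, T2) pair becomes the voxel's parameter estimate.
scores = dictionary @ measured
best = int(np.argmax(np.abs(scores)))
t1_est, t2_est = params[best]
```

Subspace methods compress the time dimension before this step, which is what makes high-resolution whole-brain reconstructions memory-feasible.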
Affiliation(s)
- Congyu Liao
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Xiaozhi Cao
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Siddharth Srinivasan Iyer
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sophie Schauman
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Zihan Zhou
- Department of Radiology, Stanford University, Stanford, CA, USA
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Xiaoqian Yan
- Department of Psychology, Stanford University, Stanford, CA, USA
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Quan Chen
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Zhitao Li
- Department of Radiology, Stanford University, Stanford, CA, USA
- Nan Wang
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Ting Gong
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
- Zhe Wu
- Techna Institute, University Health Network, Toronto, ON, Canada
- Hongjian He
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- School of Physics, Zhejiang University, Hangzhou, Zhejiang, China
- Jianhui Zhong
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Department of Imaging Sciences, University of Rochester, Rochester, NY, USA
- Yang Yang
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- Adam Kerr
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Stanford Center for Cognitive and Neurobiological Imaging, Stanford University, Stanford, CA, USA
- Kawin Setsompop
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
5
Jacobs L, Mandija S, Liu H, van den Berg CAT, Sbrizzi A, Maspero M. Generalizable synthetic MRI with physics-informed convolutional networks. Med Phys 2024;51:3348-3359. PMID: 38063208. DOI: 10.1002/mp.16884.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) provides state-of-the-art image quality for neuroimaging, with examinations consisting of multiple separately acquired contrasts. Synthetic MRI aims to accelerate examinations by synthesizing any desirable contrast from a single acquisition. PURPOSE We developed a physics-informed deep learning-based method to synthesize multiple brain MRI contrasts from a single 5-min acquisition and investigated its ability to generalize to arbitrary contrasts. METHODS A dataset of 55 subjects acquired with a clinical MRI protocol and a 5-min transient-state sequence was used. The model, based on a generative adversarial network, maps data acquired from the five-minute scan to "effective" quantitative parameter maps (q*-maps), feeding the generated PD, T1, and T2 maps into a signal model to synthesize four clinical contrasts (proton density-weighted, T1-weighted, T2-weighted, and T2-weighted fluid-attenuated inversion recovery), from which losses are computed. The synthetic contrasts are compared to those of an end-to-end deep learning-based method proposed in the literature. The generalizability of the proposed method is investigated for five volunteers by synthesizing three contrasts unseen during training and comparing these to ground-truth acquisitions via qualitative assessment and contrast-to-noise ratio (CNR) assessment. RESULTS The physics-informed method matched the quality of the end-to-end method for the four standard contrasts, with structural similarity metrics above 0.75 ± 0.08 (± SD) and peak signal-to-noise ratios above 22.4 ± 1.9, representing a portion of compact lesions comparable to standard MRI. Additionally, the physics-informed method enabled contrast adjustment and produced similar signal contrast and comparable CNRs to the ground-truth acquisitions for three sequences unseen during model training.
CONCLUSIONS The study demonstrated the feasibility of physics-informed, deep learning-based synthetic MRI to generate high-quality contrasts and generalize to contrasts beyond the training data. This technology has the potential to accelerate neuroimaging protocols.
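The "signal model" step in physics-informed synthesis of this kind can be illustrated with the textbook spin-echo equation S = PD · (1 − exp(−TR/T1)) · exp(−TE/T2). The parameter-map values and TR/TE choices below are generic illustrations, not the sequences or maps used in the study.

```python
import numpy as np

def synthesize_spin_echo(pd_map, t1_map, t2_map, tr, te):
    """Ideal spin-echo signal from PD/T1/T2 maps (TR, TE, T1, T2 in ms)."""
    return pd_map * (1.0 - np.exp(-tr / t1_map)) * np.exp(-te / t2_map)

# Toy 2x2 maps: column 0 is white-matter-like, column 1 is gray-matter-like.
pd = np.array([[0.7, 0.85], [0.7, 0.85]])
t1 = np.array([[800.0, 1300.0], [800.0, 1300.0]])
t2 = np.array([[70.0, 100.0], [70.0, 100.0]])

# Short TR/TE emphasizes T1 differences; long TR/TE emphasizes T2 differences.
t1w = synthesize_spin_echo(pd, t1, t2, tr=500.0, te=15.0)
t2w = synthesize_spin_echo(pd, t1, t2, tr=4000.0, te=100.0)
```

This reproduces the familiar contrast flip: white matter (shorter T1) is brighter on the T1-weighted image, while gray matter (longer T2) is brighter on the T2-weighted image. Training through such a model is what lets the network generalize to contrasts never seen during training.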
Affiliation(s)
- Luuk Jacobs
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Stefano Mandija
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Hongyan Liu
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Cornelis A T van den Berg
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Alessandro Sbrizzi
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Matteo Maspero
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
6
Dalmaz O, Mirza MU, Elmas G, Ozbey M, Dar SUH, Ceyani E, Oguz KK, Avestimehr S, Çukur T. One model to unite them all: Personalized federated learning of multi-contrast MRI synthesis. Med Image Anal 2024;94:103121. PMID: 38402791. DOI: 10.1016/j.media.2024.103121.
Abstract
Curation of large, diverse MRI datasets via multi-institutional collaborations can help improve learning of generalizable synthesis models that reliably translate source- onto target-contrast images. To facilitate collaborations, federated learning (FL) adopts decentralized model training while mitigating privacy concerns by avoiding sharing of imaging data. However, conventional FL methods can be impaired by the inherent heterogeneity in the data distribution, with domain shifts evident within and across imaging sites. Here we introduce the first personalized FL method for MRI Synthesis (pFLSynth) that improves reliability against data heterogeneity via model specialization to individual sites and synthesis tasks (i.e., source-target contrasts). To do this, pFLSynth leverages an adversarial model equipped with novel personalization blocks that control the statistics of generated feature maps across the spatial/channel dimensions, given latent variables specific to sites and tasks. To further promote communication efficiency and site specialization, partial network aggregation is employed over later generator stages while earlier generator stages and the discriminator are trained locally. As such, pFLSynth enables multi-task training of multi-site synthesis models with high generalization performance across sites and tasks. Comprehensive experiments demonstrate the superior performance and reliability of pFLSynth in MRI synthesis against prior federated methods.
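The partial-aggregation idea described above (shared generator stages averaged across sites, personalized stages kept local) can be sketched with plain weight dictionaries. The layer names, site count, and weight values below are hypothetical illustrations, not pFLSynth's actual parameters or architecture.

```python
import numpy as np

# Two hypothetical sites, each holding "shared" weights (aggregated across
# sites, FedAvg-style) and "personal" weights (never leave the site).
site_models = [
    {"shared": np.array([1.0, 2.0]), "personal": np.array([10.0, 0.0])},
    {"shared": np.array([3.0, 4.0]), "personal": np.array([0.0, 10.0])},
]

# Server-side aggregation round: average only the shared weights.
shared_avg = np.mean([m["shared"] for m in site_models], axis=0)

for m in site_models:
    m["shared"] = shared_avg.copy()  # broadcast aggregated shared weights
    # m["personal"] is left untouched -> per-site specialization survives
```

Because only the shared weights cross the network, this also reduces communication cost relative to aggregating the full model every round.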
Affiliation(s)
- Onat Dalmaz
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Muhammad U Mirza
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Gokberk Elmas
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Muzaffer Ozbey
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Salman U H Dar
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Emir Ceyani
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA
- Kader K Oguz
- Department of Radiology, University of California, Davis Medical Center, Sacramento, CA 95817, USA
- Salman Avestimehr
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA
- Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Program, Bilkent University, Ankara 06800, Turkey
7
Kumar S, Saber H, Charron O, Freeman L, Tamir JI. Correcting synthetic MRI contrast-weighted images using deep learning. Magn Reson Imaging 2024;106:43-54. PMID: 38092082. DOI: 10.1016/j.mri.2023.11.015.
Abstract
Synthetic magnetic resonance imaging (MRI) offers a scanning paradigm in which a fast multi-contrast sequence is used to estimate underlying quantitative tissue parameter maps, which are then used to synthesize any desirable clinical contrast by retrospectively changing scan parameters in silico. Two benefits of this approach are reduced exam time and the ability to generate arbitrary contrasts offline. However, synthetically generated contrasts are known to deviate from the contrast of experimental scans. The reason for this contrast mismatch is the necessary exclusion of unmodeled physical effects such as partial voluming, diffusion, flow, susceptibility, magnetization transfer, and more. Including these effects in the signal encoding would improve the synthetic images but would make the quantitative imaging protocol impractical due to long scan times. Therefore, in this work we propose a deep learning approach that generates a multiplicative correction term to capture unmodeled effects and correct the synthetic contrast images to better match experimental contrasts for arbitrary scan parameters. The physics-inspired deep learning model implicitly accounts for unmodeled physical effects occurring during the scan. As a proof of principle, we validate our approach on synthesizing arbitrary inversion-recovery fast spin-echo scans using a commercially available 2D multi-contrast sequence. We observe that the proposed correction visually and numerically reduces the mismatch with experimentally collected contrasts compared to conventional synthetic MRI. Finally, we present results of a preliminary reader study and find that the proposed method statistically significantly improves contrast and SNR compared with conventional synthetic MR images.
Affiliation(s)
- Sidharth Kumar
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin 78712, TX, USA
- Hamidreza Saber
- Dell Medical School Department of Neurology, The University of Texas at Austin, Austin 78712, TX, USA; Dell Medical School Department of Neurosurgery, The University of Texas at Austin, Austin 78712, TX, USA
- Odelin Charron
- Dell Medical School Department of Neurology, The University of Texas at Austin, Austin 78712, TX, USA
- Leorah Freeman
- Dell Medical School Department of Neurology, The University of Texas at Austin, Austin 78712, TX, USA; Dell Medical School Department of Diagnostic Medicine, The University of Texas at Austin, Austin 78712, TX, USA
- Jonathan I Tamir
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin 78712, TX, USA; Dell Medical School Department of Diagnostic Medicine, The University of Texas at Austin, Austin 78712, TX, USA; Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin 78712, TX, USA
8
Aggarwal K, Manso Jimeno M, Ravi KS, Gonzalez G, Geethanath S. Developing and deploying deep learning models in brain magnetic resonance imaging: A review. NMR Biomed 2023;36:e5014. PMID: 37539775. DOI: 10.1002/nbm.5014.
Abstract
Magnetic resonance imaging (MRI) of the brain has benefited from deep learning (DL) to alleviate the burden on radiologists and MR technologists, and improve throughput. The easy accessibility of DL tools has resulted in a rapid increase of DL models and subsequent peer-reviewed publications. However, the rate of deployment in clinical settings is low. Therefore, this review attempts to bring together the ideas from data collection to deployment in the clinic, building on the guidelines and principles that accreditation agencies have espoused. We introduce the need for and the role of DL to deliver accessible MRI. This is followed by a brief review of DL examples in the context of neuropathologies. Based on these studies and others, we collate the prerequisites to develop and deploy DL models for brain MRI. We then delve into the guiding principles to develop good machine learning practices in the context of neuroimaging, with a focus on explainability. A checklist based on the United States Food and Drug Administration's good machine learning practices is provided as a summary of these guidelines. Finally, we review the current challenges and future opportunities in DL for brain MRI.
Affiliation(s)
- Kunal Aggarwal
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
- Department of Electrical and Computer Engineering, Technical University Munich, Munich, Germany
- Marina Manso Jimeno
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Keerthi Sravan Ravi
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Gilberto Gonzalez
- Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Sairam Geethanath
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
9
Ozbey M, Dalmaz O, Dar SUH, Bedel HA, Ozturk S, Gungor A, Cukur T. Unsupervised medical image translation with adversarial diffusion models. IEEE Trans Med Imaging 2023;42:3524-3539. PMID: 37379177. DOI: 10.1109/TMI.2023.3290149.
Abstract
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.
10
Wang K, Doneva M, Meineke J, Amthor T, Karasan E, Tan F, Tamir JI, Yu SX, Lustig M. High-fidelity direct contrast synthesis from magnetic resonance fingerprinting. Magn Reson Med 2023;90:2116-2129. PMID: 37332200. DOI: 10.1002/mrm.29766.
Abstract
PURPOSE This work was aimed at proposing a supervised learning-based method that directly synthesizes contrast-weighted images from Magnetic Resonance Fingerprinting (MRF) data without performing quantitative mapping and spin-dynamics simulations. METHODS To implement our direct contrast synthesis (DCS) method, we deploy a conditional generative adversarial network (GAN) framework with a multi-branch U-Net as the generator and a multilayer CNN (PatchGAN) as the discriminator. We refer to our proposed approach as N-DCSNet. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. The performance of our proposed method is demonstrated on in vivo MRF scans from healthy volunteers. Quantitative metrics, including normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID), were used to evaluate the performance of the proposed method and compare it with others. RESULTS In vivo experiments demonstrated excellent image quality compared with simulation-based contrast synthesis and previous DCS methods, both visually and according to quantitative metrics. We also demonstrate cases in which our trained model is able to mitigate the in-flow and spiral off-resonance artifacts typically seen in MRF reconstructions, and thus more faithfully represent conventional spin echo-based contrast-weighted images. CONCLUSION We present N-DCSNet to directly synthesize high-fidelity multicontrast MR images from a single MRF acquisition. This method can significantly decrease examination time.
By directly training a network to generate contrast-weighted images, our method does not require any model-based simulation and can therefore avoid reconstruction errors due to dictionary matching and contrast simulation (code available at: https://github.com/mikgroup/DCSNet).
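The quantitative metrics cited above are standard and easy to reproduce. The following NumPy sketch is our illustration, not the authors' code (their implementation is in the linked repository); it computes range-normalized nRMSE, PSNR, and a simplified single-window SSIM rather than the locally windowed version typically reported:

```python
import numpy as np

def nrmse(ref, est):
    """Root mean square error normalized by the reference intensity range."""
    return np.sqrt(np.mean((ref - est) ** 2)) / (ref.max() - ref.min())

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB, for images scaled to [0, peak]."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, est, peak=1.0):
    """Single-window (global) SSIM; the standard metric averages local windows."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = ref.mean(), est.mean()
    var_x, var_y = ref.var(), est.var()
    cov = np.mean((ref - mu_x) * (est - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

A uniform intensity offset of 0.1 on a [0, 1] image, for example, yields a PSNR of exactly 20 dB, which gives a feel for the scale of the values reported in these papers.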
Affiliation(s)
- Ke Wang
- Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA
- Ekin Karasan
- Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- Fei Tan
- Bioengineering, UC Berkeley-UCSF, San Francisco, California, USA
- Jonathan I Tamir
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Stella X Yu
- Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA
- Computer Science and Engineering, University of Michigan, Ann Arbor, Michigan, USA
11
Qiu S, Ma S, Wang L, Chen Y, Fan Z, Moser FG, Maya M, Sati P, Sicotte NL, Christodoulou AG, Xie Y, Li D. Direct synthesis of multi-contrast brain MR images from MR multitasking spatial factors using deep learning. Magn Reson Med 2023; 90:1672-1681. [PMID: 37246485] [PMCID: PMC10524469] [DOI: 10.1002/mrm.29715]
Abstract
PURPOSE To develop a deep learning method to synthesize conventional contrast-weighted images in the brain from MR multitasking spatial factors. METHODS Eighteen subjects were imaged using a whole-brain quantitative T1-T2-T1ρ MR multitasking sequence. Conventional contrast-weighted images consisting of T1 MPRAGE, T1 gradient echo, and T2 fluid-attenuated inversion recovery were acquired as target images. A 2D U-Net-based neural network was trained to synthesize conventional weighted images from MR multitasking spatial factors. Quantitative assessment and image quality rating by two radiologists were performed to evaluate the quality of the deep-learning-based synthesis, in comparison with Bloch-equation-based synthesis from MR multitasking quantitative maps. RESULTS The deep-learning synthetic images showed tissue contrasts comparable to the reference images from true acquisitions and were substantially better than the Bloch-equation-based synthesis results. Averaged over the three contrasts, the deep learning synthesis achieved normalized root mean square error = 0.184 ± 0.075, peak SNR = 28.14 ± 2.51, and structural similarity index = 0.918 ± 0.034, all significantly better than Bloch-equation-based synthesis (p < 0.05). Radiologists' ratings showed that, compared with true acquisitions, deep learning synthesis had no notable quality degradation and was better than Bloch-equation-based synthesis. CONCLUSION A deep learning technique was developed to synthesize conventional weighted images from MR multitasking spatial factors in the brain, enabling the simultaneous acquisition of multiparametric quantitative maps and clinical contrast-weighted images in a single scan.
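For context on the Bloch-equation-based baseline that this deep learning method is compared against: conventional spin-echo contrast can be synthesized from quantitative maps with the classic signal equation. The sketch below is our illustration with illustrative tissue values and sequence parameters, not the paper's actual pipeline:

```python
import numpy as np

def synthesize_spin_echo(pd_map, t1_map, t2_map, tr_ms, te_ms):
    """Classic spin-echo signal equation: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    Short TR / short TE gives T1 weighting; long TR / long TE gives T2 weighting."""
    t1 = np.maximum(t1_map, 1e-6)  # guard against empty voxels with T1 = 0
    t2 = np.maximum(t2_map, 1e-6)
    return pd_map * (1.0 - np.exp(-tr_ms / t1)) * np.exp(-te_ms / t2)

# Illustrative 3 T values: white matter ~(T1=800 ms, T2=80 ms), CSF ~(T1=4000, T2=2000)
pd = np.array([0.7, 1.0])
t1 = np.array([800.0, 4000.0])
t2 = np.array([80.0, 2000.0])
t1w = synthesize_spin_echo(pd, t1, t2, tr_ms=500, te_ms=15)    # CSF appears dark
t2w = synthesize_spin_echo(pd, t1, t2, tr_ms=4000, te_ms=100)  # CSF appears bright
```

Applying the same equation voxelwise to whole T1/T2/PD maps produces a synthetic image for any chosen TR/TE, which is exactly what makes single-scan quantitative acquisitions attractive.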
Affiliation(s)
- Shihan Qiu
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Department of Bioengineering, UCLA, Los Angeles, California, USA
- Sen Ma
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Lixia Wang
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Yuhua Chen
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Department of Bioengineering, UCLA, Los Angeles, California, USA
- Zhaoyang Fan
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Departments of Radiology and Radiation Oncology, University of Southern California, Los Angeles, California, USA
- Franklin G. Moser
- Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Marcel Maya
- Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Pascal Sati
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Nancy L. Sicotte
- Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Anthony G. Christodoulou
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Department of Bioengineering, UCLA, Los Angeles, California, USA
- Yibin Xie
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Debiao Li
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Department of Bioengineering, UCLA, Los Angeles, California, USA
12
Nykänen O, Nevalainen M, Casula V, Isosalo A, Inkinen S, Nikki M, Lattanzi R, Cloos M, Nissi MJ, Nieminen MT. Deep-Learning-Based Contrast Synthesis From MRF Parameter Maps in the Knee Joint. J Magn Reson Imaging 2023; 58:559-568. [PMID: 36562500] [PMCID: PMC10287835] [DOI: 10.1002/jmri.28573]
Abstract
BACKGROUND Magnetic resonance fingerprinting (MRF) is a method to speed up acquisition of quantitative MRI data. However, MRF does not usually produce the contrast-weighted images required by radiologists, limiting the achievable reduction in total scan time. Contrast synthesis from MRF could significantly decrease the imaging time. PURPOSE To improve the clinical utility of MRF by synthesizing contrast-weighted MR images from the quantitative data provided by MRF, using U-Nets trained for the synthesis task with L1 loss, perceptual loss, and their combinations. STUDY TYPE Retrospective. POPULATION Knee joint MRI data from 184 subjects of the Northern Finland 1986 Birth Cohort (ages 33-35, gender distribution not available). FIELD STRENGTH AND SEQUENCE 3 T; multislice MRF, proton density (PD)-weighted 3D-SPACE (sampling perfection with application-optimized contrasts using different flip angle evolution), fat-saturated T2-weighted 3D-SPACE, and water-excited double echo steady state (DESS). ASSESSMENT Data were divided into training, validation, test, and radiologists' assessment sets as follows: 136 subjects for training, 3 for validation, 3 for testing, and 42 for radiologists' assessment. The synthetic and target images were evaluated on a 5-point Likert scale by two blinded musculoskeletal radiologists and with quantitative error metrics. STATISTICAL TESTS Friedman's test accompanied by post hoc Wilcoxon signed-rank tests, and the intraclass correlation coefficient. A statistical cutoff of P < 0.05, adjusted by Bonferroni correction as necessary, was used. RESULTS The networks trained in the study could synthesize conventional images with high image quality (Likert scores 3-4 on a 5-point scale). Qualitatively, the best synthetic images were produced with the combination of L1 and perceptual loss functions, or with perceptual loss alone, while L1 loss alone led to significantly poorer image quality (Likert scores below 3).
The interreader and intrareader agreements were high (0.80 and 0.92, respectively) and significant. However, quantitative image quality metrics indicated the best performance for pure L1 loss. DATA CONCLUSION Synthesizing high-quality contrast-weighted images from MRF data using deep learning is feasible. However, more studies are needed to validate the diagnostic accuracy of these synthetic images. EVIDENCE LEVEL 4. TECHNICAL EFFICACY Stage 1.
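The combined objective described above is a weighted sum of a pixelwise L1 term and a feature-space (perceptual) term. Below is a schematic NumPy sketch, our illustration only: a fixed random projection stands in for the frozen pretrained feature network (a real perceptual loss would use, e.g., VGG activations), and the weights alpha/beta are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((16, 64))  # stand-in for a frozen, pretrained feature extractor

def features(img):
    """Toy 'perceptual' features: a fixed linear projection of the flattened 8x8 image.
    In practice these would be activations of a pretrained CNN."""
    return W @ img.reshape(-1)

def combined_loss(synth, target, alpha=1.0, beta=0.1):
    """alpha * L1 pixel loss + beta * squared L2 distance in feature space."""
    l1 = np.mean(np.abs(synth - target))
    perc = np.mean((features(synth) - features(target)) ** 2)
    return alpha * l1 + beta * perc
```

The study's finding that pure L1 wins on pixel metrics while perceptual terms win on radiologist ratings is a well-known tension: L1 directly optimizes the pixel error being measured, whereas feature-space terms favor texture realism.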
Affiliation(s)
- Olli Nykänen
- Department of Applied Physics, Faculty of Science and Forestry, University of Eastern Finland, Yliopistonranta 1 F, Kuopio, Finland
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Mika Nevalainen
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Medical Research Center, University of Oulu and Oulu University Hospital, Kajaanintie 50, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Kajaanintie 50, Oulu, Finland
- Victor Casula
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Medical Research Center, University of Oulu and Oulu University Hospital, Kajaanintie 50, Oulu, Finland
- Antti Isosalo
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Satu Inkinen
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Helsinki University Hospital, Helsinki, Finland
- Marko Nikki
- Department of Diagnostic Radiology, Oulu University Hospital, Kajaanintie 50, Oulu, Finland
- Riccardo Lattanzi
- Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, 550 1st Avenue, New York, NY, USA
- Martijn Cloos
- Centre for Advanced Imaging, University of Queensland, Building 57 of University Dr, Brisbane, Australia
- Mikko J. Nissi
- Department of Applied Physics, Faculty of Science and Forestry, University of Eastern Finland, Yliopistonranta 1 F, Kuopio, Finland
- Miika T. Nieminen
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Aapistie 5 A, Oulu
- Medical Research Center, University of Oulu and Oulu University Hospital, Kajaanintie 50, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Kajaanintie 50, Oulu, Finland
13
Ruffle JK, Mohinta S, Gray R, Hyare H, Nachev P. Brain tumour segmentation with incomplete imaging data. Brain Commun 2023; 5:fcad118. [PMID: 37124946] [PMCID: PMC10144694] [DOI: 10.1093/braincomms/fcad118]
Abstract
Progress in neuro-oncology is increasingly recognized to be obstructed by the marked heterogeneity (genetic, pathological, and clinical) of brain tumours. If the treatment susceptibilities and outcomes of individual patients differ widely, determined by the interactions of many multimodal characteristics, then large-scale, fully inclusive, richly phenotyped data, including imaging, will be needed to predict them at the individual level. Such data can realistically be acquired only in the routine clinical stream, where its quality is inevitably degraded by the constraints of real-world clinical care. Although contemporary machine learning could theoretically provide a solution to this task, especially in the domain of imaging, its ability to cope with realistic, incomplete, low-quality data is yet to be determined. In the largest and most comprehensive study of its kind, applying state-of-the-art brain tumour segmentation models to large-scale, multi-site MRI data of 1251 individuals, here we quantify the comparative fidelity of automated segmentation models drawn from MR data replicating the various levels of completeness observed in real life. We demonstrate that models trained on incomplete data can segment lesions very well, often equivalently to those trained on the full complement of images, exhibiting Dice coefficients of 0.907 (single sequence) to 0.945 (complete set) for whole tumours and 0.701 (single sequence) to 0.891 (complete set) for component tissue types. This finding opens the door both to the application of segmentation models to large-scale historical data, for the purpose of building treatment and outcome predictive models, and to their application in real-world clinical care. We further ascertain that segmentation models can accurately detect enhancing tumour in the absence of contrast-enhanced imaging, quantifying the burden of enhancing tumour with an R² > 0.97, varying negligibly with lesion morphology.
Such models can quantify enhancing tumour without the administration of intravenous contrast, inviting a revision of the notion of tumour enhancement if the same information can be extracted without contrast-enhanced imaging. Our analysis includes validation on a heterogeneous, real-world 50-patient sample of brain tumour imaging acquired over the last 15 years at our tertiary centre, demonstrating maintained accuracy even on non-isotropic MRI acquisitions and on complex post-operative imaging with tumour recurrence. This work substantially extends the translational opportunity for quantitative analysis to clinical situations where the full complement of sequences is not available and potentially enables the characterization of contrast-enhanced regions where contrast administration is infeasible or undesirable.
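The Dice coefficients quoted above measure voxelwise overlap between a predicted and a reference segmentation mask; a minimal NumPy sketch (our illustration, not the authors' code):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

Identical masks score (essentially) 1, disjoint masks score 0, and a mask overlapping half of an equally sized reference scores 0.5, which gives a sense of how good the reported 0.907 to 0.945 whole-tumour values are.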
Affiliation(s)
- James K Ruffle
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Samia Mohinta
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Robert Gray
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Harpreet Hyare
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Parashkev Nachev
- UCL Queen Square Institute of Neurology, University College London, London, UK
14
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599] [DOI: 10.1016/j.compbiomed.2022.106496]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which can accomplish two or more tasks simultaneously, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits: improved performance, enhanced generalizability, and reduced overall computational cost. This review focuses on advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has been flourishing and has demonstrated outstanding performance in many tasks, performance gaps remain in others, and accordingly we outline the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55, achieved by a cascaded MTDL model, indicate that further research is needed to improve the performance of current models.
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
15
Al-Masni MA, Lee S, Al-Shamiri AK, Gho SM, Choi YH, Kim DH. A knowledge interaction learning for multi-echo MRI motion artifact correction towards better enhancement of SWI. Comput Biol Med 2023; 153:106553. [PMID: 36641933] [DOI: 10.1016/j.compbiomed.2023.106553]
Abstract
Patient movement during a Magnetic Resonance Imaging (MRI) scan can cause severe degradation of image quality. In Susceptibility Weighted Imaging (SWI), several echoes are typically measured during a single repetition period; the earliest echoes show less contrast between tissues, while the later echoes are more susceptible to artifacts and signal dropout. In this paper, we propose a knowledge interaction paradigm that jointly learns feature details from multiple distorted echoes by sharing their knowledge through unified training parameters, thereby reducing the motion artifacts of all echoes simultaneously. This is accomplished by developing a new scheme built on a Single Encoder with Multiple Decoders (SEMD), which ensures that the generated features are not only fused but also learned together. We call the proposed method Knowledge Interaction Learning between Multi-Echo data (KIL-ME-based SEMD). The proposed KIL-ME-based SEMD allows the echoes to share information and captures the correlations between them. The main purpose of this work is to correct motion artifacts while maintaining the image quality and structural details of all motion-corrupted echoes, towards generating high-resolution susceptibility-enhanced contrast images, i.e., SWI, using a weighted average of multi-echo motion-corrected acquisitions. We also compare various potential strategies that might be used to address the problem of reducing artifacts in multi-echo data. The experimental results demonstrate the feasibility and effectiveness of the proposed method, which reduces the severity of motion artifacts and improves the overall clinical image quality of all echoes and their associated SWI maps. Significant improvement of image quality is observed on both motion-simulated test data and actual volunteer data with various motion severities.
By enhancing overall image quality, the proposed network can improve physicians' ability to evaluate and correctly diagnose brain MR images.
Affiliation(s)
- Mohammed A Al-Masni
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul, 05006, Republic of Korea
- Seul Lee
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Young Hun Choi
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
16
Mukhatov A, Le T, Pham TT, Do TD. A comprehensive review on magnetic imaging techniques for biomedical applications. Nano Select 2023. [DOI: 10.1002/nano.202200219]
Affiliation(s)
- Azamat Mukhatov
- Department of Robotics, School of Engineering and Digital Sciences, Nazarbayev University, Astana, Kazakhstan
- Tuan-Anh Le
- Department of Physiology and Biomedical Engineering, Mayo Clinic, Scottsdale, Arizona, USA
- Tri T. Pham
- Department of Biology, School of Sciences and Humanities, Nazarbayev University, Astana, Kazakhstan
- Ton Duc Do
- Department of Robotics, School of Engineering and Digital Sciences, Nazarbayev University, Astana, Kazakhstan
17
Singh A. Editorial for "Deep-Learning-Based Contrast Synthesis From MRF Parameter Maps in the Knee Joint: A Preliminary Study". J Magn Reson Imaging 2022. [PMID: 36564952] [DOI: 10.1002/jmri.28575]
Affiliation(s)
- Anup Singh
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, New Delhi, India
- Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, India
18
Yurt M, Dalmaz O, Dar S, Ozbey M, Tinaz B, Oguz K, Cukur T. Semi-Supervised Learning of MRI Synthesis Without Fully-Sampled Ground Truths. IEEE Trans Med Imaging 2022; 41:3895-3906. [PMID: 35969576] [DOI: 10.1109/tmi.2022.3199155]
Abstract
Learning-based translation between MRI contrasts involves supervised deep models trained using high-quality source- and target-contrast images derived from fully-sampled acquisitions, which might be difficult to collect under limitations on scan costs or time. To facilitate curation of training sets, here we introduce the first semi-supervised model for MRI contrast translation (ssGAN) that can be trained directly using undersampled k-space data. To enable semi-supervised learning on undersampled data, ssGAN introduces novel multi-coil losses in the image, k-space, and adversarial domains. The multi-coil losses are selectively enforced on acquired k-space samples, unlike traditional losses in single-coil synthesis models. Comprehensive experiments on retrospectively undersampled multi-contrast brain MRI datasets are provided. Our results demonstrate that ssGAN performs on par with a supervised model, while outperforming single-coil models trained on coil-combined magnitude images. It also outperforms cascaded reconstruction-synthesis models in which a supervised synthesis model is trained following self-supervised reconstruction of undersampled data. Thus, ssGAN holds great promise to improve the feasibility of learning-based multi-contrast MRI synthesis.
19
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. [DOI: 10.3390/fi14120351]
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized dummy images in brain disease diagnosis. Web of Science and Scopus databases were extensively searched to find relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results. Data extraction is based on related research questions (RQ). This SLR identifies various loss functions used in the above applications and software to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps choose the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
20
Dalmaz O, Yurt M, Cukur T. ResViT: Residual Vision Transformers for Multimodal Medical Image Synthesis. IEEE Trans Med Imaging 2022; 41:2598-2614. [PMID: 35436184] [DOI: 10.1109/tmi.2022.3167808]
Abstract
Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and the realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight-sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI. Our results indicate the superiority of ResViT over competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.
21
Xie H, Lei Y, Wang T, Roper J, Axente M, Bradley JD, Liu T, Yang X. Magnetic resonance imaging contrast enhancement synthesis using cascade networks with local supervision. Med Phys 2022; 49:3278-3287. [PMID: 35229344] [PMCID: PMC11747766] [DOI: 10.1002/mp.15578]
Abstract
PURPOSE Gadolinium-based contrast agents (GBCAs) are widely administered in MR imaging for diagnostic studies and treatment planning. Although GBCAs are generally thought to be safe, various health and environmental concerns have been raised recently about their use in MR imaging. The purpose of this work is to derive synthetic contrast-enhanced MR images from unenhanced counterpart images, thereby eliminating the need for GBCAs, using a cascade deep learning workflow that incorporates contour information into the network. METHODS AND MATERIALS The proposed workflow consists of two sequential networks: (1) a retina U-Net, which is first trained to derive semantic features from the non-contrast MR images representing the tumor regions; and (2) a synthesis module, which is trained after the retina U-Net to take the concatenation of the semantic feature maps and the non-contrast MR image as input and generate the synthetic contrast-enhanced MR images. After network training, only the non-contrast-enhanced MR images are required as input to the proposed workflow. The MR images of 369 patients from the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were used in this study to evaluate the proposed workflow for synthesizing contrast-enhanced MR images (200 patients for five-fold cross-validation and 169 patients for hold-out testing). Quantitative evaluations were conducted by calculating the normalized mean absolute error (NMAE), structural similarity index measurement (SSIM), and Pearson correlation coefficient (PCC). The original contrast-enhanced MR images were considered the ground truth in this analysis. RESULTS The proposed cascade deep learning workflow synthesized contrast-enhanced MR images that are not visually differentiable from the ground truth, both with and without supervision of the tumor contours during network training.
Difference images and profiles of the synthetic contrast-enhanced MR images revealed that intensity differences could be observed in the tumor region if the contour information was not incorporated in network training. Among the hold-out test patients, mean values and standard deviations of the NMAE, SSIM, and PCC were 0.063±0.022, 0.991±0.007, and 0.995±0.006, respectively, for the whole brain; and 0.050±0.025, 0.993±0.008, and 0.999±0.003, respectively, for the tumor contour regions. Quantitative evaluations with five-fold cross-validation and the hold-out test showed that the calculated metrics were significantly enhanced (p-values ≤ 0.002) with tumor contour supervision in network training. CONCLUSION The proposed workflow was able to generate synthetic contrast-enhanced MR images that closely resemble the ground truth images from non-contrast-enhanced MR images when the network training included tumor contours. These results suggest that it may be possible to minimize the use of GBCAs in cranial MR imaging studies.
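The NMAE and PCC values reported above are straightforward to reproduce; a minimal NumPy sketch (our illustration, not the authors' evaluation code):

```python
import numpy as np

def nmae(ref, est):
    """Normalized mean absolute error: MAE scaled by the reference intensity range."""
    return np.mean(np.abs(ref - est)) / (ref.max() - ref.min())

def pcc(ref, est):
    """Pearson correlation coefficient between two images, over all voxels."""
    x = ref.ravel() - ref.mean()
    y = est.ravel() - est.mean()
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))
```

Note that PCC is invariant to any positive linear rescaling of the synthetic image, so a PCC near 1 indicates matching structure even if the absolute intensities differ; NMAE captures the absolute intensity error that PCC ignores.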
Affiliation(s)
- Huiqiao Xie
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Marian Axente
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
22
Jung W, Kim EH, Ko J, Jeong G, Choi MH. Convolutional neural network-based reconstruction for acceleration of prostate T2-weighted MR imaging: a retro- and prospective study. Br J Radiol 2022; 95:20211378. [PMID: 35148172] [PMCID: PMC10993971] [DOI: 10.1259/bjr.20211378]
Abstract
OBJECTIVE The aim of this study was to develop a deep neural network (DNN)-based parallel imaging reconstruction for highly accelerated 2D turbo spin echo (TSE) data in prostate MRI without quality degradation compared to conventional scans. METHODS Data from 155 participants were acquired for training and testing. Two DNN models were generated according to the number of acquisitions (NAQ) of the input images: DNN-N1 for NAQ = 1 and DNN-N2 for NAQ = 2. In the test data, DNN and TSE images were compared using quantitative error metrics. The visual appropriateness of DNN reconstructions of accelerated scans (DNN-N1 and DNN-N2) and of conventional scans (TSE-Conv) was assessed for nine parameters by two radiologists. Lesion detection was evaluated for the DNN reconstructions and TSE-Conv using the Prostate Imaging-Reporting and Data System (PI-RADS). RESULTS The scan time was reduced by 71% at NAQ = 1 and 42% at NAQ = 2. Quantitative evaluation demonstrated better error metrics for DNN images (29-43% lower NRMSE, 4-13% higher structural similarity index, and 2.8-4.8 dB higher peak signal-to-noise ratio; p < 0.001) than for TSE images. In the assessment of visual appropriateness, both radiologists found that DNN-N2 showed better or comparable performance to TSE-Conv in all parameters. For lesion detection, DNN images showed almost perfect agreement (κ > 0.9) with TSE-Conv. CONCLUSIONS DNN-based reconstruction of highly accelerated prostate TSE imaging showed quality comparable to conventional TSE. ADVANCES IN KNOWLEDGE Our framework reduces the scan time of conventional prostate TSE imaging by 42% without sequence modification, revealing great potential for clinical application.
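The NRMSE and PSNR figures above follow common definitions; a minimal sketch, assuming NRMSE is normalized by the reference root-mean-square (normalization conventions vary between papers):

```python
import numpy as np

def nrmse(pred, ref):
    # Root-mean-square error normalized by the reference RMS (one common convention).
    return np.sqrt(np.mean((pred - ref) ** 2)) / np.sqrt(np.mean(ref ** 2))

def psnr(pred, ref):
    # Peak signal-to-noise ratio in dB, with the peak taken as the reference range.
    mse = np.mean((pred - ref) ** 2)
    data_range = ref.max() - ref.min()
    return 10.0 * np.log10(data_range ** 2 / mse)
```

As a sanity check on the reported gains: a 2.8-4.8 dB PSNR improvement corresponds to a 10^(2.8/20) to 10^(4.8/20) (about 1.38x to 1.74x) reduction in RMS error, i.e. roughly 28-43% lower error, consistent with the 29-43% lower NRMSE also reported.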
Affiliation(s)
- Eu Hyun Kim
- Department of Radiology, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, Suwon, Gyeonggi-do, Republic of Korea
- Jingyu Ko
- AIRS Medical, Seoul, Republic of Korea
- Moon Hyung Choi
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
23
Zhang H, Li H, Dillman JR, Parikh NA, He L. Multi-Contrast MRI Image Synthesis Using Switchable Cycle-Consistent Generative Adversarial Networks. Diagnostics (Basel) 2022; 12:816. [PMID: 35453864 PMCID: PMC9026507 DOI: 10.3390/diagnostics12040816] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2022] [Revised: 03/19/2022] [Accepted: 03/24/2022] [Indexed: 02/01/2023] Open
Abstract
Multi-contrast MRI images use different echo and repetition times to highlight different tissues. However, not all desired image contrasts may be available due to scan-time limitations, suboptimal signal-to-noise ratio, and/or image artifacts. Deep learning approaches have brought revolutionary advances in medical image synthesis, enabling the generation of unacquired image contrasts (e.g., T1-weighted MRI images) from available image contrasts (e.g., T2-weighted images). In particular, CycleGAN is an advanced technique for image synthesis using unpaired images. However, it requires two separate image generators, demanding more training resources and computation. Recently, a switchable CycleGAN was proposed to address this limitation and was successfully implemented using CT images. However, it remains unclear whether switchable CycleGAN can be applied to cross-contrast MRI synthesis, and whether it can outperform the original CycleGAN on this task is still an open question. In this paper, we developed a switchable CycleGAN model for image synthesis between multi-contrast brain MRI images using a large set of publicly accessible pediatric structural brain MRI images. We conducted extensive experiments comparing switchable CycleGAN with the original CycleGAN both quantitatively and qualitatively. Experimental results demonstrate that switchable CycleGAN outperforms the original CycleGAN model on pediatric brain MRI image synthesis.
Affiliation(s)
- Huixian Zhang
- Imaging Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Hailong Li
- Imaging Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Center for Artificial Intelligence in Imaging Research, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Center for Prevention of Neurodevelopmental Disorders, Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Jonathan R. Dillman
- Imaging Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Center for Artificial Intelligence in Imaging Research, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH 45229, USA
- Nehal A. Parikh
- Center for Prevention of Neurodevelopmental Disorders, Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH 45229, USA
- Lili He
- Imaging Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Center for Artificial Intelligence in Imaging Research, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Center for Prevention of Neurodevelopmental Disorders, Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH 45229, USA
24
Yurt M, Özbey M, Dar SUH, Tinaz B, Oguz KK, Çukur T. Progressively Volumetrized Deep Generative Models for Data-Efficient Contextual Learning of MR Image Recovery. Med Image Anal 2022; 78:102429. [DOI: 10.1016/j.media.2022.102429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 03/14/2022] [Accepted: 03/18/2022] [Indexed: 10/18/2022]
25
Wei H, Li Z, Wang S, Li R. Undersampled Multi-contrast MRI Reconstruction Based on Double-domain Generative Adversarial Network. IEEE J Biomed Health Inform 2022; 26:4371-4377. [PMID: 35030086 DOI: 10.1109/jbhi.2022.3143104] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Multi-contrast magnetic resonance imaging can provide comprehensive information for clinical diagnosis. However, multi-contrast imaging suffers from long acquisition times, which makes it prohibitive for daily clinical practice. Subsampling k-space is one of the main methods of shortening scan time, but missing k-space samples inevitably lead to serious artifacts and noise. Under the assumption that different contrast modalities share some mutual information, it may be possible to exploit this redundancy to accelerate multi-contrast acquisition. Recently, generative adversarial networks have shown superior performance in image reconstruction and synthesis, and some k-space-based reconstruction studies also outperform conventional state-of-the-art methods. In this study, we propose a cross-domain two-stage generative adversarial network for multi-contrast image reconstruction based on a prior fully sampled contrast and undersampled information. The new approach integrates reconstruction and synthesis: it estimates and completes the missing k-space and then refines the result in image space. It takes fully sampled data from one contrast modality and highly undersampled data from several other modalities as input, and outputs high-quality images for each contrast simultaneously. The network is trained and tested on a public brain dataset from healthy subjects. Quantitative comparisons against baselines clearly indicate that the proposed method can effectively reconstruct undersampled images. Even under high acceleration, the network can still recover texture details and reduce artifacts.
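The retrospective k-space undersampling that such reconstruction networks address can be simulated directly. A minimal numpy sketch under illustrative assumptions (a 2D single-coil image, random phase-encode line selection, a fully sampled central band, and hypothetical choices for the acceleration factor and center fraction); it produces the zero-filled reconstruction whose aliasing artifacts these networks are trained to remove:

```python
import numpy as np

def undersample_kspace(image, accel=4, center_frac=0.08, seed=0):
    # Simulate retrospective Cartesian undersampling of a 2D image:
    # keep random phase-encode lines at rate 1/accel, plus a fully
    # sampled low-frequency band around the k-space center.
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    k = np.fft.fftshift(np.fft.fft2(image))   # centered k-space
    mask = rng.random(ny) < 1.0 / accel       # random phase-encode lines
    nc = max(1, int(center_frac * ny))
    c0 = ny // 2 - nc // 2
    mask[c0:c0 + nc] = True                   # fully sample the center
    k_us = k * mask[:, None]                  # zero out unsampled lines
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k_us)))
    return zero_filled, mask
```

At accel=1 every line is kept and the zero-filled image matches the input; at higher factors the missing lines alias along the phase-encode direction, which is the degradation a reconstruction network must undo.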
26
Ljungberg E, Damestani NL, Wood TC, Lythgoe DJ, Zelaya F, Williams SCR, Solana AB, Barker GJ, Wiesinger F. Silent zero TE MR neuroimaging: Current state-of-the-art and future directions. Prog Nucl Magn Reson Spectrosc 2021; 123:73-93. [PMID: 34078538 PMCID: PMC7616227 DOI: 10.1016/j.pnmrs.2021.03.002] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 03/08/2021] [Accepted: 03/09/2021] [Indexed: 06/12/2023]
Abstract
Magnetic resonance imaging (MRI) scanners produce loud acoustic noise originating from vibrational Lorentz forces induced by rapidly changing currents in the magnetic field gradient coils. Using zero echo time (ZTE) MRI pulse sequences, gradient switching can be reduced to a minimum, which enables near-silent operation. Besides silent MRI, ZTE offers further interesting characteristics, including a nominal echo time of TE = 0 (thus capturing short-lived signals from tissues which are otherwise MR-invisible), 3D radial sampling (providing motion robustness), and ultra-short repetition times (providing fast and efficient scanning). In this work we describe the main concepts behind ZTE imaging with a focus on conceptual understanding of the imaging sequences, relevant acquisition parameters, commonly observed image artefacts, and image contrasts. We further describe a range of methods for anatomical and functional neuroimaging, together with recommendations for successful implementation.
Affiliation(s)
- Emil Ljungberg
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Nikou L Damestani
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Tobias C Wood
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- David J Lythgoe
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Fernando Zelaya
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Steven C R Williams
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Gareth J Barker
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Florian Wiesinger
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom; ASL Europe, GE Healthcare, Munich, Germany
27
Affiliation(s)
- Hugh Harvey
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK
- Eric J Topol
- Scripps Research Translational Institute and the Scripps Research Institute, La Jolla, CA, USA