1
Chen L, Tian X, Wu J, Feng R, Lao G, Zhang Y, Liao H, Wei H. Joint coil sensitivity and motion correction in parallel MRI with a self-calibrating score-based diffusion model. Med Image Anal 2025; 102:103502. [PMID: 40049027] [DOI: 10.1016/j.media.2025.103502]
Abstract
Magnetic Resonance Imaging (MRI) stands as a powerful modality in clinical diagnosis. However, it faces challenges such as long acquisition times and vulnerability to motion-induced artifacts. While many existing motion correction algorithms have shown success, most fail to account for the impact of motion artifacts on coil sensitivity map (CSM) estimation during fast MRI reconstruction. This oversight can lead to significant performance degradation, as errors in the estimated CSMs can propagate and compromise motion correction. In this work, we propose JSMoCo, a novel method for jointly estimating motion parameters and time-varying coil sensitivity maps for under-sampled MRI reconstruction. The joint estimation presents a highly ill-posed inverse problem due to the increased number of unknowns. To address this challenge, we leverage score-based diffusion models as powerful priors and apply MRI physical principles to effectively constrain the solution space. Specifically, we parameterize rigid motion with trainable variables and model CSMs as polynomial functions. A Gibbs sampler is employed to ensure system consistency between the sensitivity maps and the reconstructed images, effectively preventing error propagation from pre-estimated sensitivity maps to the final reconstructed images. We evaluate JSMoCo through 2D and 3D motion correction experiments on a simulated motion-corrupted fastMRI dataset and on real in-vivo brain MRI scans. The results demonstrate that JSMoCo successfully reconstructs high-quality MRI images from under-sampled k-space data, achieving robust motion correction by accurately estimating time-varying coil sensitivities. The code is available at https://github.com/MeijiTian/JSMoCo.
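One ingredient named in the abstract, modelling coil sensitivity maps as polynomial functions, can be sketched in a few lines. The polynomial order, coefficients, coordinate normalization, and image size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def polynomial_csm(coeffs, shape):
    """Evaluate a complex coil sensitivity map modelled as a 2D polynomial.

    coeffs: dict mapping (px, py) exponents to complex coefficients.
    shape:  (ny, nx) image size; coordinates are normalized to [-1, 1].
    """
    y, x = np.meshgrid(np.linspace(-1, 1, shape[0]),
                       np.linspace(-1, 1, shape[1]), indexing="ij")
    csm = np.zeros(shape, dtype=complex)
    for (px, py), c in coeffs.items():
        csm += c * (x ** px) * (y ** py)
    return csm

# illustrative second-order coefficients for one coil
coeffs = {(0, 0): 1.0 + 0.2j, (1, 0): 0.3, (0, 1): -0.2j, (2, 0): -0.1, (0, 2): -0.05}
print(polynomial_csm(coeffs, (64, 64)).shape)   # (64, 64)
```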
Affiliation(s)
- Lixuan Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xuanyu Tian
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China; Lingang Laboratory, Shanghai 200031, China
- Jiangjie Wu
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Ruimin Feng
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Guoyan Lao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yuyao Zhang
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Hongen Liao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hongjiang Wei
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, Shanghai Jiao Tong University, Shanghai, China.
2
Sun Y, Wang L, Li G, Lin W, Wang L. A foundation model for enhancing magnetic resonance images and downstream segmentation, registration and diagnostic tasks. Nat Biomed Eng 2025; 9:521-538. [PMID: 39638876] [DOI: 10.1038/s41551-024-01283-7]
Abstract
In structural magnetic resonance (MR) imaging, motion artefacts, low resolution, imaging noise and variability in acquisition protocols frequently degrade image quality and confound downstream analyses. Here we report a foundation model for the motion correction, resolution enhancement, denoising and harmonization of MR images. Specifically, we trained a tissue-classification neural network to predict tissue labels, which are then leveraged by a 'tissue-aware' enhancement network to generate high-quality MR images. We validated the model's effectiveness on a large and diverse dataset comprising 2,448 deliberately corrupted images and 10,963 images spanning a wide age range (from foetuses to elderly individuals) acquired using a variety of clinical scanners across 19 public datasets. The model consistently outperformed state-of-the-art algorithms in improving the quality of MR images, handling pathological brains with multiple sclerosis or gliomas, generating 7-T-like images from 3 T scans and harmonizing images acquired from different scanners. The high-quality, high-resolution and harmonized images generated by the model can be used to enhance the performance of models for tissue segmentation, registration, diagnosis and other downstream tasks.
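The two-stage idea described above (a tissue classifier whose soft labels condition a 'tissue-aware' enhancement network) can be illustrated with a toy PyTorch sketch. The layer choices, channel counts, and number of tissue classes are assumptions for illustration and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class TissueAwareEnhancer(nn.Module):
    """Toy sketch: a tissue classifier feeds soft tissue labels to an enhancement net."""
    def __init__(self, n_tissues=4):
        super().__init__()
        # stand-ins for the paper's networks (architectures are assumptions)
        self.classifier = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(16, n_tissues, 1))
        self.enhancer = nn.Sequential(nn.Conv2d(1 + n_tissues, 16, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, degraded):                              # degraded: (B, 1, H, W)
        labels = self.classifier(degraded).softmax(dim=1)     # soft tissue probabilities
        return self.enhancer(torch.cat([degraded, labels], dim=1))

x = torch.randn(1, 1, 64, 64)
print(TissueAwareEnhancer()(x).shape)                         # torch.Size([1, 1, 64, 64])
```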
Affiliation(s)
- Yue Sun
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
- Limei Wang
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Li Wang
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
3
Dalboni da Rocha JL, Lai J, Pandey P, Myat PSM, Loschinskey Z, Bag AK, Sitaram R. Artificial Intelligence for Neuroimaging in Pediatric Cancer. Cancers (Basel) 2025; 17:622. [PMID: 40002217] [PMCID: PMC11852968] [DOI: 10.3390/cancers17040622]
Abstract
BACKGROUND/OBJECTIVES: Artificial intelligence (AI) is transforming neuroimaging by enhancing diagnostic precision and treatment planning. However, its applications in pediatric cancer neuroimaging remain limited. This review assesses the current state, potential applications, and challenges of AI in pediatric neuroimaging for cancer, emphasizing the unique needs of the pediatric population.
METHODS: A comprehensive literature review was conducted, focusing on AI's impact on pediatric neuroimaging through accelerated image acquisition, reduced radiation, and improved tumor detection. Key methods include convolutional neural networks for tumor segmentation, radiomics for tumor characterization, and several tools for functional imaging. Challenges such as limited pediatric datasets, developmental variability, ethical concerns, and the need for explainable models were analyzed.
RESULTS: AI has shown significant potential to improve imaging quality, reduce scan times, and enhance diagnostic accuracy in pediatric neuroimaging, resulting in improved accuracy in tumor segmentation and outcome prediction for treatment. However, progress is hindered by the scarcity of pediatric datasets, issues with data sharing, and the ethical implications of applying AI in vulnerable populations.
CONCLUSIONS: To overcome current limitations, future research should focus on building robust pediatric datasets, fostering multi-institutional collaborations for data sharing, and developing interpretable AI models that align with clinical practice and ethical standards. These efforts are essential in harnessing the full potential of AI in pediatric neuroimaging and improving outcomes for children with cancer.
Affiliation(s)
- Josue Luiz Dalboni da Rocha
- Department of Radiology, St. Jude Children’s Research Hospital, Memphis, TN 38105, USA; (J.L.); (P.P.); (P.S.M.M.); (Z.L.); (A.K.B.)
- Jesyin Lai
- Department of Radiology, St. Jude Children’s Research Hospital, Memphis, TN 38105, USA; (J.L.); (P.P.); (P.S.M.M.); (Z.L.); (A.K.B.)
- Pankaj Pandey
- Department of Radiology, St. Jude Children’s Research Hospital, Memphis, TN 38105, USA; (J.L.); (P.P.); (P.S.M.M.); (Z.L.); (A.K.B.)
- Phyu Sin M. Myat
- Department of Radiology, St. Jude Children’s Research Hospital, Memphis, TN 38105, USA; (J.L.); (P.P.); (P.S.M.M.); (Z.L.); (A.K.B.)
- Zachary Loschinskey
- Department of Radiology, St. Jude Children’s Research Hospital, Memphis, TN 38105, USA; (J.L.); (P.P.); (P.S.M.M.); (Z.L.); (A.K.B.)
- Department of Chemical and Biomedical Engineering, University of Missouri-Columbia, Columbia, MO 65211, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Asim K. Bag
- Department of Radiology, St. Jude Children’s Research Hospital, Memphis, TN 38105, USA; (J.L.); (P.P.); (P.S.M.M.); (Z.L.); (A.K.B.)
- Ranganatha Sitaram
- Department of Radiology, St. Jude Children’s Research Hospital, Memphis, TN 38105, USA; (J.L.); (P.P.); (P.S.M.M.); (Z.L.); (A.K.B.)
4
Leukert LS, Heitkötter KH, Kronfeld A, Paul RH, Polak D, Splitthoff DN, Brockmann MA, Altmann S, Othman AE. Clinical Evaluation of 3D Motion-Correction Via Scout Accelerated Motion Estimation and Reduction Framework Versus Conventional T1-Weighted MRI at 1.5 T in Brain Imaging. Invest Radiol 2025:00004424-990000000-00285. [PMID: 39841594] [DOI: 10.1097/rli.0000000000001156]
Abstract
OBJECTIVES: The aim of this study was to investigate the occurrence of motion artifacts and the image quality of T1-weighted brain magnetic resonance imaging (MRI) applying 3D motion correction via the Scout Accelerated Motion Estimation and Reduction (SAMER) framework compared with conventional T1-weighted imaging at 1.5 T.
MATERIALS AND METHODS: A preliminary study involving 14 healthy volunteers assessed the impact of the SAMER framework on induced motion during 3 T MRI scans. Participants performed 3 different motion patterns: (1) step up, (2) controlled breathing, and (3) free motion. The patient study included 82 patients who required clinically indicated MRI scans. 3D T1-weighted images (MPRAGE) were acquired at 1.5 T. The MRI data were reconstructed using either regular product reconstruction (non-Moco) or the 3D motion-correction SAMER framework (SAMER Moco), resulting in 145 image sequences. For both the preliminary and the patient study, 3 experienced radiologists evaluated the image data using a 5-point Likert scale, focusing on overall image quality, artifact presence, diagnostic confidence, delineation of pathology, and image sharpness. Interrater agreement was assessed using Gwet's AC2, and an exploratory analysis (non-Moco vs SAMER Moco) was performed.
RESULTS: Compared with non-Moco, the preliminary study demonstrated significant improvements across all imaging parameters and motion patterns with SAMER Moco (P < 0.001). Odds ratios favoring SAMER Moco were >999.999 for freedom of artifact and overall image quality (P < 0.0001). Excellent or good ratings for freedom of artifact were 52.4% with SAMER Moco, compared with 21.4% for non-Moco. Similarly, 66.7% of SAMER Moco images were rated excellent or good for overall image quality versus 21.4% for non-Moco. Multireader interrater agreement was excellent across all parameters. The patient study confirmed that SAMER Moco provided significantly superior image quality across all evaluated imaging parameters, particularly in the presence of motion (P < 0.001). Diagnostic confidence was rated as excellent or good in 95.1% of SAMER Moco cases, compared with 78.1% for non-Moco cases. Similarly, overall image quality was rated as excellent or good in 89.8% of SAMER Moco cases versus 65.9% for non-Moco cases. The odds ratios for diagnostic confidence and for overall image quality were 6.698 and 6.030, respectively, both favoring SAMER Moco (P < 0.0001). Multireader interrater agreement was excellent across all parameters.
CONCLUSIONS: The application of SAMER to T1-weighted imaging datasets is feasible in clinical routine and significantly increases image quality and diagnostic confidence in 1.5 T brain MRI by effectively reducing motion artifacts.
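For readers unfamiliar with odds ratios, the snippet below computes a naive 2x2 odds ratio from the patient-study "excellent or good" proportions quoted above. The paper's reported odds ratios come from its own statistical analysis, so this only illustrates the arithmetic and is not a reproduction of that analysis.

```python
# Naive 2x2 odds ratio from the overall-image-quality proportions quoted above
# (patient study: 89.8% vs 65.9% rated excellent or good); illustrative only.
p_samer, p_non = 0.898, 0.659
odds_ratio = (p_samer / (1 - p_samer)) / (p_non / (1 - p_non))
print(round(odds_ratio, 1))  # ~4.6; the paper's model-based estimate is 6.030
```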
Affiliation(s)
- Laura S Leukert
- From the Department of Neuroradiology, University Medical Center Mainz, Johannes Gutenberg University, Mainz, Germany (L.S.L., K.H.H., A.K., M.A.B., S.A., A.E.O.); Institute of Medical Biostatistics, Epidemiology, and Informatics, University Medical Center Mainz, Johannes Gutenberg University, Mainz, Germany (R.H.P.); and Siemens Healthineers AG, Forchheim, Germany (D.P., D.N.S.)
5
Jang A, Liu F. POSE: POSition Encoding for accelerated quantitative MRI. Magn Reson Imaging 2024; 114:110239. [PMID: 39276808] [PMCID: PMC11493528] [DOI: 10.1016/j.mri.2024.110239]
Abstract
Quantitative MRI utilizes multiple acquisitions with varying sequence parameters to sufficiently characterize a biophysical model of interest, resulting in undesirably long scan times. Here we propose, validate and demonstrate a new general strategy for accelerating MRI that uses subvoxel shifting as a source of encoding, called POSition Encoding (POSE). The POSE framework applies unique subvoxel shifts along the acquisition-parameter dimension, thereby creating an extra source of encoding. Combined with a biophysical signal model of interest, accelerated and enhanced-resolution maps of biophysical parameters are obtained. This has been validated and demonstrated through numerical Bloch equation simulations, phantom experiments and in vivo experiments, using the variable flip angle signal model in 3D acquisitions as an application example. Monte Carlo simulations were performed using in vivo data to investigate our method's noise performance. POSE quantification results from numerical Bloch equation simulations of both a numerical phantom and a realistic digital brain phantom concur well with the reference method, validating our method both theoretically and for realistic situations. NIST phantom experiment results show excellent overall agreement with the reference method, confirming our method's applicability for a wide range of T1 values. In vivo results not only exhibit good agreement with the reference method, but also show g-factors that significantly outperform conventional parallel imaging methods with identical acceleration. Furthermore, our results show that POSE can be combined with parallel imaging to further accelerate acquisitions while maintaining superior noise performance over parallel imaging that uses lower acceleration factors.
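The variable flip angle application mentioned above rests on the standard spoiled gradient-echo signal model; a minimal sketch is given below. The TR, T1, and flip angles are chosen purely for illustration and are not the paper's protocol values.

```python
import numpy as np

def spgr_signal(flip_deg, tr, t1, m0=1.0):
    """Steady-state spoiled gradient-echo signal used in variable-flip-angle T1 mapping."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

# two flip angles at TR = 15 ms for a tissue with T1 = 1000 ms (illustrative values)
print(spgr_signal(np.array([4.0, 18.0]), tr=15.0, t1=1000.0))
```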
Affiliation(s)
- Albert Jang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Fang Liu
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States.
6
Hewlett M, Petrov I, Johnson PM, Drangova M. Deep-learning-based motion correction using multichannel MRI data: a study using simulated artifacts in the fastMRI dataset. NMR Biomed 2024; 37:e5179. [PMID: 38808752] [DOI: 10.1002/nbm.5179]
Abstract
Deep learning presents a generalizable solution for motion correction, requiring no pulse sequence modifications or additional hardware, but previous networks have all been applied to coil-combined data. Multichannel MRI data provide a degree of spatial encoding that may be useful for motion correction. We hypothesized that incorporating deep learning for motion correction prior to coil combination would improve results. A conditional generative adversarial network was trained using simulated rigid motion artifacts in brain images acquired at multiple sites with multiple contrasts (not limited to healthy subjects). We compared the performance of deep-learning-based motion correction on individual channel images (single-channel model) with that performed after coil combination (channel-combined model). We also investigated simultaneous motion correction of all channel data from an image volume (multichannel model). The single-channel model significantly (p < 0.0001) improved mean absolute error, with an average 50.9% improvement compared with the uncorrected images. This was significantly (p < 0.0001) better than the 36.3% improvement achieved by the channel-combined model (the conventional approach). The multichannel model provided no significant improvement in quantitative measures of image quality compared with the uncorrected images. Results were independent of the presence of pathology and generalizable to a new center unseen during training. Performing motion correction on single-channel images prior to coil combination improved performance compared with conventional deep-learning-based motion correction. Improved deep learning methods for retrospective correction of motion-affected MR images could reduce the need for repeat scans if applied in a clinical setting.
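Since the comparison hinges on where coil combination happens in the pipeline, the standard root-sum-of-squares combination of complex multichannel images is sketched below; array sizes are illustrative and the authors' network itself is not reproduced.

```python
import numpy as np

def rss_combine(channel_images, axis=0):
    """Root-sum-of-squares combination of complex multichannel (coil) images."""
    return np.sqrt(np.sum(np.abs(channel_images) ** 2, axis=axis))

coils = np.random.randn(8, 64, 64) + 1j * np.random.randn(8, 64, 64)  # 8 synthetic coil images
print(rss_combine(coils).shape)   # (64, 64)
```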
Affiliation(s)
- Miriam Hewlett
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Department of Medical Biophysics, The University of Western Ontario, London, Ontario, Canada
- Ivailo Petrov
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Patricia M Johnson
- Department of Radiology, New York University School of Medicine, New York, New York, USA
- Maria Drangova
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Department of Medical Biophysics, The University of Western Ontario, London, Ontario, Canada
7
Levac B, Kumar S, Jalal A, Tamir JI. Accelerated motion correction with deep generative diffusion models. Magn Reson Med 2024; 92:853-868. [PMID: 38688874] [DOI: 10.1002/mrm.30082]
Abstract
PURPOSE: The aim of this work is to develop a method to solve the ill-posed inverse problem of accelerated image reconstruction while correcting forward model imperfections in the context of subject motion during MRI examinations.
METHODS: The proposed solution uses a Bayesian framework based on deep generative diffusion models to jointly estimate a motion-free image and rigid motion parameters from subsampled and motion-corrupt two-dimensional (2D) k-space data.
RESULTS: We demonstrate the ability to reconstruct motion-free images from accelerated 2D Cartesian and non-Cartesian scans without any external reference signal. We show that our method improves over existing correction techniques on both simulated and prospectively accelerated data.
CONCLUSION: We propose a flexible framework for retrospective motion correction of accelerated MRI based on deep generative diffusion models, with potential application to other forward model corruptions.
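To make the forward model concrete, the following numpy/scipy sketch simulates shot-wise rigid motion corruption of 2D Cartesian k-space, where each shot samples the Fourier transform of a rotated and translated copy of the object. The shot grouping, motion values, and toy object are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def corrupt_kspace(img, shot_rows, angles_deg, shifts_px):
    """Shot-wise rigid-motion corruption: each shot's phase-encode rows come from
    the k-space of a rotated and translated copy of the image."""
    ny, nx = img.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    kspace = np.zeros((ny, nx), dtype=complex)
    for rows, angle, (dy, dx) in zip(shot_rows, angles_deg, shifts_px):
        moved = rotate(img, angle, reshape=False, order=1)          # in-plane rotation
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))           # translation = linear phase
        kspace[rows, :] = (np.fft.fft2(moved) * phase)[rows, :]
    return kspace

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0                    # toy object
shot_rows = [np.arange(0, 64, 2), np.arange(1, 64, 2)]               # two interleaved shots
k = corrupt_kspace(img, shot_rows, angles_deg=[0.0, 5.0], shifts_px=[(0, 0), (2.0, -1.0)])
artifacted = np.abs(np.fft.ifft2(k))                                 # ghosted reconstruction
```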
Affiliation(s)
- Brett Levac
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Sidharth Kumar
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Ajil Jalal
- Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- Jonathan I Tamir
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
8
Chen X, Wu W, Chiew M. Motion compensated structured low-rank reconstruction for 3D multi-shot EPI. Magn Reson Med 2024; 91:2443-2458. [PMID: 38361309] [DOI: 10.1002/mrm.30019]
Abstract
PURPOSE: 3D multi-shot EPI offers several benefits compared to 2D single-shot EPI, including higher SNR and high isotropic resolution. However, it suffers from shot-to-shot inconsistencies arising from physiologically induced phase variations and bulk motion. This work proposes a motion-compensated structured low-rank (mcSLR) reconstruction method to address both issues for 3D multi-shot EPI.
METHODS: Structured low-rank reconstruction has been used successfully in previous work to deal with inter-shot phase variations in 3D multi-shot EPI imaging. It circumvents the estimation of phase variations by reconstructing an individual image for each phase state; these are then sum-of-squares combined, exploiting their linear interdependency encoded in structured low-rank constraints. However, structured low-rank constraints become less effective in the presence of inter-shot motion, which corrupts image magnitude consistency and invalidates the linear relationship between shots. Thus, this work jointly models inter-shot phase variations and motion corruption by incorporating rigid motion compensation into the structured low-rank reconstruction, where motion estimates are obtained in a fully data-driven way without relying on external hardware or imaging navigators.
RESULTS: Simulation and in vivo experiments at 7 T demonstrated that the mcSLR method can effectively reduce image artifacts and improve the robustness of 3D multi-shot EPI, outperforming existing methods that only address inter-shot phase variations or motion, but not both.
CONCLUSION: The proposed mcSLR reconstruction compensates for rigid motion and thus improves the validity of structured low-rank constraints, resulting in improved robustness of 3D multi-shot EPI to both inter-shot motion and phase variations.
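As a highly simplified illustration of the low-rank idea the method builds on, the snippet below stacks per-shot images into a Casorati matrix and truncates its SVD to enforce linear dependency across shots. The paper's structured (Hankel-type) low-rank constraint and motion compensation are more involved, so treat this as a conceptual sketch only.

```python
import numpy as np

def truncate_shot_rank(shot_images, rank):
    """Project per-shot images onto a rank-`rank` subspace across the shot dimension."""
    n_shots = shot_images.shape[0]
    casorati = shot_images.reshape(n_shots, -1)          # (n_shots, n_voxels)
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s[rank:] = 0.0                                       # keep only the dominant components
    return (u @ np.diag(s) @ vh).reshape(shot_images.shape)

shots = np.random.randn(4, 32, 32) + 1j * np.random.randn(4, 32, 32)   # toy 4-shot data
low_rank_shots = truncate_shot_rank(shots, rank=1)
```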
Affiliation(s)
- Xi Chen
- Department of Radiological Sciences, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Wenchuan Wu
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, Oxfordshire, UK
- Mark Chiew
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, Oxfordshire, UK
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
9
Safari M, Yang X, Chang CW, Qiu RLJ, Fatemi A, Archambault L. Unsupervised MRI motion artifact disentanglement: introducing MAUDGAN. Phys Med Biol 2024; 69:115057. [PMID: 38714192] [DOI: 10.1088/1361-6560/ad4845]
Abstract
Objective: This study developed an unsupervised motion artifact reduction method for magnetic resonance imaging (MRI) images of patients with brain tumors. The proposed novel design uses multi-parametric, multicenter contrast-enhanced T1W (ceT1W) and T2-FLAIR MRI images.
Approach: The proposed framework included two generators, two discriminators, and two feature extractor networks. A 3-fold cross-validation was used to train and fine-tune the hyperparameters of the proposed model using 230 brain MRI images with tumors, which were then tested on 148 patients' in-vivo datasets. An ablation study was performed to evaluate the model's components. Our model was compared with Pix2pix and CycleGAN. Six evaluation metrics were reported, including normalized mean squared error (NMSE), structural similarity index (SSIM), multi-scale SSIM (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multi-scale gradient magnitude similarity deviation (MS-GMSD). Artifact reduction and consistency of tumor regions, image contrast, and sharpness were evaluated by three evaluators using Likert scales and compared with ANOVA and Tukey's HSD tests.
Main results: On average, our method outperforms the comparative models in removing heavy motion artifacts, with the lowest NMSE (18.34 ± 5.07%) and MS-GMSD (0.07 ± 0.03) at the heavy motion-artifact level. Additionally, our method creates motion-free images with the highest SSIM (0.93 ± 0.04), PSNR (30.63 ± 4.96), and VIF (0.45 ± 0.05) values, along with comparable MS-SSIM (0.96 ± 0.31). Similarly, our method outperformed the comparative models in removing in-vivo motion artifacts at different distortion levels, except for MS-SSIM and VIF, for which performance was comparable with CycleGAN. Moreover, our method performed consistently across artifact levels. For the heavy level of motion artifacts, our method, CycleGAN, and Pix2pix received Likert scores of 2.82 ± 0.52, 1.88 ± 0.71, and 1.02 ± 0.14, respectively (p-values ≪ 0.0001), with our method scoring highest. Similar trends were found for the other motion-artifact levels.
Significance: The proposed unsupervised method was demonstrated to reduce motion artifacts from ceT1W brain images under a multi-parametric framework.
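Two of the metrics reported above are easy to state exactly; the snippet below gives one common definition of NMSE (as a percentage of reference energy) and of PSNR. Normalization conventions for NMSE vary between papers, so this matches the general idea rather than the authors' exact implementation.

```python
import numpy as np

def nmse_percent(ref, est):
    """Normalized mean squared error, expressed as a percentage of reference energy."""
    return 100.0 * np.sum(np.abs(ref - est) ** 2) / np.sum(np.abs(ref) ** 2)

def psnr(ref, est, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = np.abs(ref).max() - np.abs(ref).min()
    mse = np.mean(np.abs(ref - est) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.random.rand(64, 64)
est = ref + 0.05 * np.random.randn(64, 64)
print(nmse_percent(ref, est), psnr(ref, est))
```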
Affiliation(s)
- Mojtaba Safari
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, MS, United States of America
- Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, MS, United States of America
- Louis Archambault
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
10
Jiang N, Zhang Y, Li Q, Fu X, Fang D. A cardiac MRI motion artifact reduction method based on edge enhancement network. Phys Med Biol 2024; 69:095004. [PMID: 38537303] [DOI: 10.1088/1361-6560/ad3884]
Abstract
Cardiac magnetic resonance imaging (MRI) usually requires a long acquisition time, and patient movement during acquisition produces image artifacts. Previous studies have shown that clear MR image texture edges are of great significance for pathological diagnosis. In this paper, a motion artifact reduction method for cardiac MRI based on an edge enhancement network is proposed. First, the four-plane normal-vector adaptive fractional differential mask is applied to extract the edge features of blurred images; the four-plane normal-vector method reduces the noise information in the edge feature maps. The adaptive fractional order is selected according to the normal mean gradient and the local Gaussian curvature entropy of the images. Second, the extracted edge feature maps and the blurred images are input into the de-artifact network. In this network, an edge fusion feature extraction network and an edge fusion transformer network are specially designed. The former combines the edge feature maps with the fuzzy feature maps to extract edge feature information. The latter combines the edge attention network and the fuzzy attention network, which can focus on the blurred image edges. Finally, extensive experiments show that the proposed method obtains higher peak signal-to-noise ratio and structural similarity index measure than state-of-the-art methods, and the de-artifacted images have clear texture edges.
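Fractional differential masks of the kind mentioned above are typically built from Grünwald-Letnikov coefficients; the snippet below computes those coefficients for a given fractional order. The paper's four-plane normal-vector construction and adaptive order selection are not reproduced, so this shows only the basic building block.

```python
import numpy as np

def gl_coefficients(order, n_taps):
    """Grunwald-Letnikov fractional-difference coefficients (-1)^k * C(order, k)."""
    w = np.empty(n_taps)
    w[0] = 1.0
    for k in range(1, n_taps):
        w[k] = w[k - 1] * (1.0 - (order + 1.0) / k)   # recursive binomial update
    return w

print(gl_coefficients(0.5, 5))   # [1., -0.5, -0.125, -0.0625, -0.0390625]
```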
Affiliation(s)
- Nanhe Jiang
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, Hebei, People's Republic of China
- Yucun Zhang
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, Hebei, People's Republic of China
- Qun Li
- School of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, Hebei, People's Republic of China
- Xianbin Fu
- Hebei University of Environmental Engineering, Qinhuangdao, 066102, Hebei, People's Republic of China
- Dongqing Fang
- Capital Aerospace Machinery Co, Ltd, Fengtai, 100076, Beijing, People's Republic of China
11
Safari M, Yang X, Fatemi A, Archambault L. MRI motion artifact reduction using a conditional diffusion probabilistic model (MAR-CDPM). Med Phys 2024; 51:2598-2610. [PMID: 38009583] [DOI: 10.1002/mp.16844]
Abstract
BACKGROUND: High-resolution magnetic resonance imaging (MRI) with excellent soft-tissue contrast is a valuable tool for diagnosis and prognosis. However, MRI sequences with long acquisition times are susceptible to motion artifacts, which can adversely affect the accuracy of post-processing algorithms.
PURPOSE: This study proposes a novel retrospective motion correction method named "motion artifact reduction using conditional diffusion probabilistic model" (MAR-CDPM). MAR-CDPM aims to remove motion artifacts from a multicenter three-dimensional contrast-enhanced T1 magnetization-prepared rapid acquisition gradient echo (3D ceT1 MPRAGE) brain dataset containing different brain tumor types.
MATERIALS AND METHODS: This study employed two publicly accessible MRI datasets: one containing 3D ceT1 MPRAGE and 2D T2-fluid attenuated inversion recovery (FLAIR) images from 230 patients with diverse brain tumors, and the other comprising 3D T1-weighted (T1W) MRI images of 148 healthy volunteers that included real motion artifacts. The former was used to train and evaluate the model on in silico data, and the latter was used to evaluate the model's ability to remove real motion artifacts. Motion was simulated in the k-space domain to generate an in silico dataset with minor, moderate, and heavy distortion levels. The diffusion process of MAR-CDPM was implemented in k-space to convert structured data into Gaussian noise by gradually increasing the motion artifact level. A conditional network with a Unet backbone was trained to reverse the diffusion process and convert the distorted images back to structured data. MAR-CDPM was trained in two scenarios: one conditioned on the time step t of the diffusion process, and the other conditioned on both t and T2-FLAIR images. MAR-CDPM was quantitatively and qualitatively compared with supervised Unet, Unet conditioned on T2-FLAIR, CycleGAN, Pix2pix, and Pix2pix conditioned on T2-FLAIR models. To quantify the spatial distortions and the level of remaining motion artifacts after applying the models, quantitative metrics were reported, including normalized mean squared error (NMSE), structural similarity index (SSIM), multiscale structural similarity index (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multiscale gradient magnitude similarity deviation (MS-GMSD). Tukey's Honestly Significant Difference multiple comparison test was employed to quantify the differences between the models, where p-value < 0.05 was considered statistically significant.
RESULTS: Qualitatively, MAR-CDPM outperformed these methods in preserving soft-tissue contrast and different brain regions. It also successfully preserved tumor boundaries under heavy motion artifacts, like the supervised method. MAR-CDPM recovered motion-free in silico images with the highest PSNR and VIF at all distortion levels, with statistically significant differences (p-values < 0.05). In addition, the method conditioned on t and T2-FLAIR outperformed the other methods (p-values < 0.05) in removing motion artifacts from the in silico dataset in terms of NMSE, MS-SSIM, SSIM, and MS-GMSD. Moreover, the method conditioned on t only outperformed the generative models (p-values < 0.05) and performed comparably to the supervised model (p-values > 0.05) in removing real motion artifacts.
CONCLUSIONS: MAR-CDPM could successfully remove motion artifacts from 3D ceT1 MPRAGE. It is particularly beneficial for elderly patients who may experience involuntary movements during high-resolution MRI with long acquisition times.
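For orientation, the snippet below shows the textbook Gaussian forward step of a diffusion probabilistic model, x_t = sqrt(alpha-bar_t) * x_0 + sqrt(1 - alpha-bar_t) * noise. The paper applies its diffusion in k-space with motion-artifact-driven corruption, so the noise schedule and data here are illustrative assumptions only.

```python
import numpy as np

def ddpm_forward_sample(x0, t, betas, rng):
    """Sample x_t from q(x_t | x_0) = N(sqrt(alphabar_t) x_0, (1 - alphabar_t) I)."""
    alphabar = np.cumprod(1.0 - betas)
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphabar[t]) * x0 + np.sqrt(1.0 - alphabar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # common linear noise schedule (assumed)
x0 = rng.random((64, 64))               # stand-in for a clean image (or k-space slice)
x500, eps = ddpm_forward_sample(x0, t=500, betas=betas, rng=rng)
```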
Affiliation(s)
- Mojtaba Safari
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, Mississippi, USA
- Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, Mississippi, USA
- Louis Archambault
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
12
Wu B, Li C, Zhang J, Lai H, Feng Q, Huang M. Unsupervised dual-domain disentangled network for removal of rigid motion artifacts in MRI. Comput Biol Med 2023; 165:107373. [PMID: 37611424] [DOI: 10.1016/j.compbiomed.2023.107373]
Abstract
Motion artifacts in magnetic resonance imaging (MRI) have long been a serious issue because they can affect subsequent diagnosis and treatment. Supervised deep learning methods have been investigated for the removal of motion artifacts; however, they require paired data that are difficult to obtain in clinical settings. Although unsupervised methods have been widely proposed to make full use of unpaired clinical data, they generally focus on the anatomical structure captured in the spatial domain while ignoring the phase errors (deviations or inaccuracies in phase information, possibly caused by rigid motion during image acquisition) available in the frequency domain. In this study, a 2D unsupervised deep learning method named unsupervised disentangled dual-domain network (UDDN) was proposed to effectively disentangle and remove unwanted rigid motion artifacts from images. In UDDN, a dual-domain encoding module was presented to capture different types of information from the spatial and frequency domains and thereby enrich the available information. Moreover, a cross-domain attention fusion module was proposed to effectively fuse information from the different domains, reduce information redundancy, and improve the performance of motion artifact removal. UDDN was validated on a publicly available dataset and a clinical dataset. Qualitative and quantitative experimental results showed that our method could effectively remove motion artifacts and reconstruct image details. Moreover, the performance of UDDN surpasses that of several state-of-the-art unsupervised methods and is comparable with that of a supervised method. Therefore, our method has great potential for clinical application in MRI, such as real-time removal of rigid motion artifacts.
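A rough way to picture the dual-domain idea is to pair a spatial-domain view of an image with magnitude and phase views of its spectrum, as in the toy sketch below. This channel stacking is an illustrative assumption and is not the paper's encoding module.

```python
import numpy as np

def dual_domain_stack(image):
    """Stack a spatial-domain image with log-magnitude and phase of its 2D spectrum."""
    k = np.fft.fftshift(np.fft.fft2(image))
    return np.stack([image, np.log1p(np.abs(k)), np.angle(k)], axis=0)   # (3, H, W)

features = dual_domain_stack(np.random.rand(64, 64))
print(features.shape)   # (3, 64, 64)
```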
Affiliation(s)
- Boya Wu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
- Caixia Li
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China.
- Jiawei Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
- Haoran Lai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
13
Polak D, Hossbach J, Splitthoff DN, Clifford B, Lo WC, Tabari A, Lang M, Huang SY, Conklin J, Wald LL, Cauley S. Motion guidance lines for robust data consistency-based retrospective motion correction in 2D and 3D MRI. Magn Reson Med 2023; 89:1777-1790. [PMID: 36744619] [PMCID: PMC10518424] [DOI: 10.1002/mrm.29534]
Abstract
PURPOSE: To develop a robust retrospective motion-correction technique based on repeating k-space guidance lines for improving motion correction in Cartesian 2D and 3D brain MRI.
METHODS: The motion guidance lines are inserted into the standard sequence orderings for 2D turbo spin echo and 3D MPRAGE to inform a data-consistency-based motion estimation and reconstruction, which can be guided by a low-resolution scout. The small number of required guidance lines is repeated during each echo train and discarded in the final image reconstruction. Thus, integration within a standard k-space acquisition ordering ensures the expected image quality/contrast and motion sensitivity of that sequence.
RESULTS: Through simulation and in vivo 2D multislice and 3D motion experiments, we demonstrate that 2 or 4 optimized motion guidance lines per shot, respectively, enable accurate motion estimation and correction. Clinically acceptable reconstruction times are achieved through fully separable on-the-fly motion optimizations (~1 s/shot) using standard scanner GPU hardware.
CONCLUSION: The addition of guidance lines to scout-accelerated motion estimation facilitates robust retrospective motion correction that can be introduced effectively without perturbing standard clinical protocols and workflows.
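The central observation, that the same guidance lines are re-acquired in every echo train so inconsistency among the repeats reveals motion, can be illustrated with a toy check. This is not the paper's estimator (which optimizes rigid-motion parameters against a scout-informed data-consistency cost); the array shapes and the injected phase change are assumptions for illustration.

```python
import numpy as np

def guidance_line_inconsistency(guidance):
    """Per-shot squared deviation of repeated k-space guidance lines from their
    across-shot mean; large values flag shots acquired after the head moved.
    guidance: complex array of shape (n_shots, n_lines, n_readout)."""
    mean_lines = guidance.mean(axis=0, keepdims=True)
    return np.sum(np.abs(guidance - mean_lines) ** 2, axis=(1, 2))

rng = np.random.default_rng(1)
repeats = np.tile(rng.standard_normal((1, 2, 32)) + 1j * rng.standard_normal((1, 2, 32)), (6, 1, 1))
repeats[4] *= np.exp(1j * 0.3)          # shot 4 acquired with a phase change (toy "motion")
print(np.argmax(guidance_line_inconsistency(repeats)))   # 4
```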
Affiliation(s)
- Daniel Polak
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Siemens Healthcare GmbH, Erlangen, Germany
- Azadeh Tabari
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Min Lang
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Susie Y. Huang
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- John Conklin
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Lawrence L. Wald
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Stephen Cauley
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
14
Cruz G, Hammernik K, Kuestner T, Velasco C, Hua A, Ismail TF, Rueckert D, Botnar RM, Prieto C. Single-heartbeat cardiac cine imaging via jointly regularized nonrigid motion-corrected reconstruction. NMR Biomed 2023; 36:e4942. [PMID: 36999225] [PMCID: PMC10909414] [DOI: 10.1002/nbm.4942]
Abstract
The aim of the current study was to develop a novel approach for 2D breath-hold cardiac cine imaging from a single heartbeat, by combining cardiac motion-corrected reconstructions and nonrigidly aligned patch-based regularization. Conventional cardiac cine imaging is obtained via motion-resolved reconstructions of data acquired over multiple heartbeats. Here, we achieve single-heartbeat cine imaging by incorporating nonrigid cardiac motion correction into the reconstruction of each cardiac phase, in conjunction with a motion-aligned patch-based regularization. The proposed Motion-Corrected CINE (MC-CINE) incorporates all acquired data into the reconstruction of each (motion-corrected) cardiac phase, resulting in a better posed problem than motion-resolved approaches. MC-CINE was compared with iterative sensitivity encoding (itSENSE) and Extra-Dimensional Golden Angle Radial Sparse Parallel (XD-GRASP) in 14 healthy subjects in terms of image sharpness, reader scoring (range: 1-5) and reader ranking (range: 1-9) of image quality, and single-slice left ventricular assessment. MC-CINE was significantly superior to both itSENSE and XD-GRASP using 20 heartbeats, two heartbeats, and one heartbeat. Iterative SENSE, XD-GRASP, and MC-CINE achieved a sharpness of 74%, 74%, and 82% using 20 heartbeats, and 53%, 66%, and 82% with one heartbeat, respectively. The corresponding results for reader scoring were 4.0, 4.7, and 4.9 with 20 heartbeats, and 1.1, 3.0, and 3.9 with one heartbeat. The corresponding results for reader ranking were 5.3, 7.3, and 8.6 with 20 heartbeats, and 1.0, 3.2, and 5.4 with one heartbeat. MC-CINE using a single heartbeat presented nonsignificant differences in image quality to itSENSE with 20 heartbeats. MC-CINE and XD-GRASP at one heartbeat both presented a nonsignificant negative bias of less than 2% in ejection fraction relative to the reference itSENSE. It was concluded that the proposed MC-CINE significantly improves image quality relative to itSENSE and XD-GRASP, enabling 2D cine from a single heartbeat.
Affiliation(s)
- Gastao Cruz
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Kerstin Hammernik
- Department of Computing, Imperial College London, London, UK
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Thomas Kuestner
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Medical Image and Data Analysis, Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Tübingen, Germany
- Carlos Velasco
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Alina Hua
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Tevfik Fehmi Ismail
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Daniel Rueckert
- Department of Computing, Imperial College London, London, UK
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Rene Michael Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering, Santiago, Chile
- Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering, Santiago, Chile
15
Lang M, Tabari A, Polak D, Ford J, Clifford B, Lo WC, Manzoor K, Splitthoff DN, Wald LL, Rapalino O, Schaefer P, Conklin J, Cauley S, Huang SY. Clinical Evaluation of Scout Accelerated Motion Estimation and Reduction Technique for 3D MR Imaging in the Inpatient and Emergency Department Settings. AJNR Am J Neuroradiol 2023; 44:125-133. [PMID: 36702502] [PMCID: PMC9891324] [DOI: 10.3174/ajnr.a7777]
Abstract
BACKGROUND AND PURPOSE: A scout accelerated motion estimation and reduction (SAMER) framework has been developed for efficient retrospective motion correction. The goal of this study was to perform an initial evaluation of SAMER in a series of clinical brain MR imaging examinations.
MATERIALS AND METHODS: Ninety-seven patients who underwent MR imaging in the inpatient and emergency department settings were included in the study. SAMER motion correction was retrospectively applied to an accelerated T1-weighted MPRAGE sequence that was included in brain MR imaging examinations performed with and without contrast. Two blinded neuroradiologists graded images with and without SAMER motion correction on a 5-tier motion severity scale (none = 1, minimal = 2, mild = 3, moderate = 4, severe = 5).
RESULTS: The median SAMER reconstruction time was 1 minute 47 seconds. SAMER motion correction significantly improved overall motion grades across all examinations (P < .005). Motion artifacts were reduced in 28% of cases, unchanged in 64% of cases, and increased in 8% of cases. SAMER improved motion grades in 100% of moderate motion cases and 75% of severe motion cases. Sixty-nine percent of nondiagnostic motion cases (grades 4 and 5) were considered diagnostic after SAMER motion correction. For cases with minimal or no motion, SAMER had negligible impact on the overall motion grade. For cases with mild, moderate, and severe motion, SAMER improved the motion grade by an average of 0.3 (SD, 0.5), 1.1 (SD, 0.3), and 1.1 (SD, 0.8) grades, respectively.
CONCLUSIONS: SAMER improved the diagnostic image quality of clinical brain MR imaging examinations with motion artifacts. The improvement was most pronounced for cases with moderate or severe motion.
Affiliation(s)
- M Lang
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- A Tabari
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- D Polak
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Siemens Healthcare GmbH (D.P., D.N.S.), Erlangen, Germany
- J Ford
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- B Clifford
- Siemens Medical Solutions (B.C., W.-C.L.), Boston, Massachusetts
- W-C Lo
- Siemens Medical Solutions (B.C., W.-C.L.), Boston, Massachusetts
- K Manzoor
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- D N Splitthoff
- Siemens Healthcare GmbH (D.P., D.N.S.), Erlangen, Germany
- L L Wald
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- Harvard-MIT Health Sciences and Technology (L.L.W.), Massachusetts Institute of Technology, Cambridge, Massachusetts
- O Rapalino
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- P Schaefer
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- J Conklin
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- S Cauley
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
- S Y Huang
- From the Department of Radiology (M.L., A.T., D.P., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School (M.L., A.T., J.F., K.M., L.L.W., O.R., P.S., J.C., S.C., S.Y.H.), Boston, Massachusetts
16
Velikina JV, Jung Y, Field AS, Samsonov AA. High-resolution dynamic susceptibility contrast perfusion imaging using higher-order temporal smoothness regularization. Magn Reson Med 2023; 89:112-127. [PMID: 36198002] [PMCID: PMC9617779] [DOI: 10.1002/mrm.29425]
Abstract
PURPOSE: To improve the image quality and resolution of dynamic susceptibility contrast perfusion-weighted imaging (DSC-PWI) by developing acquisition and reconstruction methods that exploit the temporal regularity of the DSC-PWI signal.
THEORY AND METHODS: A novel regularized reconstruction is proposed that recovers DSC-PWI series from an interleaved segmented spiral k-space acquisition using higher-order temporal smoothness (HOTS) properties of the DSC-PWI signal. The HOTS regularization is designed to tackle the representational insufficiency of standard first-order temporal regularizations for supporting higher accelerations. The higher accelerations allow k-space coverage with shorter spiral interleaves, resulting in an improved acquisition point spread function, and acquisition of images at multiple TEs for more accurate DSC-PWI analysis.
RESULTS: The methods were evaluated in simulated and in-vivo studies. HOTS regularization provided increasingly more accurate models for DSC-PWI than the standard first-order methods with either quadratic or robust norms, at the expense of increased noise. HOTS DSC-PWI optimized for noise and accuracy demonstrated significant advantages over both spiral DSC-PWI without temporal regularization and traditional echo-planar DSC-PWI, improving resolution and mitigating image artifacts associated with long readouts, including blurring and geometric distortions. In the context of multi-echo DSC-PWI, the novel methods allowed an approximately 4.3-fold decrease in voxel volume while providing twice the number of TEs compared with previously published results.
CONCLUSIONS: The proposed HOTS reconstruction combined with dynamic spiral sampling represents a valid mechanism for improving the image quality and resolution of DSC-PWI significantly beyond those available with established fast imaging techniques.
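A higher-order temporal smoothness penalty can be illustrated with a small finite-difference construction; the snippet below builds an n-th-order temporal difference operator and evaluates a quadratic penalty on one voxel's time course. The order, norm, and weighting are illustrative and do not reproduce the paper's exact formulation.

```python
import numpy as np

def temporal_difference_operator(n_frames, order):
    """n-th order finite-difference matrix acting along the temporal dimension."""
    d = np.eye(n_frames)
    for _ in range(order):
        d = np.diff(d, axis=0)        # each pass raises the difference order by one
    return d                           # shape: (n_frames - order, n_frames)

D2 = temporal_difference_operator(n_frames=12, order=2)
timecourse = np.random.randn(12)                  # one voxel's dynamic signal (toy data)
penalty = np.sum((D2 @ timecourse) ** 2)          # quadratic higher-order smoothness term
```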
Affiliation(s)
- Julia V. Velikina
- Department of RadiologyUniversity of Wisconsin‐Madison
MadisonWisconsinUSA
| | - Youngkyoo Jung
- Department of RadiologyUniversity of California‐DavisDavisCaliforniaUSA
| | - Aaron S. Field
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
| | - Alexey A. Samsonov
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
| |
Collapse
|
17
|
Devi S, Bakshi S, Sahoo MN. Effect of situational and instrumental distortions on the classification of brain MR images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
18
|
Hossbach J, Splitthoff DN, Cauley S, Clifford B, Polak D, Lo WC, Meyer H, Maier A. Deep learning-based motion quantification from k-space for fast model-based magnetic resonance imaging motion correction. Med Phys 2022; 50:2148-2161. [PMID: 36433748 DOI: 10.1002/mp.16119] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 10/19/2022] [Accepted: 10/21/2022] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND Intra-scan rigid-body motion is a costly and ubiquitous problem in clinical magnetic resonance imaging (MRI) of the head. PURPOSE State-of-the-art methods for retrospective motion correction in MRI are often computationally expensive or, in the case of image-to-image deep learning (DL)-based methods, can be prone to undesired alterations of the image ('hallucinations'). In this work we introduce a novel rigid-body motion correction method which combines the advantages of classical model-driven and data-consistency (DC) preserving approaches with a novel DL algorithm, to provide fast and robust retrospective motion correction. METHODS The proposed Motion Parameter Estimating Densenet (MoPED) retrospectively estimates subject head motion during MRI acquisitions using a DL network with DenseBlocks and multitask learning. It quantifies the 2D rigid in-plane motion parameters slice-wise for each echo train (ET) of a Cartesian T2-weighted 2D Turbo-Spin-Echo sequence. The network receives a center patch of the motion-corrupted k-space as well as an additional motion-free low-resolution reference scan to provide the ground truth orientation. The supervised training utilizes motion simulations based on 28 acquisitions with subject-wise training, validation, and test data splits of 70%, 23%, and 7%. During inference, MoPED is embedded in an iterative DC-driven motion correction algorithm which alternately updates estimates of the motion parameters and motion-corrected low-resolution k-space data. The estimated motion parameters are then used to reconstruct the final motion-corrected image. The mean absolute/squared error and the Pearson correlation coefficient were used to analyze the motion parameter estimation quality on in-silico data in a quantitative evaluation. Structural similarity (SSIM), DC error and root mean squared error (RMSE) were used as metrics of image quality improvement. Furthermore, the generalization capability of the network was analyzed on two in-vivo motion volumes with 28 slices each and on one simulated T1-weighted volume. RESULTS The motion estimation achieves a Pearson correlation of 0.968 to the simulated ground-truth of the 2433 test data slices used. In-silico results indicate that MoPED decreases the time for the optimization by a factor of around 27 compared to a conventional method and is able to reduce the RMSE of the reconstructions and average DC error by more than a factor of two compared to uncorrected images. In-vivo experiments show a decrease in computation time by a factor of around 20, an RMSE decrease from 0.055 to 0.033 and an SSIM increase from 0.795 to 0.862. Furthermore, contrast independence is demonstrated as MoPED is also able to correct T1-weighted images in simulations without retraining. Due to the model-based correction, no hallucinations were observed. CONCLUSIONS Incorporating DL into a model-based motion correction algorithm brings substantial benefits in optimization and computation time. The k-space-based estimation also allows a data-consistent correction and therefore avoids the risk of hallucinations of image-to-image approaches.
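The k-space side of such a model-based correction can be illustrated with a translation-only toy example; per-echo-train translations appear as linear phase ramps on the corresponding k-space lines. Rotations (which rotate the sampled k-space coordinates) and the MoPED network itself are omitted, and all names and the phantom below are illustrative:

```python
# Sketch of a translation-only 2D motion model in Cartesian k-space: each echo
# train (ET) acquires a block of phase-encode lines from a shifted object, which
# in k-space is a linear phase ramp.  Rotations (rotated k-space coordinates)
# are omitted to keep the example exact on the Cartesian grid.
import numpy as np

N, lines_per_et = 128, 16
img = np.zeros((N, N)); img[40:90, 30:100] = 1.0      # toy object
k_clean = np.fft.fftshift(np.fft.fft2(img))

ky = np.fft.fftshift(np.fft.fftfreq(N))[:, None]       # cycles/pixel, phase-encode
kx = np.fft.fftshift(np.fft.fftfreq(N))[None, :]       # cycles/pixel, readout

def apply_shot_translations(k, shifts):
    """Apply per-ET translations (dy, dx) in pixels as k-space phase ramps."""
    k_corrupt = k.copy()
    for et, (dy, dx) in enumerate(shifts):
        rows = slice(et * lines_per_et, (et + 1) * lines_per_et)
        ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        k_corrupt[rows, :] = (k * ramp)[rows, :]
        # The inverse model simply divides the same rows by the ramp, which is
        # what a data-consistency-driven correction does once the per-ET motion
        # parameters have been estimated (e.g., by a network such as MoPED).
    return k_corrupt

shifts = [(np.sin(et), 0.5 * et % 3) for et in range(N // lines_per_et)]
k_corrupt = apply_shot_translations(k_clean, shifts)
img_corrupt = np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))
print("artifact energy:", np.linalg.norm(img_corrupt - img))
```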
Collapse
Affiliation(s)
- Julian Hossbach
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Siemens Healthcare GmbH, Erlangen, Germany
| | | | - Stephen Cauley
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
| | - Bryan Clifford
- Siemens Medical Solutions USA, Boston, Massachusetts, USA
| | - Daniel Polak
- Siemens Healthcare GmbH, Erlangen, Germany; Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
| | - Wei-Ching Lo
- Siemens Medical Solutions USA, Boston, Massachusetts, USA
| | | | - Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| |
Collapse
|
19
|
Singh NM, Harrod JB, Subramanian S, Robinson M, Chang K, Cetin-Karayumak S, Dalca AV, Eickhoff S, Fox M, Franke L, Golland P, Haehn D, Iglesias JE, O'Donnell LJ, Ou Y, Rathi Y, Siddiqi SH, Sun H, Westover MB, Whitfield-Gabrieli S, Gollub RL. How Machine Learning is Powering Neuroimaging to Improve Brain Health. Neuroinformatics 2022; 20:943-964. [PMID: 35347570 PMCID: PMC9515245 DOI: 10.1007/s12021-022-09572-9] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/07/2022] [Indexed: 12/31/2022]
Abstract
This report presents an overview of how machine learning is rapidly advancing clinical translational imaging in ways that will aid in the early detection, prediction, and treatment of diseases that threaten brain health. Towards this goal, we are sharing the information presented at a symposium, "Neuroimaging Indicators of Brain Structure and Function - Closing the Gap Between Research and Clinical Application", co-hosted by the McCance Center for Brain Health at Mass General Hospital and the MIT HST Neuroimaging Training Program on February 12, 2021. The symposium focused on the potential for machine learning approaches, applied to increasingly large-scale neuroimaging datasets, to transform healthcare delivery and change the trajectory of brain health by addressing brain care earlier in the lifespan. While not exhaustive, this overview uniquely addresses many of the technical challenges from image formation, to analysis and visualization, to synthesis and incorporation into the clinical workflow. Some of the ethical challenges inherent to this work are also explored, as are some of the regulatory requirements for implementation. We seek to educate, motivate, and inspire graduate students, postdoctoral fellows, and early career investigators to contribute to a future where neuroimaging meaningfully contributes to the maintenance of brain health.
Collapse
Affiliation(s)
- Nalini M Singh
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Jordan B Harrod
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Sandya Subramanian
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Mitchell Robinson
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Ken Chang
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Suheyla Cetin-Karayumak
- Department of Psychiatry, Brigham and Women's Hospital and Harvard Medical School, Boston, 02115, USA
| | | | - Simon Eickhoff
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7) Research Centre Jülich, Jülich, Germany
| | - Michael Fox
- Center for Brain Circuit Therapeutics, Department of Neurology, Psychiatry, and Radiology, Brigham and Women's Hospital and Harvard Medical School, 02115, Boston, USA
| | - Loraine Franke
- University of Massachusetts Boston, Boston, MA, 02125, USA
| | - Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Daniel Haehn
- University of Massachusetts Boston, Boston, MA, 02125, USA
| | - Juan Eugenio Iglesias
- Centre for Medical Image Computing, University College London, London, UK
- Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Lauren J O'Donnell
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, MA, 02115, Boston, USA
| | - Yangming Ou
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, 02115, USA
| | - Yogesh Rathi
- Department of Psychiatry, Brigham and Women's Hospital and Harvard Medical School, Boston, 02115, USA
| | - Shan H Siddiqi
- Department of Psychiatry, Brigham and Women's Hospital and Harvard Medical School, Boston, 02115, USA
| | - Haoqi Sun
- Department of Neurology and McCance Center for Brain Health / Harvard Medical School, Massachusetts General Hospital, Boston, 02114, USA
| | - M Brandon Westover
- Department of Neurology and McCance Center for Brain Health / Harvard Medical School, Massachusetts General Hospital, Boston, 02114, USA
| | | | - Randy L Gollub
- Department of Psychiatry and Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA.
| |
Collapse
|
20
|
Dong Z, Wang F, Setsompop K. Motion-corrected 3D-EPTI with efficient 4D navigator acquisition for fast and robust whole-brain quantitative imaging. Magn Reson Med 2022; 88:1112-1125. [PMID: 35481604 PMCID: PMC9246907 DOI: 10.1002/mrm.29277] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2021] [Revised: 04/03/2022] [Accepted: 04/04/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE To develop a motion estimation and correction method for motion-robust three-dimensional (3D) quantitative imaging with 3D-echo-planar time-resolved imaging. THEORY AND METHODS The 3D-echo-planar time-resolved imaging technique was designed with an additional four-dimensional navigator acquisition (x-y-z-echoes) to achieve fast and motion-robust quantitative imaging of the human brain. The four-dimensional navigator is inserted into the relaxation-recovery deadtime of the sequence in every TR (∼2 s) to avoid extra scan time, and to provide continuous tracking of the 3D head motion and B0-inhomogeneity changes. By using an optimized spatiotemporal encoding combined with a partial-Fourier scheme, the navigator acquires a large central k-t data block for accurate motion estimation using only four small-flip-angle excitations and readouts, resulting in negligible signal-recovery reduction for the 3D-echo-planar time-resolved imaging acquisition. By incorporating the estimated motion and B0-inhomogeneity changes into the reconstruction, multi-contrast images can be recovered with reduced motion artifacts. RESULTS Simulations show that the cost in SNR efficiency from the added navigator acquisitions is <1%. Both simulation and in vivo retrospective experiments were conducted, demonstrating that the four-dimensional navigator provided accurate estimation of the 3D motion and B0-inhomogeneity changes, allowing effective reduction of image artifacts in quantitative maps. Finally, in vivo prospective undersampled acquisitions were performed with and without head motion, in which the motion-corrupted data after correction showed image quality and quantitative values consistent with the motion-free scan, providing reliable quantitative measurements even with head motion. CONCLUSION The proposed four-dimensional navigator acquisition provides reliable tracking of head motion and B0 changes with negligible SNR cost, equipping the 3D-echo-planar time-resolved imaging technique for motion-robust and efficient quantitative imaging.
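As a rough illustration of how low-resolution navigators can track motion, the sketch below estimates a pure translation between two navigator volumes by 3D phase correlation. The actual method estimates full 3D rigid motion and B0 changes from the 4D navigator, which this toy does not reproduce:

```python
# Illustrative translation estimate between two low-resolution navigator
# volumes using 3D phase correlation.  The actual method estimates full 3D
# rigid motion plus B0 changes; this sketch covers only the translational part.
import numpy as np

rng = np.random.default_rng(1)
nav0 = rng.random((16, 16, 16))
true_shift = (2, -3, 1)
nav1 = np.roll(nav0, true_shift, axis=(0, 1, 2))       # "moved" navigator

F0, F1 = np.fft.fftn(nav0), np.fft.fftn(nav1)
cross_power = F0 * np.conj(F1)
cross_power /= np.abs(cross_power) + 1e-12
corr = np.abs(np.fft.ifftn(cross_power))

peak = np.unravel_index(np.argmax(corr), corr.shape)
est = [(-p if p <= s // 2 else s - p) for p, s in zip(peak, corr.shape)]
print("estimated shift:", est, "true shift:", true_shift)
```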
Collapse
Affiliation(s)
- Zijing Dong
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
| | - Fuyixue Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
| | - Kawin Setsompop
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
| |
Collapse
|
21
|
Stacked U-Nets with self-assisted priors towards robust correction of rigid motion artifact in brain MRI. Neuroimage 2022; 259:119411. [PMID: 35753594 DOI: 10.1016/j.neuroimage.2022.119411] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 05/12/2022] [Accepted: 06/22/2022] [Indexed: 11/23/2022] Open
Abstract
Magnetic Resonance Imaging (MRI) is sensitive to patient movement because of its relatively long data acquisition time, which can cause severe degradation of image quality and therefore affect the overall diagnosis. In this paper, we develop an efficient retrospective 2D deep learning method called stacked U-Nets with self-assisted priors to address the problem of rigid motion artifacts in 3D brain MRI. The proposed approach exploits additional knowledge priors drawn from the corrupted images themselves without the need for additional contrast data. The proposed network learns the missing structural details by sharing auxiliary information from the contiguous slices of the same distorted subject. We further design refinement stacked U-Nets that help preserve spatial image details and improve pixel-to-pixel dependency. Network training requires simulation of MRI motion artifacts. The proposed network is optimized by minimizing a structural similarity (SSIM) loss using the synthesized motion-corrupted images from 83 real motion-free subjects. We present an intensive analysis using various types of image priors: the proposed self-assisted priors and priors from other image contrasts of the same subject. The experimental analysis demonstrates the effectiveness and feasibility of our self-assisted priors, since they do not require any additional scans. The overall image quality of the motion-corrected images via the proposed motion correction network significantly improves the SSIM from 71.66% to 95.03% and reduces the mean squared error from 99.25 to 29.76. These results indicate the high similarity of the brain's anatomical structure in the corrected images compared to the motion-free data. The motion-corrected results of both the simulated and real motion data showed the potential of the proposed motion correction network to be feasible and applicable in clinical practice.
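A minimal stand-in for the input construction is sketched below: the corrupted target slice and its contiguous neighbours are stacked as channels of a small encoder-decoder. The tiny network and the L1 loss are placeholders for the paper's stacked U-Nets and SSIM loss; the data and shapes are synthetic:

```python
# Minimal stand-in for the "self-assisted prior" input construction: the
# corrupted target slice and its two contiguous neighbours are stacked as
# channels of a small encoder-decoder.  The network and L1 loss below are
# simplified placeholders for the paper's stacked U-Nets and SSIM loss.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x)) + x[:, 1:2]   # residual on the target slice

# volume of motion-corrupted slices and matching motion-free targets (synthetic)
corrupted = torch.rand(10, 64, 64)
clean = torch.rand(10, 64, 64)

def make_batch(vol, idx):
    """Stack slice idx with its neighbours (self-assisted prior) as channels."""
    return torch.stack([vol[idx - 1], vol[idx], vol[idx + 1]]).unsqueeze(0)

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for idx in range(1, 9):
    x = make_batch(corrupted, idx)
    target = clean[idx][None, None]
    loss = torch.nn.functional.l1_loss(model(x), target)
    opt.zero_grad(); loss.backward(); opt.step()
print("last loss:", float(loss))
```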
Collapse
|
22
|
Singh NM, Iglesias JE, Adalsteinsson E, Dalca AV, Golland P. Joint Frequency and Image Space Learning for MRI Reconstruction and Analysis. THE JOURNAL OF MACHINE LEARNING FOR BIOMEDICAL IMAGING 2022; 2022:018. [PMID: 36349348 PMCID: PMC9639401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
We propose neural network layers that explicitly combine frequency and image feature representations and show that they can be used as a versatile building block for reconstruction from frequency space data. Our work is motivated by the challenges arising in MRI acquisition where the signal is a corrupted Fourier transform of the desired image. The proposed joint learning schemes enable both correction of artifacts native to the frequency space and manipulation of image space representations to reconstruct coherent image structures at every layer of the network. This is in contrast to most current deep learning approaches for image reconstruction that treat frequency and image space features separately and often operate exclusively in one of the two spaces. We demonstrate the advantages of joint convolutional learning for a variety of tasks, including motion correction, denoising, reconstruction from undersampled acquisitions, and combined undersampling and motion correction on simulated and real world multicoil MRI data. The joint models produce consistently high quality output images across all tasks and datasets. When integrated into a state of the art unrolled optimization network with physics-inspired data consistency constraints for undersampled reconstruction, the proposed architectures significantly improve the optimization landscape, which yields an order of magnitude reduction of training time. This result suggests that joint representations are particularly well suited for MRI signals in deep learning networks. Our code and pretrained models are publicly available at https://github.com/nalinimsingh/interlacer.
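The flavor of such a joint layer can be sketched as follows: one branch convolves k-space features, another convolves their image-space counterpart obtained via an inverse FFT, and the two are mixed. Channel counts and the mixing rule are illustrative; the released code at the URL above is authoritative:

```python
# Sketch of a joint frequency/image-space layer: one branch convolves k-space
# features, the other convolves their image-space counterpart (via inverse
# FFT), and the outputs are mixed.  Channel sizes and the mixing rule are
# illustrative, not the released Interlacer layers.
import torch
import torch.nn as nn

def split_complex(x):
    c = x.shape[1] // 2
    return torch.complex(x[:, :c], x[:, c:])        # first half real, second imag

def join_complex(z):
    return torch.cat([z.real, z.imag], dim=1)

class JointLayer(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv_k = nn.Conv2d(ch, ch, 3, padding=1)     # frequency-space conv
        self.conv_i = nn.Conv2d(ch, ch, 3, padding=1)     # image-space conv
        self.mix = nn.Conv2d(2 * ch, ch, 1)               # 1x1 domain mixing

    def forward(self, k_feat):
        img_feat = join_complex(torch.fft.ifft2(split_complex(k_feat)))
        fk = self.conv_k(k_feat)
        fi = join_complex(torch.fft.fft2(split_complex(self.conv_i(img_feat))))
        return self.mix(torch.cat([fk, fi], dim=1))

k_feat = torch.randn(1, 2, 64, 64)           # real+imag channels of k-space
layer = JointLayer(ch=2)
print(layer(k_feat).shape)                   # torch.Size([1, 2, 64, 64])
```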
Collapse
Affiliation(s)
- Nalini M Singh
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Dept. of Health Sciences & Technology, MIT, Cambridge, MA, USA
| | - Juan Eugenio Iglesias
- A. A. Martinos Center, Massachusetts General Hospital, Boston, MA, USA
- Harvard Medical School, Cambridge, MA, USA
- Centre for Medical Image Computing, UCL, London, UK
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
| | - Elfar Adalsteinsson
- Research Laboratory of Electronics, MIT, Cambridge, MA, USA
- Dept. of Electrical Engineering & Computer Science, MIT, Cambridge, MA, USA
| | - Adrian V Dalca
- A. A. Martinos Center, Massachusetts General Hospital, Boston, MA, USA
- Harvard Medical School, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
| | - Polina Golland
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Dept. of Electrical Engineering & Computer Science, MIT, Cambridge, MA, USA
| |
Collapse
|
23
|
Xu X, Kothapalli SVVN, Liu J, Kahali S, Gan W, Yablonskiy DA, Kamilov US. Learning-based motion artifact removal networks for quantitative R2* mapping. Magn Reson Med 2022; 88:106-119. [PMID: 35257400 DOI: 10.1002/mrm.29188] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 01/11/2022] [Accepted: 01/18/2022] [Indexed: 11/12/2022]
Abstract
PURPOSE To introduce two novel learning-based motion artifact removal networks (LEARN) for the estimation of quantitative motion- and B0-inhomogeneity-corrected R2* maps from motion-corrupted multi-Gradient-Recalled Echo (mGRE) MRI data. METHODS We train two convolutional neural networks (CNNs) to correct motion artifacts for high-quality estimation of quantitative B0-inhomogeneity-corrected R2* maps from mGRE sequences. The first CNN, LEARN-IMG, performs motion correction on complex mGRE images, to enable the subsequent computation of high-quality motion-free quantitative R2* (and any other mGRE-enabled) maps using the standard voxel-wise analysis or machine learning-based analysis. The second CNN, LEARN-BIO, is trained to directly generate motion- and B0-inhomogeneity-corrected quantitative R2* maps from motion-corrupted magnitude-only mGRE images by taking advantage of the biophysical model describing the mGRE signal decay. RESULTS We show that both CNNs trained on synthetic MR images are capable of suppressing motion artifacts while preserving details in the predicted quantitative R2* maps. Significant reduction of motion artifacts on experimental in vivo motion-corrupted data has also been achieved by using our trained models. CONCLUSION Both LEARN-IMG and LEARN-BIO can enable the computation of high-quality motion- and B0-inhomogeneity-corrected R2* maps. LEARN-IMG performs motion correction on mGRE images and relies on the subsequent analysis for the estimation of R2* maps, while LEARN-BIO directly performs motion- and B0-inhomogeneity-corrected R2* estimation. Both LEARN-IMG and LEARN-BIO jointly process all the available gradient echoes, which enables them to exploit spatial patterns available in the data. The high computational speed of LEARN-BIO is an advantage that can lead to a broader clinical application.
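The underlying biophysical model is the mono-exponential mGRE decay S(TE) = S0·exp(-R2*·TE). The sketch below shows only the conventional voxel-wise log-linear fit of that model (the baseline analysis the networks complement), ignoring B0-inhomogeneity effects and the learning-based correction itself:

```python
# Voxel-wise mono-exponential mGRE model S(TE) = S0 * exp(-R2* * TE): simulate
# a small multi-echo magnitude dataset and recover R2* with a log-linear fit.
# This is the conventional baseline analysis; B0-inhomogeneity effects and the
# learning-based correction itself are outside the scope of this sketch.
import numpy as np

rng = np.random.default_rng(0)
TE = np.arange(4, 41, 4) * 1e-3                  # echo times in seconds
r2s_true = rng.uniform(10, 50, size=(8, 8))      # R2* map in 1/s
s0 = 100.0
signal = s0 * np.exp(-TE[:, None, None] * r2s_true[None])    # (echoes, H, W)
signal += 0.2 * rng.standard_normal(signal.shape)             # mild noise

# Log-linear least-squares fit per voxel: ln S = ln S0 - R2* * TE
logs = np.log(np.clip(signal, 1e-6, None)).reshape(len(TE), -1)
A = np.stack([np.ones_like(TE), -TE], axis=1)                 # design matrix
coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
r2s_fit = coef[1].reshape(r2s_true.shape)

print("mean abs R2* error (1/s):", np.mean(np.abs(r2s_fit - r2s_true)))
```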
Collapse
Affiliation(s)
- Xiaojian Xu
- Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
| | | | - Jiaming Liu
- Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Sayan Kahali
- Department of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Weijie Gan
- Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Dmitriy A Yablonskiy
- Department of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Ulugbek S Kamilov
- Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, Missouri, USA; Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
| |
Collapse
|
24
|
Slipsager JM, Glimberg SL, Højgaard L, Paulsen RR, Wighton P, Tisdall MD, Jaimes C, Gagoski BA, Grant PE, van der Kouwe A, Olesen OV, Frost R. Comparison of prospective and retrospective motion correction in 3D-encoded neuroanatomical MRI. Magn Reson Med 2022; 87:629-645. [PMID: 34490929 PMCID: PMC8635810 DOI: 10.1002/mrm.28991] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Revised: 07/17/2021] [Accepted: 08/10/2021] [Indexed: 02/03/2023]
Abstract
PURPOSE To compare prospective motion correction (PMC) and retrospective motion correction (RMC) in Cartesian 3D-encoded MPRAGE scans and to investigate the effects of correction frequency and parallel imaging on the performance of RMC. METHODS Head motion was estimated using a markerless tracking system and sent to a modified MPRAGE sequence, which can continuously update the imaging FOV to perform PMC. The prospective correction was applied either before each echo train (before-ET) or at every sixth readout within the ET (within-ET). RMC was applied during image reconstruction by adjusting k-space trajectories according to the measured motion. The motion correction frequency was retrospectively increased with RMC or decreased with reverse RMC. Phantom and in vivo experiments were used to compare PMC and RMC, as well as to compare within-ET and before-ET correction frequency during continuous motion. The correction quality was quantitatively evaluated using the structural similarity index measure with a reference image without motion correction and without intentional motion. RESULTS PMC resulted in superior image quality compared to RMC both visually and quantitatively. Increasing the correction frequency from before-ET to within-ET reduced the motion artifacts in RMC. A hybrid PMC and RMC correction, that is, retrospectively increasing the correction frequency of before-ET PMC to within-ET, also reduced motion artifacts. Inferior performance of RMC compared to PMC was shown with GRAPPA calibration data without intentional motion and without any GRAPPA acceleration. CONCLUSION Reductions in local Nyquist violations with PMC resulted in superior image quality compared to RMC. Increasing the motion correction frequency to within-ET reduced the motion artifacts in both RMC and PMC.
Collapse
Affiliation(s)
- Jakob M. Slipsager
- DTU Compute, Technical University of Denmark, Denmark
- Dept. of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
- TracInnovations, Ballerup, Denmark
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
| | | | - Liselotte Højgaard
- Dept. of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
| | | | - Paul Wighton
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
| | - M. Dylan Tisdall
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Camilo Jaimes
- Boston Children’s Hospital, Boston, Massachusetts
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
| | - Borjan A. Gagoski
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, Massachusetts
| | - P. Ellen Grant
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, Massachusetts
| | - André van der Kouwe
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
| | - Oline V. Olesen
- DTU Compute, Technical University of Denmark, Denmark
- Dept. of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
- TracInnovations, Ballerup, Denmark
| | - Robert Frost
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
25
|
Henry D, Fulton R, Maclaren J, Aksoy M, Bammer R, Kyme A. Optimizing a Feature-Based Motion Tracking System for Prospective Head Motion Estimation in MRI and PET/MRI. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022. [DOI: 10.1109/trpms.2021.3063260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
26
|
Polak D, Splitthoff DN, Clifford B, Lo WC, Huang SY, Conklin J, Wald LL, Setsompop K, Cauley S. Scout accelerated motion estimation and reduction (SAMER). Magn Reson Med 2022; 87:163-178. [PMID: 34390505 PMCID: PMC8616778 DOI: 10.1002/mrm.28971] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 06/29/2021] [Accepted: 07/26/2021] [Indexed: 01/03/2023]
Abstract
PURPOSE To demonstrate a navigator/tracking-free retrospective motion estimation technique that facilitates clinically acceptable reconstruction time. METHODS Scout accelerated motion estimation and reduction (SAMER) uses a single 3-5 s, low-resolution scout scan and a novel sequence reordering to independently determine motion states by minimizing the data-consistency error in a SENSE plus motion forward model. This eliminates time-consuming alternating optimization as no updates to the imaging volume are required during the motion estimation. The SAMER approach was assessed quantitatively through extensive simulation and was evaluated in vivo across multiple motion scenarios and clinical imaging contrasts. Finally, SAMER was synergistically combined with advanced encoding (Wave-CAIPI) to facilitate rapid motion-free imaging. RESULTS The highly accelerated scout provided sufficient information to achieve accurate motion trajectory estimation (accuracy ~0.2 mm or degrees). The novel sequence reordering improved the stability of the motion parameter estimation and image reconstruction while preserving the clinical imaging contrast. Clinically acceptable computation times for the motion estimation (~4 s/shot) are demonstrated through a fully separable (non-alternating) motion search across the shots. Substantial artifact reduction was demonstrated in vivo as well as corresponding improvement in the quantitative error metric. Finally, the extension of SAMER to Wave-encoding enabled rapid high-quality imaging at up to R = 9-fold acceleration. CONCLUSION SAMER significantly improved the computational scalability for retrospective motion estimation and correction.
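The key property, fully separable per-shot motion estimation against a fixed scout, can be illustrated with a translation-only, single-coil toy in which each shot's shift is found by a brute-force search over the data-consistency error. SENSE encoding, rotations, and the actual SAMER sequence reordering are omitted:

```python
# Toy version of a scout-based, fully separable per-shot motion search:
# with a fixed scout image, each shot's (translation-only) motion state is
# found independently by minimizing the data-consistency error against that
# shot's k-space lines.  Single coil, no rotations -- purely illustrative.
import numpy as np

N, lines_per_shot = 64, 8
scout = np.zeros((N, N)); scout[20:45, 15:50] = 1.0
ky = np.fft.fftshift(np.fft.fftfreq(N))[:, None]
kx = np.fft.fftshift(np.fft.fftfreq(N))[None, :]

def forward(img, dy, dx, rows):
    """k-space lines `rows` of the object shifted by (dy, dx) pixels."""
    k = np.fft.fftshift(np.fft.fft2(img)) * np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return k[rows]

# Simulate acquisition: every shot sees a different shift of the object.
true_shifts = [(s % 3, -(s % 2)) for s in range(N // lines_per_shot)]
shots = [forward(scout, dy, dx, slice(s * lines_per_shot, (s + 1) * lines_per_shot))
         for s, (dy, dx) in enumerate(true_shifts)]

# Separable search: each shot is handled independently (no joint image update).
grid = np.arange(-3, 4)
for s, y_shot in enumerate(shots):
    rows = slice(s * lines_per_shot, (s + 1) * lines_per_shot)
    errs = {(dy, dx): np.linalg.norm(forward(scout, dy, dx, rows) - y_shot)
            for dy in grid for dx in grid}
    est = min(errs, key=errs.get)
    print(f"shot {s}: true {true_shifts[s]}, estimated {est}")
```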
Collapse
Affiliation(s)
- Daniel Polak
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Siemens Healthcare GmbH, Erlangen, Germany
| | | | | | | | - Susie Y. Huang
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA; Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
| | - John Conklin
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA
| | - Lawrence L. Wald
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA; Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
| | - Kawin Setsompop
- Department of Radiology, Stanford School of Medicine, Stanford, California, USA
| | - Stephen Cauley
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA
| |
Collapse
|
27
|
Abstract
MR imaging is used in conjunction with ultrasound screening for fetal brain abnormalities because it offers better contrast, higher resolution, and has multiplanar capabilities that increase the accuracy and confidence of diagnosis. Fetal motion still severely limits the MR imaging sequences that can be acquired. We outline the current acquisition strategies for fetal brain MR imaging and discuss the near term advances that will improve its reliability. Prospective and retrospective motion correction aim to make the complement of MR neuroimaging modalities available for fetal diagnosis, improve the performance of existing modalities, and open new horizons to understanding in utero brain development.
Collapse
Affiliation(s)
- Jeffrey N Stout
- Fetal and Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA.
| | - M Alejandra Bedoya
- Department of Radiology, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA
| | - P Ellen Grant
- Fetal and Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA; Department of Radiology, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA; Department of Pediatrics, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA
| | - Judy A Estroff
- Department of Radiology, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA; Maternal Fetal Care Center, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA
| |
Collapse
|
28
|
Gutierrez A, Mullen M, Xiao D, Jang A, Froelich T, Garwood M, Haupt J. Reducing the Complexity of Model-Based MRI Reconstructions via Sparsification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2477-2486. [PMID: 33999816 PMCID: PMC8569912 DOI: 10.1109/tmi.2021.3081013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Model-based reconstruction methods have emerged as a powerful alternative to classical Fourier-based MRI techniques, largely because of their ability to explicitly model (and therefore, potentially overcome) moderate field inhomogeneities, streamline reconstruction from non-Cartesian sampling, and even allow for the use of custom designed non-Fourier encoding methods. Their application in such scenarios, however, often comes with a substantial increase in computational cost, owing to the fact that the corresponding forward model in such settings no longer possesses a direct Fourier Transform based implementation. This paper introduces an algorithmic framework designed to reduce the computational burden associated with model-based MRI reconstruction tasks. The key innovation is the strategic sparsification of the corresponding forward operators for these models, giving rise to approximations of the forward models (and their adjoints) that admit low computational complexity application. This enables overall a reduced computational complexity application of popular iterative first-order reconstruction methods for these reconstruction tasks. Computational results obtained on both synthetic and experimental data illustrate the viability and efficiency of the approach.
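The central idea, replacing a dense non-Fourier forward operator by a sparse approximation so that each iteration's matrix-vector product is cheap, can be illustrated with a naive entry-thresholding toy; the paper's sparsification strategy is more principled than simple thresholding:

```python
# Toy illustration of forward-operator sparsification: threshold the small
# entries of a dense encoding matrix and compare the cost/accuracy of the
# sparse approximation.  The paper's actual sparsification controls the
# operator approximation error; this only conveys the idea.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n = 256
# Dense "non-Fourier" encoding operator: Fourier-like kernel with perturbed
# phase and off-diagonal decay, so its entries are small but not exactly zero.
phase = np.outer(np.arange(n), np.arange(n)) / n + 0.01 * rng.random((n, n))
decay = np.exp(-0.05 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
A = np.exp(2j * np.pi * phase) * decay

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y_dense = A @ x

for thresh in (1e-3, 1e-2, 1e-1):
    A_sparse = sparse.csr_matrix(np.where(np.abs(A) > thresh, A, 0))
    rel_err = np.linalg.norm(A_sparse @ x - y_dense) / np.linalg.norm(y_dense)
    frac_nnz = A_sparse.nnz / A.size
    print(f"threshold {thresh:5.0e}: nnz fraction {frac_nnz:.3f}, rel. error {rel_err:.3e}")
```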
Collapse
|
29
|
Manhard MK, Stockmann J, Liao C, Park D, Han S, Fair M, van den Boomen M, Polimeni J, Bilgic B, Setsompop K. A multi-inversion multi-echo spin and gradient echo echo planar imaging sequence with low image distortion for rapid quantitative parameter mapping and synthetic image contrasts. Magn Reson Med 2021; 86:866-880. [PMID: 33764563 PMCID: PMC8793364 DOI: 10.1002/mrm.28761] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 02/02/2021] [Accepted: 02/12/2021] [Indexed: 02/06/2023]
Abstract
PURPOSE Brain imaging exams typically take 10-20 min and involve multiple sequential acquisitions. A low-distortion whole-brain echo planar imaging (EPI)-based approach was developed to efficiently encode multiple contrasts in one acquisition, allowing for calculation of quantitative parameter maps and synthetic contrast-weighted images. METHODS Inversion prepared spin- and gradient-echo EPI was developed with slice-order shuffling across measurements for efficient acquisition with T1, T2, and T2* weighting. A dictionary-matching approach was used to fit the images to quantitative parameter maps, which in turn were used to create synthetic weighted images with typical clinical contrasts. Dynamic slice-optimized multi-coil shimming with a B0 shim array was used to reduce B0 inhomogeneity and, therefore, image distortion by >50%. Multi-shot EPI was also implemented to minimize distortion and blurring while enabling high in-plane resolution. A low-rank reconstruction approach was used to mitigate errors from shot-to-shot phase variation. RESULTS The slice-optimized shimming approach was combined with in-plane parallel-imaging acceleration of 4× to enable single-shot EPI with more than eight-fold distortion reduction. The proposed sequence efficiently obtained 40 contrasts across the whole brain in just over 1 min at 1.2 × 1.2 × 3 mm resolution. The multi-shot variant of the sequence achieved higher in-plane resolution of 1 × 1 × 4 mm with good image quality in 4 min. Derived quantitative maps showed comparable values to conventional mapping methods. CONCLUSION The approach allows fast whole-brain imaging with quantitative parameter maps and synthetic weighted contrasts. The slice-optimized multi-coil shimming and multi-shot reconstruction approaches result in minimal EPI distortion, giving the sequence the potential to be used in rapid screening applications.
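The dictionary-matching step can be sketched with a toy inversion-recovery/echo-decay signal model standing in for the actual spin- and gradient-echo EPI simulation; matching is by maximum normalized inner product over a coarse (T1, T2) grid:

```python
# Minimal dictionary-matching sketch: simulate a coarse dictionary of signal
# evolutions over a (T1, T2) grid with a toy inversion-recovery / echo-decay
# model, then match a measured signal by maximum normalized inner product.
# The real sequence requires a proper spin- and gradient-echo EPI signal model;
# this stand-in only illustrates the matching step.
import numpy as np

TI = np.array([0.1, 0.5, 1.0, 2.0])                 # inversion times (s)
TE = np.array([0.02, 0.05, 0.09])                   # echo times (s)

def toy_signal(t1, t2):
    ir = 1 - 2 * np.exp(-TI / t1)                    # inversion recovery factor
    decay = np.exp(-TE / t2)                         # echo decay factor
    return np.abs(np.outer(ir, decay)).ravel()       # 12-point signature

t1_grid = np.linspace(0.5, 3.0, 40)
t2_grid = np.linspace(0.03, 0.3, 40)
entries = [(t1, t2) for t1 in t1_grid for t2 in t2_grid]
D = np.stack([toy_signal(t1, t2) for t1, t2 in entries])
D_norm = D / np.linalg.norm(D, axis=1, keepdims=True)

# "Measured" voxel: an off-grid parameter pair plus noise.
meas = toy_signal(1.23, 0.11) + 0.01 * np.random.default_rng(0).standard_normal(12)
best = int(np.argmax(D_norm @ (meas / np.linalg.norm(meas))))
print("matched (T1, T2):", entries[best])
```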
Collapse
Affiliation(s)
- Mary Kate Manhard
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Jason Stockmann
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Congyu Liao
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Daniel Park
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
| | - Sohyun Han
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Republic of Korea
| | - Merlin Fair
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Maaike van den Boomen
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Jon Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
| | - Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
| |
Collapse
|
30
|
Duffy BA, Zhao L, Sepehrband F, Min J, Wang DJ, Shi Y, Toga AW, Kim H. Retrospective motion artifact correction of structural MRI images using deep learning improves the quality of cortical surface reconstructions. Neuroimage 2021; 230:117756. [PMID: 33460797 PMCID: PMC8044025 DOI: 10.1016/j.neuroimage.2021.117756] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 12/23/2020] [Accepted: 01/07/2021] [Indexed: 11/28/2022] Open
Abstract
Head motion during MRI acquisition presents significant challenges for neuroimaging analyses. In this work, we present a retrospective motion correction framework built on a Fourier domain motion simulation model combined with established 3D convolutional neural network (CNN) architectures. Quantitative evaluation metrics were used to validate the method on three separate multi-site datasets. The 3D CNN was trained using motion-free images that were corrupted using simulated artifacts. CNN based correction successfully diminished the severity of artifacts on real motion affected data on a separate test dataset as measured by significant improvements in image quality metrics compared to a minimal motion reference image. On the test set of 13 image pairs, the mean peak signal-to-noise-ratio was improved from 31.7 to 33.3 dB. Furthermore, improvements in cortical surface reconstruction quality were demonstrated using a blinded manual quality assessment on the Parkinson's Progression Markers Initiative (PPMI) dataset. Upon applying the correction algorithm, out of a total of 617 images, the number of quality control failures was reduced from 61 to 38. On this same dataset, we investigated whether motion correction resulted in a more statistically significant relationship between cortical thickness and Parkinson's disease. Before correction, significant cortical thinning was found to be restricted to limited regions within the temporal and frontal lobes. After correction, there was found to be more widespread and significant cortical thinning bilaterally across the temporal lobes and frontal cortex. Our results highlight the utility of image domain motion correction for use in studies with a high prevalence of motion artifacts, such as studies of movement disorders as well as infant and pediatric subjects.
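A simplified 2D version of the Fourier-domain motion simulation used to generate training data is sketched below: blocks of phase-encode lines are drawn from the k-space of differently rotated and translated copies of a motion-free image. The paper's simulator is 3D and more detailed; this shows only the principle:

```python
# Simplified 2D version of Fourier-domain motion simulation for training data:
# blocks of phase-encode lines are taken from the k-space of differently
# rotated/translated copies of a motion-free image, mimicking inter-shot
# rigid motion.  The paper's simulator is 3D; this is only the principle.
import numpy as np
from scipy.ndimage import rotate, shift

def simulate_motion(img, n_segments=8, max_rot=3.0, max_shift=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n = img.shape[0]
    seg = n // n_segments
    k_corrupt = np.zeros_like(img, dtype=complex)
    for s in range(n_segments):
        ang = rng.uniform(-max_rot, max_rot)
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)
        moved = shift(rotate(img, ang, reshape=False, order=1), (dy, dx), order=1)
        k_moved = np.fft.fftshift(np.fft.fft2(moved))
        k_corrupt[s * seg:(s + 1) * seg] = k_moved[s * seg:(s + 1) * seg]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))

clean = np.zeros((128, 128)); clean[30:100, 40:90] = 1.0
corrupted = simulate_motion(clean)
print("simulated artifact energy:", np.linalg.norm(corrupted - clean))
```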
Collapse
Affiliation(s)
- Ben A Duffy
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Lu Zhao
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Farshid Sepehrband
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Joyce Min
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Danny Jj Wang
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Yonggang Shi
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Arthur W Toga
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Hosung Kim
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA.
| |
Collapse
|
31
|
Preuhs A, Manhart M, Roser P, Hoppe E, Huang Y, Psychogios M, Kowarschik M, Maier A. Appearance Learning for Image-Based Motion Estimation in Tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3667-3678. [PMID: 32746114 DOI: 10.1109/tmi.2020.3002695] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals. Geometric information within this process usually depends on the system setting only, i.e., the scanner position or readout direction. Patient motion therefore corrupts the geometry alignment in the reconstruction process, resulting in motion artifacts. We propose an appearance learning approach recognizing the structures of rigid motion independently from the scanned object. To this end, we train a siamese triplet network to predict the reprojection error (RPE) for the complete acquisition as well as an approximate distribution of the RPE along the single views from the reconstructed volume in a multi-task learning approach. The RPE measures the motion-induced geometric deviations independent of the object based on virtual marker positions, which are available during training. We train our network using 27 patients and deploy a 21-4-2 split for training, validation and testing. On average, we achieve a residual mean RPE of 0.013 mm with an inter-patient standard deviation of 0.022 mm. This is twice the accuracy compared to previously published results. In a motion estimation benchmark, the proposed approach achieves superior results in comparison with two state-of-the-art measures in nine out of twelve experiments. The clinical applicability of the proposed method is demonstrated on a motion-affected clinical dataset.
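The reprojection-error training target can be illustrated with a small toy: virtual 3D markers are projected once with the nominal geometry and once with a motion-perturbed geometry, and the RPE is the mean distance between the two sets of projections. The projection matrices and perturbation below are illustrative, not the clinical C-arm geometry:

```python
# Sketch of the reprojection-error (RPE) target: virtual 3D markers are
# projected with the nominal geometry and with a motion-perturbed geometry,
# and the RPE is the mean 2D distance between the two projections.  The
# projection matrices and the perturbation below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
markers = np.c_[rng.uniform(-50, 50, (20, 3)), np.ones(20)]   # homogeneous 3D points (mm)

def projection_matrix(angle_deg, tx=0.0):
    """Toy cone-beam style projection: rotate about z, then perspective divide."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0, tx],
                  [np.sin(a),  np.cos(a), 0, 0],
                  [0,          0,         1, 0]])
    K = np.array([[1000, 0, 256], [0, 1000, 256], [0, 0, 1]])   # intrinsics (px)
    return K @ R

def project(P, X, source_dist=600.0):
    Xc = X.copy(); Xc[:, 2] += source_dist                      # keep points in front of the source
    uvw = (P @ Xc.T).T
    return uvw[:, :2] / uvw[:, 2:3]

P_nominal = projection_matrix(0.0)
P_moved = projection_matrix(1.5, tx=2.0)                        # small rigid motion
rpe = np.mean(np.linalg.norm(project(P_nominal, markers) - project(P_moved, markers), axis=1))
print(f"mean reprojection error: {rpe:.2f} px")
```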
Collapse
|
32
|
Polak D, Cauley S, Bilgic B, Gong E, Bachert P, Adalsteinsson E, Setsompop K. Joint multi-contrast variational network reconstruction (jVN) with application to rapid 2D and 3D imaging. Magn Reson Med 2020; 84:1456-1469. [PMID: 32129529 PMCID: PMC7539238 DOI: 10.1002/mrm.28219] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2019] [Revised: 01/20/2020] [Accepted: 01/29/2020] [Indexed: 12/14/2022]
Abstract
PURPOSE To improve the image quality of highly accelerated multi-channel MRI data by learning a joint variational network that reconstructs multiple clinical contrasts jointly. METHODS Data from our multi-contrast acquisition were embedded into the variational network architecture where shared anatomical information is exchanged by mixing the input contrasts. Complementary k-space sampling across imaging contrasts and Bunch-Phase/Wave-Encoding were used for data acquisition to improve the reconstruction at high accelerations. At 3T, our joint variational network approach across T1w, T2w and T2-FLAIR-weighted brain scans was tested for retrospective under-sampling at R = 6 (2D) and R = 4 × 4 (3D) acceleration. Prospective acceleration was also performed for 3D data where the combined acquisition time for whole brain coverage at 1 mm isotropic resolution across three contrasts was less than 3 min. RESULTS Across all test datasets, our joint multi-contrast network better preserved fine anatomical details with reduced image-blurring when compared to the corresponding single-contrast reconstructions. Improvement in image quality was also obtained through complementary k-space sampling and Bunch-Phase/Wave-Encoding where the synergistic combination yielded the overall best performance as evidenced by exemplary slices and quantitative error metrics. CONCLUSION By leveraging shared anatomical structures across the jointly reconstructed scans, our joint multi-contrast approach learnt more efficient regularizers, which helped to retain natural image appearance and avoid over-smoothing. When synergistically combined with advanced encoding techniques, the performance was further improved, enabling up to R = 16-fold acceleration with good image quality. This should help pave the way to very rapid high-resolution brain exams.
Collapse
Affiliation(s)
- Daniel Polak
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Siemens Healthcare GmbH, Erlangen, Germany
| | - Stephen Cauley
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Berkin Bilgic
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | - Peter Bachert
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Elfar Adalsteinsson
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Kawin Setsompop
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
33
|
Bhat SS, Fernandes TT, Poojar P, Silva Ferreira M, Rao PC, Hanumantharaju MC, Ogbole G, Nunes RG, Geethanath S. Low‐Field MRI of Stroke: Challenges and Opportunities. J Magn Reson Imaging 2020; 54:372-390. [DOI: 10.1002/jmri.27324] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Revised: 07/24/2020] [Accepted: 07/27/2020] [Indexed: 12/12/2022] Open
Affiliation(s)
- Seema S. Bhat
- Medical Imaging Research Centre, Dayananda Sagar College of Engineering, Bangalore, India
| | - Tiago T. Fernandes
- Institute for Systems and Robotics and Department of Bioengineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
| | - Pavan Poojar
- Medical Imaging Research Centre, Dayananda Sagar College of Engineering, Bangalore, India
- Columbia University Magnetic Resonance Research Center, New York, New York, USA
| | - Marta Silva Ferreira
- Institute for Systems and Robotics and Department of Bioengineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
| | - Padma Chennagiri Rao
- Medical Imaging Research Centre, Dayananda Sagar College of Engineering, Bangalore, India
| | | | - Godwin Ogbole
- Department of Radiology, College of Medicine, University of Ibadan, Ibadan, Nigeria
| | - Rita G. Nunes
- Institute for Systems and Robotics and Department of Bioengineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
| | - Sairam Geethanath
- Medical Imaging Research Centre, Dayananda Sagar College of Engineering, Bangalore, India
- Columbia University Magnetic Resonance Research Center, New York, New York, USA
| |
Collapse
|
34
|
A High-Speed Low-Cost VLSI System Capable of On-Chip Online Learning for Dynamic Vision Sensor Data Classification. SENSORS 2020; 20:s20174715. [PMID: 32825560 PMCID: PMC7506740 DOI: 10.3390/s20174715] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 08/09/2020] [Accepted: 08/16/2020] [Indexed: 11/21/2022]
Abstract
This paper proposes a high-speed low-cost VLSI system capable of on-chip online learning for classifying address-event representation (AER) streams from dynamic vision sensor (DVS) retina chips. The proposed system executes a lightweight statistic algorithm based on simple binary features extracted from AER streams and a Random Ferns classifier to classify these features. The proposed system’s characteristics of multi-level pipelines and parallel processing circuits achieves a high throughput up to 1 spike event per clock cycle for AER data processing. Thanks to the nature of the lightweight algorithm, our hardware system is realized in a low-cost memory-centric paradigm. In addition, the system is capable of on-chip online learning to flexibly adapt to different in-situ application scenarios. The extra overheads for on-chip learning in terms of time and resource consumption are quite low, as the training procedure of the Random Ferns is quite simple, requiring few auxiliary learning circuits. An FPGA prototype of the proposed VLSI system was implemented with 9.5~96.7% memory consumption and <11% computational and logic resources on a Xilinx Zynq-7045 chip platform. It was running at a clock frequency of 100 MHz and achieved a peak processing throughput up to 100 Meps (Mega events per second), with an estimated power consumption of 690 mW leading to a high energy efficiency of 145 Meps/W or 145 event/μJ. We tested the prototype system on MNIST-DVS, Poker-DVS, and Posture-DVS datasets, and obtained classification accuracies of 77.9%, 99.4% and 99.3%, respectively. Compared to prior works, our VLSI system achieves higher processing speeds, higher computing efficiency, comparable accuracy, and lower resource costs.
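For readers unfamiliar with Random Ferns, the classifier reduces to per-fern count tables indexed by small groups of binary tests, which is what makes a memory-centric hardware realization cheap. The sketch below is a plain software version on synthetic binary features; extraction of those features from AER event streams is not shown:

```python
# Software sketch of a Random Ferns classifier over binary feature vectors.
# Each fern groups a few randomly chosen binary tests into a small index, and
# training just counts index occurrences per class -- which is why the
# hardware realization needs little more than memories and adders.  The
# features here are synthetic, not extracted from AER event streams.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes, n_ferns, bits = 64, 3, 10, 4

# Synthetic binary features: each class has its own Bernoulli template.
templates = rng.random((n_classes, n_features))
def sample(cls, n):
    return (rng.random((n, n_features)) < templates[cls]).astype(np.uint8)

X = np.vstack([sample(c, 200) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 200)

fern_idx = rng.integers(0, n_features, size=(n_ferns, bits))     # tests per fern
weights = 1 << np.arange(bits)

def fern_codes(X):
    """(samples, ferns) table indices formed from the selected binary tests."""
    return (X[:, fern_idx] * weights).sum(axis=2)

# Training = counting (with Laplace smoothing), one table per fern and class.
codes = fern_codes(X)
tables = np.ones((n_ferns, n_classes, 1 << bits))
for f in range(n_ferns):
    for c in range(n_classes):
        np.add.at(tables[f, c], codes[y == c, f], 1)
tables /= tables.sum(axis=2, keepdims=True)

# Inference: sum of log-probabilities across ferns, argmax over classes.
test_X = np.vstack([sample(c, 50) for c in range(n_classes)])
test_y = np.repeat(np.arange(n_classes), 50)
log_post = np.sum(np.log(tables[np.arange(n_ferns)[:, None], :, fern_codes(test_X).T]), axis=0)
print("test accuracy:", np.mean(np.argmax(log_post, axis=1) == test_y))
```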
Collapse
|
35
|
Correction of out-of-FOV motion artifacts using convolutional neural network. Magn Reson Imaging 2020; 71:93-102. [PMID: 32464243 DOI: 10.1016/j.mri.2020.05.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 05/14/2020] [Indexed: 11/23/2022]
Abstract
PURPOSE Subject motion during an MRI scan can result in severe degradation of image quality. Existing motion correction algorithms rely on the assumption that no information is missing during motion. However, this assumption does not hold when out-of-FOV motion occurs. Currently available algorithms are not able to correct for image artifacts introduced by out-of-FOV motion. The purpose of this study is to demonstrate the feasibility of incorporating a convolutional neural network (CNN)-derived prior image into solving the out-of-FOV motion problem. METHODS AND MATERIALS A modified U-net network was proposed to correct out-of-FOV motion artifacts by incorporating motion parameters into the loss function. A motion-model-based data fidelity term was applied in combination with the CNN prediction to further improve the motion correction performance. We trained the CNN on 1113 MPRAGE images with simulated oscillating and sudden motion trajectories, and compared our algorithm to a gradient-based autofocusing (AF) algorithm in both 2D and 3D images. An additional experiment was performed to demonstrate the feasibility of transferring the network to a different dataset. We also evaluated the robustness of this algorithm by adding Gaussian noise to the motion parameters. The motion correction performance was evaluated using normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). RESULTS The proposed algorithm outperformed the AF-based algorithm for both 2D (NMSE: 0.0066 ± 0.0009 vs 0.0141 ± 0.008, P < .01; PSNR: 29.60 ± 0.74 vs 21.71 ± 0.27, P < .01; SSIM: 0.89 ± 0.014 vs 0.73 ± 0.004, P < .01) and 3D imaging (NMSE: 0.0067 ± 0.0008 vs 0.070 ± 0.021, P < .01; PSNR: 32.40 ± 1.63 vs 22.32 ± 2.378, P < .01; SSIM: 0.89 ± 0.01 vs 0.62 ± 0.03, P < .01). Robust reconstruction was achieved with 20% of the data missing due to out-of-FOV motion. CONCLUSION In conclusion, the proposed CNN-based motion correction algorithm can significantly reduce out-of-FOV motion artifacts and achieve better image quality compared with the AF-based algorithm.
Collapse
|
36
|
Conklin J, Longo MGF, Cauley SF, Setsompop K, González RG, Schaefer PW, Kirsch JE, Rapalino O, Huang SY. Validation of Highly Accelerated Wave-CAIPI SWI Compared with Conventional SWI and T2*-Weighted Gradient Recalled-Echo for Routine Clinical Brain MRI at 3T. AJNR Am J Neuroradiol 2019; 40:2073-2080. [PMID: 31727749 DOI: 10.3174/ajnr.a6295] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2019] [Accepted: 09/09/2019] [Indexed: 11/07/2022]
Abstract
BACKGROUND AND PURPOSE SWI is valuable for characterization of intracranial hemorrhage and mineralization but has long acquisition times. We compared a highly accelerated wave-controlled aliasing in parallel imaging (CAIPI) SWI sequence with 2 commonly used alternatives, standard SWI and T2*-weighted gradient recalled-echo (T2*W GRE), for routine clinical brain imaging at 3T. MATERIALS AND METHODS A total of 246 consecutive adult patients were prospectively evaluated using a conventional SWI or T2*W GRE sequence and an optimized wave-CAIPI SWI sequence, which was 3-5 times faster than the standard sequence. Two blinded radiologists scored each sequence for the presence of hemorrhage, the number of microhemorrhages, and severity of motion artifacts. Wave-CAIPI SWI was then evaluated in head-to-head comparison with the conventional sequences for visualization of pathology, artifacts, and overall diagnostic quality. Forced-choice comparisons were used for all scores. Wave-CAIPI SWI was tested for superiority relative to T2*W GRE and for noninferiority relative to standard SWI using a 15% noninferiority margin. RESULTS Compared with T2*W GRE, wave-CAIPI SWI detected hemorrhages in more cases (P < .001) and detected more microhemorrhages (P < .001). Wave-CAIPI SWI was superior to T2*W GRE for visualization of pathology, artifacts, and overall diagnostic quality (all P < .001). Compared with standard SWI, wave-CAIPI SWI showed no difference in the presence or number of hemorrhages identified. Wave-CAIPI SWI was noninferior to standard SWI for the visualization of pathology (P < .001), artifacts (P < .01), and overall diagnostic quality (P < .01). Motion was less severe with wave-CAIPI SWI than with standard SWI (P < .01). CONCLUSIONS Wave-CAIPI SWI provided superior visualization of pathology and overall diagnostic quality compared with T2*W GRE and was noninferior to standard SWI with reduced scan times and reduced motion artifacts.
Collapse
Affiliation(s)
- J Conklin
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts
| | - M G F Longo
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts
| | - S F Cauley
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts.,Athinoula A. Martinos Center for Biomedical Imaging (S.F.C., K.S., S.Y.H.), Boston, Massachusetts
| | - K Setsompop
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts.,Athinoula A. Martinos Center for Biomedical Imaging (S.F.C., K.S., S.Y.H.), Boston, Massachusetts.,Harvard-MIT Division of Health Sciences and Technology (K.S., S.Y.H.), Massachusetts Institute of Technology, Cambridge, Massachusetts
| | - R G González
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts
| | - P W Schaefer
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts
| | - J E Kirsch
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts
| | - O Rapalino
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts
| | - S Y Huang
- From the Department of Radiology (J.C., M.G.F.L., S.F.C., K.S., R.G.G., P.W.S., J.E.K., O.R., S.Y.H.), Massachusetts General Hospital, Boston, Massachusetts.,Athinoula A. Martinos Center for Biomedical Imaging (S.F.C., K.S., S.Y.H.), Boston, Massachusetts.,Harvard-MIT Division of Health Sciences and Technology (K.S., S.Y.H.), Massachusetts Institute of Technology, Cambridge, Massachusetts
| |
Collapse
|
37
|
Haskell MW, Cauley SF, Bilgic B, Hossbach J, Splitthoff DN, Pfeuffer J, Setsompop K, Wald LL. Network Accelerated Motion Estimation and Reduction (NAMER): Convolutional neural network guided retrospective motion correction using a separable motion model. Magn Reson Med 2019; 82:1452-1461. [PMID: 31045278 PMCID: PMC6626557 DOI: 10.1002/mrm.27771] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2019] [Revised: 02/26/2019] [Accepted: 03/21/2019] [Indexed: 01/28/2023]
Abstract
PURPOSE We introduce and validate a scalable retrospective motion correction technique for brain imaging that incorporates a machine learning component into a model-based motion minimization. METHODS A convolutional neural network (CNN) trained to remove motion artifacts from 2D T2-weighted rapid acquisition with refocused echoes (RARE) images is introduced into a model-based data-consistency optimization to jointly search for 2D motion parameters and the uncorrupted image. Our separable motion model allows for efficient intrashot (line-by-line) motion correction of highly corrupted shots, as opposed to previous methods, which do not scale well with this refinement of the motion model. Final image generation incorporates the motion parameters within a model-based image reconstruction. The method is tested in simulations and in in-vivo experiments with in-plane motion corruption. RESULTS While the convolutional neural network alone provides some motion mitigation (at the expense of introduced blurring), allowing it to guide the iterative joint optimization both improves the search convergence and renders the joint optimization separable. This enables rapid mitigation within shots in addition to between shots. For 2D in-plane motion correction experiments, the result is a significant reduction of image-space root mean square error in simulations and of motion artifacts in the in-vivo motion tests. CONCLUSION The separability and convergence improvements afforded by the combined CNN + model-based method show the potential for meaningful post-acquisition motion mitigation in clinical MRI.
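The alternating structure described above can be sketched as follows: a CNN-style artifact-removal step guides a separable, per-shot search of motion parameters against a data-consistency cost, followed by a model-based correction. The translation-only single-coil forward model and the Gaussian-smoothing stand-in for the trained CNN are assumptions made for brevity; this is not the NAMER implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import gaussian_filter

def forward(img, shifts, shot_lines):
    """Translation-only forward model: per-shot phase ramps in k-space."""
    ny, nx = img.shape
    ksp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    out = np.zeros_like(ksp)
    for s, lines in enumerate(shot_lines):
        dy, dx = shifts[s]
        out[lines] = ksp[lines] * np.exp(-2j * np.pi * (ky[lines] * dy + kx * dx))
    return out

def cnn_stand_in(img):
    """Placeholder for the trained artifact-removal CNN (mild smoother)."""
    return gaussian_filter(img.real, 0.5) + 1j * gaussian_filter(img.imag, 0.5)

def namer_like(ksp_meas, shot_lines, n_iter=3):
    """Alternate artifact removal, separable per-shot motion search, and a
    model-based correction of the measured k-space."""
    ny, nx = ksp_meas.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp_meas)))
    shifts = np.zeros((len(shot_lines), 2))
    for _ in range(n_iter):
        guide = cnn_stand_in(img)
        for s, lines in enumerate(shot_lines):      # separable: one shot at a time
            def cost(p, lines=lines):
                pred = forward(guide, [p], [lines])
                return np.linalg.norm(pred[lines] - ksp_meas[lines])
            shifts[s] = minimize(cost, shifts[s], method="Nelder-Mead").x
        ksp_corr = ksp_meas.copy()                  # undo the estimated phase ramps
        for s, lines in enumerate(shot_lines):
            dy, dx = shifts[s]
            ksp_corr[lines] *= np.exp(2j * np.pi * (ky[lines] * dy + kx * dx))
        img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp_corr)))
    return img, shifts
```

A usage example would define, for instance, `shot_lines = np.array_split(np.arange(ny), n_shots)` and pass the corrupted k-space; in NAMER proper the guide image comes from the trained CNN and the final image from a full model-based parallel imaging reconstruction.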
Collapse
Affiliation(s)
- Melissa W. Haskell
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, MGH, Charlestown, MA, United States
- Graduate Program in Biophysics, Harvard University, Cambridge, MA, United States
| | - Stephen F. Cauley
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, MGH, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
| | - Berkin Bilgic
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, MGH, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
| | | | | | | | - Kawin Setsompop
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, MGH, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
- Harvard-MIT Division of Health Sciences and Technology, MIT, Cambridge, MA, United States
| | - Lawrence L. Wald
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, MGH, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
- Harvard-MIT Division of Health Sciences and Technology, MIT, Cambridge, MA, United States
| |
Collapse
|
38
|
Bilgic B, Chatnuntawech I, Manhard MK, Tian Q, Liao C, Iyer SS, Cauley SF, Huang SY, Polimeni JR, Wald LL, Setsompop K. Highly accelerated multishot echo planar imaging through synergistic machine learning and joint reconstruction. Magn Reson Med 2019; 82:1343-1358. [PMID: 31106902 PMCID: PMC6626584 DOI: 10.1002/mrm.27813] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2018] [Revised: 04/22/2019] [Accepted: 04/22/2019] [Indexed: 12/13/2022]
Abstract
PURPOSE To introduce a combined machine learning (ML)- and physics-based image reconstruction framework that enables navigator-free, highly accelerated multishot echo planar imaging (msEPI) and demonstrate its application in high-resolution structural and diffusion imaging. METHODS Single-shot EPI is an efficient encoding technique, but does not lend itself well to high-resolution imaging because of severe distortion artifacts and blurring. Although msEPI can mitigate these artifacts, high-quality msEPI has been elusive because of phase mismatch arising from shot-to-shot variations, which precludes the combination of the multiple-shot data into a single image. We utilize deep learning to obtain an interim image with minimal artifacts, which permits estimation of image phase variations attributed to shot-to-shot changes. These variations are then included in a joint virtual coil sensitivity encoding (JVC-SENSE) reconstruction to utilize data from all shots and improve upon the ML solution. RESULTS Our combined ML + physics approach enabled R_inplane × multiband (MB) = 8 × 2-fold acceleration using 2 EPI shots for multiecho imaging, so that whole-brain T2 and T2* parameter maps could be derived from an 8.3-second acquisition at 1 × 1 × 3 mm³ resolution. It also allowed high-resolution diffusion imaging with high geometric fidelity using 5 shots at R_inplane × MB = 9 × 2-fold acceleration. To make these possible, we extended the state-of-the-art MUSSELS reconstruction technique to simultaneous multislice encoding and used it as an input to our ML network. CONCLUSION The combination of ML and JVC-SENSE enabled navigator-free msEPI at higher accelerations than previously possible while using fewer shots, and while avoiding the poor generalizability and limited acceptance associated with end-to-end ML approaches.
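The core idea, using the ML-derived interim image to estimate smooth shot-to-shot phase maps that are then fed back into a joint reconstruction, can be sketched as follows. This single-coil, demodulate-and-average version is only a conceptual stand-in for the multi-coil JVC-SENSE reconstruction; the smoothing scale and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_shot_phase(interim_img, shot_ksp, shot_mask, smooth_sigma=4.0):
    """Estimate a smooth phase map for one shot relative to the interim image.

    interim_img : artifact-reduced image from the ML stage (complex, ny x nx)
    shot_ksp    : zero-filled k-space of this shot (ny x nx)
    shot_mask   : boolean sampling mask of this shot
    Returns a unit-magnitude phase image (illustrative only; the actual
    pipeline operates on multi-coil data inside JVC-SENSE).
    """
    shot_img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(shot_ksp * shot_mask)))
    ratio = shot_img * np.conj(interim_img)           # magnitude-weighted phase difference
    ratio = gaussian_filter(ratio.real, smooth_sigma) \
            + 1j * gaussian_filter(ratio.imag, smooth_sigma)
    return np.exp(1j * np.angle(ratio))

def combine_shots(interim_img, shots_ksp, shots_mask):
    """Demodulate each shot's estimated phase and average the shot images."""
    acc = np.zeros_like(interim_img)
    for ksp, mask in zip(shots_ksp, shots_mask):
        phase = estimate_shot_phase(interim_img, ksp, mask)
        shot_img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp * mask)))
        acc += shot_img * np.conj(phase)
    return acc / len(shots_ksp)
```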
Collapse
Affiliation(s)
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
| | - Itthi Chatnuntawech
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
| | - Mary Kate Manhard
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Congyu Liao
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Siddharth S. Iyer
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Stephen F. Cauley
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Susie Y. Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
| | - Jonathan R. Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
| | - Lawrence L. Wald
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
| | - Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
| |
Collapse
|
39
|
Wald LL. Ultimate MRI. JOURNAL OF MAGNETIC RESONANCE (SAN DIEGO, CALIF. : 1997) 2019; 306:139-144. [PMID: 31350164 PMCID: PMC6708442 DOI: 10.1016/j.jmr.2019.07.016] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Accepted: 07/08/2019] [Indexed: 06/10/2023]
Abstract
The basic principles of magnetic resonance have been understood for over 70 years, and MRI has been a mainstay of medical imaging for over 40. At this point, it is no longer about simply porting these principles to medical imaging, but neither are we confined to simply polishing what exists. Significant innovation and even revolution can come to old technologies. The recent revolution in optical microscopy shattered the resolution constraint imposed by a seemingly fundamental physical law (the diffraction limit) and reinvigorated a 500-year-old modality. Progress comes from re-examining old ways and sidestepping underlying assumptions. This is already underway for MRI, fueled by advances in image reconstruction. Reconstruction increasingly employs sophisticated general models, often using subtle and hopefully innocuous prior knowledge about the object. This allows a careful re-examination of some basic prerequisites for MRI, such as uniform static fields, linear encoding fields, full Nyquist sampling, or even a stationary object. These powerful reconstruction tools are driving changes in acquisition strategy and basic hardware. The scanner of the future will know more about itself and its patient and his/her biology than ever before. This strategy encourages relaxed hardware constraints and more specialized scanners, hopefully expanding the reach and value offered by MR imaging.
Collapse
Affiliation(s)
- Lawrence L Wald
- Athinoula A. Martinos Center for Biomedical Imaging, Dept. of Radiology, Harvard Medical School, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-Massachusetts Institute of Technology Division of Health Sciences and Technology, Cambridge, MA, USA.
| |
Collapse
|
40
|
Tamada D, Kromrey ML, Ichikawa S, Onishi H, Motosugi U. Motion Artifact Reduction Using a Convolutional Neural Network for Dynamic Contrast Enhanced MR Imaging of the Liver. Magn Reson Med Sci 2019; 19:64-76. [PMID: 31061259 PMCID: PMC7067907 DOI: 10.2463/mrms.mp.2018-0156] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
Purpose: To improve the quality of images obtained via dynamic contrast enhanced MRI (DCE-MRI), which contain motion artifacts and blurring, using a deep learning approach. Materials and Methods: A multi-channel convolutional neural network-based method is proposed for reducing the motion artifacts and blurring caused by respiratory motion in images obtained via DCE-MRI of the liver. The training datasets for the neural network included images with and without respiration-induced motion artifacts or blurring, with the distortions generated by simulating the phase error in k-space. Patient studies were conducted using a multi-phase T1-weighted spoiled gradient echo sequence for the liver, in which breath-hold failures occurred during data acquisition. The trained network was applied to the acquired images to analyze the filtering performance, and the intensities and contrast ratios before and after denoising were compared via Bland–Altman plots. Results: The proposed network significantly reduced the magnitude of the artifacts and blurring induced by respiratory motion, and the contrast ratios of the images after processing via the network were consistent with those of the unprocessed images. Conclusion: A deep learning-based method for removing motion artifacts in images obtained via DCE-MRI of the liver was demonstrated and validated.
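A minimal sketch of how such training pairs can be generated, assuming a simple sinusoidal phase error along the phase-encode direction as a stand-in for the respiration-induced k-space phase error model (the paper's simulation parameters are not reproduced here; values and the loader name are illustrative):

```python
import numpy as np

def add_respiratory_phase_error(img, period_lines=48, amplitude=2.0):
    """Create a motion-corrupted training input from an artifact-free image by
    adding a sinusoidal phase error (in radians) along the phase-encode lines,
    a simple stand-in for respiration-induced k-space phase errors."""
    ny, nx = img.shape
    ksp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))
    lines = np.arange(ny)
    phase_error = amplitude * np.sin(2 * np.pi * lines / period_lines)
    ksp *= np.exp(1j * phase_error)[:, None]          # same error across each line
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp)))

# Training pairs for the CNN: (corrupted input, clean target).
# clean = load_liver_image(...)                       # hypothetical loader
# corrupted = add_respiratory_phase_error(clean)
```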
Collapse
Affiliation(s)
- Daiki Tamada
- Department of Radiology, University of Yamanashi
| | | | | | | | | |
Collapse
|
41
|
Frost R, Wighton P, Karahanoğlu FI, Robertson RL, Grant PE, Fischl B, Tisdall MD, van der Kouwe A. Markerless high-frequency prospective motion correction for neuroanatomical MRI. Magn Reson Med 2019; 82:126-144. [PMID: 30821010 DOI: 10.1002/mrm.27705] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2018] [Revised: 01/09/2019] [Accepted: 01/30/2019] [Indexed: 11/07/2022]
Abstract
PURPOSE To integrate markerless head motion tracking with prospectively corrected neuroanatomical MRI sequences and to investigate high-frequency motion correction during imaging echo trains. METHODS A commercial 3D surface tracking system, which estimates head motion by registering point cloud reconstructions of the face, was used to adapt the imaging FOV based on head movement during MPRAGE and T2 SPACE (3D variable flip-angle turbo spin-echo) sequences. The FOV position and orientation were updated every 6 lines of k-space (< 50 ms) to enable "within-echo-train" prospective motion correction (PMC). Comparisons were made with scans using "before-echo-train" PMC, in which the FOV was updated only once per TR, before the start of each echo train (ET). Continuous-motion experiments with phantoms and in vivo were used to compare these high-frequency and low-frequency correction strategies. MPRAGE images were processed with FreeSurfer to compare estimates of brain structure volumes and cortical thickness in scans with different PMC. RESULTS The median absolute pose differences between markerless tracking and MR image registration were 0.07/0.26/0.15 mm for x/y/z translation and 0.06°/0.02°/0.12° for rotation about x/y/z. PMC with markerless tracking substantially reduced motion artifacts. The continuous-motion experiments showed that within-ET PMC, which minimizes FOV encoding errors during ETs lasting over 1 second, reduces artifacts compared with before-ET PMC. T2 SPACE was found to be more sensitive than MPRAGE to motion during ETs. FreeSurfer morphometry estimates from within-ET PMC MPRAGE images were the most accurate. CONCLUSION Markerless head tracking can be used for PMC, and high-frequency within-ET PMC can reduce sensitivity to motion during long imaging ETs.
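Conceptually, each tracking update amounts to composing the estimated rigid-body pose change with the current FOV geometry. The sketch below shows a generic rotation-matrix update of the FOV centre and encoding axes; the interface, argument conventions, and rotation order are assumptions and do not reflect the vendor-specific sequence integration used in the paper.

```python
import numpy as np

def update_fov(rotation_deg, translation_mm, fov_center_mm, fov_axes):
    """Apply a rigid-body head pose update to the imaging FOV.

    rotation_deg   : (rx, ry, rz) rotations about x/y/z in degrees
    translation_mm : (tx, ty, tz) translation in mm
    fov_center_mm  : current FOV centre, shape (3,)
    fov_axes       : 3x3 matrix whose columns are readout/phase/slice axes
    Returns the updated centre and axes (generic illustration only).
    """
    rx, ry, rz = np.deg2rad(rotation_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                                   # assumed rotation order
    new_center = R @ np.asarray(fov_center_mm) + np.asarray(translation_mm)
    new_axes = R @ np.asarray(fov_axes)
    return new_center, new_axes

# Example: a 1° rotation about z and a 0.5 mm shift along y every < 50 ms update.
# center, axes = update_fov((0, 0, 1), (0, 0.5, 0), np.zeros(3), np.eye(3))
```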
Collapse
Affiliation(s)
- Robert Frost
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts.,Department of Radiology, Harvard Medical School, Boston, Massachusetts
| | - Paul Wighton
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
| | - F Işık Karahanoğlu
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts.,Department of Radiology, Harvard Medical School, Boston, Massachusetts
| | - Richard L Robertson
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts
| | - P Ellen Grant
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts.,Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, Massachusetts
| | - Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts.,Department of Radiology, Harvard Medical School, Boston, Massachusetts.,Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts
| | - M Dylan Tisdall
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
| | - André van der Kouwe
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts.,Department of Radiology, Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
42
|
Affiliation(s)
- Doohee Lee
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
| | - Jingu Lee
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
| | - Jingyu Ko
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
| | - Jaeyeon Yoon
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
| | - Kanghyun Ryu
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
| | - Yoonho Nam
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
| |
Collapse
|
43
|
Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature 2018; 555:487-492. [PMID: 29565357 DOI: 10.1038/nature25988] [Citation(s) in RCA: 754] [Impact Index Per Article: 107.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2017] [Accepted: 01/23/2018] [Indexed: 02/06/2023]
Abstract
Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction, automated transform by manifold approximation (AUTOMAP), which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.
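Schematically, AUTOMAP's learned domain transform can be pictured as fully connected layers acting on the flattened sensor-domain data, followed by convolutional refinement in the image domain. The PyTorch sketch below illustrates only that structure; the layer sizes, activations, and output head are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class AutomapLike(nn.Module):
    """Schematic AUTOMAP-style network: fully connected layers learn the
    sensor-to-image domain transform; convolutional layers refine the result.
    Dimensions and activations are illustrative."""
    def __init__(self, n=64):
        super().__init__()
        self.n = n
        in_dim = 2 * n * n                          # real + imaginary sensor data
        self.fc = nn.Sequential(
            nn.Linear(in_dim, n * n), nn.Tanh(),
            nn.Linear(n * n, n * n), nn.Tanh(),
        )
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=7, padding=3),
        )

    def forward(self, sensor_data):                 # sensor_data: (batch, 2, n, n)
        x = self.fc(sensor_data.flatten(1))         # learned domain transform
        x = x.view(-1, 1, self.n, self.n)
        return self.conv(x)                         # image-domain refinement

# recon = AutomapLike()(torch.randn(1, 2, 64, 64))  # -> (1, 1, 64, 64)
```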
Collapse
Affiliation(s)
- Bo Zhu
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA.,Harvard Medical School, Boston, Massachusetts, USA.,Department of Physics, Harvard University, Cambridge, Massachusetts, USA
| | - Jeremiah Z Liu
- Department of Biostatistics, Harvard University, Cambridge, Massachusetts, USA
| | - Stephen F Cauley
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA.,Harvard Medical School, Boston, Massachusetts, USA
| | - Bruce R Rosen
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA.,Harvard Medical School, Boston, Massachusetts, USA
| | - Matthew S Rosen
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA.,Harvard Medical School, Boston, Massachusetts, USA.,Department of Physics, Harvard University, Cambridge, Massachusetts, USA
| |
Collapse
|
44
|
Cauley SF, Setsompop K, Bilgic B, Bhat H, Gagoski B, Wald LL. Autocalibrated wave-CAIPI reconstruction; Joint optimization of k-space trajectory and parallel imaging reconstruction. Magn Reson Med 2016; 78:1093-1099. [PMID: 27770457 DOI: 10.1002/mrm.26499] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Revised: 08/31/2016] [Accepted: 09/16/2016] [Indexed: 02/05/2023]
Abstract
PURPOSE Fast MRI acquisitions often rely on efficient traversal of k-space, and hardware limitations or other physical effects can cause the k-space trajectory to deviate from its theoretical path in a manner dependent on the image prescription and protocol parameters. Additional measurements or generalized calibrations are typically needed to characterize these discrepancies. We propose an autocalibrated technique to determine them. METHODS A joint optimization is used to estimate the trajectory simultaneously with the parallel imaging reconstruction, without the need for additional measurements. Model reduction is introduced to make this optimization computationally efficient and to ensure final image quality. RESULTS We demonstrate our approach for the wave-CAIPI fast acquisition method, which uses a corkscrew k-space path to efficiently encode k-space and spread the voxel aliasing. Model reduction allows the 3D trajectory to be automatically calculated in fewer than 30 s on standard vendor hardware. The method achieves accuracy equivalent to full-gradient calibration scans. CONCLUSIONS The proposed method allows for high-quality wave-CAIPI reconstruction across wide ranges of protocol parameters, such as field of view (FOV) location/orientation, bandwidth, echo time (TE), resolution, and sinusoidal amplitude/frequency. Our framework should allow for the autocalibration of gradient trajectories from many other fast MRI techniques in clinically relevant time.
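To make the "corkscrew" encoding concrete: the sinusoidal wave gradients add an extra, readout-dependent k-space excursion, which appears in hybrid (kx, y) space as a known phase modulation whose amplitude and phase are exactly the quantities an autocalibration would estimate jointly with the parallel imaging reconstruction. The sketch below, with illustrative units and function names, shows only the modulation itself, not the joint optimization or the model reduction.

```python
import numpy as np

def wave_psf(n_readout, n_y, amplitude, phase, cycles=6):
    """Hybrid-space (kx, y) phase modulation produced by a sinusoidal wave
    gradient during the readout. `amplitude` (extra k-space excursion, in
    units of 1/FOV) and `phase` are the parameters a trajectory
    autocalibration would estimate; values here are illustrative."""
    t = np.linspace(0.0, 1.0, n_readout, endpoint=False)
    k_wave = amplitude * np.sin(2 * np.pi * cycles * t + phase)
    y = np.arange(n_y) - n_y // 2                   # voxel positions along y
    return np.exp(2j * np.pi * np.outer(k_wave, y) / n_y)   # (n_readout, n_y)

def apply_wave(hybrid_kxy, amplitude, phase, cycles=6):
    """Apply the wave modulation to hybrid (kx, y) data; multiplying by the
    conjugate PSF removes it, which is the core of the forward model a joint
    trajectory/reconstruction search would repeatedly evaluate."""
    n_kx, n_y = hybrid_kxy.shape
    psf = wave_psf(n_kx, n_y, amplitude, phase, cycles)
    return hybrid_kxy * psf
```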
Collapse
Affiliation(s)
- Stephen F Cauley
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA.,Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA.,Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA.,Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Himanshu Bhat
- Siemens Medical Solutions, Malvern, Pennsylvania, USA
| | - Borjan Gagoski
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA.,Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children's Hospital, Boston, Massachusetts, USA
| | - Lawrence L Wald
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA.,Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA.,Harvard-MIT Division of Health Sciences and Technology; Institute of Medical Engineering and Science, MIT, Cambridge, Massachusetts, USA
| |
Collapse
|