1
Singh D, Regatte RR, Zibetti MVW. HDNLS: Hybrid Deep-Learning and Non-Linear Least Squares-Based Method for Fast Multi-Component T1ρ Mapping in the Knee Joint. Bioengineering (Basel) 2024; 12:8. [PMID: 39851282; PMCID: PMC11761554; DOI: 10.3390/bioengineering12010008]
Abstract
Non-linear least squares (NLS) methods are commonly used for quantitative magnetic resonance imaging (MRI), especially for multi-exponential T1ρ mapping, which provides precise parameter estimation for different relaxation models in tissues, such as mono-exponential (ME), bi-exponential (BE), and stretched-exponential (SE) models. However, NLS may suffer from problems like sensitivity to initial guesses, slow convergence speed, and high computational cost. While deep learning (DL)-based T1ρ fitting methods offer faster alternatives, they often face challenges such as noise sensitivity and reliance on NLS-generated reference data for training. To address the limitations of both approaches, we propose HDNLS, a hybrid model for fast multi-component parameter mapping, targeted particularly at T1ρ mapping in the knee joint. HDNLS combines voxel-wise DL, trained with synthetic data, with a few iterations of NLS to accelerate the fitting process, thus eliminating the need for reference MRI data for training. Due to the inverse-problem nature of parameter mapping, certain parameters in a specific model may be more sensitive to noise, such as the short component in the BE model. To address this, the number of NLS iterations in HDNLS can act as a form of regularization, stabilizing the estimation to obtain meaningful solutions. In this work, we therefore conducted a comprehensive analysis of the impact of NLS iterations on HDNLS performance and proposed four variants that balance estimation accuracy and computational speed: Ultrafast-NLS, Superfast-HDNLS, HDNLS, and Relaxed-HDNLS. These variants allow users to select a suitable configuration based on their specific speed and performance requirements. Among them, HDNLS emerges as the optimal trade-off between performance and fitting time. Extensive experiments on synthetic data demonstrate that HDNLS achieves performance comparable to NLS and regularized NLS (RNLS) with at least a 13-fold improvement in speed. HDNLS is only slightly slower than DL-based methods, yet it significantly improves estimation quality, offering a T1ρ fitting solution that is both fast and reliable.
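For readers unfamiliar with the relaxation models named above, the following minimal sketch shows how mono-, bi-, and stretched-exponential T1ρ decays can be fitted voxel-wise with non-linear least squares in SciPy. It is not the authors' implementation; the spin-lock times, true parameters, noise level, initial guesses, and bounds are illustrative assumptions.

```python
# Hedged sketch: generic mono-/bi-/stretched-exponential T1rho models fitted
# voxel-wise with non-linear least squares. Spin-lock times, true parameters,
# noise level, and bounds are illustrative assumptions, not paper values.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(tsl, s0, t1rho):
    return s0 * np.exp(-tsl / t1rho)

def bi_exp(tsl, s0, frac_short, t1rho_short, t1rho_long):
    return s0 * (frac_short * np.exp(-tsl / t1rho_short)
                 + (1.0 - frac_short) * np.exp(-tsl / t1rho_long))

def stretched_exp(tsl, s0, t1rho, beta):
    return s0 * np.exp(-(tsl / t1rho) ** beta)

tsl = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])   # spin-lock times (ms)
rng = np.random.default_rng(0)
signal = bi_exp(tsl, 1.0, 0.3, 5.0, 45.0) + rng.normal(0, 0.01, tsl.size)

# Voxel-wise NLS fits; initial guesses and bounds act as weak priors.
p_me, _ = curve_fit(mono_exp, tsl, signal, p0=[1.0, 30.0])
p_be, _ = curve_fit(bi_exp, tsl, signal, p0=[1.0, 0.5, 5.0, 50.0],
                    bounds=([0, 0, 0.1, 1], [2, 1, 20, 200]))
p_se, _ = curve_fit(stretched_exp, tsl, signal, p0=[1.0, 30.0, 0.8],
                    bounds=([0, 0.1, 0.1], [2, 200, 1.0]))
print("mono-exponential fit:", p_me)
print("bi-exponential fit:  ", p_be)
print("stretched-exp fit:   ", p_se)
```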
Affiliation(s)
- Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA;
- Marcelo V. W. Zibetti
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA;
2
Pei H, Shepherd TM, Wang Y, Liu F, Sodickson DK, Ben-Eliezer N, Feng L. DeepEMC-T2 mapping: Deep learning-enabled T2 mapping based on echo modulation curve modeling. Magn Reson Med 2024; 92:2707-2722. [PMID: 39129209; PMCID: PMC11436299; DOI: 10.1002/mrm.30239]
Abstract
PURPOSE Echo modulation curve (EMC) modeling enables accurate quantification of T2 relaxation times in multi-echo spin-echo (MESE) imaging. The standard EMC-T2 mapping framework, however, requires sufficient echoes and cumbersome pixel-wise dictionary-matching steps. This work proposes a deep learning version of EMC-T2 mapping, called DeepEMC-T2 mapping, to efficiently estimate accurate T2 maps from fewer echoes. METHODS DeepEMC-T2 mapping was developed using a modified U-Net to estimate both T2 and proton density (PD) maps directly from MESE images. The network implements several new features to improve the accuracy of T2/PD estimation. A total of 67 MESE datasets acquired in axial orientation were used for network training and evaluation. An additional 57 datasets acquired in coronal orientation with different scan parameters were used to evaluate the generalizability of the framework. The performance of DeepEMC-T2 mapping was evaluated in seven experiments. RESULTS Compared to the reference, DeepEMC-T2 mapping achieved T2 estimation errors from 1% to 11% and PD estimation errors from 0.4% to 1.5% with ten/seven/five/three echoes, which are more accurate than standard EMC-T2 mapping. By incorporating datasets acquired with different scan parameters and orientations for joint training, DeepEMC-T2 exhibits robust generalizability across varying imaging protocols. Increasing the echo spacing and including longer echoes improve the accuracy of parameter estimation. The new features proposed in DeepEMC-T2 mapping all enabled more accurate T2 estimation. CONCLUSIONS DeepEMC-T2 mapping enables simplified, efficient, and accurate T2 quantification directly from MESE images without dictionary matching. Accurate T2 estimation from fewer echoes allows for increased volumetric coverage and/or higher slice resolution without prolonging total scan times.
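As context for the dictionary-matching step that DeepEMC-T2 mapping is designed to replace, the sketch below illustrates generic pixel-wise dictionary matching for T2 and proton density. Plain exponential decays stand in for the Bloch-simulated echo-modulation curves of the real EMC framework, and the echo times, T2 grid, ground truth, and noise level are assumed values.

```python
# Hedged sketch of pixel-wise dictionary matching for T2/PD estimation.
# Plain exponentials stand in for Bloch-simulated echo-modulation curves;
# echo times, T2 grid, ground truth, and noise are assumptions.
import numpy as np

te = np.arange(10.0, 110.0, 10.0)              # echo times (ms)
t2_grid = np.arange(20.0, 301.0, 1.0)          # candidate T2 values (ms)

# Dictionary: one decay curve per candidate T2, normalized for matching.
atoms = np.exp(-te[None, :] / t2_grid[:, None])
atoms_norm = atoms / np.linalg.norm(atoms, axis=1, keepdims=True)

rng = np.random.default_rng(1)
measured = 0.8 * np.exp(-te / 75.0) + rng.normal(0, 0.01, te.size)

# The best-matching atom maximizes correlation with the measured curve;
# proton density follows from a least-squares scaling of that atom.
idx = int(np.argmax(atoms_norm @ (measured / np.linalg.norm(measured))))
best = atoms[idx]
pd_est = measured @ best / (best @ best)
print(f"estimated T2 = {t2_grid[idx]:.0f} ms, PD = {pd_est:.2f}")
```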
Affiliation(s)
- Haoyang Pei
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, USA
- Department of Electrical and Computer Engineering and Department of Biomedical Engineering, NYU Tandon School of Engineering, New York, NY, USA
- Timothy M. Shepherd
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, USA
- Yao Wang
- Department of Electrical and Computer Engineering and Department of Biomedical Engineering, NYU Tandon School of Engineering, New York, NY, USA
- Fang Liu
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Daniel K Sodickson
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, USA
- Noam Ben-Eliezer
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, USA
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel-Aviv, Israel
- Li Feng
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, USA
3
Jiao M, Xian X, Wang B, Zhang Y, Yang S, Chen S, Sun H, Liu F. XDL-ESI: Electrophysiological Sources Imaging via explainable deep learning framework with validation on simultaneous EEG and iEEG. Neuroimage 2024; 299:120802. [PMID: 39173694; PMCID: PMC11549933; DOI: 10.1016/j.neuroimage.2024.120802]
Abstract
Electroencephalography (EEG) or Magnetoencephalography (MEG) source imaging aims to estimate the underlying activated brain sources to explain the observed EEG/MEG recordings. Solving the inverse problem of EEG/MEG Source Imaging (ESI) is challenging due to its ill-posed nature. To achieve a unique solution, it is essential to apply sophisticated regularization constraints to restrict the solution space. Traditionally, the design of regularization terms is based on assumptions about the spatiotemporal structure of the underlying source dynamics. In this paper, we propose a novel paradigm for ESI via an Explainable Deep Learning framework, termed XDL-ESI, which connects the iterative optimization algorithm with a deep learning architecture by unfolding the iterative updates into neural network modules. The proposed framework has the advantages of (1) establishing a data-driven approach to model the source solution structure instead of using hand-crafted regularization terms; (2) improving the robustness of source solutions by introducing a topological loss that leverages geometric spatial information by applying varying penalties to distinct localization errors; (3) improving reconstruction efficiency and interpretability, as it inherits the advantages of both iterative optimization algorithms (interpretability) and deep learning approaches (function approximation). The proposed XDL-ESI framework provides an efficient, accurate, and interpretable paradigm to solve the ESI inverse problem with satisfactory performance in both simulated data and real clinical data. Specifically, this approach is further validated using simultaneous EEG and intracranial EEG (iEEG).
Affiliation(s)
- Meng Jiao
- Department of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, 07030, United States
- Xiaochen Xian
- H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, United States
- Boyu Wang
- Department of Computer Science, University of Western Ontario, Ontario, N6A 3K7, Canada
- Yu Zhang
- Department of Bioengineering, Lehigh University, Bethlehem, PA, 18015, United States
- Shihao Yang
- Department of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, 07030, United States
- Spencer Chen
- Department of Neurosurgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, 08901, United States
- Hai Sun
- Department of Neurosurgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, 08901, United States
- Feng Liu
- Department of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, 07030, United States; Semcer Center for Healthcare Innovation, Stevens Institute of Technology, Hoboken, NJ, 07030, United States.
4
Zheng M, Lou F, Huang Y, Pan S, Zhang X. MR-based electrical property tomography using a physics-informed network at 3 and 7 T. NMR Biomed 2024; 37:e5137. [PMID: 38439522; DOI: 10.1002/nbm.5137]
Abstract
Magnetic resonance electrical property tomography promises to retrieve electrical properties (EPs) quantitatively and non-invasively in vivo, providing valuable information for tissue characterization and pathology diagnosis. However, its clinical implementation has been hindered by, for example, B1 measurement accuracy, reconstruction artifacts resulting from inaccuracies in underlying models, and stringent hardware/software requirements. To address these challenges, we present a novel approach aimed at accurate and high-resolution EP reconstruction based on water content maps by using a physics-informed network (PIN-wEPT). The proposed method utilizes standard clinical protocols and conventional multi-channel receive arrays that are routinely equipped in clinical settings, thus eliminating the need for specialized RF sequence/coil configurations. Compared with the original wEPT method, the network generates accurate water content maps that effectively eliminate the influence of B1+ and B1- by incorporating data mismatch with electrodynamic constraints derived from the Helmholtz equation. Subsequent regression analysis develops a broad relationship between water content and EPs across various types of brain tissue. A series of numerical simulations was conducted at 7 T to assess the feasibility and performance of the method, encompassing four normal head models and models with tumorous tissues incorporated; the results showed a normalized mean square error below 1.0% in water content, below 11.7% in conductivity, and below 1.1% in permittivity reconstructions for normal brain tissues. Moreover, in vivo validations conducted over five healthy subjects at both 3 and 7 T showed reasonably good consistency with empirical EP values across the white matter, gray matter, and cerebrospinal fluid. The PIN-wEPT method, with its demonstrated efficacy, flexibility, and compatibility with current MRI scanners, holds promising potential for future clinical application.
Affiliation(s)
- Mengxuan Zheng
- Interdisciplinary Institute of Neuroscience and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, China
- Feiyang Lou
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, China
- School of Medicine, Zhejiang University, Hangzhou, China
- Yiman Huang
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, China
- College of Electrical Engineering, Zhejiang University, Hangzhou, China
- Sihong Pan
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, China
- College of Electrical Engineering, Zhejiang University, Hangzhou, China
- Xiaotong Zhang
- Interdisciplinary Institute of Neuroscience and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, China
- School of Medicine, Zhejiang University, Hangzhou, China
- College of Electrical Engineering, Zhejiang University, Hangzhou, China
- Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
5
Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. [PMID: 38624162; DOI: 10.1002/mrm.30105]
Abstract
Deep learning (DL) has emerged as a leading approach in accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions that include strategies for learning neural networks and for addressing different imaging application scenarios. The traits and trends of these techniques are also summarized, showing a shift from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are described, along with discussions of open questions and future directions that are critical for reliable imaging systems.
Affiliation(s)
- Shanshan Wang
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying
- Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
6
Bian W, Jang A, Liu F. Improving quantitative MRI using self-supervised deep learning with model reinforcement: Demonstration for rapid T1 mapping. Magn Reson Med 2024; 92:98-111. [PMID: 38342980; PMCID: PMC11055673; DOI: 10.1002/mrm.30045]
Abstract
PURPOSE This paper proposes a novel self-supervised learning framework that uses model reinforcement, REference-free LAtent map eXtraction with MOdel REinforcement (RELAX-MORE), for accelerated quantitative MRI (qMRI) reconstruction. The proposed method uses an optimization algorithm to unroll an iterative model-based qMRI reconstruction into a deep learning framework, enabling accelerated MR parameter mapping that is highly accurate and robust. METHODS Unlike conventional deep learning methods, which require large amounts of training data, RELAX-MORE is a subject-specific method that can be trained on single-subject data through self-supervised learning, making it accessible and practically applicable to many qMRI studies. Using quantitative T1 mapping as an example, the proposed method was applied to brain, knee, and phantom data. RESULTS The proposed method generates high-quality MR parameter maps that correct for image artifacts, remove noise, and recover image features in regions of imperfect image conditions. Compared with other state-of-the-art conventional and deep learning methods, RELAX-MORE significantly improves efficiency, accuracy, robustness, and generalizability for rapid MR parameter mapping. CONCLUSION This work demonstrates the feasibility of a new self-supervised learning method for rapid MR parameter mapping that is readily adaptable to the clinical translation of qMRI.
Affiliation(s)
- Wanyu Bian
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Albert Jang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Fang Liu
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
7
Jun Y, Arefeen Y, Cho J, Fujita S, Wang X, Ellen Grant P, Gagoski B, Jaimes C, Gee MS, Bilgic B. Zero-DeepSub: Zero-shot deep subspace reconstruction for rapid multiparametric quantitative MRI using 3D-QALAS. Magn Reson Med 2024; 91:2459-2482. [PMID: 38282270; PMCID: PMC11005062; DOI: 10.1002/mrm.30018]
Abstract
PURPOSE To develop and evaluate methods for (1) reconstructing 3D-quantification using an interleaved Look-Locker acquisition sequence with T2 preparation pulse (3D-QALAS) time-series images using a low-rank subspace method, which enables accurate and rapid T1 and T2 mapping, and (2) improving the fidelity of subspace QALAS by combining scan-specific deep-learning-based reconstruction and subspace modeling. THEORY AND METHODS A low-rank subspace method for 3D-QALAS (i.e., subspace QALAS) and zero-shot deep-learning subspace method (i.e., Zero-DeepSub) were proposed for rapid and high fidelity T1 and T2 mapping and time-resolved imaging using 3D-QALAS. Using an ISMRM/NIST system phantom, the accuracy and reproducibility of the T1 and T2 maps estimated using the proposed methods were evaluated by comparing them with reference techniques. The reconstruction performance of the proposed subspace QALAS using Zero-DeepSub was evaluated in vivo and compared with conventional QALAS at high reduction factors of up to nine-fold. RESULTS Phantom experiments showed that subspace QALAS had good linearity with respect to the reference methods while reducing biases and improving precision compared to conventional QALAS, especially for T2 maps. Moreover, in vivo results demonstrated that subspace QALAS had better g-factor maps and could reduce voxel blurring, noise, and artifacts compared to conventional QALAS and showed robust performance at up to nine-fold acceleration with Zero-DeepSub, which enabled whole-brain T1, T2, and PD mapping at 1 mm isotropic resolution within 2 min of scan time. CONCLUSION The proposed subspace QALAS along with Zero-DeepSub enabled high fidelity and rapid whole-brain multiparametric quantification and time-resolved imaging.
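The low-rank subspace idea referenced above can be illustrated with a small numerical sketch: an ensemble of simulated signal evolutions is compressed into a few principal temporal components obtained by SVD, so the reconstruction only needs to solve for a handful of coefficient images. Simple saturation-recovery curves stand in for Bloch-simulated 3D-QALAS signals, and the time points, T1 range, and subspace size are assumptions.

```python
# Hedged sketch of a low-rank temporal subspace: an ensemble of simulated
# signal evolutions is compressed into a few SVD-derived temporal components.
# Saturation-recovery curves stand in for Bloch-simulated 3D-QALAS signals;
# time points, T1 range, and subspace size K are assumptions.
import numpy as np

t = np.linspace(0.05, 3.0, 60)                     # acquisition time points (s)
t1_values = np.linspace(0.3, 3.0, 200)             # training T1 range (s)
ensemble = 1.0 - np.exp(-t[None, :] / t1_values[:, None])

_, s, vt = np.linalg.svd(ensemble, full_matrices=False)
K = 4
basis = vt[:K]                                     # K temporal basis functions

# Any evolution in this family is well approximated by K coefficients,
# so image reconstruction only needs to estimate K coefficient maps.
x = 1.0 - np.exp(-t / 1.2)
coeff = basis @ x
rel_err = np.linalg.norm(x - basis.T @ coeff) / np.linalg.norm(x)
print(f"relative approximation error with K = {K}: {rel_err:.1e}")
```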
Affiliation(s)
- Yohan Jun
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Yamin Arefeen
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas, Austin, TX, United States
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Jaejin Cho
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Shohei Fujita
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Xiaoqing Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- P. Ellen Grant
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, MA, United States
- Borjan Gagoski
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, MA, United States
- Camilo Jaimes
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Michael S. Gee
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Harvard/MIT Health Sciences and Technology, Cambridge, MA, United States
8
Lu Q, Li J, Lian Z, Zhang X, Feng Q, Chen W, Ma J, Feng Y. A model-based MR parameter mapping network robust to substantial variations in acquisition settings. Med Image Anal 2024; 94:103148. [PMID: 38554550; DOI: 10.1016/j.media.2024.103148]
Abstract
Deep learning methods show great potential for the efficient and precise estimation of quantitative parameter maps from multiple magnetic resonance (MR) images. Current deep learning-based MR parameter mapping (MPM) methods are mostly trained and tested using data with specific acquisition settings. However, scan protocols usually vary with centers, scanners, and studies in practice. Thus, deep learning methods applicable to MPM with varying acquisition settings are highly required but still rarely investigated. In this work, we develop a model-based deep network termed MMPM-Net for robust MPM with varying acquisition settings. A deep learning-based denoiser is introduced to construct the regularization term in the nonlinear inversion problem of MPM. The alternating direction method of multipliers is used to solve the optimization problem and then unrolled to construct MMPM-Net. The variation in acquisition parameters can be addressed by the data fidelity component in MMPM-Net. Extensive experiments are performed on R2 mapping and R1 mapping datasets with substantial variations in acquisition settings, and the results demonstrate that the proposed MMPM-Net method outperforms other state-of-the-art MR parameter mapping methods both qualitatively and quantitatively.
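One way to see why an explicit data-fidelity (forward-model) term makes a mapping method robust to protocol variation is that the acquisition parameters enter the model directly. The hedged sketch below fits the same mono-exponential R2 model to two hypothetical echo-time protocols with identical code; all values are illustrative and unrelated to the MMPM-Net implementation.

```python
# Hedged sketch: echo times enter the forward model explicitly, so the same
# data-fidelity (residual) code handles different acquisition protocols.
# Echo times, true values, and noise are illustrative, not from MMPM-Net.
import numpy as np
from scipy.optimize import least_squares

def r2_residual(params, te, measured):
    s0, r2 = params
    return s0 * np.exp(-r2 * te) - measured        # forward model minus data

rng = np.random.default_rng(2)
protocols = (np.array([10.0, 20.0, 30.0, 40.0]),          # protocol A TEs (ms)
             np.array([5.0, 15.0, 35.0, 60.0, 90.0]))     # protocol B TEs (ms)
for te in protocols:
    data = 1.0 * np.exp(-0.04 * te) + rng.normal(0, 0.005, te.size)
    fit = least_squares(r2_residual, x0=[1.0, 0.02], args=(te, data))
    print(f"TEs {te} ms -> R2 = {fit.x[1] * 1000:.0f} 1/s")
```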
Affiliation(s)
- Qiqi Lu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510000, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510000, China; Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-Inspired Intelligence & Key Laboratory of Mental Health of the Ministry of Education & Guangdong-Hong Kong Joint Laboratory for Psychiatric Disorders, Southern Medical University, Guangzhou 510000, China; Department of Radiology, Shunde Hospital, Southern Medical University (The First People's Hospital of Shunde, Foshan), Foshan 528000, China
- Jialong Li
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510000, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510000, China; Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-Inspired Intelligence & Key Laboratory of Mental Health of the Ministry of Education & Guangdong-Hong Kong Joint Laboratory for Psychiatric Disorders, Southern Medical University, Guangzhou 510000, China; Department of Radiology, Shunde Hospital, Southern Medical University (The First People's Hospital of Shunde, Foshan), Foshan 528000, China
- Zifeng Lian
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510000, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510000, China; Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-Inspired Intelligence & Key Laboratory of Mental Health of the Ministry of Education & Guangdong-Hong Kong Joint Laboratory for Psychiatric Disorders, Southern Medical University, Guangzhou 510000, China; Department of Radiology, Shunde Hospital, Southern Medical University (The First People's Hospital of Shunde, Foshan), Foshan 528000, China
- Xinyuan Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510000, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510000, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510000, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510000, China
- Wufan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510000, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510000, China
- Jianhua Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510000, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510000, China
- Yanqiu Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510000, China; Guangdong Provincial Key Laboratory of Medical Image Processing & Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510000, China; Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-Inspired Intelligence & Key Laboratory of Mental Health of the Ministry of Education & Guangdong-Hong Kong Joint Laboratory for Psychiatric Disorders, Southern Medical University, Guangzhou 510000, China; Department of Radiology, Shunde Hospital, Southern Medical University (The First People's Hospital of Shunde, Foshan), Foshan 528000, China.
9
Hellström M, Löfstedt T, Garpebring A. Denoising and uncertainty estimation in parameter mapping with approximate Bayesian deep image priors. Magn Reson Med 2023; 90:2557-2571. [PMID: 37582257; DOI: 10.1002/mrm.29823]
Abstract
PURPOSE To mitigate the problem of noisy parameter maps with high uncertainties by casting parameter mapping as a denoising task based on Deep Image Priors. METHODS We extend the concept of denoising with a Deep Image Prior (DIP) to parameter mapping by treating the output of an image-generating network as a parametrization of tissue parameter maps. The method implicitly denoises the parameter mapping process by filtering low-level image features with an untrained convolutional neural network (CNN). Our implementation includes uncertainty estimation from Bernoulli approximate variational inference, implemented with MC dropout, which provides model uncertainty in each voxel of the denoised parameter maps. The method is modular, so the specifics of different applications (e.g., T1 mapping) separate into application-specific signal equation blocks. We evaluate the method on variable flip angle T1 mapping, multi-echo T2 mapping, and apparent diffusion coefficient mapping. RESULTS We found that the deep image prior adapts successfully to several applications in parameter mapping. In all evaluations, the method produces noise-reduced parameter maps with decreased uncertainty compared to conventional methods. The downsides of the proposed method are the long computational time and the introduction of some bias from the denoising prior. CONCLUSION DIP successfully denoises the parameter mapping process and applies to several applications with limited hyperparameter tuning. Further, it is easy to implement since DIP methods do not use network training data. Although time-consuming, uncertainty information from MC dropout makes the method more robust and provides useful information when properly calibrated.
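The "application-specific signal equation blocks" mentioned above are simply the forward models of each mapping task. As one assumed example, the spoiled gradient-echo equation used in variable flip angle T1 mapping can be written and fitted as in the sketch below; TR, flip angles, true values, and noise are illustrative, not from the cited study.

```python
# Hedged sketch of the spoiled gradient-echo signal equation behind variable
# flip angle T1 mapping. TR, flip angles, true values, and noise are
# illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

TR = 15.0  # repetition time (ms)

def spgr(alpha_deg, m0, t1):
    a = np.deg2rad(alpha_deg)
    e1 = np.exp(-TR / t1)
    return m0 * np.sin(a) * (1.0 - e1) / (1.0 - e1 * np.cos(a))

alphas = np.array([2.0, 5.0, 10.0, 15.0, 20.0])    # flip angles (degrees)
rng = np.random.default_rng(3)
signal = spgr(alphas, 1.0, 900.0) + rng.normal(0, 0.002, alphas.size)

(m0_est, t1_est), _ = curve_fit(spgr, alphas, signal, p0=[1.0, 1000.0])
print(f"estimated T1 = {t1_est:.0f} ms (M0 = {m0_est:.2f})")
```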
Affiliation(s)
- Max Hellström
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Tommy Löfstedt
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Department of Computing Science, Umeå University, Umeå, Sweden
10
Jun Y, Cho J, Wang X, Gee M, Grant PE, Bilgic B, Gagoski B. SSL-QALAS: Self-Supervised Learning for rapid multiparameter estimation in quantitative MRI using 3D-QALAS. Magn Reson Med 2023; 90:2019-2032. [PMID: 37415389; PMCID: PMC10527557; DOI: 10.1002/mrm.29786]
Abstract
PURPOSE To develop and evaluate a method for rapid estimation of multiparametric T1, T2, proton density, and inversion efficiency maps from 3D-quantification using an interleaved Look-Locker acquisition sequence with T2 preparation pulse (3D-QALAS) measurements using self-supervised learning (SSL) without the need for an external dictionary. METHODS An SSL-based QALAS mapping method (SSL-QALAS) was developed for rapid and dictionary-free estimation of multiparametric maps from 3D-QALAS measurements. The accuracy of the reconstructed quantitative maps using dictionary matching and SSL-QALAS was evaluated by comparing the estimated T1 and T2 values with those obtained from the reference methods on an International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology phantom. The SSL-QALAS and dictionary-matching methods were also compared in vivo, and generalizability was evaluated by comparing the scan-specific, pre-trained, and transfer learning models. RESULTS Phantom experiments showed that both the dictionary-matching and SSL-QALAS methods produced T1 and T2 estimates that had a strong linear agreement with the reference values in the International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology phantom. Further, SSL-QALAS showed performance similar to dictionary matching in reconstructing the T1, T2, proton density, and inversion efficiency maps on in vivo data. Rapid reconstruction of multiparametric maps was enabled by inferring the data using a pre-trained SSL-QALAS model within 10 s. Fast scan-specific tuning was also demonstrated by fine-tuning the pre-trained model with the target subject's data within 15 min. CONCLUSION The proposed SSL-QALAS method enabled rapid reconstruction of multiparametric maps from 3D-QALAS measurements without an external dictionary or labeled ground-truth training data.
Affiliation(s)
- Yohan Jun
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Jaejin Cho
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Xiaoqing Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Michael Gee
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- P. Ellen Grant
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, MA, United States
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Harvard/MIT Health Sciences and Technology, Cambridge, MA, United States
- Borjan Gagoski
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, MA, United States
11
Jun Y, Park YW, Shin H, Shin Y, Lee JR, Han K, Ahn SS, Lim SM, Hwang D, Lee SK. Intelligent noninvasive meningioma grading with a fully automatic segmentation using interpretable multiparametric deep learning. Eur Radiol 2023; 33:6124-6133. [PMID: 37052658; DOI: 10.1007/s00330-023-09590-4]
Abstract
OBJECTIVES To establish a robust interpretable multiparametric deep learning (DL) model for automatic noninvasive grading of meningiomas along with segmentation. METHODS In total, 257 patients with pathologically confirmed meningiomas (162 low-grade, 95 high-grade) who underwent a preoperative brain MRI, including T2-weighted (T2) and contrast-enhanced T1-weighted images (T1C), were included in the institutional training set. A two-stage DL grading model was constructed for segmentation and classification based on multiparametric three-dimensional U-net and ResNet. The models were validated in an external validation set consisting of 61 patients with meningiomas (46 low-grade, 15 high-grade). The Relevance-weighted Class Activation Mapping (RCAM) method was used to interpret the DL features contributing to the prediction of the DL grading model. RESULTS On external validation, the combined T1C and T2 model showed a Dice coefficient of 0.910 in segmentation and the highest performance for meningioma grading compared to the T2-only or T1C-only models, with an area under the curve (AUC) of 0.770 (95% confidence interval: 0.644-0.895) and accuracy, sensitivity, and specificity of 72.1%, 73.3%, and 71.7%, respectively. The AUC and accuracy of the combined DL grading model were higher than those of the human readers (AUCs of 0.675-0.690 and accuracies of 65.6-68.9%, respectively). The RCAM of the DL grading model showed activation at the surface regions of meningiomas, indicating that the model recognized features at the tumor margin for grading. CONCLUSIONS An interpretable multiparametric DL model combining T1C and T2 can enable fully automatic grading of meningiomas along with segmentation. KEY POINTS • The multiparametric DL model showed robustness in grading and segmentation on external validation. • The diagnostic performance of the combined DL grading model was higher than that of the human readers. • RCAM showed that the DL grading model recognized meaningful features at the tumor margin for grading.
Affiliation(s)
- Yohan Jun
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Yae Won Park
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Hyungseob Shin
- School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Yejee Shin
- School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Jeong Ryong Lee
- School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Kyunghwa Han
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
- Sung Soo Ahn
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea.
- Soo Mee Lim
- Department of Radiology, Ewha Womans University College of Medicine, Seoul, Korea
- Dosik Hwang
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea.
- School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea.
- Center for Healthcare Robotics, Korea Institute of Science and Technology, Seoul, Korea.
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea.
- Seung-Koo Lee
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
12
Shin H, Park JE, Jun Y, Eo T, Lee J, Kim JE, Lee DH, Moon HH, Park SI, Kim S, Hwang D, Kim HS. Deep learning referral suggestion and tumour discrimination using explainable artificial intelligence applied to multiparametric MRI. Eur Radiol 2023; 33:5859-5870. [PMID: 37150781; DOI: 10.1007/s00330-023-09710-0]
Abstract
OBJECTIVES An appropriate and fast clinical referral suggestion is important for intra-axial mass-like lesions (IMLLs) in the emergency setting. We aimed to apply an interpretable deep learning (DL) system to multiparametric MRI to obtain clinical referral suggestions for IMLLs, and to validate it in the setting of nontraumatic emergency neuroradiology. METHODS A DL system was developed in 747 patients with IMLLs spanning 30 diseases who underwent pre- and post-contrast T1-weighted (T1CE), FLAIR, and diffusion-weighted imaging (DWI). The system segments IMLLs, classifies tumourous conditions, and suggests clinical referral among surgery, systematic work-up, medical treatment, and conservative treatment. The system was validated in an independent cohort of 130 emergency patients, and performance in referral suggestion and tumour discrimination was compared with that of radiologists using receiver operating characteristic curve, precision-recall curve analysis, and confusion matrices. Multiparametric interpretable visualisation of high-relevance regions from layer-wise relevance propagation overlaid on contrast-enhanced T1WI and DWI was analysed. RESULTS The DL system provided correct referral suggestions in 94 of 130 patients (72.3%) and performed comparably to radiologists (accuracy 72.6%, McNemar test; p = .942). For distinguishing tumours from non-tumourous conditions, the DL system (AUC, 0.90 and AUPRC, 0.94) performed similarly to human readers (AUC, 0.81~0.92, and AUPRC, 0.88~0.95). Solid portions of tumours showed a high overlap of relevance, but non-tumours did not (Dice coefficient 0.77 vs. 0.33, p < .001), demonstrating the basis of the DL system's decisions. CONCLUSIONS Our DL system could appropriately triage patients using multiparametric MRI and provide interpretability through multiparametric heatmaps, and may thereby aid neuroradiologic diagnoses in emergency settings. CLINICAL RELEVANCE STATEMENT Our AI system triages patients with brain intra-axial mass-like lesions from raw MRI images to clinical referral pathways. We demonstrate that the decision is based on the relative relevance between contrast-enhanced T1-weighted and diffusion-weighted images, providing explainability across multiparametric MRI data. KEY POINTS • A deep learning (DL) system using multiparametric MRI suggested clinical referral for patients with intra-axial mass-like lesions (IMLLs) similarly to radiologists (accuracy 72.3% vs. 72.6%). • In the differentiation of tumourous and non-tumourous conditions, the DL system (AUC, 0.90) performed similarly to radiologists (AUC, 0.81-0.92). • The DL system's decision basis for differentiating tumours from non-tumours can be quantified using multiparametric heatmaps obtained via the layer-wise relevance propagation method.
Affiliation(s)
- Hyungseob Shin
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Yohan Jun
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Taejoon Eo
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Jeongryong Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Ji Eun Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Da Hyun Lee
- Department of Radiology, Ajou University School of Medicine, Gyeonggi-Do, Korea
- Hye Hyeon Moon
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Sang Ik Park
- Department of Radiology, Chung-Ang University Hospital, Seoul, Korea
- Seonok Kim
- Department of Clinical Epidemiology and Biostatistics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Dosik Hwang
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea.
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-Ro 14-Gil, Seongbuk-Gu, Seoul, 02792, Korea.
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea.
- Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea.
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
13
Cascade of Denoising and Mapping Neural Networks for MRI R2* Relaxometry of Iron-Loaded Liver. Bioengineering (Basel) 2023; 10:209. [PMID: 36829703; PMCID: PMC9952355; DOI: 10.3390/bioengineering10020209]
Abstract
MRI measurement of the effective transverse relaxation rate (R2*) is a reliable method for quantifying liver iron concentration. However, R2* mapping can be degraded by noise, especially in the case of iron overload. This study aimed to develop a deep learning method for MRI R2* relaxometry of an iron-loaded liver using a two-stage cascaded neural network. The proposed method, named CadamNet, combines two convolutional neural networks separately designed for image denoising and parameter mapping into a cascade framework, and the physics-based R2* decay model was incorporated in training the mapping network to further enforce data consistency. CadamNet was trained using simulated liver data with Rician noise, which was constructed from clinical liver data. The performance of CadamNet was quantitatively evaluated on simulated data with varying noise levels as well as clinical liver data and compared with the single-stage parameter mapping network (MappingNet) and two conventional model-based R2* mapping methods. CadamNet consistently achieved high-quality R2* maps and outperformed MappingNet at varying noise levels. Compared with conventional R2* mapping methods, CadamNet yielded R2* maps with lower errors, higher quality, and substantially increased efficiency. In conclusion, the proposed CadamNet enables accurate and efficient iron-loaded liver R2* mapping, especially in the presence of severe noise.
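A minimal sketch of the simulation strategy described above (mono-exponential R2* decay corrupted by Rician magnitude noise) is given below, together with a naive log-linear fit that illustrates the noise-floor bias at high R2*. The echo times, R2* value, and noise level are assumptions, not values from the study.

```python
# Hedged sketch: mono-exponential R2* decay corrupted by Rician magnitude
# noise, plus a naive log-linear fit that is biased by the noise floor.
# Echo times, R2*, and noise level are assumptions, not values from the study.
import numpy as np

te = np.arange(1.0, 13.0, 1.2)          # echo times (ms)
r2s_true = 0.4                          # true R2* in 1/ms (400 1/s, heavy iron)
sigma = 0.03                            # per-channel Gaussian noise level

clean = 1.0 * np.exp(-r2s_true * te)
rng = np.random.default_rng(4)
# Rician magnitude noise: add complex Gaussian noise, then take the magnitude.
noisy = np.abs(clean + rng.normal(0, sigma, te.size)
               + 1j * rng.normal(0, sigma, te.size))

# Naive log-linear fit; the noise floor at late echoes flattens the decay
# and biases R2* low, which is what the denoising stage is meant to prevent.
slope, _ = np.polyfit(te, np.log(noisy), 1)
print(f"log-linear R2* estimate: {-slope * 1000:.0f} 1/s (true: 400 1/s)")
```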
14
A unified model for reconstruction and R2* mapping of accelerated 7T data using the quantitative recurrent inference machine. Neuroimage 2022; 264:119680. [PMID: 36240989; DOI: 10.1016/j.neuroimage.2022.119680]
Abstract
Quantitative MRI (qMRI) acquired at the ultra-high field of 7 Tesla has been used in visualizing and analyzing subcortical structures. qMRI relies on the acquisition of multiple images with different scan settings, leading to extended scanning times. Data redundancy and prior information from the relaxometry model can be exploited by deep learning to accelerate the imaging process. We propose the quantitative Recurrent Inference Machine (qRIM), with a unified forward model for joint reconstruction and R2*-mapping from sparse data, embedded in a Recurrent Inference Machine (RIM), an iterative inverse problem-solving network. To study the dependency of the proposed unified forward model extension on the network architecture, we implemented and compared a quantitative End-to-End Variational Network (qE2EVN). Experiments were performed with high-resolution multi-echo gradient echo data of the brain at 7T from a cohort study covering the entire adult life span. The error in reconstructed R2* from undersampled data relative to reference data significantly decreased for the unified model compared to sequential image reconstruction and parameter fitting using the RIM. With increasing acceleration factor, an increasing reduction in the reconstruction error was observed, pointing to a larger benefit for sparser data. Qualitatively, this was accompanied by a reduction of image blurriness in R2* maps. In contrast, when using the U-Net as network architecture, a negative bias in R2* in selected regions of interest was observed. Compressed Sensing rendered accurate but less precise estimates of R2*. The qE2EVN showed slightly inferior reconstruction quality compared to the qRIM but better quality than the U-Net and Compressed Sensing. Subcortical maturation over age, measured by a linearly increasing interquartile range of R2* in the striatum, was preserved up to an acceleration factor of 9. With the integrated prior of the unified forward model, the proposed qRIM can exploit the redundancy among repeated measurements and shared information between tasks, facilitating relaxometry in accelerated MRI.
15
Amyar A, Guo R, Cai X, Assana S, Chow K, Rodriguez J, Yankama T, Cirillo J, Pierce P, Goddu B, Ngo L, Nezafat R. Impact of deep learning architectures on accelerated cardiac T1 mapping using MyoMapNet. NMR Biomed 2022; 35:e4794. [PMID: 35767308; PMCID: PMC9532368; DOI: 10.1002/nbm.4794]
Abstract
The objective of the current study was to investigate the performance of various deep learning (DL) architectures for MyoMapNet, a DL model for T1 estimation using accelerated cardiac T1 mapping from four T1-weighted images collected after a single inversion pulse (Look-Locker 4 [LL4]). We implemented and tested three DL architectures for MyoMapNet: (a) a fully connected neural network (FC), (b) convolutional neural networks (VGG19, ResNet50), and (c) encoder-decoder networks with skip connections (ResUNet, U-Net). Modified Look-Locker inversion recovery (MOLLI) images from 749 patients at 3 T were used for training, validation, and testing. The first four T1-weighted images from MOLLI5(3)3 and/or MOLLI4(1)3(1)2 protocols were extracted to create accelerated cardiac T1 mapping data. We also prospectively collected data from 28 subjects using MOLLI and LL4 to further evaluate model performance. Despite rigorous training, conventional VGG19 and ResNet50 models failed to produce anatomically correct T1 maps, and T1 values had significant errors. While ResUNet yielded good quality maps, it significantly underestimated T1. Both FC and U-Net, however, yielded excellent image quality with good T1 accuracy for both native (FC/U-Net/MOLLI = 1217 ± 64/1208 ± 61/1199 ± 61 ms, all p < 0.05) and postcontrast myocardial T1 (FC/U-Net/MOLLI = 578 ± 57/567 ± 54/574 ± 55 ms, all p < 0.05). In terms of precision, the U-Net model yielded better T1 precision compared with the FC architecture (standard deviation of 61 vs. 67 ms for the myocardium for native [p < 0.05], and 31 vs. 38 ms [p < 0.05], for postcontrast). Similar findings were observed in prospectively collected LL4 data. It was concluded that U-Net and FC DL models in MyoMapNet enable fast myocardial T1 mapping using only four T1-weighted images collected from a single LL sequence with comparable accuracy. U-Net also provides a slight improvement in precision.
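For orientation, the sketch below shows the three-parameter Look-Locker signal model and apparent-T1 correction that underlie MOLLI-style T1 estimation, fitted with non-linear least squares. The inversion times, true parameters, and noise are illustrative assumptions rather than the MyoMapNet training setup.

```python
# Hedged sketch of the three-parameter Look-Locker model and apparent-T1
# correction underlying MOLLI-style T1 estimation. Inversion times, true
# values, and noise are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def look_locker(ti, a, b, t1_star):
    return np.abs(a - b * np.exp(-ti / t1_star))   # magnitude inversion recovery

ti = np.array([100.0, 180.0, 260.0, 1100.0, 1180.0, 2100.0, 3100.0, 4100.0])  # ms
rng = np.random.default_rng(5)
signal = look_locker(ti, 1.0, 1.9, 800.0) + rng.normal(0, 0.005, ti.size)

(a, b, t1_star), _ = curve_fit(look_locker, ti, signal, p0=[1.0, 2.0, 1000.0])
t1 = t1_star * (b / a - 1.0)                       # Look-Locker correction
print(f"apparent T1* = {t1_star:.0f} ms, corrected T1 = {t1:.0f} ms")
```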
Affiliation(s)
- Amine Amyar
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Rui Guo
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Xiaoying Cai
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Siemens Medical Solutions USA, Inc., Boston, Massachusetts, USA
- Salah Assana
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Kelvin Chow
- Siemens Medical Solutions USA, Inc., Chicago, Illinois, USA
- Jennifer Rodriguez
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Tuyen Yankama
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Julia Cirillo
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Patrick Pierce
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Beth Goddu
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Long Ngo
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Reza Nezafat
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
16
Gurney-Champion OJ, Landry G, Redalen KR, Thorwarth D. Potential of Deep Learning in Quantitative Magnetic Resonance Imaging for Personalized Radiotherapy. Semin Radiat Oncol 2022; 32:377-388. [DOI: 10.1016/j.semradonc.2022.06.007]
17
Velasco C, Fletcher TJ, Botnar RM, Prieto C. Artificial intelligence in cardiac magnetic resonance fingerprinting. Front Cardiovasc Med 2022; 9:1009131. [PMID: 36204566; PMCID: PMC9530662; DOI: 10.3389/fcvm.2022.1009131]
Abstract
Magnetic resonance fingerprinting (MRF) is a fast MRI-based technique that allows for multiparametric quantitative characterization of the tissues of interest in a single acquisition. In particular, it has gained attention in the field of cardiac imaging due to its ability to provide simultaneous and co-registered myocardial T1 and T2 mapping in a single breath-held cardiac MRF scan, in addition to other parameters. Initial results in small healthy subject groups and clinical studies have demonstrated the feasibility and potential of MRF imaging. Ongoing research is being conducted to improve the accuracy, efficiency, and robustness of cardiac MRF. However, these improvements usually increase the complexity of image reconstruction and dictionary generation and introduce the need for sequence optimization. Each of these steps increase the computational demand and processing time of MRF. The latest advances in artificial intelligence (AI), including progress in deep learning and the development of neural networks for MRI, now present an opportunity to efficiently address these issues. Artificial intelligence can be used to optimize candidate sequences and reduce the memory demand and computational time required for reconstruction and post-processing. Recently, proposed machine learning-based approaches have been shown to reduce dictionary generation and reconstruction times by several orders of magnitude. Such applications of AI should help to remove these bottlenecks and speed up cardiac MRF, improving its practical utility and allowing for its potential inclusion in clinical routine. This review aims to summarize the latest developments in artificial intelligence applied to cardiac MRF. Particularly, we focus on the application of machine learning at different steps of the MRF process, such as sequence optimization, dictionary generation and image reconstruction.
Affiliation(s)
- Carlos Velasco
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Thomas J. Fletcher
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- René M. Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Institute for Biological and Medical Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering, Santiago, Chile
- Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Institute for Biological and Medical Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering, Santiago, Chile
18
Li S, Shen C, Ding Z, She H, Du YP. Accelerating multi-echo chemical shift encoded water-fat MRI using model-guided deep learning. Magn Reson Med 2022; 88:1851-1866. [PMID: 35649172; DOI: 10.1002/mrm.29307]
Abstract
PURPOSE To accelerate chemical shift encoded (CSE) water-fat imaging by applying a model-guided deep learning water-fat separation (MGDL-WF) framework to the undersampled k-space data. METHODS A model-guided deep learning water-fat separation framework is proposed for the acceleration using Cartesian/radial undersampling data. The proposed MGDL-WF combines the power of CSE water-fat imaging model and data-driven deep learning by jointly using a multi-peak fat model and a modified residual U-net network. The model is used to guide the image reconstruction, and the network is used to capture the artifacts induced by the undersampling. A data consistency layer is used in MGDL-WF to ensure the output images to be consistent with the k-space measurements. A Gauss-Newton iteration algorithm is adapted for the gradient updating of the networks. RESULTS Compared with the compressed sensing water-fat separation (CS-WF) algorithm/2-step procedure algorithm, the MGDL-WF increased peak signal-to-noise ratio (PSNR) by 5.31/5.23, 6.11/4.54, and 4.75 dB/1.88 dB with Cartesian sampling, and by 4.13/6.53, 2.90/4.68, and 1.68 dB/3.48 dB with radial sampling, at acceleration rates (R) of 4, 6, and 8, respectively. By using MGDL-WF, radial sampling increased the PSNR by 2.07 dB at R = 8, compared with Cartesian sampling. CONCLUSIONS The proposed MGDL-WF enables exploiting features of the water images and fat images from the undersampled multi-echo data, leading to improved performance in the accelerated CSE water-fat imaging. By using MGDL-WF, radial sampling can further improve the image quality with comparable scan time in comparison with Cartesian sampling.
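The multi-peak fat model mentioned above can be sketched as a small linear problem once the field map is known: each echo is modeled as water plus a fat phasor built from a fixed spectral model, and the water and fat amplitudes follow from a least-squares solve. The echo times, six-peak spectrum, field strength, and field map below are illustrative assumptions, not the MGDL-WF configuration.

```python
# Hedged sketch of a multi-peak fat signal model: with a known field map,
# water and fat amplitudes follow from a linear least-squares solve.
# Echo times, the six-peak spectrum, field strength, and field map are
# illustrative assumptions.
import numpy as np

te = np.array([1.2, 2.0, 2.8, 3.6, 4.4, 5.2]) * 1e-3           # echo times (s)
fat_ppm = np.array([-3.80, -3.40, -2.60, -1.94, -0.39, 0.60])  # peak shifts (ppm)
fat_amp = np.array([0.087, 0.693, 0.128, 0.004, 0.039, 0.048]) # relative amplitudes
hz_per_ppm = 42.58e6 * 3.0 * 1e-6                              # at an assumed 3 T

# Fat phasor per echo from the spectral model.
fat_phasor = (fat_amp * np.exp(2j * np.pi * fat_ppm * hz_per_ppm
                               * te[:, None])).sum(axis=1)

psi = 40.0                                                     # field map (Hz), assumed known
A = np.exp(2j * np.pi * psi * te)[:, None] * np.stack(
    [np.ones_like(fat_phasor), fat_phasor], axis=1)            # echoes x [water, fat]

rng = np.random.default_rng(6)
s = A @ np.array([0.7, 0.3]) + 0.005 * (rng.normal(size=te.size)
                                        + 1j * rng.normal(size=te.size))

wf, *_ = np.linalg.lstsq(A, s, rcond=None)
ff = wf[1].real / (wf[0].real + wf[1].real)
print(f"water = {wf[0].real:.2f}, fat = {wf[1].real:.2f}, fat fraction = {ff:.2f}")
```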
Affiliation(s)
- Shuo Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chenfei Shen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zekang Ding
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Huajun She
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiping P Du
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China