1. Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. PMID: 38624162. DOI: 10.1002/mrm.30105.
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks, and this domain knowledge needs to be integrated with data-driven approaches. This review introduces the major challenges faced by such knowledge-driven DL approaches to fast MRI, along with several notable solutions spanning network design and different imaging application scenarios. The traits and trends of these techniques are also surveyed, tracing a shift from supervised learning to semi-supervised and, finally, unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are summarized, together with a discussion of open questions and future directions that are critical for reliable imaging systems.
Affiliation(s)
- Shanshan Wang: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Cheng Li: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying: Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
2. Zhao R, Peng X, Kelkar VA, Anastasio MA, Lam F. High-Dimensional MR Reconstruction Integrating Subspace and Adaptive Generative Models. IEEE Trans Biomed Eng 2024; 71:1969-1979. PMID: 38265912. PMCID: PMC11105985. DOI: 10.1109/tbme.2024.3358223.
Abstract
OBJECTIVE: To develop a new method that integrates subspace and generative image models for high-dimensional MR image reconstruction.
METHODS: We proposed a formulation that synergizes a low-dimensional subspace model of high-dimensional images, an adaptive generative image prior serving as a spatial constraint on the sequence of "contrast-weighted" images or on the spatial coefficients of the subspace model, and a conventional sparsity regularization. A pretraining plus subject-specific network adaptation strategy was proposed to construct an accurate generative-network-based representation for images with varying contrasts. An iterative algorithm was introduced to jointly update the subspace coefficients and the multi-resolution latent space of the generative image model, leveraging a recently proposed intermediate layer optimization technique for network inversion.
RESULTS: We evaluated the utility of the proposed method for two high-dimensional imaging applications: accelerated MR parameter mapping and high-resolution MR spectroscopic imaging. Improved performance over state-of-the-art subspace-based methods was demonstrated in both cases.
CONCLUSION: The proposed method provides a new way to address high-dimensional MR image reconstruction problems by incorporating an adaptive generative model as a data-driven spatial prior for constraining subspace reconstruction.
SIGNIFICANCE: Our work demonstrates the potential of integrating data-driven, adaptive generative priors with canonical low-dimensional modeling for high-dimensional imaging problems.
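As an illustration of the subspace half of this formulation, the toy NumPy sketch below models a series of contrast-weighted images as X ≈ U·V, estimates the basis V by SVD, and recovers the spatial coefficients U by least squares. All dimensions and signals are synthetic, and the adaptive generative prior and sparsity term of the actual method are omitted.

```python
import numpy as np

# Toy illustration of the low-dimensional subspace model: a series of T
# contrast-weighted images (N voxels each) is modeled as X ~ U @ V, where
# V (K x T) is a temporal/contrast basis and U (N x K) holds spatial
# coefficients. Sizes and signals are synthetic, not the paper's data.
rng = np.random.default_rng(0)
N, T, K = 500, 40, 3  # voxels, contrast-weighted frames, subspace rank

# Simulate signals that truly live in a rank-K subspace, plus small noise
V_true = rng.standard_normal((K, T))
U_true = rng.standard_normal((N, K))
X = U_true @ V_true + 0.01 * rng.standard_normal((N, T))

# Estimate the basis V via SVD, then recover spatial coefficients by
# least squares; the generative prior would further regularize U.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V_hat = Vt[:K]                       # K x T estimated basis
U_hat = X @ np.linalg.pinv(V_hat)    # N x K spatial coefficients
X_rec = U_hat @ V_hat

rel_err = np.linalg.norm(X_rec - X) / np.linalg.norm(X)
print(round(rel_err, 3))
```

Because the rows of `V_hat` are orthonormal, this projection is the best rank-K approximation of `X`, so the relative error is on the order of the added noise.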
3. Peng H, Jiang C, Cheng J, Zhang M, Wang S, Liang D, Liu Q. One-Shot Generative Prior in Hankel-k-Space for Parallel Imaging Reconstruction. IEEE Trans Med Imaging 2023; 42:3420-3435. PMID: 37342955. DOI: 10.1109/tmi.2023.3288219.
Abstract
Magnetic resonance imaging is an essential tool for clinical diagnosis, but it suffers from long acquisition times. Deep learning, especially deep generative models, offers aggressive acceleration and better reconstruction in magnetic resonance imaging. Nevertheless, learning the data distribution as prior knowledge and reconstructing the image from limited data remain challenging. In this work, we propose a novel Hankel-k-space generative model (HKGM) that can generate samples from a training set of as little as a single k-space dataset. At the prior learning stage, we first construct a large Hankel matrix from the k-space data and then extract multiple structured k-space patches from it to capture the internal distribution among patches. Extracting patches from a Hankel matrix enables the generative model to be learned from a redundant, low-rank data space. At the iterative reconstruction stage, the desired solution obeys the learned prior: the intermediate solution is updated by passing it through the generative model, and the result is then alternately refined by imposing a low-rank penalty on its Hankel matrix and a data consistency constraint on the measured data. Experimental results confirmed that the internal statistics of patches within a single k-space dataset carry enough information to learn a powerful generative model and provide state-of-the-art reconstruction.
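The Hankel construction at the heart of the prior learning stage can be sketched in a few lines: sliding windows of a k-space signal are stacked into a Hankel matrix, which is low rank when the signal is a sum of a few complex exponentials. This is a deliberate 1-D simplification of the paper's multi-coil 2-D setting, with synthetic data.

```python
import numpy as np

def hankel_matrix(x, window):
    """Stack sliding windows of x (length n) into an (n-window+1, window) Hankel matrix."""
    n = len(x)
    return np.array([x[i:i + window] for i in range(n - window + 1)])

# Synthetic k-space line: a sum of two complex exponentials, which makes
# the resulting Hankel matrix (numerically) rank two.
n, window = 64, 16
t = np.arange(n)
x = np.exp(2j * np.pi * 0.05 * t) + 0.5 * np.exp(2j * np.pi * 0.11 * t)

H = hankel_matrix(x, window)
rank = np.linalg.matrix_rank(H, tol=1e-6)
print(H.shape, rank)  # (49, 16), rank 2 for two exponentials
```

The overlapping rows of `H` are exactly the redundant "patches" a generative model can learn from even when only one k-space dataset is available.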
4. Tu Z, Liu D, Wang X, Jiang C, Zhu P, Zhang M, Wang S, Liang D, Liu Q. WKGM: weighted k-space generative model for parallel imaging reconstruction. NMR Biomed 2023; 36:e5005. PMID: 37547964. DOI: 10.1002/nbm.5005.
Abstract
Deep learning based parallel imaging (PI) has made great progress in accelerating MRI in recent years. Nevertheless, existing methods still have limited robustness and flexibility. In this work, we propose a method that explores k-space domain learning via robust generative modeling for flexible calibrationless PI reconstruction, coined the weighted k-space generative model (WKGM). Specifically, WKGM is a generalized k-space domain model in which k-space weighting technology and a high-dimensional space augmentation design are efficiently incorporated into score-based generative model training, yielding good and robust reconstructions. In addition, WKGM is flexible and can therefore be combined synergistically with various traditional k-space PI models, making full use of the correlation between multi-coil data and realizing calibrationless PI. Even though our model was trained on only 500 images, experimental results with varying sampling patterns and acceleration factors demonstrate that WKGM attains state-of-the-art reconstruction results with the well-learned k-space generative prior.
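The k-space weighting idea can be illustrated as follows: a weight that grows with |k| flattens the large dynamic range of raw k-space (where low frequencies dominate) before generative modeling, and is divided out afterwards. The power-law form used here is an assumed illustration, not necessarily the paper's exact weighting function.

```python
import numpy as np

def kspace_weight(shape, p=1.0, eps=1e-3):
    """Radially increasing weight w(k) = (|k| + eps)^p on a centered frequency grid."""
    ky = np.fft.fftshift(np.fft.fftfreq(shape[0]))
    kx = np.fft.fftshift(np.fft.fftfreq(shape[1]))
    KY, KX = np.meshgrid(ky, kx, indexing="ij")
    return (np.sqrt(KY**2 + KX**2) + eps) ** p

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                   # toy image: a bright square
k = np.fft.fftshift(np.fft.fft2(img))     # centered k-space

w = kspace_weight(k.shape)
k_weighted = w * k                         # what the generative model would see
k_back = k_weighted / w                    # the weighting is exactly invertible

# Peak magnitude shrinks: the DC-dominated dynamic range is compressed.
dyn = np.abs(k).max() / np.abs(k_weighted).max()
print(np.allclose(k_back, k), dyn > 1.0)
```

Since `w` is strictly positive, training in the weighted domain loses no information; the reconstruction simply divides the weight back out.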
Affiliation(s)
- Zongjiang Tu: Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Die Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Xiaoqing Wang: Department of Biomedical Imaging, Graz University of Technology, Graz, Austria
- Chen Jiang: Department of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
- Pengwen Zhu: Department of Engineering, Pennsylvania State University, State College, Pennsylvania, USA
- Minghui Zhang: Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, CAS, Shenzhen, China
- Dong Liang: Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, CAS, Shenzhen, China
- Qiegen Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang, China
5. Tu Z, Jiang C, Guan Y, Liu J, Liu Q. K-space and image domain collaborative energy-based model for parallel MRI reconstruction. Magn Reson Imaging 2023; 99:110-122. PMID: 36796460. DOI: 10.1016/j.mri.2023.02.004.
Abstract
Decreasing magnetic resonance (MR) image acquisition times can make MR examinations more accessible. Much prior work, including deep learning models, has been devoted to shortening long MRI scan times. Recently, deep generative models have exhibited great potential in algorithmic robustness and usage flexibility. Nevertheless, none of the existing schemes can be learned from or applied to k-space measurements directly, and how deep generative models behave in a hybrid domain also remains to be investigated. In this work, leveraging deep energy-based models, we propose a k-space and image domain collaborative generative model to comprehensively estimate MR data from undersampled measurements. Experimental comparisons with the state of the art, using both parallel and sequential orderings of the two domains, demonstrated that the proposed approaches achieve lower reconstruction error and are more stable under different acceleration factors.
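A building block shared by such k-space/image hybrid schemes is the data-consistency step, which overwrites the estimated k-space at the measured locations with the acquired samples. The sketch below shows this step on a toy single-coil example; the energy-based prior updates of the actual method are omitted.

```python
import numpy as np

def data_consistency(img_est, k_measured, mask):
    """Enforce measured k-space samples (mask==True) on the current image estimate."""
    k_est = np.fft.fft2(img_est)
    k_est[mask] = k_measured[mask]   # keep acquired data exactly
    return np.fft.ifft2(k_est)

rng = np.random.default_rng(1)
truth = rng.standard_normal((32, 32))
k_full = np.fft.fft2(truth)
mask = rng.random((32, 32)) < 0.3    # 30% of k-space was "measured"

img0 = np.zeros((32, 32))            # crude initial estimate
img1 = data_consistency(img0, k_full, mask)

# The sampled k-space entries of the new estimate now match the data.
print(np.allclose(np.fft.fft2(img1)[mask], k_full[mask]))
```

In an iterative reconstruction this step alternates with the prior update (here, sampling from the learned energy-based model), in either parallel or sequential order as the abstract describes.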
Affiliation(s)
- Zongjiang Tu: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Chen Jiang: Department of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China
- Yu Guan: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Jijun Liu: Department of Mathematics, Southeast University, Nanjing 210096, China; Nanjing Center for Applied Mathematics, Nanjing 211135, China
- Qiegen Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
6. Faiyaz A, Doyley MM, Schifitto G, Uddin MN. Artificial intelligence for diffusion MRI-based tissue microstructure estimation in the human brain: an overview. Front Neurol 2023; 14:1168833. PMID: 37153663. PMCID: PMC10160660. DOI: 10.3389/fneur.2023.1168833.
Abstract
Artificial intelligence (AI) has made significant advances in diffusion magnetic resonance imaging (dMRI) and other neuroimaging modalities. These techniques have been applied to areas such as image reconstruction, denoising, artifact detection and removal, segmentation, tissue microstructure modeling, brain connectivity analysis, and diagnostic support. State-of-the-art AI algorithms have the potential to leverage optimization techniques in dMRI to advance sensitivity and inference through biophysical models. While the use of AI for brain microstructure has the potential to revolutionize the way we study the brain and understand brain disorders, we need to be aware of its pitfalls and of emerging best practices that can further advance the field. Additionally, since dMRI scans rely on sampling the q-space geometry, there is room for creative data engineering that maximizes prior inference. Utilizing this inherent geometry has been shown to improve general inference quality and may be more reliable in identifying pathological differences. We classify AI-based approaches for dMRI according to these unifying characteristics. This article also highlights and reviews general practices and pitfalls in tissue microstructure estimation with data-driven techniques, and provides directions for building on them.
Affiliation(s)
- Abrar Faiyaz: Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States
- Marvin M. Doyley: Department of Electrical and Computer Engineering; Department of Imaging Sciences; Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Giovanni Schifitto: Department of Electrical and Computer Engineering; Department of Imaging Sciences; Department of Neurology, University of Rochester, Rochester, NY, United States
- Md Nasir Uddin: Department of Biomedical Engineering; Department of Neurology, University of Rochester, Rochester, NY, United States
7. Sabidussi ER, Klein S, Jeurissen B, Poot DHJ. dtiRIM: A generalisable deep learning method for diffusion tensor imaging. Neuroimage 2023; 269:119900. PMID: 36702213. DOI: 10.1016/j.neuroimage.2023.119900.
Abstract
Diffusion weighted MRI is an indispensable tool for routine patient screening and diagnosis of pathology. Recently, several deep learning methods have been proposed to quantify diffusion parameters, but poor generalisation to new data prevents their broader use, as they require retraining of the neural network for each new scan protocol. In this work, we present dtiRIM, a new deep learning method for Diffusion Tensor Imaging (DTI) based on Recurrent Inference Machines. Thanks to its ability to learn how to solve inverse problems and its use of the diffusion tensor model to promote data consistency, dtiRIM can generalise across variations in acquisition settings, enabling a single trained network to produce high-quality tensor estimates for a variety of cases. We performed extensive validation on simulated and in vivo data, comparing against Iterated Weighted Linear Least Squares (IWLLS), the approach of the state-of-the-art MRTrix3 software, and an implementation of the Maximum Likelihood Estimator (MLE). Our results show that dtiRIM predictions depend little on tissue properties, anatomy, and scanning parameters, with quality comparable to or better than both IWLLS and MLE. Further, we demonstrate that a single dtiRIM model can be used for a diversity of data sets without significant loss in quality, representing, to our knowledge, the first generalisable deep learning based solver for DTI.
Affiliation(s)
- E R Sabidussi: Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- S Klein: Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- B Jeurissen: imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium; Lab for Equilibrium Investigations and Aerospace, Department of Physics, University of Antwerp, Antwerp, Belgium
- D H J Poot: Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
8. Li Z, Wang H, Han Q, Liu J, Hou M, Chen G, Tian Y, Weng T. Convolutional Neural Network with Multiscale Fusion and Attention Mechanism for Skin Diseases Assisted Diagnosis. Comput Intell Neurosci 2022; 2022:8390997. PMID: 35747726. PMCID: PMC9213118. DOI: 10.1155/2022/8390997.
Abstract
Melanoma segmentation based on convolutional neural networks (CNNs) has recently attracted extensive attention. However, the features captured by a CNN are inherently local, which can result in discontinuous feature extraction. To solve this problem, we propose a novel multiscale feature fusion network (MSFA-Net). MSFA-Net extracts feature information at different scales through a multiscale feature fusion structure (MSF) and then calibrates and restores the extracted information to achieve melanoma segmentation. Specifically, on top of the popular encoder-decoder structure, we designed three functional modules: the MSF, an asymmetric skip connection structure (ASCS), and a calibration decoder. In addition, a weighted cross-entropy loss and a two-stage learning-rate optimization strategy are designed to train the network more effectively. Compared qualitatively and quantitatively with representative encoder-decoder networks such as U-Net, the proposed method achieves superior performance.
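A per-class weighted cross-entropy of the kind mentioned above can be sketched in plain NumPy. The class weights below are made-up values chosen to up-weight a rare lesion class, not the paper's settings.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean of -w[y] * log p(y) over pixels. probs: (N, C), labels: (N,)."""
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    w = class_weights[labels]
    return float(np.mean(-w * np.log(p)))

# Three pixels, two classes (0 = background, 1 = lesion)
probs = np.array([[0.9, 0.1],
                  [0.4, 0.6],
                  [0.2, 0.8]])
labels = np.array([0, 1, 1])
weights = np.array([1.0, 5.0])   # up-weight the rare lesion class

loss = weighted_cross_entropy(probs, labels, weights)
```

With `weights = [1, 1]` this reduces to ordinary cross-entropy; increasing the lesion weight penalizes missed lesion pixels more, which is the usual remedy for foreground/background imbalance in segmentation.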
Affiliation(s)
- Zhong Li: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Hongyi Wang: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Qi Han: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Jingcheng Liu: Liquor Making Microbial Application & Detection Technology of Luzhou Key Laboratory, Luzhou Vocational & Technical College, Luzhou, Sichuan 646000, China
- Mingyang Hou: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Guorong Chen: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Yuan Tian: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Tengfei Weng: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
9. Li K, Zhou W, Li H, Anastasio MA. A Hybrid Approach for Approximating the Ideal Observer for Joint Signal Detection and Estimation Tasks by Use of Supervised Learning and Markov-Chain Monte Carlo Methods. IEEE Trans Med Imaging 2022; 41:1114-1124. PMID: 34898433. PMCID: PMC9128572. DOI: 10.1109/tmi.2021.3135147.
Abstract
The ideal observer (IO) sets an upper performance limit among all observers and has been advocated for assessing and optimizing imaging systems. For general joint detection and estimation (detection-estimation) tasks, estimation ROC (EROC) analysis has been established for evaluating the performance of observers. However, in general, it is difficult to accurately approximate the IO that maximizes the area under the EROC curve. In this study, a hybrid method that employs machine learning is proposed to accomplish this. Specifically, a hybrid approach is developed that combines a multi-task convolutional neural network and a Markov-Chain Monte Carlo (MCMC) method in order to approximate the IO for detection-estimation tasks. Unlike traditional MCMC methods, the hybrid method is not limited to use of specific utility functions. In addition, a purely supervised learning-based sub-ideal observer is proposed. Computer-simulation studies are conducted to validate the proposed method, which include signal-known-statistically/background-known-exactly and signal-known-statistically/background-known-statistically tasks. The EROC curves produced by the proposed method are compared to those produced by the MCMC approach or analytical computation when feasible. The proposed method provides a new approach for approximating the IO and may advance the application of EROC analysis for optimizing imaging systems.
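An EROC-style evaluation can be sketched with synthetic scores and utilities: for each decision threshold, the false-positive fraction is plotted against the expected estimation utility over signal-present cases, with missed signals contributing zero utility. This follows the general definition of the EROC curve; the scores, utilities, and distributions below are invented for illustration and are not the paper's observer.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
scores_absent = rng.normal(0.0, 1.0, n)    # test statistic, signal absent
scores_present = rng.normal(1.5, 1.0, n)   # test statistic, signal present
utility = rng.uniform(0.5, 1.0, n)         # estimation utility if detected

thresholds = np.linspace(-4.0, 6.0, 200)
# x-axis: false-positive fraction; y-axis: expected utility over
# signal-present cases, counting missed signals as zero utility.
fpf = np.array([(scores_absent > t).mean() for t in thresholds])
eu = np.array([(utility * (scores_present > t)).mean() for t in thresholds])

# Area under the EROC curve by the trapezoidal rule
# (reverse the sweep so fpf is increasing).
x, y = fpf[::-1], eu[::-1]
auc_eroc = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))
```

The area under this curve is the figure of merit the abstract's ideal observer maximizes; the hybrid CNN-plus-MCMC method approximates that maximizing observer rather than computing it analytically.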