1. Zou J, Liu L, Chen Q, Wang S, Hu Z, Xing X, Qin J. MMR-Mamba: Multi-modal MRI reconstruction with Mamba and spatial-frequency information fusion. Med Image Anal 2025; 102:103549. [PMID: 40127589] [DOI: 10.1016/j.media.2025.103549]
Abstract
Multi-modal MRI offers valuable complementary information for diagnosis and treatment; however, its clinical utility is limited by prolonged scanning time. To accelerate the acquisition process, a practical approach is to reconstruct images of the target modality, which requires longer scanning time, from under-sampled k-space data using the fully-sampled reference modality with shorter scanning time as guidance. The primary challenge of this task lies in comprehensively and efficiently integrating complementary information from different modalities to achieve high-quality reconstruction. Existing methods struggle with this challenge: (1) convolution-based models fail to capture long-range dependencies; (2) transformer-based models, while excelling in global feature modeling, suffer from quadratic computational complexity. To address this dilemma, we propose MMR-Mamba, a novel framework that thoroughly and efficiently integrates multi-modal features for MRI reconstruction, leveraging Mamba's capability to capture long-range dependencies with linear computational complexity while exploiting global properties of the Fourier domain. Specifically, we first design a Target modality-guided Cross Mamba (TCM) module in the spatial domain, which maximally restores the target modality information by selectively incorporating relevant information from the reference modality. Then, we introduce a Selective Frequency Fusion (SFF) module to efficiently integrate global information in the Fourier domain and recover high-frequency signals for the reconstruction of structural details. Furthermore, we devise an Adaptive Spatial-Frequency Fusion (ASFF) module, which mutually enhances the spatial and frequency domains by supplementing less informative channels from one domain with corresponding channels from the other. Extensive experiments on the BraTS and fastMRI knee datasets demonstrate the superiority of our MMR-Mamba over state-of-the-art reconstruction methods. The code is publicly available at https://github.com/zoujing925/MMR-Mamba.
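The spatial-frequency fusion idea can be illustrated with a toy sketch that is far simpler than the TCM/SFF/ASFF modules described above: keep the low-frequency content of the (already reconstructed) target image and blend in high-frequency content from the co-registered reference modality in the Fourier domain. The array sizes, cut-off radius and blending weight below are illustrative assumptions, not values from the paper.

```python
# Toy sketch of Fourier-domain fusion between a target and a reference modality.
# This is NOT the MMR-Mamba implementation; it only illustrates the general idea of
# exchanging frequency-band information between two co-registered images.
import numpy as np

def fuse_in_fourier_domain(target, reference, radius=0.15, high_freq_weight=0.5):
    """Keep the target's low frequencies, blend in reference high frequencies.

    target, reference : 2D float arrays of the same shape (assumed co-registered).
    radius            : cut-off (fraction of the half-width) separating low/high bands.
    high_freq_weight  : how strongly reference high frequencies are mixed in.
    """
    assert target.shape == reference.shape
    h, w = target.shape
    # Centred k-space representations of both images.
    kt = np.fft.fftshift(np.fft.fft2(target))
    kr = np.fft.fftshift(np.fft.fft2(reference))
    # Circular low-frequency mask around the k-space centre.
    yy, xx = np.mgrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius * min(h, w) / 2
    # Low band from the target, high band blended between target and reference.
    fused = np.where(low_mask, kt,
                     (1 - high_freq_weight) * kt + high_freq_weight * kr)
    return np.real(np.fft.ifft2(np.fft.ifftshift(fused)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t2_zero_filled = rng.standard_normal((128, 128))  # stand-in for a blurry target recon
    t1_reference = rng.standard_normal((128, 128))    # stand-in for a fully sampled reference
    fused = fuse_in_fourier_domain(t2_zero_filled, t1_reference)
    print(fused.shape)
```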
Affiliation(s)
- Jing Zou
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
- Lanqing Liu
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
- Qi Chen
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Anhui, China
- Shujun Wang
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
- Zhanli Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiaohan Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
2. Huang Y, Wu Z, Xu X, Zhang M, Wang S, Liu Q. Partition-based k-space synthesis for multi-contrast parallel imaging. Magn Reson Imaging 2025; 117:110297. [PMID: 39647517] [DOI: 10.1016/j.mri.2024.110297]
Abstract
PURPOSE Multi-contrast magnetic resonance imaging is a significant and essential medical imaging technique. However, multi-contrast imaging prolongs acquisition and is prone to motion artifacts. In particular, the acquisition time for a T2-weighted image is prolonged by its longer repetition time (TR), whereas a T1-weighted image has a shorter TR. Therefore, utilizing complementary information across T1- and T2-weighted images is a way to decrease the overall imaging time. Previous T1-assisted T2 reconstruction methods have mostly operated in the image domain using whole-image fusion approaches, which suffer from high computational complexity and limited flexibility. To address this issue, we propose a novel multi-contrast imaging method called partition-based k-space synthesis (PKS), which achieves better reconstruction quality of the T2-weighted image through feature fusion. METHODS Concretely, we first decompose the fully-sampled T1 k-space data and the under-sampled T2 k-space data into two sub-datasets each. Two new objects are then constructed by combining the T1 and T2 sub-data, and these two new objects are used as the whole data to reconstruct the T2-weighted image. RESULTS Experimental results showed that the developed PKS scheme achieves comparable or better results than traditional k-space parallel imaging (SAKE), which processes each contrast independently. At the same time, our method showed good adaptability and robustness under different contrast-assisted settings and T1-T2 ratios, achieving efficient target-modality image reconstruction under various conditions with excellent performance in restoring image quality and preserving details. CONCLUSIONS This work proposed a PKS multi-contrast method to assist target-modality image reconstruction. We conducted extensive experiments on different contrast combinations, diverse T1-to-T2 ratios, and different sampling masks to demonstrate the generalization and robustness of the proposed model.
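The partition-and-recombine step can be sketched as follows, assuming for illustration that the two sub-datasets are simply the even and odd phase-encoding lines; the paper's actual partition rule and the downstream parallel-imaging reconstruction are not reproduced here.

```python
# Toy sketch of the partition idea: split T1 and T2 k-space into two sub-sets of
# phase-encoding lines (here simply even/odd rows -- an assumed rule, not necessarily
# the one used in the paper) and recombine them into two "new objects" that each mix
# fully sampled T1 lines with under-sampled T2 lines.
import numpy as np

def partition_and_combine(kspace_t1, kspace_t2):
    """Return two composite k-space objects built from T1/T2 sub-partitions."""
    assert kspace_t1.shape == kspace_t2.shape
    even = np.zeros(kspace_t1.shape, dtype=bool)
    even[0::2, :] = True            # partition A: even phase-encoding lines
    odd = ~even                     # partition B: odd phase-encoding lines

    composite_1 = np.where(even, kspace_t1, kspace_t2)   # T1 even lines + T2 odd lines
    composite_2 = np.where(odd, kspace_t1, kspace_t2)    # T1 odd lines + T2 even lines
    return composite_1, composite_2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    k_t1 = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
    k_t2 = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
    c1, c2 = partition_and_combine(k_t1, k_t2)
    print(c1.shape, c2.shape)
```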
Affiliation(s)
- Yuxia Huang
- Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Zhonghui Wu
- Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Xiaoling Xu
- Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Minghui Zhang
- Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, Chinese Academy of Sciences, Shenzhen 518055, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
3. Bian W, Jang A, Liu F. Multi-task magnetic resonance imaging reconstruction using meta-learning. Magn Reson Imaging 2025; 116:110278. [PMID: 39580007] [PMCID: PMC11645196] [DOI: 10.1016/j.mri.2024.110278]
Abstract
Using single-task deep learning methods to reconstruct Magnetic Resonance Imaging (MRI) data acquired with different imaging sequences is inherently challenging. The trained deep learning model typically lacks generalizability, and the dissimilarity among image datasets with different types of contrast leads to suboptimal learning performance. This paper proposes a meta-learning approach to efficiently learn image features from multiple MRI datasets. Our algorithm can perform multi-task learning to simultaneously reconstruct MRI images acquired using different imaging sequences with various image contrasts. We have developed a proximal gradient descent-inspired optimization method to learn image features across image and k-space domains, ensuring high-performance learning for every image contrast. Meanwhile, meta-learning, a "learning-to-learn" process, is incorporated into our framework to improve the learning of mutual features embedded in multiple image contrasts. The experimental results reveal that our proposed multi-task meta-learning approach surpasses state-of-the-art single-task learning methods at high acceleration rates. Our meta-learning consistently delivers accurate and detailed reconstructions, achieves the lowest pixel-wise errors, and significantly enhances qualitative performance across all tested acceleration rates. We have demonstrated the ability of our new meta-learning reconstruction method to successfully reconstruct highly-undersampled k-space data from multiple MRI datasets simultaneously, outperforming other compelling reconstruction methods previously developed for single-task learning.
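For context, the classical proximal gradient iteration that such unrolled, meta-learned reconstructions build on can be written in a few lines for single-coil Cartesian data with a simple soft-thresholding prior. This is only the generic baseline iteration, not the multi-task meta-learning method itself; the step size, threshold and sampling mask are arbitrary illustrative choices.

```python
# Minimal proximal-gradient reconstruction for single-coil Cartesian MRI with an
# image-domain soft-thresholding prior. This is the classical iteration that
# unrolled/meta-learned networks build on, not the method proposed in the paper.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pgd_recon(kspace, mask, n_iter=50, step=1.0, tau=0.01):
    """kspace: under-sampled k-space (zeros outside mask); mask: boolean sampling mask."""
    x = np.fft.ifft2(kspace, norm="ortho")                  # zero-filled initial image
    for _ in range(n_iter):
        residual = mask * np.fft.fft2(x, norm="ortho") - kspace
        grad = np.fft.ifft2(residual, norm="ortho")         # gradient of 0.5*||M F x - y||^2
        x = x - step * grad
        x = soft_threshold(x.real, tau) + 1j * soft_threshold(x.imag, tau)  # proximal step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    image = rng.standard_normal((64, 64))
    mask = rng.random((64, 64)) < 0.3                       # ~30% random sampling
    kspace = mask * np.fft.fft2(image, norm="ortho")
    recon = pgd_recon(kspace, mask)
    print(np.abs(recon).mean())
```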
Affiliation(s)
- Wanyu Bian
- Harvard Medical School, Boston, MA 02115, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Albert Jang
- Harvard Medical School, Boston, MA 02115, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Fang Liu
- Harvard Medical School, Boston, MA 02115, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
4. Zhang H, Wang Q, Shi J, Ying S, Wen Z. Deep unfolding network with spatial alignment for multi-modal MRI reconstruction. Med Image Anal 2025; 99:103331. [PMID: 39243598] [DOI: 10.1016/j.media.2024.103331]
Abstract
Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic information, but some modalities are limited by the long scanning time. To accelerate the whole acquisition process, MRI reconstruction of one modality from highly under-sampled k-space data with another fully-sampled reference modality is an efficient solution. However, the misalignment between modalities, which is common in clinical practice, can negatively affect reconstruction quality. Existing deep learning-based methods that account for inter-modality misalignment perform better, but still share two main limitations: (1) the spatial alignment task is not adaptively integrated with the reconstruction process, resulting in insufficient complementarity between the two tasks; (2) the entire framework has weak interpretability. In this paper, we construct a novel Deep Unfolding Network with Spatial Alignment, termed DUN-SA, to appropriately embed the spatial alignment task into the reconstruction process. Concretely, we derive a novel joint alignment-reconstruction model with a specially designed aligned cross-modal prior term. By relaxing the model into cross-modal spatial alignment and multi-modal reconstruction tasks, we propose an effective algorithm to solve this model alternately. Then, we unfold the iterative stages of the proposed algorithm and design corresponding network modules to build DUN-SA with interpretability. Through end-to-end training, we effectively compensate for spatial misalignment using only the reconstruction loss, and utilize the progressively aligned reference modality to provide an inter-modality prior that improves the reconstruction of the target modality. Comprehensive experiments on four real datasets demonstrate that our method exhibits superior reconstruction performance compared to state-of-the-art methods.
Affiliation(s)
- Hao Zhang
- Department of Mathematics, School of Science, Shanghai University, Shanghai 200444, China
- Qi Wang
- Department of Mathematics, School of Science, Shanghai University, Shanghai 200444, China
- Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Shihui Ying
- Shanghai Institute of Applied Mathematics and Mechanics, Shanghai University, Shanghai 200072, China; School of Mechanics and Engineering Science, Shanghai University, Shanghai 200072, China
- Zhijie Wen
- Department of Mathematics, School of Science, Shanghai University, Shanghai 200444, China
5. Kim S, Park H, Park SH. A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett 2024; 14:1221-1242. [PMID: 39465106] [PMCID: PMC11502678] [DOI: 10.1007/s13534-024-00425-9]
Abstract
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time for MRI. Acceleration can be achieved by acquiring fewer data points in k-space, which results in various artifacts in the image domain. Conventional reconstruction methods have resolved the artifacts by utilizing multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration; advances in hardware and the development of specialized network architectures have made such achievements possible. In addition, MRI signals contain various types of redundancy, including multi-coil redundancy, multi-contrast redundancy, and spatiotemporal redundancy. Utilizing this redundant information in combination with deep learning approaches allows not only higher acceleration but also well-preserved details in the reconstructed images. Consequently, this review paper introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit various redundancies. Lastly, the paper concludes by discussing the challenges, limitations, and potential directions of future developments.
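The opening point, that acquiring fewer k-space points produces image-domain artifacts, is easy to reproduce with a toy experiment: undersample the k-space of a simple numerical phantom and inspect the zero-filled reconstruction. The square phantom, the 4-fold equispaced mask and the small fully sampled centre below are illustrative choices only.

```python
# Illustration of why under-sampling k-space produces artifacts: keep only every
# fourth phase-encoding line (plus a small fully sampled centre) and compare the
# zero-filled reconstruction with the original phantom.
import numpy as np

def undersampling_demo(n=128, accel=4, center_lines=8):
    phantom = np.zeros((n, n))
    phantom[n // 4: 3 * n // 4, n // 4: 3 * n // 4] = 1.0          # simple square object

    kspace = np.fft.fftshift(np.fft.fft2(phantom))
    mask = np.zeros((n, n), dtype=bool)
    mask[::accel, :] = True                                        # equispaced lines
    c = n // 2
    mask[c - center_lines // 2: c + center_lines // 2, :] = True   # fully sampled centre

    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
    aliasing_error = np.linalg.norm(zero_filled - phantom) / np.linalg.norm(phantom)
    return zero_filled, aliasing_error

if __name__ == "__main__":
    _, err = undersampling_demo()
    print(f"relative error of zero-filled reconstruction: {err:.3f}")
```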
Affiliation(s)
- Seonghyuk Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sung-Hong Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141 Republic of Korea
6. Chen X, Ma L, Ying S, Shen D, Zeng T. FEFA: Frequency Enhanced Multi-Modal MRI Reconstruction With Deep Feature Alignment. IEEE J Biomed Health Inform 2024; 28:6751-6763. [PMID: 39042545] [DOI: 10.1109/jbhi.2024.3432139]
Abstract
Integrating complementary information from multiple magnetic resonance imaging (MRI) modalities is often necessary to make accurate and reliable diagnostic decisions. However, the different acquisition speeds of these modalities mean that obtaining information can be time consuming and require significant effort. Reference-based MRI reconstruction aims to accelerate slower, under-sampled imaging modalities, such as T2-modality, by utilizing redundant information from faster, fully sampled modalities, such as T1-modality. Unfortunately, spatial misalignment between different modalities often negatively impacts the final results. To address this issue, we propose FEFA, which consists of cascading FEFA blocks. The FEFA block first aligns and fuses the two modalities at the feature level. The combined features are then filtered in the frequency domain to enhance the important features while simultaneously suppressing the less essential ones, thereby ensuring accurate reconstruction. Furthermore, we emphasize the advantages of combining the reconstruction results from multiple cascaded blocks, which also contributes to stabilizing the training process. Compared to existing registration-then-reconstruction and cross-attention-based approaches, our method is end-to-end trainable without requiring additional supervision, extensive parameters, or heavy computation. Experiments on the public fastMRI, IXI and in-house datasets demonstrate that our approach is effective across various under-sampling patterns and ratios.
7. Lyu J, Li G, Wang C, Cai Q, Dou Q, Zhang D, Qin J. Multicontrast MRI Super-Resolution via Transformer-Empowered Multiscale Contextual Matching and Aggregation. IEEE Trans Neural Netw Learn Syst 2024; 35:12004-12014. [PMID: 37028326] [DOI: 10.1109/tnnls.2023.3250491]
Abstract
Magnetic resonance imaging (MRI) possesses the unique versatility to acquire images under a diverse array of distinct tissue contrasts, which makes multicontrast super-resolution (SR) techniques both possible and necessary. Compared with single-contrast MRI SR, multicontrast SR is expected to produce higher quality images by exploiting a variety of complementary information embedded in different imaging contrasts. However, existing approaches still have two shortcomings: 1) most of them are convolution-based methods and, hence, weak in capturing long-range dependencies, which are essential for MR images with complicated anatomical patterns, and 2) they fail to make full use of the multicontrast features at different scales and lack effective modules to match and aggregate these features for faithful SR. To address these issues, we develop a novel multicontrast MRI SR network via transformer-empowered multiscale feature matching and aggregation, dubbed McMRSR++. First, we tame transformers to model long-range dependencies in both reference and target images at different scales. Then, a novel multiscale feature matching and aggregation method is proposed to transfer corresponding contexts from reference features at different scales to the target features and interactively aggregate them. Furthermore, a texture-preserving branch and a contrastive constraint are incorporated into our framework for enhancing the textural details in the SR images. Experimental results on both public and clinical in vivo datasets show that McMRSR++ significantly outperforms state-of-the-art methods under peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root mean square error (RMSE) metrics. Visual results demonstrate the superiority of our method in restoring structures, demonstrating its great potential to improve scan efficiency in clinical practice.
8. Lei P, Hu L, Fang F, Zhang G. Joint Under-Sampling Pattern and Dual-Domain Reconstruction for Accelerating Multi-Contrast MRI. IEEE Trans Image Process 2024; 33:4686-4701. [PMID: 39178087] [DOI: 10.1109/tip.2024.3445729]
Abstract
Multi-Contrast Magnetic Resonance Imaging (MCMRI) utilizes the short-scan-time reference image to facilitate the reconstruction of the long-scan-time target one, providing a new solution for fast MRI. Although various methods have been proposed, they still have certain limitations: 1) existing methods rely on preset under-sampling patterns, which introduce redundancy between multi-contrast images and limit model performance; 2) most methods focus on information in the image domain, while prior knowledge in the k-space domain has not been fully explored; and 3) most networks are manually designed and lack physical interpretability. To address these issues, we propose a joint optimization of the under-sampling pattern and a deep-unfolding dual-domain network for accelerating MCMRI. Firstly, to reduce redundant information and sample more contrast-specific information, we propose a new framework to learn the optimal under-sampling pattern for MCMRI. Secondly, a dual-domain model is established to reconstruct the target image in both the image domain and the k-space frequency domain. The model in the image domain introduces a spatial transformation to explicitly model the inconsistent and unaligned structures of MCMRI. The model in the k-space domain learns prior knowledge from the frequency domain, enabling the model to capture more global information from the input images. Thirdly, we employ the proximal gradient algorithm to optimize the proposed model and then unfold the iterative results into a deep-unfolding network, called MC-DuDoN. We evaluate the proposed MC-DuDoN on MCMRI super-resolution and reconstruction tasks. Experimental results confirm the superiority of the proposed model. In particular, since our approach explicitly models the inconsistent structures, it shows robustness on spatially misaligned MCMRI. In the reconstruction task, compared with conventional masks, the learned mask restores more realistic images, even under an ultra-high acceleration ratio (×30). Code is available at https://github.com/lpcccc-cv/MC-DuDoNet.
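Dual-domain networks of this kind typically interleave learned updates with a data-consistency step in k-space; a minimal version of that step, with an arbitrary mask and a placeholder network output, looks like the sketch below (a generic operation, not code from MC-DuDoN).

```python
# A hard data-consistency step of the kind dual-domain networks interleave with learned
# image/k-space updates: wherever a k-space sample was actually measured, overwrite the
# network prediction with the measurement. Shapes and the sampling mask are illustrative.
import numpy as np

def data_consistency(predicted_image, measured_kspace, mask):
    """Replace predicted k-space values by the measured ones at sampled locations."""
    k_pred = np.fft.fft2(predicted_image, norm="ortho")
    k_dc = np.where(mask, measured_kspace, k_pred)
    return np.fft.ifft2(k_dc, norm="ortho")

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    ground_truth = rng.standard_normal((64, 64))
    mask = rng.random((64, 64)) < 0.25
    measured = mask * np.fft.fft2(ground_truth, norm="ortho")
    crude_prediction = np.zeros((64, 64))            # stand-in for a network output
    refined = data_consistency(crude_prediction, measured, mask)
    print(np.abs(refined).mean())
```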
9. Tan J, Zhang X, Qing C, Xu X. Fourier Domain Robust Denoising Decomposition and Adaptive Patch MRI Reconstruction. IEEE Trans Neural Netw Learn Syst 2024; 35:7299-7311. [PMID: 37015441] [DOI: 10.1109/tnnls.2022.3222394]
Abstract
The sparsity of the Fourier transform domain has been applied to magnetic resonance imaging (MRI) reconstruction in k-space. Although unsupervised adaptive patch optimization methods have shown promise compared to data-driven-based supervised methods, the following challenges exist in MRI reconstruction: 1) in previous k-space MRI reconstruction tasks, MRI with noise interference in the acquisition process is rarely considered. 2) Differences in transform domains should be resolved to achieve the high-quality reconstruction of low undersampled MRI data. 3) Robust patch dictionary learning problems are usually nonconvex and NP-hard, and alternate minimization methods are often computationally expensive. In this article, we propose a method for Fourier domain robust denoising decomposition and adaptive patch MRI reconstruction (DDAPR). DDAPR is a two-step optimization method for MRI reconstruction in the presence of noise and low undersampled data. It includes the low-rank and sparse denoising reconstruction model (LSDRM) and the robust dictionary learning reconstruction model (RDLRM). In the first step, we propose LSDRM for different domains. For the optimization solution, the proximal gradient method is used to optimize LSDRM by singular value decomposition and soft threshold algorithms. In the second step, we propose RDLRM, which is an effective adaptive patch method by introducing a low-rank and sparse penalty adaptive patch dictionary and using a sparse rank-one matrix to approximate the undersampled data. Then, the block coordinate descent (BCD) method is used to optimize the variables. The BCD optimization process involves valid closed-form solutions. Extensive numerical experiments show that the proposed method has a better performance than previous methods in image reconstruction based on compressed sensing or deep learning.
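The low-rank part of such models usually reduces to singular-value soft-thresholding, the proximal operator of the nuclear norm; a minimal stand-alone version is sketched below on assumed toy data. It is a generic building block, not the authors' two-step DDAPR pipeline.

```python
# Singular-value soft-thresholding -- the basic building block behind low-rank penalties
# such as the one in LSDRM. Generic operator on toy data, not the full DDAPR method.
import numpy as np

def singular_value_threshold(matrix, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold the singular values."""
    u, s, vh = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (u * s_shrunk) @ vh

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    low_rank = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 64))
    noisy = low_rank + 0.1 * rng.standard_normal((64, 64))
    denoised = singular_value_threshold(noisy, tau=1.0)
    err_before = np.linalg.norm(noisy - low_rank)
    err_after = np.linalg.norm(denoised - low_rank)
    print(f"error before: {err_before:.2f}, after SVT: {err_after:.2f}")
```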
10. Kofler A, Kerkering KM, Goschel L, Fillmer A, Kolbitsch C. Quantitative MR Image Reconstruction Using Parameter-Specific Dictionary Learning With Adaptive Dictionary-Size and Sparsity-Level Choice. IEEE Trans Biomed Eng 2024; 71:388-399. [PMID: 37540614] [DOI: 10.1109/tbme.2023.3300090]
Abstract
OBJECTIVE We propose a method for the reconstruction of parameter-maps in Quantitative Magnetic Resonance Imaging (QMRI). METHODS Because different quantitative parameter-maps differ from each other in terms of local features, we propose a method where the employed dictionary learning (DL) and sparse coding (SC) algorithms automatically estimate the optimal dictionary-size and sparsity level separately for each parameter-map. We evaluated the method on a T1-mapping QMRI problem in the brain using the BrainWeb data as well as in-vivo brain images acquired on an ultra-high field 7 T scanner. We compared it to a model-based acceleration for parameter mapping (MAP) approach, to other sparsity-based methods using total variation (TV), Wavelets (Wl), and Shearlets (Sh), and to a method which uses DL and SC to reconstruct qualitative images, followed by a non-linear fit (DL+Fit). RESULTS Our algorithm surpasses MAP, TV, Wl, and Sh in terms of RMSE and PSNR. It yields better or comparable results to DL+Fit while additionally accelerating the reconstruction by a factor of approximately seven. CONCLUSION The proposed method outperforms the reported methods of comparison and yields accurate T1-maps. Although presented for T1-mapping in the brain, our method's structure is general and thus most probably also applicable to the reconstruction of other quantitative parameters in other organs. SIGNIFICANCE From a clinical perspective, the obtained T1-maps could be utilized to differentiate between healthy subjects and patients with Alzheimer's disease. From a technical perspective, the proposed unsupervised method could be employed to obtain ground-truth data for the development of data-driven methods based on supervised learning.
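As a point of reference, the sparse-coding step that dictionary-learning reconstructions rely on can be sketched with plain greedy matching pursuit over a fixed dictionary and a fixed sparsity level; the adaptive choice of dictionary size and sparsity level that the paper proposes is not reproduced, and the random dictionary below is purely illustrative.

```python
# Greedy matching-pursuit sparse coding of a signal (e.g. a vectorised image patch)
# against a fixed dictionary with a chosen sparsity level.
import numpy as np

def matching_pursuit(signal, dictionary, sparsity):
    """dictionary: (n_features, n_atoms) with unit-norm columns."""
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(sparsity):
        correlations = dictionary.T @ residual
        atom = np.argmax(np.abs(correlations))
        coeffs[atom] += correlations[atom]
        residual = residual - correlations[atom] * dictionary[:, atom]
    return coeffs, residual

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
    true_coeffs = np.zeros(128)
    true_coeffs[[3, 40, 99]] = [1.5, -2.0, 0.7]
    patch = D @ true_coeffs + 0.01 * rng.standard_normal(64)
    est, res = matching_pursuit(patch, D, sparsity=3)
    print("selected atoms:", np.nonzero(est)[0], "residual norm:", np.linalg.norm(res))
```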
11. Bousse A, Kandarpa VSS, Rit S, Perelli A, Li M, Wang G, Zhou J, Wang G. Systematic Review on Learning-based Spectral CT. IEEE Trans Radiat Plasma Med Sci 2024; 8:113-137. [PMID: 38476981] [PMCID: PMC10927029] [DOI: 10.1109/trpms.2023.3314131]
Abstract
Spectral computed tomography (CT) has recently emerged as an advanced version of medical CT and significantly improves conventional (single-energy) CT. Spectral CT has two main forms: dual-energy computed tomography (DECT) and photon-counting computed tomography (PCCT), which offer image improvement, material decomposition, and feature quantification relative to conventional CT. However, the inherent challenges of spectral CT, evidenced by data and image artifacts, remain a bottleneck for clinical applications. To address these problems, machine learning techniques have been widely applied to spectral CT. In this review, we present the state-of-the-art data-driven techniques for spectral CT.
Affiliation(s)
- Alexandre Bousse
- LaTIM, Inserm UMR 1101, Université de Bretagne Occidentale, 29238 Brest, France
- Simon Rit
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Étienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373, Lyon, France
- Alessandro Perelli
- Department of Biomedical Engineering, School of Science and Engineering, University of Dundee, DD1 4HN, UK
- Mengzhou Li
- Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
- Guobao Wang
- Department of Radiology, University of California Davis Health, Sacramento, USA
- Jian Zhou
- CTIQ, Canon Medical Research USA, Inc., Vernon Hills, 60061, USA
- Ge Wang
- Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
12. Sun K, Wang Q, Shen D. Joint Cross-Attention Network With Deep Modality Prior for Fast MRI Reconstruction. IEEE Trans Med Imaging 2024; 43:558-569. [PMID: 37695966] [DOI: 10.1109/tmi.2023.3314008]
Abstract
Current deep learning-based reconstruction models for accelerated multi-coil magnetic resonance imaging (MRI) mainly focus on subsampled k-space data of single modality using convolutional neural network (CNN). Although dual-domain information and data consistency constraint are commonly adopted in fast MRI reconstruction, the performance of existing models is still limited mainly by three factors: inaccurate estimation of coil sensitivity, inadequate utilization of structural prior, and inductive bias of CNN. To tackle these challenges, we propose an unrolling-based joint Cross-Attention Network, dubbed as jCAN, using deep guidance of the already acquired intra-subject data. Particularly, to improve the performance of coil sensitivity estimation, we simultaneously optimize the latent MR image and sensitivity map (SM). Besides, we introduce Gating layer and Gaussian layer into SM estimation to alleviate the "defocus" and "over-coupling" effects and further ameliorate the SM estimation. To enhance the representation ability of the proposed model, we deploy Vision Transformer (ViT) and CNN in the image and k-space domains, respectively. Moreover, we exploit pre-acquired intra-subject scan as reference modality to guide the reconstruction of subsampled target modality by resorting to the self- and cross-attention scheme. Experimental results on public knee and in-house brain datasets demonstrate that the proposed jCAN outperforms the state-of-the-art methods by a large margin in terms of SSIM and PSNR for different acceleration factors and sampling masks. Our code is publicly available at https://github.com/sunkg/jCAN.
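The cross-attention used to draw guidance from the reference scan can be illustrated with plain single-head scaled dot-product attention, with queries from target-modality features and keys/values from reference-modality features. Learned projections, multiple heads and the unrolled jCAN architecture are omitted, and the feature shapes are arbitrary assumptions.

```python
# Plain single-head scaled dot-product cross-attention: queries come from target-modality
# features, keys/values from reference-modality features. Only the attention mechanism
# itself is shown.
import numpy as np

def cross_attention(target_feats, reference_feats):
    """target_feats: (n_target_tokens, d); reference_feats: (n_ref_tokens, d)."""
    d = target_feats.shape[-1]
    scores = target_feats @ reference_feats.T / np.sqrt(d)       # (n_target, n_ref)
    scores -= scores.max(axis=-1, keepdims=True)                 # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)               # softmax over reference tokens
    return weights @ reference_feats                             # aggregated guidance

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    target = rng.standard_normal((16, 32))      # e.g. 16 spatial tokens, 32 channels
    reference = rng.standard_normal((16, 32))
    guided = cross_attention(target, reference)
    print(guided.shape)
```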
13. Desai AD, Ozturkler BM, Sandino CM, Boutin R, Willis M, Vasanawala S, Hargreaves BA, Ré C, Pauly JM, Chaudhari AS. Noise2Recon: Enabling SNR-robust MRI reconstruction with semi-supervised and self-supervised learning. Magn Reson Med 2023; 90:2052-2070. [PMID: 37427449] [DOI: 10.1002/mrm.29759]
Abstract
PURPOSE To develop a method for building MRI reconstruction neural networks robust to changes in signal-to-noise ratio (SNR) and trainable with a limited number of fully sampled scans. METHODS We propose Noise2Recon, a consistency training method for SNR-robust accelerated MRI reconstruction that can use both fully sampled (labeled) and undersampled (unlabeled) scans. Noise2Recon uses unlabeled data by enforcing consistency between model reconstructions of undersampled scans and their noise-augmented counterparts. Noise2Recon was compared to compressed sensing and both supervised and self-supervised deep learning baselines. Experiments were conducted using retrospectively accelerated data from the mridata three-dimensional fast-spin-echo knee and two-dimensional fastMRI brain datasets. All methods were evaluated in label-limited settings and among out-of-distribution (OOD) shifts, including changes in SNR, acceleration factors, and datasets. An extensive ablation study was conducted to characterize the sensitivity of Noise2Recon to hyperparameter choices. RESULTS In label-limited settings, Noise2Recon achieved better structural similarity, peak signal-to-noise ratio, and normalized-RMS error than all baselines and matched the performance of supervised models, which were trained with 14× more fully sampled scans. Noise2Recon outperformed all baselines, including state-of-the-art fine-tuning and augmentation techniques, among low-SNR scans and when generalizing to OOD acceleration factors. Augmentation extent and loss weighting hyperparameters had negligible impact on Noise2Recon compared to supervised methods, which may indicate increased training stability. CONCLUSION Noise2Recon is a label-efficient reconstruction method that is robust to distribution shifts, such as changes in SNR, acceleration factors, and others, with limited or no fully sampled training data.
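The consistency idea can be written down compactly with a stand-in reconstructor: reconstruct an unlabeled under-sampled scan and a noise-augmented copy of it, then penalise the difference. The zero-filled "reconstructor" and the fixed noise level below are placeholders for the trained network and its augmentation schedule, so this is only a sketch of the objective, not the Noise2Recon training loop.

```python
# Sketch of a consistency objective between the reconstruction of an under-sampled scan
# and the reconstruction of a noise-augmented copy of the same scan.
import numpy as np

def zero_filled_recon(kspace):
    return np.fft.ifft2(kspace, norm="ortho")       # placeholder for a learned model

def consistency_loss(undersampled_kspace, mask, noise_std=0.05, seed=0):
    rng = np.random.default_rng(seed)
    noise = noise_std * (rng.standard_normal(undersampled_kspace.shape)
                         + 1j * rng.standard_normal(undersampled_kspace.shape))
    augmented = undersampled_kspace + mask * noise  # perturb only acquired samples
    recon = zero_filled_recon(undersampled_kspace)
    recon_aug = zero_filled_recon(augmented)
    return np.mean(np.abs(recon - recon_aug) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    image = rng.standard_normal((64, 64))
    mask = rng.random((64, 64)) < 0.25
    kspace = mask * np.fft.fft2(image, norm="ortho")
    print("consistency loss:", consistency_loss(kspace, mask))
```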
Affiliation(s)
- Arjun D Desai
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Department of Radiology, Stanford University, Stanford, California, USA
- Batu M Ozturkler
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Christopher M Sandino
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Robert Boutin
- Department of Radiology, Stanford University, Stanford, California, USA
- Marc Willis
- Department of Radiology, Stanford University, Stanford, California, USA
- Brian A Hargreaves
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Department of Radiology, Stanford University, Stanford, California, USA
- Christopher Ré
- Department of Computer Science, Stanford University, Stanford, California, USA
- John M Pauly
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Akshay S Chaudhari
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Biomedical Data Science, Stanford University, Stanford, California, USA
14.
15. Fan X, Liao M, Xue J, Wu H, Jin L, Zhao J, Zhu L. Joint coupled representation and homogeneous reconstruction for multi-resolution small sample face recognition. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.016]
16. Groun N, Villalba-Orero M, Lara-Pezzi E, Valero E, Garicano-Mena J, Le Clainche S. A novel data-driven method for the analysis and reconstruction of cardiac cine MRI. Comput Biol Med 2022; 151:106317. [PMID: 36442273] [DOI: 10.1016/j.compbiomed.2022.106317]
Abstract
Cardiac cine magnetic resonance imaging (MRI) can be considered the gold standard for measuring cardiac function. This imaging technique provides detailed information about cardiac structure, tissue composition and even blood flow, which makes it widely used in medical science. However, due to the long image acquisition time and several other factors, MRI sequences can easily become corrupted, contributing to the misdiagnosis of an estimated 40 million people worldwide every year. Hence, in an effort to decrease these numbers, researchers from different fields have been introducing novel tools and methods into the medical field. With the same aim, this work considers the application of the higher order dynamic mode decomposition (HODMD) technique. HODMD is a linear method, originally introduced in the fluid dynamics domain for the analysis of complex systems, whose applicability has since been extended to numerous domains, including medicine. In this work, HODMD is used to analyze sets of cardiac MR images, with the ultimate goal of identifying the main patterns and frequencies driving the heart dynamics. Furthermore, a novel interpolation algorithm based on singular value decomposition combined with HODMD is introduced, providing a three-dimensional reconstruction of the heart. This algorithm is applied (i) to reconstruct corrupted or missing images, and (ii) to build a reduced-order model of the heart dynamics.
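HODMD extends standard dynamic mode decomposition with delay embedding; as a minimal illustration, plain first-order exact DMD on a snapshot matrix is shown below, with a synthetic travelling-wave signal standing in for vectorised cine frames. The higher-order variant and the SVD-based interpolation/reconstruction step of the paper are not reproduced.

```python
# Plain (first-order) exact DMD of a snapshot matrix -- the building block that HODMD
# extends with delay embedding. Synthetic travelling waves stand in for cine-MRI frames.
import numpy as np

def dmd(snapshots, rank):
    """snapshots: (n_pixels, n_frames). Returns DMD modes and eigenvalues."""
    x1, x2 = snapshots[:, :-1], snapshots[:, 1:]
    u, s, vh = np.linalg.svd(x1, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank]
    a_tilde = u.conj().T @ x2 @ vh.conj().T / s          # reduced linear operator
    eigvals, w = np.linalg.eig(a_tilde)
    modes = x2 @ vh.conj().T / s @ w                     # exact DMD modes
    return modes, eigvals

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 60)
    space = np.linspace(0, 1, 200)[:, None]
    # Two travelling waves -> a rank-4 snapshot matrix with purely oscillatory dynamics.
    data = np.sin(2 * np.pi * space + 3 * t) + 0.5 * np.sin(4 * np.pi * space + 5 * t)
    modes, eigvals = dmd(data, rank=4)
    print("eigenvalue magnitudes:", np.round(np.abs(eigvals), 3))
```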
Affiliation(s)
- Nourelhouda Groun
- ETSI Aeronáutica y del Espacio and ETSI Telecomunicación - Universidad Politécnica de Madrid, 28040 Madrid, Spain.
- María Villalba-Orero
- Centro Nacional de Investigaciones Cardiovasculares (CNIC), C. de Melchor Fernández Almagro, 3, 28029 Madrid, Spain; Departamento de Medicina y Cirugía Animal, Facultad de Veterinaria, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Enrique Lara-Pezzi
- Centro Nacional de Investigaciones Cardiovasculares (CNIC), C. de Melchor Fernández Almagro, 3, 28029 Madrid, Spain
- Eusebio Valero
- ETSI Aeronáutica y del Espacio - Universidad Politécnica de Madrid, 28040 Madrid, Spain; Center for Computational Simulation (CCS), 28660 Boadilla del Monte, Spain
- Jesús Garicano-Mena
- ETSI Aeronáutica y del Espacio - Universidad Politécnica de Madrid, 28040 Madrid, Spain; Center for Computational Simulation (CCS), 28660 Boadilla del Monte, Spain
- Soledad Le Clainche
- ETSI Aeronáutica y del Espacio - Universidad Politécnica de Madrid, 28040 Madrid, Spain; Center for Computational Simulation (CCS), 28660 Boadilla del Monte, Spain
17. Yaqub M, Jinchao F, Ahmed S, Arshid K, Bilal MA, Akhter MP, Zia MS. GAN-TL: Generative Adversarial Networks with Transfer Learning for MRI Reconstruction. Appl Sci 2022; 12:8841. [DOI: 10.3390/app12178841]
Abstract
Generative adversarial networks (GAN), which are fueled by deep learning, are an efficient technique for image reconstruction using under-sampled MR data. In most cases, the performance of a particular model's reconstruction must be improved by using a substantial proportion of the training data. However, gathering tens of thousands of raw patient datasets for training the model in actual clinical applications is difficult because retaining k-space data is not customary in the clinical process. Therefore, it is imperative to increase the generalizability of a network that was created using a small number of samples as quickly as possible. This research explored two unique applications based on deep learning-based GAN and transfer learning. For both brain and knee MRI reconstruction, the proposed method outperforms current techniques in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). With transfer learning for the brain and knee, using a smaller number of training cases produced superior results at acceleration factor (AF) 2 (brain: PSNR 39.33, SSIM 0.97; knee: PSNR 35.48, SSIM 0.90) and AF 4 (brain: PSNR 38.13, SSIM 0.95; knee: PSNR 33.95, SSIM 0.86). The approach described would make it easier to apply future models for MRI reconstruction without necessitating the acquisition of vast imaging datasets.
Affiliation(s)
- Muhammad Yaqub
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Feng Jinchao
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Shahzad Ahmed
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Kaleem Arshid
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Muhammad Atif Bilal
- Riphah College of Computing, Faisalabad Campus, Riphah International University, Islamabad 38000, Pakistan
- College of Geoexploration Science and Technology, Jilin University, Changchun 130061, China
- Muhammad Pervez Akhter
- Riphah College of Computing, Faisalabad Campus, Riphah International University, Islamabad 38000, Pakistan
- Muhammad Sultan Zia
- Department of Computer Science, The University of Chenab, Gujranwala 50250, Pakistan
18. Xuan K, Xiang L, Huang X, Zhang L, Liao S, Shen D, Wang Q. Multimodal MRI Reconstruction Assisted With Spatial Alignment Network. IEEE Trans Med Imaging 2022; 41:2499-2509. [PMID: 35363610] [DOI: 10.1109/tmi.2022.3164050]
Abstract
In clinical practice, multi-modal magnetic resonance imaging (MRI) with different contrasts is usually acquired in a single study to assess different properties of the same region of interest in the human body. The whole acquisition process can be accelerated by having one or more modalities under-sampled in k-space. Recent research has shown that, considering the redundancy between different modalities, a target MRI modality under-sampled in k-space can be more efficiently reconstructed with a fully-sampled reference MRI modality. However, we find that the performance of the aforementioned multi-modal reconstruction can be negatively affected by subtle spatial misalignment between different modalities, which is actually common in clinical practice. In this paper, we improve the quality of multi-modal reconstruction by compensating for such spatial misalignment with a spatial alignment network. First, our spatial alignment network estimates the displacement between the fully-sampled reference and the under-sampled target images, and warps the reference image accordingly. Then, the aligned fully-sampled reference image joins the multi-modal reconstruction of the under-sampled target image. Also, considering the contrast difference between the target and reference images, we have designed a cross-modality-synthesis-based registration loss in combination with the reconstruction loss, to jointly train the spatial alignment network and the reconstruction network. The experiments on both clinical MRI and multi-coil k-space raw data demonstrate the superiority and robustness of the multi-modal MRI reconstruction empowered with our spatial alignment network. Our code is publicly available at https://github.com/woxuankai/SpatialAlignmentNetwork.
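Once the displacement field has been estimated, warping the reference image is a standard resampling operation; a minimal version using scipy.ndimage.map_coordinates is shown below with a synthetic constant displacement field, whereas in the paper the field would be predicted by the spatial alignment network.

```python
# Warping a reference image with a dense displacement field -- the operation a spatial
# alignment network performs once it has estimated the displacements. The field here is
# a synthetic constant shift, used only so the example runs.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, displacement):
    """image: (H, W); displacement: (2, H, W) giving per-pixel (row, col) offsets."""
    h, w = image.shape
    rows, cols = np.mgrid[:h, :w].astype(float)
    coords = np.stack([rows + displacement[0], cols + displacement[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    reference = rng.standard_normal((64, 64))
    field = np.zeros((2, 64, 64))
    field[1] = 2.5                      # sample 2.5 columns to the right (sub-pixel warp)
    aligned = warp(reference, field)
    print(aligned.shape)
```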
19. Perelli A, Alfonso Garcia S, Bousse A, Tasu JP, Efthimiadis N, Visvikis D. Multi-channel convolutional analysis operator learning for dual-energy CT reconstruction. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac4c32]
Abstract
Objective. Dual-energy computed tomography (DECT) has the potential to improve contrast and reduce artifacts and the ability to perform material decomposition in advanced imaging applications. The increased number of measurements results in a higher radiation dose, and it is therefore essential to reduce either the number of projections for each energy or the source x-ray intensity, but this makes tomographic reconstruction more ill-posed. Approach. We developed the multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies and we propose an optimization method which jointly reconstructs the attenuation images at low and high energies with mixed norm regularization on the sparse features obtained by pre-trained convolutional filters through the convolutional analysis operator learning (CAOL) algorithm. Main results. Extensive experiments with simulated and real computed tomography data were performed to validate the effectiveness of the proposed methods, and we report increased reconstruction accuracy compared with CAOL and iterative methods with single and joint total variation regularization. Significance. Qualitative and quantitative results on sparse views and low-dose DECT demonstrate that the proposed MCAOL method outperforms both CAOL applied on each energy independently and several existing state-of-the-art model-based iterative reconstruction techniques, thus paving the way for dose reduction.
20. Li X, Li Y, Chen P, Li F. Combining convolutional sparse coding with total variation for sparse-view CT reconstruction. Appl Opt 2022; 61:C116-C124. [PMID: 35201005] [DOI: 10.1364/ao.445315]
Abstract
Conventional dictionary-learning-based computed tomography (CT) reconstruction methods extract patches from an original image to train, ignoring the consistency of pixels in overlapping patches. To address the problem, this paper proposes a method combining convolutional sparse coding (CSC) with total variation (TV) for sparse-view CT reconstruction. The proposed method inherits the advantages of CSC by directly processing the whole image without dividing it into overlapping patches, which preserves more details and reduces artifacts caused by patch aggregation. By introducing a TV regularization term to enhance the constraint of the image domain, the noise can be effectively further suppressed. The alternating direction method of multipliers algorithm is employed to solve the objective function. Numerous experiments are conducted to validate the performance of the proposed method in different views. Qualitative and quantitative results show the superiority of the proposed method in terms of noise suppression, artifact reduction, and image details recovery.
21. Diao Y, Zhang Z. Dictionary Learning-Based Ultrasound Image Combined with Gastroscope for Diagnosis of Helicobacter pylori-Caused Gastrointestinal Bleeding. Comput Math Methods Med 2021; 2021:6598631. [PMID: 34992675] [PMCID: PMC8727121] [DOI: 10.1155/2021/6598631]
Abstract
The study aimed to evaluate the application value of ultrasound combined with gastroscopy in diagnosing gastrointestinal bleeding (GIB) caused by Helicobacter pylori (HP). A diagnostic model combining ultrasound and gastroscopy based on an improved K-means singular value decomposition (N-KSVD) was first proposed. 86 patients with peptic ulcer (PU) and GIB admitted to our hospital were selected and defined as the test group, and 86 PU patients free of GIB during the same period were selected as the control group. The two groups were observed for clinical manifestations and HP detection results. The results showed that when the noise ρ was 10, 30, 50, and 70, the peak signal-to-noise ratio (PSNR) values of the N-KSVD dictionary after denoising were 35.55, 30.47, 27.91, and 26.08, respectively, and the structural similarity index measure (SSIM) values were 0.91, 0.827, 0.763, and 0.709, respectively. These were greater than those of the DCT dictionary and the Global dictionary, and showed statistically significant differences versus the DCT dictionary (P < 0.05). In the test group, there were 60 HP-positives and 26 HP-negatives, and there was a significant difference in the numbers of HP-positives and HP-negatives (P < 0.05), but no significant difference in gender and age (P > 0.05). Of the subjects with abdominal pain, HP-positives accounted for 59.02% and HP-negatives accounted for 37.67%, showing significant differences (P < 0.05). Finally, the size of the ulcer lesion in HP-positives and HP-negatives was compared. It was found that 71.57% of HP-positives had ulcers with a diameter of 0-1 cm, and 28.43% had ulcers with a diameter of ≥1 cm. Compared with HP-negatives, the difference was statistically significant (P < 0.05). In conclusion, N-KSVD-based ultrasound combined with gastroscopy demonstrated good denoising effects and was effective in the diagnosis of GIB caused by HP.
Affiliation(s)
- Yunyun Diao
- Department of Digestion and Hematology, Sinopharm North Hospital, Baotou, 014030 Inner Mongolia, China
- Zhenzhou Zhang
- Department of Digestion and Hematology, Sinopharm North Hospital, Baotou, 014030 Inner Mongolia, China
22. Quan C, Zhou J, Zhu Y, Chen Y, Wang S, Liang D, Liu Q. Homotopic Gradients of Generative Density Priors for MR Image Reconstruction. IEEE Trans Med Imaging 2021; 40:3265-3278. [PMID: 34010128] [DOI: 10.1109/tmi.2021.3081677]
Abstract
Deep learning, particularly the generative model, has recently demonstrated tremendous potential to significantly speed up image reconstruction with reduced measurements. Rather than optimizing the density priors directly, as existing generative models often do, in this work we take advantage of denoising score matching and exploit homotopic gradients of generative density priors (HGGDP) for magnetic resonance imaging (MRI) reconstruction. More precisely, to tackle the low-dimensional manifold and low data density region issues in the generative density prior, we estimate the target gradients in a higher-dimensional space. We train a more powerful noise conditional score network by forming a high-dimensional tensor as the network input at the training phase, and more artificial noise is also injected in the embedding space. At the reconstruction stage, a homotopy method is employed to pursue the density prior, so as to boost the reconstruction performance. Experimental results demonstrate the remarkable performance of HGGDP in terms of reconstruction accuracy: with only 10% of the k-space data, it can still generate images of high quality, as effectively as standard MRI reconstruction with the fully sampled data.
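The overall structure of such score-based reconstruction (annealed Langevin updates interleaved with data consistency) can be sketched as below. The learned noise-conditional score network is replaced by the analytic score of a simple Gaussian prior around the zero-filled image purely so that the loop runs; this placeholder is an assumption and is not the HGGDP prior.

```python
# Structure of an annealed Langevin iteration with interleaved data consistency, as used
# by score-based MRI reconstruction methods. The score function is a runnable placeholder.
import numpy as np

def placeholder_score(x, prior_mean, sigma):
    return -(x - prior_mean) / sigma ** 2            # score of N(prior_mean, sigma^2 I)

def annealed_langevin_recon(kspace, mask, sigmas=(1.0, 0.5, 0.1), steps=20, eps=0.05):
    rng = np.random.default_rng(0)
    x = np.fft.ifft2(kspace, norm="ortho").real      # zero-filled initialisation
    prior_mean = x.copy()
    for sigma in sigmas:                             # anneal the noise level
        step = eps * sigma ** 2
        for _ in range(steps):
            score = placeholder_score(x, prior_mean, sigma)
            x = x + step * score + np.sqrt(2 * step) * rng.standard_normal(x.shape)
            # hard data consistency: keep the measured k-space samples
            k = np.fft.fft2(x, norm="ortho")
            x = np.fft.ifft2(np.where(mask, kspace, k), norm="ortho").real
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    image = rng.standard_normal((64, 64))
    mask = rng.random((64, 64)) < 0.1                # keep ~10% of k-space
    kspace = mask * np.fft.fft2(image, norm="ortho")
    recon = annealed_langevin_recon(kspace, mask)
    print(recon.shape)
```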
23. Liu X, Wang J, Lin S, Crozier S, Liu F. Optimizing multicontrast MRI reconstruction with shareable feature aggregation and selection. NMR Biomed 2021; 34:e4540. [PMID: 33974306] [DOI: 10.1002/nbm.4540]
Abstract
This paper proposes a new method for optimizing feature sharing in deep neural network-based, rapid, multicontrast magnetic resonance imaging (MC-MRI). Using the shareable information of MC images for accelerated MC-MRI reconstruction, current algorithms stack the MC images or features without optimizing the sharing protocols, leading to suboptimal reconstruction results. In this paper, we propose a novel feature aggregation and selection scheme in a deep neural network to better leverage the MC features and improve the reconstruction results. First, we propose to extract and use the shareable information by mapping the MC images into multiresolution feature maps with multilevel layers of the neural network. In this way, the extracted features capture complementary image properties, including local patterns from the shallow layers and semantic information from the deep layers. Then, an explicit selection module is designed to compile the extracted features optimally. That is, larger weights are learned to incorporate the constructive, shareable features; and smaller weights are assigned to the unshareable information. We conduct comparative studies on publicly available T2-weighted and T2-weighted fluid attenuated inversion recovery brain images, and the results show that the proposed network consistently outperforms existing algorithms. In addition, the proposed method can recover the images with high fidelity under 16 times acceleration. The ablation studies are conducted to evaluate the effectiveness of the proposed feature aggregation and selection mechanism. The results and the visualization of the weighted features show that the proposed method does effectively improve the usage of the useful features and suppress useless information, leading to overall enhanced reconstruction results. Additionally, the selection module can zero-out repeated and redundant features and improve network efficiency.
Affiliation(s)
- Xinwen Liu
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Jing Wang
- School of Information and Communication Technology, Griffith University, Brisbane, Australia
- Suzhen Lin
- School of Data Science and Technology, North University of China, Taiyuan, China
- The Key Laboratory of Biomedical Imaging and Big Data Processing in Shanxi Province, Shanxi, China
- Stuart Crozier
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Feng Liu
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
24. Chandra SS, Bran Lorenzana M, Liu X, Liu S, Bollmann S, Crozier S. Deep learning in magnetic resonance image reconstruction. J Med Imaging Radiat Oncol 2021; 65:564-577. [PMID: 34254448] [DOI: 10.1111/1754-9485.13276]
Abstract
Magnetic resonance (MR) imaging visualises soft tissue contrast in exquisite detail without harmful ionising radiation. In this work, we provide a state-of-the-art review on the use of deep learning in MR image reconstruction from different image acquisition types involving compressed sensing techniques, parallel image acquisition and multi-contrast imaging. Publications with deep learning-based image reconstruction for MR imaging were identified from the literature (PubMed and Google Scholar), and a comprehensive description of each of the works was provided. A detailed comparison that highlights the differences, the data used and the performance of each of these works were also made. A discussion of the potential use cases for each of these methods is provided. The sparse image reconstruction methods were found to be most popular in using deep learning for improved performance, accelerating acquisitions by around 4-8 times. Multi-contrast image reconstruction methods rely on at least one pre-acquired image, but can achieve 16-fold, and even up to 32- to 50-fold acceleration depending on the set-up. Parallel imaging provides frameworks to be integrated in many of these methods for additional speed-up potential. The successful use of compressed sensing techniques and multi-contrast imaging with deep learning and parallel acquisition methods could yield significant MR acquisition speed-ups within clinical routines in the near future.
Affiliation(s)
- Shekhar S Chandra
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Marlon Bran Lorenzana
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Xinwen Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Siyu Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Steffen Bollmann
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Stuart Crozier
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
| |
Collapse
|
25
|
Qi H, Cruz G, Botnar R, Prieto C. Synergistic multi-contrast cardiac magnetic resonance image reconstruction. PHILOSOPHICAL TRANSACTIONS. SERIES A, MATHEMATICAL, PHYSICAL, AND ENGINEERING SCIENCES 2021; 379:20200197. [PMID: 33966456 DOI: 10.1098/rsta.2020.0197] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Cardiac magnetic resonance imaging (CMR) is an important tool for the non-invasive diagnosis of a variety of cardiovascular diseases. Parametric mapping with multi-contrast CMR is able to quantify tissue alterations in myocardial disease and promises to improve patient care. However, magnetic resonance imaging is an inherently slow imaging modality, resulting in long acquisition times for parametric mapping, which acquires a series of cardiac images with different contrasts for signal fitting or dictionary matching. Furthermore, the extra effort needed to deal with respiratory and cardiac motion by triggering and gating further increases the scan time. Several techniques have been developed to speed up CMR acquisitions; these usually acquire less data than required by the Nyquist-Shannon sampling theorem, followed by regularized reconstruction to mitigate undersampling artefacts. Recent advances in CMR parametric mapping speed up acquisition by synergistically exploiting spatial-temporal and contrast redundancies. In this article, we review the recent developments in multi-contrast CMR image reconstruction for parametric mapping, with a special focus on low-rank and model-based reconstructions. Deep learning-based multi-contrast reconstruction has recently been proposed in other magnetic resonance applications; these developments are covered to introduce the general methodology. Current technical limitations and potential future directions are discussed. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
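Dictionary matching, mentioned above as one route to parametric maps, reduces to finding the simulated signal evolution most correlated with the measured one. The sketch below is a generic, hedged illustration using an assumed mono-exponential inversion-recovery dictionary for T1; it is not the reconstruction method reviewed in the paper.

```python
import numpy as np

# Assumed dictionary: inversion-recovery signal evolutions for candidate T1s.
t1_candidates = np.linspace(200, 2000, 200)               # ms (assumed grid)
inversion_times = np.array([100, 300, 600, 1200, 2500])   # ms (assumed TIs)
dictionary = 1 - 2 * np.exp(-inversion_times[None, :] / t1_candidates[:, None])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match_t1(signal: np.ndarray) -> float:
    """Return the T1 whose normalized dictionary entry best correlates
    with the measured signal (inner-product matching)."""
    signal = signal / np.linalg.norm(signal)
    return float(t1_candidates[np.argmax(dictionary @ signal)])

# Noisy test signal generated with T1 = 950 ms.
true = 1 - 2 * np.exp(-inversion_times / 950.0)
noise = 0.01 * np.random.default_rng(0).normal(size=true.size)
print(match_t1(true + noise))  # prints a value close to 950
```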
Collapse
Affiliation(s)
- Haikun Qi
- School of Biomedical Engineering and Imaging Sciences, King's College London, 3rd Floor, Lambeth Wing, St Thomas' Hospital, London SE1 7EH, UK
| | - Gastao Cruz
- School of Biomedical Engineering and Imaging Sciences, King's College London, 3rd Floor, Lambeth Wing, St Thomas' Hospital, London SE1 7EH, UK
| | - René Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, 3rd Floor, Lambeth Wing, St Thomas' Hospital, London SE1 7EH, UK
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, 3rd Floor, Lambeth Wing, St Thomas' Hospital, London SE1 7EH, UK
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
| |
Collapse
|
26
|
Arridge SR, Ehrhardt MJ, Thielemans K. (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods. PHILOSOPHICAL TRANSACTIONS. SERIES A, MATHEMATICAL, PHYSICAL, AND ENGINEERING SCIENCES 2021; 379:20200205. [PMID: 33966461 DOI: 10.1098/rsta.2020.0205] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Imaging is omnipresent in modern society, with imaging devices based on a zoo of physical principles probing a specimen across different wavelengths, energies and time. Recent years have seen a change in the imaging landscape, with more and more devices combining modalities that were previously used separately. Motivated by these hardware developments, an ever-increasing set of mathematical ideas is appearing for how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges and provide an outlook on how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
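One common way to encode the structural correlations mentioned here is a joint (vectorial) total variation that couples the image gradients across channels. The NumPy sketch below computes such a penalty for two channels and is a generic illustration, not a method taken from this overview.

```python
import numpy as np

def joint_tv(images: np.ndarray) -> float:
    """Joint total variation of a stack of images (channels, H, W): sum over
    pixels of the root of the squared gradients summed across channels,
    which favours edges shared by all channels."""
    dx = np.diff(images, axis=2, append=images[:, :, -1:])
    dy = np.diff(images, axis=1, append=images[:, -1:, :])
    return float(np.sqrt((dx ** 2 + dy ** 2).sum(axis=0)).sum())

rng = np.random.default_rng(0)
t1w, t2w = rng.random((2, 64, 64))          # two toy "contrasts"
print(joint_tv(np.stack([t1w, t2w])))
```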
Collapse
Affiliation(s)
- Simon R Arridge
- Department of Computer Science, University College London, London, UK
| | - Matthias J Ehrhardt
- Department of Mathematical Sciences, University of Bath, Bath, UK
- Institute for Mathematical Innovation, University of Bath, Bath, UK
| | - Kris Thielemans
- Institute of Nuclear Medicine, University College London, London, UK
| |
Collapse
|
27
|
Liu X, Wang J, Jin J, Li M, Tang F, Crozier S, Liu F. Deep unregistered multi-contrast MRI reconstruction. Magn Reson Imaging 2021; 81:33-41. [PMID: 34051290 DOI: 10.1016/j.mri.2021.05.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 04/18/2021] [Accepted: 05/23/2021] [Indexed: 11/18/2022]
Abstract
Multiple magnetic resonance images of different contrasts are normally acquired for clinical diagnosis. Recent research has shown that previously acquired multi-contrast (MC) images of the same patient can be used as an anatomical prior to accelerate magnetic resonance imaging (MRI). However, current MC-MRI networks are based on the assumption that the images are perfectly registered, which is rarely the case in real-world applications. In this paper, we propose an end-to-end deep neural network to reconstruct highly accelerated images by exploiting the shareable information from potentially misaligned reference images of an arbitrary contrast. Specifically, a spatial transformation (ST) module is designed and integrated into the reconstruction network to align the pre-acquired reference images with the images to be reconstructed. The misalignment is further alleviated by maximizing the normalized cross-correlation (NCC) between the MC images. The visualization of feature maps demonstrates that, when applied to publicly available brain datasets, the proposed method effectively reduces the misalignment between the images for shareable information extraction. Additionally, the experimental results on these datasets show that the proposed network allows robust exploitation of the shareable information across misaligned MC images, leading to improved reconstruction results.
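The normalized cross-correlation objective referred to above can be written compactly. The PyTorch sketch below implements a global NCC loss for single-channel image batches; it is an illustrative stand-in and may differ from the authors' (possibly windowed/local) formulation.

```python
import torch

def ncc_loss(moving: torch.Tensor, fixed: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative global normalized cross-correlation between two image batches
    of shape (B, 1, H, W); minimizing it encourages alignment."""
    m = moving.flatten(1) - moving.flatten(1).mean(dim=1, keepdim=True)
    f = fixed.flatten(1) - fixed.flatten(1).mean(dim=1, keepdim=True)
    ncc = (m * f).sum(dim=1) / (m.norm(dim=1) * f.norm(dim=1) + eps)
    return -ncc.mean()

ref = torch.randn(2, 1, 64, 64)
warped = ref + 0.05 * torch.randn_like(ref)
print(ncc_loss(warped, ref))  # close to -1 for well-aligned images
```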
Collapse
Affiliation(s)
- Xinwen Liu
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia
| | | | - Jin Jin
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia; Siemens Healthcare Pty. Ltd., Brisbane, Australia
| | - Mingyan Li
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia
| | - Fangfang Tang
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia
| | - Stuart Crozier
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia
| | - Feng Liu
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia.
| |
Collapse
|
28
|
Sveinsson B, Gold GE, Hargreaves BA, Yoon D. Utilizing shared information between gradient-spoiled and RF-spoiled steady-state MRI signals. Phys Med Biol 2021; 66:01NT03. [PMID: 33246317 DOI: 10.1088/1361-6560/abce8a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
This work presents an analytical relationship between gradient-spoiled and RF-spoiled steady-state signals. The two echoes acquired in double-echo in steady-state (DESS) scans are shown to lie on a line in the signal plane, where the two axes represent the amplitudes of each echo. The location along the line depends on the amount of spoiling and the diffusivity, and the line terminates in a point corresponding to an RF-spoiled signal. In addition to the main contribution of demonstrating this signal relationship, we also report preliminary results from an example application: a heuristic denoising method for when both types of scans are performed. This is investigated in simulations, phantom scans and in vivo scans. For the signal model, the main topic of this study, simulations confirmed its accuracy and explored its dependency on signal parameters and image noise. For its preliminary application to noise reduction, simulations demonstrated that the denoising method reduces the noise-induced standard deviation by about 30%. The relative effect of the method on the signals depends on the slope of the described line, which is shown to be zero at the Ernst angle. The phantom scans show a similar effect to the simulations, while in vivo scans showed a slightly lower average improvement of about 28%.
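The denoising idea, projecting noisy echo pairs onto the line they are predicted to lie on in the (S1, S2) signal plane, can be illustrated in a few lines of NumPy. The line parameters and noise level below are arbitrary stand-ins, not the analytical relationship derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed ground-truth line in the (S1, S2) signal plane: S2 = a * S1 + b.
a, b = 0.35, 0.02
s1_true = np.linspace(0.2, 0.8, 200)
s2_true = a * s1_true + b

# Noisy measurements of both echoes.
s1 = s1_true + 0.02 * rng.normal(size=s1_true.size)
s2 = s2_true + 0.02 * rng.normal(size=s2_true.size)

# Orthogonal projection of each (s1, s2) point onto the known line.
d = np.array([1.0, a]) / np.hypot(1.0, a)      # unit direction of the line
p0 = np.array([0.0, b])                        # a point on the line
proj = p0 + ((np.stack([s1, s2], axis=1) - p0) @ d)[:, None] * d

rmse_before = np.sqrt(np.mean((s2 - s2_true) ** 2))
rmse_after = np.sqrt(np.mean((proj[:, 1] - s2_true) ** 2))
print(f"second-echo RMSE: {rmse_before:.4f} -> {rmse_after:.4f}")
```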
Collapse
Affiliation(s)
- Bragi Sveinsson
- Athinoula A. Martinos Center, Department of Radiology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
| | | | | | | |
Collapse
|
29
|
On the regularization of feature fusion and mapping for fast MR multi-contrast imaging via iterative networks. Magn Reson Imaging 2021; 77:159-168. [PMID: 33400936 DOI: 10.1016/j.mri.2020.12.019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 12/01/2020] [Accepted: 12/29/2020] [Indexed: 01/23/2023]
Abstract
Multi-contrast (MC) magnetic resonance imaging (MRI) of the same patient usually requires long scanning times, despite the images sharing redundant information. In this work, we propose a new iterative network that utilizes the shareable information among MC images for MRI acceleration. The proposed network has reinforced data fidelity control and anatomy guidance through an iterative gradient-descent optimization procedure, leading to reduced uncertainties and improved reconstruction results. Through a convolutional network, the new method incorporates a learnable regularization unit capable of extracting, fusing and mapping shareable information among different contrasts. Specifically, a dilated inception block is proposed to promote multi-scale feature extraction and increase the receptive field diversity for contextual information incorporation. Lastly, an optimal MC information feeding protocol is built through the design of a complementary feature extractor block. Comprehensive experiments demonstrate the superiority of the proposed network, both qualitatively and quantitatively.
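In generic unrolled-network terms, the 'reinforced data fidelity control' corresponds to interleaving a gradient step on the k-space consistency term with learned regularization. The sketch below shows only that gradient step for single-coil Cartesian data; the regularization network is omitted and all names are illustrative.

```python
import torch

def data_fidelity_step(image: torch.Tensor, kspace: torch.Tensor,
                       mask: torch.Tensor, step: float = 1.0) -> torch.Tensor:
    """One gradient-descent step on 0.5 * || M F x - y ||^2 for a complex
    image x, measured k-space y and sampling mask M (single coil)."""
    k_est = torch.fft.fft2(image, norm="ortho")
    residual = mask * (k_est - kspace)
    grad = torch.fft.ifft2(residual, norm="ortho")
    return image - step * grad

# Toy example: roughly 4x undersampled k-space of a random complex image.
x_true = torch.randn(128, 128, dtype=torch.complex64)
mask = (torch.rand(128, 128) < 0.25).to(torch.complex64)
y = mask * torch.fft.fft2(x_true, norm="ortho")

x = torch.zeros_like(x_true)          # initial estimate
x = data_fidelity_step(x, y, mask)    # one step from zero gives the
                                      # zero-filled reconstruction; a network
                                      # alternates this with a learned regularizer
```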
Collapse
|
30
|
Liu R, Zhang Y, Cheng S, Luo Z, Fan X. A Deep Framework Assembling Principled Modules for CS-MRI: Unrolling Perspective, Convergence Behaviors, and Practical Modeling. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4150-4163. [PMID: 32746155 DOI: 10.1109/tmi.2020.3014193] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Compressed Sensing Magnetic Resonance Imaging (CS-MRI) significantly accelerates MR acquisition at a sampling rate much lower than the Nyquist criterion. A major challenge for CS-MRI lies in solving the severely ill-posed inverse problem of reconstructing aliasing-free MR images from sparse k-space data. Conventional methods typically optimize an energy function and produce high-quality restorations, but their iterative numerical solvers are unavoidably time-consuming. Recent deep techniques provide fast restoration by either learning a direct prediction of the final reconstruction or plugging learned modules into the energy optimizer. Nevertheless, these data-driven predictors cannot guarantee that the reconstruction follows the principled constraints underlying the domain knowledge, so the reliability of their reconstruction process is questionable. In this paper, we propose a deep framework assembling principled modules for CS-MRI that fuses a learning strategy with the iterative solver of a conventional reconstruction energy. The framework embeds an optimal-condition checking mechanism, fostering efficient and reliable reconstruction. We also apply the framework to three practical tasks, i.e., complex-valued data reconstruction, parallel imaging and reconstruction with Rician noise. Extensive experiments on both benchmark and manufacturer-testing images demonstrate that the proposed method reliably converges to the optimal solution more efficiently and accurately than the state of the art in various scenarios.
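As a generic illustration of an optimality-condition check in an iterative reconstruction (not the authors' mechanism), the sketch below runs plain ISTA on a small sparse recovery problem and stops once the proximal-gradient residual, a standard first-order optimality measure, falls below a tolerance.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, tol=1e-6, max_iter=5000):
    """Plain ISTA for 0.5*||Ax - y||^2 + lam*||x||_1 with a first-order
    optimality check: stop when the proximal-gradient residual is small."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        z = x - step * (A.T @ (A @ x - y))           # gradient step
        x_new = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold
        if np.linalg.norm(x_new - x) / step < tol:   # prox-gradient residual
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
x_hat = ista(A, A @ x_true, lam=0.05)
print(np.count_nonzero(np.abs(x_hat) > 1e-3), "nonzeros in the estimate")
```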
Collapse
|
31
|
Pali MC, Schaeffter T, Kolbitsch C, Kofler A. Adaptive sparsity level and dictionary size estimation for image reconstruction in accelerated 2D radial cine MRI. Med Phys 2020; 48:178-192. [PMID: 33090537 DOI: 10.1002/mp.14547] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2020] [Revised: 09/16/2020] [Accepted: 10/06/2020] [Indexed: 11/10/2022] Open
Abstract
PURPOSE In the past, dictionary learning (DL) and sparse coding (SC) have been proposed for the regularization of image reconstruction problems. The regularization is given by a sparse approximation of all image patches using a learned dictionary, that is, an overcomplete set of basis functions learned from data. Despite its competitiveness, DL and SC require the tuning of two essential hyperparameters: the sparsity level S (the number of dictionary basis functions, called atoms, used to approximate each patch) and the dictionary size K (the overall number of such atoms in the dictionary). These two hyperparameters usually have to be chosen a priori and are determined by repetitive and computationally expensive experiments; furthermore, the final reported values vary depending on the specific situation. As a result, the clinical application of the method is limited, as standardized reconstruction protocols have to be used. METHODS In this work, we use adaptive DL and propose a novel adaptive sparse coding algorithm for two-dimensional (2D) radial cine MR image reconstruction. Using adaptive DL and adaptive SC, the optimal dictionary size K as well as the optimal sparsity level S are chosen dependent on the considered data. RESULTS Our three main results are the following. First, adaptive DL and adaptive SC deliver results that are comparable to or better than those of the most widely used non-adaptive versions of DL and SC. Second, the time needed for the regularization is reduced because the sparsity level S is never overestimated. Finally, an a priori choice of S and K is no longer needed; both are chosen optimally for the data under consideration. CONCLUSIONS Adaptive DL and adaptive SC can greatly facilitate the application of DL- and SC-based regularization methods. While in this work we focused on 2D radial cine MR image reconstruction, we expect the method to be applicable to different imaging modalities as well.
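Adaptive choice of the sparsity level can be mimicked in a basic orthogonal matching pursuit by selecting atoms until each patch's residual falls below a noise-dependent threshold rather than after a fixed number of atoms. The NumPy sketch below illustrates that stopping rule only; it is not the adaptive DL/SC algorithm proposed in the paper.

```python
import numpy as np

def omp_adaptive(D: np.ndarray, patch: np.ndarray, tol: float, max_atoms: int = 20):
    """Orthogonal matching pursuit that picks atoms until the residual norm
    falls below `tol`, so the sparsity level adapts to each patch."""
    residual, support = patch.copy(), []
    while np.linalg.norm(residual) > tol and len(support) < max_atoms:
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best atom
        coeffs, *_ = np.linalg.lstsq(D[:, support], patch, rcond=None)
        residual = patch - D[:, support] @ coeffs                # orthogonal update
    return support, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
patch = D[:, [3, 70, 180]] @ np.array([1.0, -0.7, 0.4]) + 0.01 * rng.normal(size=64)
support, residual = omp_adaptive(D, patch, tol=0.1)
print(sorted(support), np.linalg.norm(residual))
```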
Collapse
Affiliation(s)
| | - Tobias Schaeffter
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig, Berlin, 10587, Germany
- School of Imaging Sciences and Biomedical Engineering, King's College London, London, SE1 7EH, UK
- Department of Biomedical Engineering, Technical University of Berlin, Berlin, 10623, Germany
| | - Christoph Kolbitsch
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig, Berlin, 10587, Germany
- School of Imaging Sciences and Biomedical Engineering, King's College London, London, SE1 7EH, UK
| | - Andreas Kofler
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig, Berlin, 10587, Germany
- Department of Radiology, Charité-Universitätsmedizin Berlin, Berlin, 10117, Germany
| |
Collapse
|
32
|
Li Y, Chai Y, Yin H, Chen B. A novel feature learning framework for high-dimensional data classification. INT J MACH LEARN CYB 2020. [DOI: 10.1007/s13042-020-01188-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|