1. Zhu Q, Liu B, Cui ZX, Cao C, Yan X, Liu Y, Cheng J, Zhou Y, Zhu Y, Wang H, Zeng H, Liang D. PEARL: Cascaded Self-Supervised Cross-Fusion Learning for Parallel MRI Acceleration. IEEE J Biomed Health Inform 2025;29:3086-3097. PMID: 38147421. DOI: 10.1109/jbhi.2023.3347355.
Abstract
Supervised deep learning (SDL) methodology holds promise for accelerated magnetic resonance imaging (AMRI) but is hampered by the reliance on extensive training data. Some self-supervised frameworks, such as deep image prior (DIP), have emerged, eliminating the explicit training procedure but often struggling to remove noise and artifacts under significant degradation. This work introduces a novel self-supervised accelerated parallel MRI approach called PEARL, leveraging a multiple-stream joint deep decoder with two cross-fusion schemes to accurately reconstruct one or more target images from compressively sampled k-space. Each stream comprises cascaded cross-fusion sub-block networks (SBNs) that sequentially perform combined upsampling, 2D convolution, joint attention, ReLU activation and batch normalization (BN). Among them, combined upsampling and joint attention facilitate mutual learning between multiple-stream networks by integrating multi-parameter priors in both additive and multiplicative manners. Long-range unified skip connections within SBNs ensure effective information propagation between distant cross-fusion layers. Additionally, incorporating dual-normalized edge-orientation similarity regularization into the training loss enhances detail reconstruction and prevents overfitting. Experimental results consistently demonstrate that PEARL outperforms the existing state-of-the-art (SOTA) self-supervised AMRI technologies in various MRI cases. Notably, 5-fold to 6-fold accelerated acquisition yields a 1%-2% improvement in SSIM_ROI and a 3%-6% improvement in PSNR_ROI, along with a significant 15%-20% reduction in RLNE_ROI.
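
As a rough orientation only, the sketch below shows what one cascaded sub-block of such a multi-stream decoder could look like in PyTorch: the layer sequence (upsampling, 2D convolution, a joint-attention gate, ReLU, batch normalization) follows the order named in the abstract, but the channel sizes, the gating design, the weight sharing and the class names are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossFusionSubBlock(nn.Module):
    """Illustrative sub-block: upsample -> conv -> joint attention -> ReLU -> BN."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # A toy "joint attention": a gate computed from the partner stream's features.
        self.gate = nn.Sequential(nn.Conv2d(out_ch, out_ch, kernel_size=1), nn.Sigmoid())
        self.act = nn.ReLU(inplace=True)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x: torch.Tensor, partner: torch.Tensor) -> torch.Tensor:
        x = self.conv(self.up(x))
        partner = self.conv(self.up(partner))   # weights shared only to keep the sketch short
        # Additive and multiplicative fusion of the two streams.
        x = x * self.gate(partner) + partner
        return self.bn(self.act(x))

# Toy usage: two streams of 16-channel feature maps at 32x32 resolution.
a, b = torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32)
out = CrossFusionSubBlock(16, 16)(a, b)   # -> (1, 16, 64, 64)
```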

2. Huang Y, Wu Z, Xu X, Zhang M, Wang S, Liu Q. Partition-based k-space synthesis for multi-contrast parallel imaging. Magn Reson Imaging 2025;117:110297. PMID: 39647517. DOI: 10.1016/j.mri.2024.110297.
Abstract
PURPOSE: Multi-contrast magnetic resonance imaging is a significant and essential medical imaging technique. However, multi-contrast imaging has a longer acquisition time and is prone to motion artifacts. In particular, the acquisition time for a T2-weighted image is prolonged due to its longer repetition time (TR), whereas a T1-weighted image has a shorter TR. Therefore, utilizing complementary information across T1- and T2-weighted images is a way to decrease the overall imaging time. Previous T1-assisted T2 reconstruction methods have mostly focused on the image domain, using whole-image fusion approaches. Image-domain reconstruction has the drawbacks of high computational complexity and limited flexibility. To address this issue, we propose a novel multi-contrast imaging method called partition-based k-space synthesis (PKS), which can achieve better reconstruction quality of the T2-weighted image through feature fusion. METHODS: Concretely, we first decompose the fully-sampled T1 k-space data and the under-sampled T2 k-space data into two sub-datasets each. Two new objects are then constructed by combining the sub-T1 and sub-T2 data, and these new objects are treated as whole k-space data to reconstruct the T2-weighted image. RESULTS: Experimental results showed that the developed PKS scheme can achieve comparable or better results than traditional k-space parallel imaging (SAKE) that processes each contrast independently. At the same time, our method showed good adaptability and robustness under different contrast-assisted settings and T1-T2 ratios. Efficient reconstruction of the target-modality image was realized under various conditions, with excellent performance in restoring image quality and preserving details. CONCLUSIONS: This work proposed a PKS multi-contrast method to assist target-modality image reconstruction. We have conducted extensive experiments on different multi-contrast data, diverse ratios of T1 to T2, and different sampling masks to demonstrate the generalization and robustness of the proposed model.
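
To make the partition idea concrete, the toy snippet below splits a fully-sampled T1 k-space and an under-sampled T2 k-space along the phase-encoding axis and recombines the halves into two hybrid objects. The even/odd-line partition and all array names are assumptions chosen for the example, not the partition rule used in the paper.

```python
import numpy as np

def partition_and_combine(k_t1: np.ndarray, k_t2: np.ndarray):
    """Split two k-spaces into even/odd phase-encoding lines and swap halves.

    Returns two hybrid k-spaces, each mixing sub-T1 and sub-T2 data.
    """
    assert k_t1.shape == k_t2.shape
    hybrid_a = k_t2.copy()
    hybrid_b = k_t1.copy()
    hybrid_a[0::2, :] = k_t1[0::2, :]   # even lines from T1, odd lines from T2
    hybrid_b[0::2, :] = k_t2[0::2, :]   # even lines from T2, odd lines from T1
    return hybrid_a, hybrid_b

# Toy usage with random complex k-space data (256 phase-encoding lines, 256 readout points).
k1 = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
k2 = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
ka, kb = partition_and_combine(k1, k2)
```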

Affiliations:
- Yuxia Huang: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China.
- Zhonghui Wu: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China.
- Xiaoling Xu: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China.
- Minghui Zhang: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China.
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, Chinese Academy of Sciences, Shenzhen 518055, China.
- Qiegen Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China.

3. Huang J, Wu Y, Wang F, Fang Y, Nan Y, Alkan C, Abraham D, Liao C, Xu L, Gao Z, Wu W, Zhu L, Chen Z, Lally P, Bangerter N, Setsompop K, Guo Y, Rueckert D, Wang G, Yang G. Data- and Physics-Driven Deep Learning Based Reconstruction for Fast MRI: Fundamentals and Methodologies. IEEE Rev Biomed Eng 2025;18:152-171. PMID: 39437302. DOI: 10.1109/rbme.2024.3485022.
Abstract
Magnetic Resonance Imaging (MRI) is a pivotal clinical diagnostic tool, yet its extended scanning times often compromise patient comfort and image quality, especially in volumetric, temporal and quantitative scans. This review elucidates recent advances in MRI acceleration via data- and physics-driven models, leveraging techniques from algorithm unrolling models, enhancement-based methods, and plug-and-play models to the emerging full spectrum of generative model-based methods. We also explore the synergistic integration of data models with physics-based insights, encompassing advancements in multi-coil hardware accelerations such as parallel imaging and simultaneous multi-slice imaging, and the optimization of sampling patterns. We then focus on domain-specific challenges and opportunities, including image redundancy exploitation, image integrity, evaluation metrics, data heterogeneity, and model generalization. This work also discusses potential solutions and future research directions, with an emphasis on the role of data harmonization and federated learning in further improving the general applicability and performance of these methods in MRI reconstruction.
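
Most of the methods surveyed here build on the standard regularized inverse problem for undersampled (parallel) MRI; the generic textbook formulation is reproduced below for context only and is not taken from this review.

```latex
\hat{x} = \arg\min_{x} \; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 + \lambda\, R(x),
\qquad A = M F S,
```

where M is the sampling mask, F the Fourier transform, S the coil sensitivities, y the acquired k-space and R a hand-crafted or learned regularizer; unrolled networks alternate a data-consistency step derived from the first term with a learned denoising step standing in for R.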

4. Zhang H, Wang Q, Shi J, Ying S, Wen Z. Deep unfolding network with spatial alignment for multi-modal MRI reconstruction. Med Image Anal 2025;99:103331. PMID: 39243598. DOI: 10.1016/j.media.2024.103331.
Abstract
Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic information, but some modalities are limited by long scanning times. To accelerate the whole acquisition process, MRI reconstruction of one modality from highly under-sampled k-space data with another fully-sampled reference modality is an efficient solution. However, misalignment between modalities, which is common in clinical practice, can negatively affect reconstruction quality. Existing deep learning-based methods that account for inter-modality misalignment perform better, but still share two main limitations: (1) the spatial alignment task is not adaptively integrated with the reconstruction process, resulting in insufficient complementarity between the two tasks; and (2) the entire framework has weak interpretability. In this paper, we construct a novel Deep Unfolding Network with Spatial Alignment, termed DUN-SA, to appropriately embed the spatial alignment task into the reconstruction process. Concretely, we derive a novel joint alignment-reconstruction model with a specially designed aligned cross-modal prior term. By relaxing the model into cross-modal spatial alignment and multi-modal reconstruction tasks, we propose an effective algorithm to solve this model alternately. We then unfold the iterative stages of the proposed algorithm and design corresponding network modules to build DUN-SA with interpretability. Through end-to-end training, we effectively compensate for spatial misalignment using only the reconstruction loss, and utilize the progressively aligned reference modality to provide an inter-modality prior that improves the reconstruction of the target modality. Comprehensive experiments on four real datasets demonstrate that our method exhibits superior reconstruction performance compared to state-of-the-art methods.
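
A joint alignment-reconstruction model of the kind described can be written generically as below; the actual cross-modal prior used in DUN-SA differs, so the symbols (target image x, reference image x_ref, deformation φ, prior P) should be read as illustrative assumptions.

```latex
\min_{x,\;\varphi} \; \tfrac{1}{2}\,\lVert M F x - y \rVert_2^2
  + \lambda\, R(x)
  + \mu\, P\!\left(x,\; x_{\mathrm{ref}} \circ \varphi\right),
```

Alternating between the φ-subproblem (spatial alignment) and the x-subproblem (multi-modal reconstruction), and unfolding the resulting iterations into network stages, yields the kind of interpretable architecture the abstract refers to.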

Affiliations:
- Hao Zhang: Department of Mathematics, School of Science, Shanghai University, Shanghai 200444, China.
- Qi Wang: Department of Mathematics, School of Science, Shanghai University, Shanghai 200444, China.
- Jun Shi: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China.
- Shihui Ying: Shanghai Institute of Applied Mathematics and Mechanics, Shanghai University, Shanghai 200072, China; School of Mechanics and Engineering Science, Shanghai University, Shanghai 200072, China.
- Zhijie Wen: Department of Mathematics, School of Science, Shanghai University, Shanghai 200444, China.

5. Feng Y, Deng S, Lyu J, Cai J, Wei M, Qin J. Bridging MRI Cross-Modality Synthesis and Multi-Contrast Super-Resolution by Fine-Grained Difference Learning. IEEE Trans Med Imaging 2025;44:373-383. PMID: 39159018. DOI: 10.1109/tmi.2024.3445969.
Abstract
In multi-modal magnetic resonance imaging (MRI), the tasks of imputing or reconstructing the target modality share a common obstacle: the accurate modeling of fine-grained inter-modal differences, which has been sparingly addressed in current literature. These differences stem from two sources: 1) spatial misalignment remaining after coarse registration and 2) structural distinction arising from modality-specific signal manifestations. This paper integrates the previously separate research trajectories of cross-modality synthesis (CMS) and multi-contrast super-resolution (MCSR) to address this pervasive challenge within a unified framework. Connected through generalized down-sampling ratios, this unification not only emphasizes their common goal in reducing structural differences, but also identifies the key task distinguishing MCSR from CMS: modeling the structural distinctions using the limited information from the misaligned target input. Specifically, we propose a composite network architecture with several key components: a label correction module to align the coordinates of multi-modal training pairs, a CMS module serving as the base model, an SR branch to handle target inputs, and a difference projection discriminator for structural distinction-centered adversarial training. When training the SR branch as the generator, the adversarial learning is enhanced with distinction-aware incremental modulation to ensure better-controlled generation. Moreover, the SR branch integrates deformable convolutions to address cross-modal spatial misalignment at the feature level. Experiments conducted on three public datasets demonstrate that our approach effectively balances structural accuracy and realism, exhibiting overall superiority in comprehensive evaluations for both tasks over current state-of-the-art approaches. The code is available at https://github.com/papshare/FGDL.
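
Since the SR branch is said to use deformable convolutions to absorb cross-modal misalignment at the feature level, here is a minimal, hedged example of that general mechanism built on torchvision's DeformConv2d; the offset-prediction layer, channel sizes and class name are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableAlign(nn.Module):
    """Predict sampling offsets from concatenated features, then deformably convolve."""
    def __init__(self, channels: int = 32, k: int = 3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position.
        self.offset_pred = nn.Conv2d(2 * channels, 2 * k * k, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=k, padding=k // 2)

    def forward(self, feat_target: torch.Tensor, feat_ref: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(torch.cat([feat_target, feat_ref], dim=1))
        return self.deform(feat_ref, offsets)   # reference features warped toward the target

x_t, x_r = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
aligned = DeformableAlign()(x_t, x_r)           # -> (1, 32, 64, 64)
```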

6. Zhang H, Ma Q, Qiu Y, Lai Z. ACGRHA-Net: Accelerated multi-contrast MR imaging with adjacency complementary graph assisted residual hybrid attention network. Neuroimage 2024;303:120921. PMID: 39521395. DOI: 10.1016/j.neuroimage.2024.120921.
Abstract
Multi-contrast magnetic resonance (MR) imaging is an advanced technology used in medical diagnosis, but its long acquisition process can lead to patient discomfort and limit its broader application. Shortening acquisition time by undersampling k-space data introduces noticeable aliasing artifacts. To address this, we propose a method that reconstructs multi-contrast MR images from zero-filled data by utilizing a fully-sampled auxiliary-contrast MR image as a prior to learn an adjacency complementary graph. This graph is then combined with a residual hybrid attention network, forming the adjacency complementary graph assisted residual hybrid attention network (ACGRHA-Net) for multi-contrast MR image reconstruction. Specifically, the optimal structural similarity is represented by a graph learned from the fully-sampled auxiliary image, where the node features and adjacency matrices are designed to precisely capture structural information among different contrast images. This structural similarity enables effective fusion with the target image, improving detail reconstruction. Additionally, a residual hybrid attention module is designed in parallel with the graph convolution network, allowing it to effectively capture key features and adaptively emphasize these important features in the target-contrast MR images. This strategy prioritizes crucial information while preserving shallow features, thereby achieving comprehensive feature fusion at deeper levels to enhance multi-contrast MR image reconstruction. Extensive experiments on different datasets, using various sampling patterns and acceleration factors, demonstrate that the proposed method outperforms current state-of-the-art reconstruction methods.
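
To picture the graph-assisted idea, the sketch below builds a normalized adjacency matrix from node features of a fully-sampled auxiliary image and applies a single graph-convolution step to target-image features. The cosine-similarity adjacency, the normalization and the one-layer design are illustrative assumptions, not the ACGRHA-Net definition.

```python
import torch

def graph_conv_step(aux_feat: torch.Tensor, tgt_feat: torch.Tensor, weight: torch.Tensor):
    """aux_feat, tgt_feat: (num_nodes, feat_dim); weight: (feat_dim, out_dim)."""
    # Adjacency from cosine similarity of auxiliary-image node features.
    a = torch.nn.functional.normalize(aux_feat, dim=1)
    adj = torch.relu(a @ a.t())                      # keep only positive similarities
    adj = adj + torch.eye(adj.shape[0])              # add self-loops
    deg_inv_sqrt = adj.sum(dim=1).rsqrt()            # D^{-1/2}
    adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
    # One GCN propagation: D^{-1/2} A D^{-1/2} X W
    return torch.relu(adj_norm @ tgt_feat @ weight)

aux = torch.randn(64, 32)     # 64 nodes (e.g. patches), 32-dim features from the auxiliary contrast
tgt = torch.randn(64, 32)     # matching node features from the target contrast
w = torch.randn(32, 32)
out = graph_conv_step(aux, tgt, w)   # -> (64, 32)
```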

Affiliations:
- Haotian Zhang: School of Ocean Information Engineering, Jimei University, Xiamen, China.
- Qiaoyu Ma: School of Ocean Information Engineering, Jimei University, Xiamen, China.
- Yiran Qiu: School of Ocean Information Engineering, Jimei University, Xiamen, China.
- Zongying Lai: School of Ocean Information Engineering, Jimei University, Xiamen, China.

7. Kim S, Park H, Park SH. A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett 2024;14:1221-1242. PMID: 39465106. PMCID: PMC11502678. DOI: 10.1007/s13534-024-00425-9.
Abstract
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time for MRI. Acceleration can be achieved by acquiring fewer data points in k-space, which results in various artifacts in the image domain. Conventional reconstruction methods have resolved the artifacts by utilizing multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration. Advances in hardware and the development of specialized network architectures have produced such achievements. Besides, MRI signals contain various kinds of redundant information, including multi-coil redundancy, multi-contrast redundancy, and spatiotemporal redundancy. Utilization of this redundant information combined with deep learning approaches allows not only higher acceleration, but also well-preserved details in the reconstructed images. Consequently, this review paper introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit various redundancies. Lastly, the paper concludes by discussing the challenges, limitations, and potential directions of future developments.

Affiliations:
- Seonghyuk Kim: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea.
- HyunWook Park: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea.
- Sung-Hong Park: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea.

8. Wang Q, Wen Z, Shi J, Wang Q, Shen D, Ying S. Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction. IEEE Trans Med Imaging 2024;43:3924-3935. PMID: 38805327. DOI: 10.1109/tmi.2024.3406559.
Abstract
Multi-modal magnetic resonance imaging (MRI) plays a crucial role in comprehensive disease diagnosis in clinical medicine. However, acquiring certain modalities, such as T2-weighted images (T2WIs), is time-consuming and prone to motion artifacts, which negatively impacts subsequent multi-modal image analysis. To address this issue, we propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WI acquisition. While image pre-processing is capable of mitigating misalignment, improper parameter selection leads to adverse pre-processing effects, requiring iterative experimentation and adjustment. To overcome this shortcoming, we employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis, effectively mitigating spatial misalignment effects. Furthermore, we adopt an alternating iteration framework between the reconstruction task and the cross-modal synthesis task to optimize the final results. We then prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as iterations increase, and further illustrate that the improved reconstruction result enhances the synthesis process, whereas the enhanced synthesis result improves the reconstruction process. Finally, experimental results from FastMRI and internal datasets confirm the effectiveness of our method, demonstrating significant improvements in image reconstruction quality even at low sampling rates.

9. Chen X, Ma L, Ying S, Shen D, Zeng T. FEFA: Frequency Enhanced Multi-Modal MRI Reconstruction With Deep Feature Alignment. IEEE J Biomed Health Inform 2024;28:6751-6763. PMID: 39042545. DOI: 10.1109/jbhi.2024.3432139.
Abstract
Integrating complementary information from multiple magnetic resonance imaging (MRI) modalities is often necessary to make accurate and reliable diagnostic decisions. However, the different acquisition speeds of these modalities mean that obtaining information can be time-consuming and require significant effort. Reference-based MRI reconstruction aims to accelerate slower, under-sampled imaging modalities, such as the T2 modality, by utilizing redundant information from faster, fully-sampled modalities, such as the T1 modality. Unfortunately, spatial misalignment between different modalities often negatively impacts the final results. To address this issue, we propose FEFA, which consists of cascading FEFA blocks. The FEFA block first aligns and fuses the two modalities at the feature level. The combined features are then filtered in the frequency domain to enhance the important features while simultaneously suppressing the less essential ones, thereby ensuring accurate reconstruction. Furthermore, we emphasize the advantages of combining the reconstruction results from multiple cascaded blocks, which also contributes to stabilizing the training process. Compared to existing registration-then-reconstruction and cross-attention-based approaches, our method is end-to-end trainable without requiring additional supervision, extensive parameters, or heavy computation. Experiments on the public fastMRI, IXI and in-house datasets demonstrate that our approach is effective across various under-sampling patterns and ratios.
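
The frequency-domain filtering step described above can be imitated with a learnable mask applied to the 2D FFT of the fused feature maps, as in the hedged sketch below; the mask parameterization and the shapes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FrequencyFilter(nn.Module):
    """Filter feature maps in the frequency domain with a learnable per-channel mask."""
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Real-valued gate in [0, 1] applied to the spectrum of each channel.
        self.logits = nn.Parameter(torch.zeros(channels, height, width))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:    # feat: (B, C, H, W)
        spec = torch.fft.fft2(feat, norm="ortho")
        spec = spec * torch.sigmoid(self.logits)               # emphasize / suppress frequencies
        return torch.fft.ifft2(spec, norm="ortho").real

fused = torch.randn(1, 32, 64, 64)         # fused target + reference features
out = FrequencyFilter(32, 64, 64)(fused)   # -> (1, 32, 64, 64)
```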

10. Zhou X, Zhang Z, Du H, Qiu B. MLMFNet: A multi-level modality fusion network for multi-modal accelerated MRI reconstruction. Magn Reson Imaging 2024;111:246-255. PMID: 38663831. DOI: 10.1016/j.mri.2024.04.028.
Abstract
Magnetic resonance imaging produces detailed anatomical and physiological images of the human body that can be used in the clinical diagnosis and treatment of diseases. However, MRI suffers from a comparatively longer acquisition time than other imaging methods and is thus vulnerable to motion artifacts, which can ultimately lead to failed or even incorrect diagnoses. In order to perform faster reconstruction, deep learning-based methods, along with traditional strategies such as parallel imaging and compressed sensing, have come into play in this field in recent years. Meanwhile, in order to better analyze diseases, it is also often necessary to acquire images of the same region of interest under different modalities, which yields images with different contrast levels. However, most of the aforementioned methods tend to use single-modal images for reconstruction, neglecting the correlation and redundancy embedded in MR images acquired with different modalities. While there are works on multi-modal reconstruction, this information is yet to be efficiently exploited. In this paper, we propose an end-to-end neural network called MLMFNet, which helps the reconstruction of the target modality by using information from the auxiliary modality across feature channels and layers. Specifically, this is highlighted by three components: (I) an encoder based on UNet with a single-stream strategy that fuses auxiliary and target modalities; (II) a decoder that attends to multi-level features from all layers of the encoder; and (III) a channel attention module. Quantitative and qualitative analyses are performed on a public brain dataset and a knee dataset, showing that the proposed method achieves satisfying results in MRI reconstruction within the multi-modal context and demonstrating its effectiveness and potential for use in clinical practice.
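
Channel attention of the squeeze-and-excitation flavor is one common way to realize the third component; the sketch below shows that generic pattern rather than MLMFNet's exact module, and the reduction ratio is chosen arbitrarily.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                   # squeeze: global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                           # re-weight channels

feat = torch.randn(2, 64, 32, 32)
out = ChannelAttention(64)(feat)    # -> (2, 64, 32, 32)
```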

Affiliations:
- Xiuyun Zhou: Biomedical Engineering Center, University of Science and Technology of China, Hefei, Anhui 230026, China.
- Zhenxi Zhang: Biomedical Engineering Center, University of Science and Technology of China, Hefei, Anhui 230026, China.
- Hongwei Du: Biomedical Engineering Center, University of Science and Technology of China, Hefei, Anhui 230026, China.
- Bensheng Qiu: Biomedical Engineering Center, University of Science and Technology of China, Hefei, Anhui 230026, China.

11. Sun Y, Liu X, Liu Y, Jin R, Pang Y. DIRECTION: Deep cascaded reconstruction residual-based feature modulation network for fast MRI reconstruction. Magn Reson Imaging 2024;111:157-167. PMID: 38642780. DOI: 10.1016/j.mri.2024.04.023.
Abstract
Deep cascaded networks have been extensively studied and applied to accelerate Magnetic Resonance Imaging (MRI) and have shown promising results. Most existing works employ a large cascading number for the sake of superior performance. However, due to the lack of proper guidance, the reconstruction performance can easily reach a plateau and even degrade if the cascading number is simply increased. In this paper, we aim to boost reconstruction performance from a novel perspective by proposing a parallel architecture called DIRECTION that fully exploits the guiding value of the reconstruction residual of each subnetwork. Specifically, we introduce a novel Reconstruction Residual-Based Feature Modulation Mechanism (RRFMM) which utilizes the reconstruction residual of the previous subnetwork to guide the next subnetwork at the feature level. To achieve this, a Residual Attention Modulation Block (RAMB) is proposed to generate attention maps using multi-scale residual features to modulate the image features of the corresponding scales. Equipped with this strategy, each subnetwork within the cascaded network possesses its own optimization objective and emphasis rather than blindly updating its parameters. To further boost performance, we introduce the Cross-Stage Feature Reuse Connection (CSFRC) and the Reconstruction Dense Connection (RDC), which reduce information loss and enhance representative ability. We conduct extensive experiments and evaluate our method on the fastMRI knee dataset using multiple subsampling masks. Comprehensive experimental results show that our method can markedly boost the performance of cascaded networks and significantly outperforms other state-of-the-art methods quantitatively and qualitatively.
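
The residual-guided modulation can be pictured as computing an attention map from the previous stage's reconstruction residual and using it to re-weight the current stage's features. The sketch below is such a generic mechanism with assumed shapes, not the published RAMB.

```python
import torch
import torch.nn as nn

class ResidualModulation(nn.Module):
    """Modulate image features with an attention map derived from a reconstruction residual."""
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.to_attn = nn.Sequential(
            nn.Conv2d(1, feat_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
        # residual: magnitude of the previous stage's reconstruction error, shape (B, 1, H, W)
        attn = self.to_attn(residual)
        return feat * attn + feat          # emphasize regions the previous stage got wrong

feat = torch.randn(1, 32, 64, 64)
residual = torch.rand(1, 1, 64, 64)
out = ResidualModulation()(feat, residual)   # -> (1, 32, 64, 64)
```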

Affiliations:
- Yong Sun: TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin 300072, China.
- Xiaohan Liu: TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin 300072, China.
- Yiming Liu: TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin 300072, China; Tiandatz Technology, Tianjin 301723, China.
- Ruiqi Jin: TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin 300072, China.
- Yanwei Pang: TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin 300072, China.

12. Lei P, Hu L, Fang F, Zhang G. Joint Under-Sampling Pattern and Dual-Domain Reconstruction for Accelerating Multi-Contrast MRI. IEEE Trans Image Process 2024;33:4686-4701. PMID: 39178087. DOI: 10.1109/tip.2024.3445729.
Abstract
Multi-Contrast Magnetic Resonance Imaging (MCMRI) utilizes a reference image with a short acquisition time to facilitate the reconstruction of a target image with a long acquisition time, providing a new solution for fast MRI. Although various methods have been proposed, they still have certain limitations: 1) existing methods featuring preset under-sampling patterns give rise to redundancy between multi-contrast images and limit model performance; 2) most methods focus on information in the image domain, while prior knowledge in the k-space domain has not been fully explored; and 3) most networks are manually designed and lack physical interpretability. To address these issues, we propose a joint optimization of the under-sampling pattern and a deep-unfolding dual-domain network for accelerating MCMRI. Firstly, to reduce redundant information and sample more contrast-specific information, we propose a new framework to learn the optimal under-sampling pattern for MCMRI. Secondly, a dual-domain model is established to reconstruct the target image in both the image domain and the k-space frequency domain. The model in the image domain introduces a spatial transformation to explicitly model the inconsistent and unaligned structures of MCMRI. The model in k-space learns prior knowledge from the frequency domain, enabling the model to capture more global information from the input images. Thirdly, we employ the proximal gradient algorithm to optimize the proposed model and then unfold the iterative results into a deep-unfolding network, called MC-DuDoN. We evaluate the proposed MC-DuDoN on MCMRI super-resolution and reconstruction tasks. Experimental results give credence to the superiority of the proposed model. In particular, since our approach explicitly models the inconsistent structures, it shows robustness on spatially misaligned MCMRI. In the reconstruction task, compared with conventional masks, the learned mask restores more realistic images, even under an ultra-high acceleration ratio (×30). Code is available at https://github.com/lpcccc-cv/MC-DuDoNet.
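
A deep-unfolding dual-domain reconstruction of this kind is typically derived from a proximal-gradient iteration carried out jointly in the image and k-space domains; the generic update below is given for context, with learned proximal operators Λ_I and Λ_K standing in for the network modules (the notation is assumed, not taken from MC-DuDoN).

```latex
x^{(k+1)} = \Lambda_I\!\left( x^{(k)} - \eta\, F^{H} M^{\top}\!\left(M F x^{(k)} - y\right) \right),
\qquad
\hat{k}^{(k+1)} = \Lambda_K\!\left( F\,x^{(k+1)} \right),
```

The final image is obtained by fusing the image-domain estimate with the k-space estimate, and the sampling mask M itself can be relaxed and optimized jointly with the unrolled network.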

13. Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024;92:496-518. PMID: 38624162. DOI: 10.1002/mrm.30105.
Abstract
Deep learning (DL) has emerged as a leading approach in accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions, which include learning neural networks and addressing different imaging application scenarios. The traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and finally to unsupervised learning methods, are also discussed. In addition, MR vendors' choices of DL reconstruction are presented, along with discussions of open questions and future directions, which are critical for reliable imaging systems.

Affiliations:
- Shanshan Wang: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Ruoyou Wu: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Sen Jia: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Alou Diakite: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China.
- Cheng Li: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Qiegen Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang, China.
- Hairong Zheng: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Leslie Ying: Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA.

14. Cheng H, Hou X, Huang G, Jia S, Yang G, Nie S. Feature Fusion for Multi-Coil Compressed MR Image Reconstruction. J Imaging Inform Med 2024;37:1969-1979. PMID: 38459398. PMCID: PMC11300769. DOI: 10.1007/s10278-024-01057-2.
Abstract
Magnetic resonance imaging (MRI) occupies a pivotal position among contemporary diagnostic imaging modalities, offering non-invasive and radiation-free scanning. Despite its significance, MRI's principal limitation is the protracted data acquisition time, which hampers broader practical application. Promising deep learning (DL) methods for undersampled magnetic resonance (MR) image reconstruction outperform traditional approaches in terms of speed and image quality. However, intricate inter-coil correlations have been insufficiently addressed, leading to underexploitation of the rich information inherent in multi-coil acquisitions. In this article, we propose a method called "Multi-coil Feature Fusion Variation Network" (MFFVN), which introduces an encoder to extract features from the multi-coil MR image directly and explicitly, followed by a feature fusion operation. Coil reshaping enables the 2D network to achieve satisfactory reconstruction results, while avoiding the introduction of a significant number of parameters and preserving inter-coil information. Compared with VN, MFFVN yields an improvement in the average PSNR and SSIM of the test set of 0.2622 dB and 0.0021, respectively. This uplift can be attributed to the integration of feature extraction and fusion stages into the network's architecture, thereby effectively leveraging and combining multi-coil information for enhanced image reconstruction quality. The proposed method outperforms state-of-the-art methods on the fastMRI multi-coil brain dataset under a fourfold acceleration factor without incurring substantial computational overhead.

Affiliations:
- Hang Cheng: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China.
- Xuewen Hou: Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201807, China.
- Gang Huang: Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China.
- Shouqiang Jia: Department of Radiology, Jinan People's Hospital Affiliated to Shandong First Medical University, Jinan, Shandong 271199, China.
- Guang Yang: Shanghai Key Laboratory of Magnetic Resonance, Department of Physics, East China Normal University, Shanghai 200062, China.
- Shengdong Nie: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China.

15. Zhang D, Han Q, Xiong Y, Du H. Multi-modal straight flow matching for accelerated MR imaging. Comput Biol Med 2024;178:108668. PMID: 38870720. DOI: 10.1016/j.compbiomed.2024.108668.
Abstract
Diffusion models have recently garnered great interest in Magnetic Resonance (MR) image reconstruction. A key component of generating high-quality samples from noise is iterative denoising over thousands of steps. However, the complexity of inference has limited their application. To obtain high-quality reconstructed images with fewer inference steps and lower computational complexity, we introduce a novel straight flow matching approach based on a neural ordinary differential equation (ODE) generative model. Our model creates a linear path between undersampled images and reconstructed images, which can be accurately simulated with a few Euler steps. Furthermore, we propose a multi-modal straight flow matching model, which uses relatively easily available modalities as supplementary information to guide the reconstruction of target modalities. We introduce a low-frequency fusion layer and a high-frequency fusion layer into our multi-modal model, which have been shown to produce promising results in fusion tasks. The proposed multi-modal straight flow matching (MMSflow) achieves state-of-the-art performance in reconstruction tasks on fastMRI and BraTS-2020 and improves sampling speed by an order of magnitude compared with other methods based on stochastic differential equations (SDEs).
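
The few-step inference mentioned above amounts to integrating a learned velocity field along a (nearly) straight path from the zero-filled input to the reconstruction with a handful of Euler steps; the sketch below shows that generic ODE solve, with the velocity network treated as a black box and every name assumed for illustration.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def euler_reconstruct(velocity_net: nn.Module, x_zero_filled: torch.Tensor, steps: int = 5):
    """Integrate dx/dt = v(x, t) from t=0 (undersampled image) to t=1 (reconstruction)."""
    x = x_zero_filled.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * velocity_net(x, t)     # one Euler step along the learned flow
    return x

# Toy velocity network standing in for a trained model.
class ToyVelocity(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 1, kernel_size=3, padding=1)
    def forward(self, x, t):
        return self.net(x)                  # ignores t, purely for shape checking

recon = euler_reconstruct(ToyVelocity(), torch.randn(1, 1, 64, 64))
```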

Affiliations:
- Daikun Zhang: University of Science and Technology of China, Hefei, Anhui 230026, China.
- Qiuyi Han: University of Science and Technology of China, Hefei, Anhui 230026, China.
- Yuzhu Xiong: University of Science and Technology of China, Hefei, Anhui 230026, China.
- Hongwei Du: University of Science and Technology of China, Hefei, Anhui 230026, China.

16. Li B, Hu W, Feng CM, Li Y, Liu Z, Xu Y. Multi-Contrast Complementary Learning for Accelerated MR Imaging. IEEE J Biomed Health Inform 2024;28:1436-1447. PMID: 38157466. DOI: 10.1109/jbhi.2023.3348328.
Abstract
Thanks to its powerful ability to depict high-resolution anatomical information, magnetic resonance imaging (MRI) has become an essential non-invasive scanning technique in clinical practice. However, excessive acquisition time often leads to the degradation of image quality and psychological discomfort among subjects, hindering its further popularization. Besides reconstructing images from the undersampled protocol itself, multi-contrast MRI protocols bring promising solutions by leveraging additional morphological priors for the target modality. Nevertheless, previous multi-contrast techniques mainly adopt a simple fusion mechanism that inevitably ignores valuable knowledge. In this work, we propose a novel multi-contrast complementary information aggregation network named MCCA, aiming to exploit available complementary representations fully to reconstruct the undersampled modality. Specifically, a multi-scale feature fusion mechanism has been introduced to incorporate complementary-transferable knowledge into the target modality. Moreover, a hybrid convolution transformer block was developed to extract global-local context dependencies simultaneously, which combines the advantages of CNNs while maintaining the merits of Transformers. Compared to existing MRI reconstruction methods, the proposed method has demonstrated its superiority through extensive experiments on different datasets under different acceleration factors and undersampling patterns.

17. Li H, Jia Y, Zhu H, Han B, Du J, Liu Y. Multi-level feature extraction and reconstruction for 3D MRI image super-resolution. Comput Biol Med 2024;171:108151. PMID: 38387383. DOI: 10.1016/j.compbiomed.2024.108151.
Abstract
Magnetic resonance imaging (MRI) is an essential radiology technique in clinical diagnosis, but its spatial resolution may not suffice to meet the growing need for precise diagnosis due to hardware limitations and large slice thickness. Therefore, it is crucial to explore suitable methods to increase the resolution of MRI images. Recently, deep learning has yielded many impressive results in MRI image super-resolution (SR) reconstruction. However, current SR networks mainly use convolutions to extract a relatively limited range of image features, which may not be optimal for further enhancing the quality of image reconstruction. In this work, we propose a multi-level feature extraction and reconstruction (MFER) method to restore the degraded high-resolution details of MRI images. Specifically, to comprehensively extract different types of features, we design a triple-mixed convolution by leveraging the strengths and uniqueness of different filter operations. For the features of each level, we then apply deconvolutions to upsample them separately at the tail of the network, followed by feature calibration with spatial and channel attention. In addition, we use a soft cross-scale residual operation to improve the effectiveness of parameter optimization. Experiments on lesion-free and glioma datasets indicate that our method obtains superior quantitative performance and visual effects when compared with state-of-the-art MRI image SR methods.

Affiliations:
- Hongbi Li: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China.
- Yuanyuan Jia: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China.
- Huazheng Zhu: College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China.
- Baoru Han: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China.
- Jinglong Du: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China.
- Yanbing Liu: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China; Chongqing Municipal Education Commission, Chongqing 400020, China.

18. Zhou Y, Wang H, Liu C, Liao B, Li Y, Zhu Y, Hu Z, Liao J, Liang D. Recent advances in highly accelerated 3D MRI. Phys Med Biol 2023;68:14TR01. PMID: 36863026. DOI: 10.1088/1361-6560/acc0cd.
Abstract
Three-dimensional MRI has gained increasing popularity in various clinical applications due to its improved through-plane spatial resolution, which enhances the detection of subtle abnormalities and provides valuable clinical information. However, the long data acquisition time and high computational cost pose significant challenges for 3D MRI. In this comprehensive review article, we aim to summarize the latest advancements in accelerated 3D MR techniques. Covering over 200 remarkable research studies conducted over the past 20 years, we explore the development of MR signal excitation and encoding, advancements in reconstruction algorithms, and potential clinical applications. We hope that this survey serves as a valuable resource, providing insights into the current state of the field and serving as a guide for future research in accelerated 3D MRI.

Affiliations:
- Yihang Zhou: Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, People's Republic of China; Research Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, People's Republic of China.
- Haifeng Wang: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China.
- Congcong Liu: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China.
- Binyu Liao: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, People's Republic of China.
- Ye Li: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China.
- Yanjie Zhu: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China.
- Zhangqi Hu: Department of Neurology, Shenzhen Children's Hospital, Shenzhen, Guangdong, People's Republic of China.
- Jianxiang Liao: Department of Neurology, Shenzhen Children's Hospital, Shenzhen, Guangdong, People's Republic of China.
- Dong Liang: Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, People's Republic of China; Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China.

19. Cai X, Hou X, Sun R, Chang X, Zhu H, Jia S, Nie S. Accelerating image reconstruction for multi-contrast MRI based on Y-Net3+. J Xray Sci Technol 2023:XST230012. PMID: 37248943. DOI: 10.3233/xst-230012.
Abstract
BACKGROUND: As one of the most important preoperative imaging modalities in medical diagnosis, magnetic resonance imaging (MRI) requires a long scanning time due to its imaging principle. OBJECTIVE: We propose an innovative MRI reconstruction strategy and data consistency method based on deep learning to reconstruct high-quality brain MRIs from down-sampled data and accelerate the MR imaging process. METHODS: Sixteen healthy subjects undergoing T1-weighted spin-echo (SE) and T2-weighted fast spin-echo (FSE) sequences on a 1.5 T MRI scanner were recruited. A Y-Net3+ network was used to facilitate high-quality MRI reconstruction through context information. In addition, the existing data consistency fidelity method was improved: the difference between the reconstructed k-space and the original k-space was reduced by a linear regression algorithm, so that redundant artifacts derived from under-sampling were avoided. The Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were used to quantitatively evaluate the image reconstruction performance of different down-sampling patterns. RESULTS: Compared with the classical Y-Net, the Y-Net3+ network improved the SSIM and PSNR of MRI images from 0.9164±0.0178 and 33.2216±3.2919 to 0.9387±0.0363 and 35.1785±3.3105, respectively, under compressed sensing reconstruction with an acceleration factor of 4. The improved network increases the signal-to-noise ratio and recovers more image texture in the reconstructed images. Furthermore, in the data consistency step, linear regression analysis was used to reduce the difference between the reconstructed k-space and the original k-space, so that the SSIM and PSNR increased to 0.9808±0.0081 and 40.9254±1.1911, respectively. CONCLUSIONS: The improved Y-Net combined with the data consistency fidelity method shows its potential for reconstructing high-quality T2-weighted images from down-sampled data by fully exploiting the T1-weighted information. With the advantage of avoiding down-sampling artifacts, the improved network exhibits remarkable clinical promise for fast MRI applications.
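
The data-consistency idea underlying such pipelines is simple to state in code: wherever k-space was actually acquired, keep (or regress toward) the measured values, and keep the network output elsewhere. The sketch below shows the plain replacement form plus an optional scalar least-squares calibration of the network k-space against the measured samples; the calibration detail is loosely inspired by the abstract and is an assumption, not the paper's exact procedure.

```python
import numpy as np

def data_consistency(k_recon: np.ndarray, k_acquired: np.ndarray, mask: np.ndarray,
                     calibrate: bool = True) -> np.ndarray:
    """Enforce consistency with acquired k-space samples (mask == 1 where acquired)."""
    k_out = k_recon.copy()
    if calibrate:
        # Least-squares scalar fit of reconstructed samples to acquired samples.
        a = k_recon[mask == 1]
        b = k_acquired[mask == 1]
        scale = np.vdot(a, b) / (np.vdot(a, a) + 1e-12)
        k_out = k_out * scale
    k_out[mask == 1] = k_acquired[mask == 1]    # hard replacement at sampled locations
    return k_out

# Toy usage: roughly 25% of phase-encoding lines acquired.
mask = (np.random.rand(256, 1) < 0.25).astype(int) * np.ones((1, 256), dtype=int)
k_net = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
k_meas = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
k_dc = data_consistency(k_net, k_meas, mask)
```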

Affiliations:
- Xin Cai: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China.
- Xuewen Hou: Shanghai Kangda COLORFUL Healthcare Co., Ltd, Shanghai, China.
- Rong Sun: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China.
- Xiao Chang: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China.
- Honglin Zhu: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China.
- Shouqiang Jia: Department of Imaging, Jinan People's Hospital affiliated to Shandong First Medical University, Shandong, China.
- Shengdong Nie: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China.

20. Yang J, Li XX, Liu F, Nie D, Lio P, Qi H, Shen D. Fast Multi-Contrast MRI Acquisition by Optimal Sampling of Information Complementary to Pre-Acquired MRI Contrast. IEEE Trans Med Imaging 2023;42:1363-1373. PMID: 37015608. DOI: 10.1109/tmi.2022.3227262.
Abstract
Recent studies on multi-contrast MRI reconstruction have demonstrated the potential of further accelerating MRI acquisition by exploiting correlation between contrasts. Most state-of-the-art approaches have achieved improvement through the development of network architectures for fixed under-sampling patterns, without considering inter-contrast correlation in the under-sampling pattern design. On the other hand, sampling pattern learning methods have shown better reconstruction performance than those with fixed under-sampling patterns. However, most under-sampling pattern learning algorithms are designed for single-contrast MRI without exploiting complementary information between contrasts. To this end, we propose a framework to optimize the under-sampling pattern of a target MRI contrast so that it complements the acquired, fully-sampled reference contrast. Specifically, a novel image synthesis network is introduced to extract the redundant information contained in the reference contrast, which is exploited in the subsequent joint pattern optimization and reconstruction network. We have demonstrated superior performance of our learned under-sampling patterns on both public and in-house datasets, compared to commonly used under-sampling patterns and state-of-the-art methods that jointly optimize the reconstruction network and the under-sampling patterns, at up to an 8-fold under-sampling factor.
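
Learning the under-sampling pattern jointly with the reconstruction network is usually posed as an optimization over both a relaxed (probabilistic) mask and the network weights; the generic objective below is written with assumed notation for context and is not copied from this paper.

```latex
\min_{\theta,\;p \in [0,1]^{N}} \;
\mathbb{E}_{(x,\;x_{\mathrm{ref}})}
\left\|\, f_{\theta}\!\left(M_{p} F x,\; x_{\mathrm{ref}}\right) - x \,\right\|_2^2
\quad \text{s.t.} \quad \textstyle\sum_{i} p_i \le N / R,
```

where p holds per-location sampling probabilities (binarized at test time), M_p is the corresponding mask, R the target acceleration factor, and f_θ the reconstruction network conditioned on the fully-sampled reference contrast x_ref.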

21. Guo D, Zeng G, Fu H, Wang Z, Yang Y, Qu X. A Joint Group Sparsity-based deep learning for multi-contrast MRI reconstruction. J Magn Reson 2023;346:107354. PMID: 36527935. DOI: 10.1016/j.jmr.2022.107354.
Abstract
Multi-contrast magnetic resonance imaging (MRI) can provide richer diagnostic information; however, the data acquisition time is longer than for single-contrast imaging. To reduce this time, k-space undersampling is an effective approach, but a smart reconstruction algorithm is required to remove undersampling artifacts. Traditional algorithms commonly explore the similarity of multi-contrast images through joint sparsity. However, these algorithms are time-consuming due to their iterative process and require adjusting hyperparameters manually. Recently, data-driven deep learning has successfully overcome these limitations, but the reconstruction error still needs to be further reduced. Here, we propose a Joint Group Sparsity-based Network (JGSN) for multi-contrast MRI reconstruction, which unrolls the iterative process of the joint sparsity algorithm. The designed network includes data consistency modules, learnable sparse transform modules, and joint group sparsity constraint modules. In particular, the weights of different contrasts in the transform module are shared to reduce network parameters without sacrificing the quality of reconstruction. Experiments were performed on retrospectively undersampled brain and knee data. Experimental results on in vivo brain and knee data show that our method consistently outperforms state-of-the-art methods under different sampling patterns.
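
Joint group sparsity across contrasts is commonly expressed as a mixed l2,1 penalty on transform coefficients, which the unrolled network then mimics module by module; the expression below is that standard form, stated with generic notation for context.

```latex
\min_{\{x_c\}} \; \sum_{c=1}^{C} \tfrac{1}{2}\left\| M_c F x_c - y_c \right\|_2^2
+ \lambda \sum_{g} \sqrt{\sum_{c=1}^{C} \left|\,(\Psi x_c)_g\,\right|^2 },
```

where c indexes the contrasts, Ψ is a (learnable) sparsifying transform and g indexes coefficient groups; the square-root-of-sum coupling is what encourages the contrasts to share support, and its proximal operator is a group-wise soft-thresholding that unrolls naturally into network modules.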

Affiliations:
- Di Guo: School of Computer and Information Engineering, Fujian Engineering Research Center for Medical Data Mining and Application, Xiamen University of Technology, Xiamen, China.
- Gushan Zeng: School of Computer and Information Engineering, Fujian Engineering Research Center for Medical Data Mining and Application, Xiamen University of Technology, Xiamen, China.
- Hao Fu: School of Computer and Information Engineering, Fujian Engineering Research Center for Medical Data Mining and Application, Xiamen University of Technology, Xiamen, China.
- Zi Wang: Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, China.
- Yonggui Yang: Department of Radiology, The Second Affiliated Hospital of Xiamen Medical College, Xiamen, China.
- Xiaobo Qu: Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, China.

22. Xuan K, Xiang L, Huang X, Zhang L, Liao S, Shen D, Wang Q. Multimodal MRI Reconstruction Assisted With Spatial Alignment Network. IEEE Trans Med Imaging 2022;41:2499-2509. PMID: 35363610. DOI: 10.1109/tmi.2022.3164050.
Abstract
In clinical practice, multi-modal magnetic resonance imaging (MRI) with different contrasts is usually acquired in a single study to assess different properties of the same region of interest in the human body. The whole acquisition process can be accelerated by having one or more modalities under-sampled in the k-space. Recent research has shown that, considering the redundancy between different modalities, a target MRI modality under-sampled in the k-space can be more efficiently reconstructed with a fully-sampled reference MRI modality. However, we find that the performance of the aforementioned multi-modal reconstruction can be negatively affected by subtle spatial misalignment between different modalities, which is actually common in clinical practice. In this paper, we improve the quality of multi-modal reconstruction by compensating for such spatial misalignment with a spatial alignment network. First, our spatial alignment network estimates the displacement between the fully-sampled reference and the under-sampled target images, and warps the reference image accordingly. Then, the aligned fully-sampled reference image joins the multi-modal reconstruction of the under-sampled target image. Also, considering the contrast difference between the target and reference images, we have designed a cross-modality-synthesis-based registration loss in combination with the reconstruction loss, to jointly train the spatial alignment network and the reconstruction network. The experiments on both clinical MRI and multi-coil k-space raw data demonstrate the superiority and robustness of the multi-modal MRI reconstruction empowered with our spatial alignment network. Our code is publicly available at https://github.com/woxuankai/SpatialAlignmentNetwork.
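
Warping the fully-sampled reference with an estimated displacement field is typically done with a differentiable grid sampler; the sketch below shows that step in isolation, assuming the displacement has already been predicted by some alignment network (names and normalization conventions are assumptions, not the released SpatialAlignmentNetwork code).

```python
import torch
import torch.nn.functional as F

def warp_with_displacement(ref: torch.Tensor, disp: torch.Tensor) -> torch.Tensor:
    """ref: (B, C, H, W); disp: (B, 2, H, W) displacement in pixels (dx, dy)."""
    b, _, h, w = ref.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)   # (B, H, W, 2)
    # Convert pixel displacements to normalized offsets and add to the base grid.
    norm = torch.stack((disp[:, 0] * 2 / (w - 1), disp[:, 1] * 2 / (h - 1)), dim=-1)
    grid = base + norm
    return F.grid_sample(ref, grid, mode="bilinear", padding_mode="border", align_corners=True)

ref = torch.randn(1, 1, 64, 64)            # fully-sampled reference image
disp = torch.zeros(1, 2, 64, 64)           # zero displacement -> identity warp
warped = warp_with_displacement(ref, disp)
assert torch.allclose(warped, ref, atol=1e-5)
```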
Collapse
|
23
|
Seo S, Luu HM, Choi SH, Park SH. Simultaneously optimizing sampling pattern for joint acceleration of multi-contrast MRI using model-based deep learning. Med Phys 2022; 49:5964-5980. [PMID: 35678739 DOI: 10.1002/mp.15790] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Revised: 05/03/2022] [Accepted: 05/27/2022] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Acceleration of MR imaging (MRI) is a popular research area, and usage of deep learning for acceleration has become highly widespread in the MR community. Joint acceleration of multiple-acquisition MRI was proven to be effective over a single-acquisition approach. Also, optimization in the sampling pattern demonstrated its advantage over conventional undersampling pattern. However, optimizing the sampling patterns for joint acceleration of multiple-acquisition MRI has not been investigated well. PURPOSE To develop a model-based deep learning scheme to optimize sampling patterns for a joint acceleration of multi-contrast MRI. METHODS The proposed scheme combines sampling pattern optimization and multi-contrast MRI reconstruction. It was extended from the physics-guided method of the joint model-based deep learning (J-MoDL) scheme to optimize the separate sampling pattern for each of multiple contrasts simultaneously for their joint reconstruction. Tests were performed with three contrasts of T2-weighted, FLAIR, and T1-weighted images. The proposed multi-contrast method was compared to (i) single-contrast method with sampling optimization (baseline J-MoDL), (ii) multi-contrast method without sampling optimization, and (iii) multi-contrast method with single common sampling optimization for all contrasts. The optimized sampling patterns were analyzed for sampling location overlap across contrasts. The scheme was also tested in a data-driven scenario, where the inversion between input and label was learned from the under-sampled data directly and tested on knee datasets for generalization test. RESULTS The proposed scheme demonstrated a quantitative and qualitative advantage over the single-contrast scheme with sampling pattern optimization and the multi-contrast scheme without sampling pattern optimization. Optimizing the separate sampling pattern for each of the multi-contrasts was superior to optimizing only one common sampling pattern for all contrasts. The proposed scheme showed less overlap in sampling locations than the single-contrast scheme. The main hypothesis was also held in the data-driven situation as well. The brain-trained model worked well on the knee images, demonstrating its generalizability. CONCLUSION Our study introduced an effective scheme that combines the sampling optimization and the multi-contrast acceleration. The seamless combination resulted in superior performance over the other existing methods.
Collapse
Affiliation(s)
- Sunghun Seo
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| | - Huan Minh Luu
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| | - Seung Hong Choi
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Sung-Hong Park
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| |
Collapse
|
24
|
Kim KH, Seo S, Do WJ, Luu HM, Park SH. Varying undersampling directions for accelerating multiple acquisition magnetic resonance imaging. NMR IN BIOMEDICINE 2022; 35:e4572. [PMID: 34114253 DOI: 10.1002/nbm.4572] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Revised: 05/27/2021] [Accepted: 05/27/2021] [Indexed: 06/12/2023]
Abstract
In this study, we propose a new sampling strategy for efficiently accelerating multiple acquisition MRI. The new sampling strategy is to obtain data along different phase-encoding directions across multiple acquisitions. The proposed sampling strategy was evaluated in multicontrast MR imaging (T1, T2, proton density) and multiple phase-cycled (PC) balanced steady-state free precession (bSSFP) imaging by using convolutional neural networks with central and random sampling patterns. In vivo MRI acquisitions as well as a public database were used to test the concept. Based on both visual inspection and quantitative analysis, the proposed sampling strategy showed better performance than sampling along the same phase-encoding direction in both multicontrast MR imaging and multiple PC-bSSFP imaging, regardless of sampling pattern (central, random) or datasets (public, retrospective and prospective in vivo). For the prospective in vivo applications, acceleration was performed by sampling along different phase-encoding directions at the time of acquisition with a conventional rectangular field of view, which demonstrated the advantage of the proposed sampling strategy in the real environment. Preliminary trials on compressed sensing (CS) also demonstrated improvement of CS with the proposed idea. Sampling along different phase-encoding directions across multiple acquisitions is advantageous for accelerating multiacquisition MRI, irrespective of sampling pattern or datasets, with further improvement through transfer learning.
Collapse
Affiliation(s)
- Ki Hwan Kim
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
| | - Sunghun Seo
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
| | - Won-Joon Do
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
| | - Huan Minh Luu
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
| | - Sung-Hong Park
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
| |
Collapse
|
25
|
Gong K, Han PK, El Fakhri G, Ma C, Li Q. Arterial spin labeling MR image denoising and reconstruction using unsupervised deep learning. NMR IN BIOMEDICINE 2022; 35:e4224. [PMID: 31865615 PMCID: PMC7306418 DOI: 10.1002/nbm.4224] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Revised: 10/21/2019] [Accepted: 10/22/2019] [Indexed: 05/07/2023]
Abstract
Arterial spin labeling (ASL) imaging is a powerful magnetic resonance imaging technique that allows to quantitatively measure blood perfusion non-invasively, which has great potential for assessing tissue viability in various clinical settings. However, the clinical applications of ASL are currently limited by its low signal-to-noise ratio (SNR), limited spatial resolution, and long imaging time. In this work, we propose an unsupervised deep learning-based image denoising and reconstruction framework to improve the SNR and accelerate the imaging speed of high resolution ASL imaging. The unique feature of the proposed framework is that it does not require any prior training pairs but only the subject's own anatomical prior, such as T1-weighted images, as network input. The neural network was trained from scratch in the denoising or reconstruction process, with noisy images or sparely sampled k-space data as training labels. Performance of the proposed method was evaluated using in vivo experiment data obtained from 3 healthy subjects on a 3T MR scanner, using ASL images acquired with 44-min acquisition time as the ground truth. Both qualitative and quantitative analyses demonstrate the superior performance of the proposed txtc framework over the reference methods. In summary, our proposed unsupervised deep learning-based denoising and reconstruction framework can improve the image quality and accelerate the imaging speed of ASL imaging.
Collapse
Affiliation(s)
| | | | | | - Chao Ma
- Correspondence Chao Ma and Quanzheng Li, Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA, ,
| | - Quanzheng Li
- Correspondence Chao Ma and Quanzheng Li, Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA, ,
| |
Collapse
|
26
|
Chi N, Wang X, Yu Y, Wu M, Yu J. Neuronal Apoptosis in Patients with Liver Cirrhosis and Neuronal Epileptiform Discharge Model Based upon Multi-Modal Fusion Deep Learning. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:2203737. [PMID: 35340253 PMCID: PMC8947874 DOI: 10.1155/2022/2203737] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 02/08/2022] [Accepted: 02/10/2022] [Indexed: 12/04/2022]
Abstract
Neurons refer to nerve cells. Each neuron is connected with thousands of other neurons to form a corresponding functional area and carry out complex communication with other functional areas. Its importance to the human body is self-evident. There are also many scholars studying the mechanism of apoptosis. This paper proposes a study of neuronal apoptosis in patients with liver cirrhosis and neuronal epileptiform discharge models based on multi-modal fusion deep learning, aiming to study the influencing factors of abnormal neuronal discharge in the brain. The method in this paper is to study multi-modal information fusion methods, perform Bayesian inference, and analyze multi-modal medical data. The function of these research methods is to obtain the relationship between the independence of information and the intersection of information among modalities. In the neuronal epileptiform discharge model, the mRNA expression level of the necroptotic signaling pathway related protein was detected, and the mechanism of neuronal necrosis in patients with liver cirrhosis was explored. Experiments show that the neuron recognition rate has been increased from 67.2% to 84.5%, and the time has been reduced, proving the effectiveness of deep learning.
Collapse
Affiliation(s)
- Nannan Chi
- Digestive Department, the First Affiliated Hospital of Jiamusi University, Jiamusi 154000, Heilongjiang, China
| | - Xiuping Wang
- Department of Neurology, the First Affiliated Hospital of Jiamusi University, Jiamusi 154000, Heilongjiang, China
| | - Yun Yu
- 3 Medical Education Department, the First Affiliated Hospital of Jiamusi University, Jiamusi 154000, Heilongjiang, China
| | - Manman Wu
- Graduate Department, Jiamusi University, Jiamusi 154000, Heilongjiang, China
| | - Jianan Yu
- Department of Neurology, the First Affiliated Hospital of Jiamusi University, Jiamusi 154000, Heilongjiang, China
| |
Collapse
|
27
|
Kong W, Miao Q, Liu R, Lei Y, Cui J, Xie Q. Multimodal medical image fusion using gradient domain guided filter random walk and side window filtering in framelet domain. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.11.033] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
28
|
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications. ELECTRONICS 2022. [DOI: 10.3390/electronics11040586] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent works using deep learning method to solve CS problem for images or medical imaging reconstruction including computed tomography (CT), magnetic resonance imaging (MRI) and positron-emission tomography (PET). We propose a novel framework to unify traditional iterative algorithms and deep learning approaches. In short, we define two projection operators toward image prior and data consistency, respectively, and any reconstruction algorithm can be decomposed to the two parts. Though deep learning methods can be divided into several categories, they all satisfies the framework. We built the relationship between different reconstruction methods of deep learning, and connect them to traditional methods through the proposed framework. It also indicates that the key to solve CS problem and its medical applications is how to depict the image prior. Based on the framework, we analyze the current deep learning methods and point out some important directions of research in the future.
Collapse
|
29
|
|
30
|
Wei H, Li Z, Wang S, Li R. Undersampled Multi-contrast MRI Reconstruction Based on Double-domain Generative Adversarial Network. IEEE J Biomed Health Inform 2022; 26:4371-4377. [PMID: 35030086 DOI: 10.1109/jbhi.2022.3143104] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Multi-contrast magnetic resonance imaging can provide comprehensive information for clinical diagnosis. However, multi-contrast imaging suffers from long acquisition time, which makes it inhibitive for daily clinical practice. Subsampling k-space is one of the main methods to speed up scan time. Missing k-space samples will lead to inevitable serious artifacts and noise. Considering the assumption that different contrast modalities share some mutual information, it may be possible to exploit this redundancy to accelerate multi-contrast imaging acquisition. Recently, generative adversarial network shows superior performance in image reconstruction and synthesis. Some studies based on k-space reconstruction also exhibit superior performance over conventional state-of-art method. In this study, we propose a cross-domain two-stage generative adversarial network for multi-contrast images reconstruction based on prior full-sampled contrast and undersampled information. The new approach integrates reconstruction and synthesis, which estimates and completes the missing k-space and then refines in image space. It takes one fully-sampled contrast modality data and highly undersampled data from several other modalities as input, and outputs high quality images for each contrast simultaneously. The network is trained and tested on a public brain dataset from healthy subjects. Quantitative comparisons against baseline clearly indicate that the proposed method can effectively reconstruct undersampled images. Even under high acceleration, the network still can recover texture details and reduce artifacts.
Collapse
|
31
|
Accelerate gas diffusion-weighted MRI for lung morphometry with deep learning. Eur Radiol 2022; 32:702-713. [PMID: 34255160 PMCID: PMC8276538 DOI: 10.1007/s00330-021-08126-y] [Citation(s) in RCA: 56] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Revised: 04/14/2021] [Accepted: 06/08/2021] [Indexed: 02/07/2023]
Abstract
OBJECTIVES Multiple b-value gas diffusion-weighted MRI (DW-MRI) enables non-invasive and quantitative assessment of lung morphometry, but its long acquisition time is not well-tolerated by patients. We aimed to accelerate multiple b-value gas DW-MRI for lung morphometry using deep learning. METHODS A deep cascade of residual dense network (DC-RDN) was developed to reconstruct high-quality DW images from highly undersampled k-space data. Hyperpolarized 129Xe lung ventilation images were acquired from 101 participants and were retrospectively collected to generate synthetic DW-MRI data to train the DC-RDN. Afterwards, the performance of the DC-RDN was evaluated on retrospectively and prospectively undersampled multiple b-value 129Xe MRI datasets. RESULTS Each slice with size of 64 × 64 × 5 could be reconstructed within 7.2 ms. For the retrospective test data, the DC-RDN showed significant improvement on all quantitative metrics compared with the conventional reconstruction methods (p < 0.05). The apparent diffusion coefficient (ADC) and morphometry parameters were not significantly different between the fully sampled and DC-RDN reconstructed images (p > 0.05). For the prospectively accelerated acquisition, the required breath-holding time was reduced from 17.8 to 4.7 s with an acceleration factor of 4. Meanwhile, the prospectively reconstructed results showed good agreement with the fully sampled images, with a mean difference of -0.72% and -0.74% regarding global mean ADC and mean linear intercept (Lm) values. CONCLUSIONS DC-RDN is effective in accelerating multiple b-value gas DW-MRI while maintaining accurate estimation of lung microstructural morphometry, facilitating the clinical potential of studying lung diseases with hyperpolarized DW-MRI. KEY POINTS • The deep cascade of residual dense network allowed fast and high-quality reconstruction of multiple b-value gas diffusion-weighted MRI at an acceleration factor of 4. • The apparent diffusion coefficient and morphometry parameters were not significantly different between the fully sampled images and the reconstructed results (p > 0.05). • The required breath-holding time was reduced from 17.8 to 4.7 s and each slice with size of 64 × 64 × 5 could be reconstructed within 7.2 ms.
Collapse
|
32
|
Kim YJ, Lee SR, Choi JY, Kim KG. Using Convolutional Neural Network with Taguchi Parametric Optimization for Knee Segmentation from X-Ray Images. BIOMED RESEARCH INTERNATIONAL 2021; 2021:5521009. [PMID: 34476259 PMCID: PMC8408001 DOI: 10.1155/2021/5521009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/13/2021] [Revised: 05/15/2021] [Accepted: 08/09/2021] [Indexed: 11/17/2022]
Abstract
Loss of knee cartilage can cause intense pain at the knee epiphysis and this is one of the most common diseases worldwide. To diagnose this condition, the distance between the femur and tibia is calculated based on X-ray images. Accurate segmentation of the femur and tibia is required to assist in the calculation process. Several studies have investigated the use of automatic knee segmentation to assist in the calculation process, but the results are of limited value owing to the complexity of the knee. To address this problem, this study exploits deep learning for robust segmentation not affected by the environment. In addition, the Taguchi method is applied to optimize the deep learning results. Deep learning architecture, optimizer, and learning rate are considered for the Taguchi table to check the impact and interaction of the results. When the Dilated-Resnet architecture is used with the Adam optimizer and a learning rate of 0.001, dice coefficients of 0.964 and 0.942 are obtained for the femur and tibia for knee segmentation. The implemented procedure and the results of this investigation may be beneficial to help in determining the correct margins for the femur and tibia and can be the basis for developing an automatic diagnosis algorithm for orthopedic diseases.
Collapse
Affiliation(s)
- Young Jae Kim
- Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
| | - Seung Ro Lee
- Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
| | - Ja-Young Choi
- Department of Radiology, Seoul National University Hospital, Seoul 03080, Republic of Korea
| | - Kwang Gi Kim
- Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
| |
Collapse
|
33
|
Chandra SS, Bran Lorenzana M, Liu X, Liu S, Bollmann S, Crozier S. Deep learning in magnetic resonance image reconstruction. J Med Imaging Radiat Oncol 2021; 65:564-577. [PMID: 34254448 DOI: 10.1111/1754-9485.13276] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Accepted: 06/10/2021] [Indexed: 11/26/2022]
Abstract
Magnetic resonance (MR) imaging visualises soft tissue contrast in exquisite detail without harmful ionising radiation. In this work, we provide a state-of-the-art review on the use of deep learning in MR image reconstruction from different image acquisition types involving compressed sensing techniques, parallel image acquisition and multi-contrast imaging. Publications with deep learning-based image reconstruction for MR imaging were identified from the literature (PubMed and Google Scholar), and a comprehensive description of each of the works was provided. A detailed comparison that highlights the differences, the data used and the performance of each of these works were also made. A discussion of the potential use cases for each of these methods is provided. The sparse image reconstruction methods were found to be most popular in using deep learning for improved performance, accelerating acquisitions by around 4-8 times. Multi-contrast image reconstruction methods rely on at least one pre-acquired image, but can achieve 16-fold, and even up to 32- to 50-fold acceleration depending on the set-up. Parallel imaging provides frameworks to be integrated in many of these methods for additional speed-up potential. The successful use of compressed sensing techniques and multi-contrast imaging with deep learning and parallel acquisition methods could yield significant MR acquisition speed-ups within clinical routines in the near future.
Collapse
Affiliation(s)
- Shekhar S Chandra
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
| | - Marlon Bran Lorenzana
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
| | - Xinwen Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
| | - Siyu Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
| | - Steffen Bollmann
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
| | - Stuart Crozier
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
| |
Collapse
|
34
|
Wang S, Xiao T, Liu Q, Zheng H. Deep learning for fast MR imaging: A review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102579] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
35
|
Arridge SR, Ehrhardt MJ, Thielemans K. (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods. PHILOSOPHICAL TRANSACTIONS. SERIES A, MATHEMATICAL, PHYSICAL, AND ENGINEERING SCIENCES 2021; 379:20200205. [PMID: 33966461 DOI: 10.1098/rsta.2020.0205] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Imaging is omnipresent in modern society with imaging devices based on a zoo of physical principles, probing a specimen across different wavelengths, energies and time. Recent years have seen a change in the imaging landscape with more and more imaging devices combining that which previously was used separately. Motivated by these hardware developments, an ever increasing set of mathematical ideas is appearing regarding how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges and provide an outlook as to how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
Collapse
Affiliation(s)
- Simon R Arridge
- Department of Computer Science, University College London, London, UK
| | - Matthias J Ehrhardt
- Department of Mathematical Sciences, University of Bath, Bath, UK
- Institute for Mathematical Innovation, University of Bath, Bath, UK
| | - Kris Thielemans
- Institute of Nuclear Medicine, University College London, London, UK
| |
Collapse
|
36
|
A deep cascade of ensemble of dual domain networks with gradient-based T1 assistance and perceptual refinement for fast MRI reconstruction. Comput Med Imaging Graph 2021; 91:101942. [PMID: 34087612 DOI: 10.1016/j.compmedimag.2021.101942] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 05/03/2021] [Accepted: 05/14/2021] [Indexed: 11/23/2022]
Abstract
Deep learning networks have shown promising results in fast magnetic resonance imaging (MRI) reconstruction. In our work, we develop deep networks to further improve the quantitative and the perceptual quality of reconstruction. To begin with, we propose reconsynergynet (RSN), a network that combines the complementary benefits of independently operating on both the image and the Fourier domain. For a single-coil acquisition, we introduce deep cascade RSN (DC-RSN), a cascade of RSN blocks interleaved with data fidelity (DF) units. Secondly, we improve the structure recovery of DC-RSN for T2 weighted Imaging (T2WI) through assistance of T1 weighted imaging (T1WI), a sequence with short acquisition time. T1 assistance is provided to DC-RSN through a gradient of log feature (GOLF) fusion. Furthermore, we propose perceptual refinement network (PRN) to refine the reconstructions for better visual information fidelity (VIF), a metric highly correlated to radiologist's opinion on the image quality. Lastly, for multi-coil acquisition, we propose variable splitting RSN (VS-RSN), a deep cascade of blocks, each block containing RSN, multi-coil DF unit, and a weighted average module. We extensively validate our models DC-RSN and VS-RSN for single-coil and multi-coil acquisitions and report the state-of-the-art performance. We obtain a SSIM of 0.768, 0.923, and 0.878 for knee single-coil-4x, multi-coil-4x, and multi-coil-8x in fastMRI, respectively. We also conduct experiments to demonstrate the efficacy of GOLF based T1 assistance and PRN.
Collapse
|
37
|
Liu X, Wang J, Jin J, Li M, Tang F, Crozier S, Liu F. Deep unregistered multi-contrast MRI reconstruction. Magn Reson Imaging 2021; 81:33-41. [PMID: 34051290 DOI: 10.1016/j.mri.2021.05.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 04/18/2021] [Accepted: 05/23/2021] [Indexed: 11/18/2022]
Abstract
Multiple magnetic resonance images of different contrasts are normally acquired for clinical diagnosis. Recently, research has shown that the previously acquired multi-contrast (MC) images of the same patient can be used as anatomical prior to accelerating magnetic resonance imaging (MRI). However, current MC-MRI networks are based on the assumption that the images are perfectly registered, which is rarely the case in real-world applications. In this paper, we propose an end-to-end deep neural network to reconstruct highly accelerated images by exploiting the shareable information from potentially misaligned reference images of an arbitrary contrast. Specifically, a spatial transformation (ST) module is designed and integrated into the reconstruction network to align the pre-acquired reference images with the images to be reconstructed. The misalignment is further alleviated by maximizing the normalized cross-correlation (NCC) between the MC images. The visualization of feature maps demonstrates that the proposed method effectively reduces the misalignment between the images for shareable information extraction when applied to the publicly available brain datasets. Additionally, the experimental results on these datasets show the proposed network allows the robust exploitation of shareable information across the misaligned MC images, leading to improved reconstruction results.
Collapse
Affiliation(s)
- Xinwen Liu
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia
| | | | - Jin Jin
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia; Siemens Healthcare Pty. Ltd., Brisbane, Australia
| | - Mingyan Li
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia
| | - Fangfang Tang
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia
| | - Stuart Crozier
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia
| | - Feng Liu
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, Australia.
| |
Collapse
|
38
|
Zhang Z, Ding J, Xu J, Tang J, Guo F. Multi-Scale Time-Series Kernel-Based Learning Method for Brain Disease Diagnosis. IEEE J Biomed Health Inform 2021; 25:209-217. [PMID: 32248130 DOI: 10.1109/jbhi.2020.2983456] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
The functional magnetic resonance imaging (fMRI) is a noninvasive technique for studying brain activity, such as brain network analysis, neural disease automated diagnosis and so on. However, many existing methods have some drawbacks, such as limitations of graph theory, lack of global topology characteristic, local sensitivity of functional connectivity, and absence of temporal or context information. In addition to many numerical features, fMRI time series data also cover specific contextual knowledge and global fluctuation information. Here, we propose multi-scale time-series kernel-based learning model for brain disease diagnosis, based on Jensen-Shannon divergence. First, we calculate correlation value within and between brain regions over time. In addition, we extract multi-scale synergy expression probability distribution (interactional relation) between brain regions. Also, we produce state transition probability distribution (sequential relation) on single brain regions. Then, we build time-series kernel-based learning model based on Jensen-Shannon divergence to measure similarity of brain functional connectivity. Finally, we provide an efficient system to deal with brain network analysis and neural disease automated diagnosis. On Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, our proposed method achieves accuracy of 0.8994 and AUC of 0.8623. On Major Depressive Disorder (MDD) dataset, our proposed method achieves accuracy of 0.9166 and AUC of 0.9263. Experiments show that our proposed method outperforms other existing excellent neural disease automated diagnosis approaches. It shows that our novel prediction method performs great accurate for identification of brain diseases as well as existing outstanding prediction tools.
Collapse
|
39
|
On the regularization of feature fusion and mapping for fast MR multi-contrast imaging via iterative networks. Magn Reson Imaging 2021; 77:159-168. [PMID: 33400936 DOI: 10.1016/j.mri.2020.12.019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 12/01/2020] [Accepted: 12/29/2020] [Indexed: 01/23/2023]
Abstract
Multi-contrast (MC) Magnetic Resonance Imaging (MRI) of the same patient usually requires long scanning times, despite the images sharing redundant information. In this work, we propose a new iterative network that utilizes the sharable information among MC images for MRI acceleration. The proposed network has reinforced data fidelity control and anatomy guidance through an iterative optimization procedure of Gradient Descent, leading to reduced uncertainties and improved reconstruction results. Through a convolutional network, the new method incorporates a learnable regularization unit that is capable of extracting, fusing, and mapping shareable information among different contrasts. Specifically, a dilated inception block is proposed to promote multi-scale feature extractions and increase the receptive field diversity for contextual information incorporation. Lastly, an optimal MC information feeding protocol is built through the design of a complementary feature extractor block. Comprehensive experiments demonstrated the superiority of the proposed network, both qualitatively and quantitatively.
Collapse
|
40
|
Wang G, Gong E, Banerjee S, Martin D, Tong E, Choi J, Chen H, Wintermark M, Pauly JM, Zaharchuk G. Synthesize High-Quality Multi-Contrast Magnetic Resonance Imaging From Multi-Echo Acquisition Using Multi-Task Deep Generative Model. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3089-3099. [PMID: 32286966 DOI: 10.1109/tmi.2020.2987026] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Multi-echo saturation recovery sequence can provide redundant information to synthesize multi-contrast magnetic resonance imaging. Traditional synthesis methods, such as GE's MAGiC platform, employ a model-fitting approach to generate parameter-weighted contrasts. However, models' over-simplification, as well as imperfections in the acquisition, can lead to undesirable reconstruction artifacts, especially in T2-FLAIR contrast. To improve the image quality, in this study, a multi-task deep learning model is developed to synthesize multi-contrast neuroimaging jointly using both signal relaxation relationships and spatial information. Compared with previous deep learning-based synthesis, the correlation between different destination contrast is utilized to enhance reconstruction quality. To improve model generalizability and evaluate clinical significance, the proposed model was trained and tested on a large multi-center dataset, including healthy subjects and patients with pathology. Results from both quantitative comparison and clinical reader study demonstrate that the multi-task formulation leads to more efficient and accurate contrast synthesis than previous methods.
Collapse
|
41
|
Meng Z, Guo R, Li Y, Guan Y, Wang T, Zhao Y, Sutton B, Li Y, Liang ZP. Accelerating T 2 mapping of the brain by integrating deep learning priors with low-rank and sparse modeling. Magn Reson Med 2020; 85:1455-1467. [PMID: 32989816 DOI: 10.1002/mrm.28526] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Revised: 08/07/2020] [Accepted: 08/31/2020] [Indexed: 02/06/2023]
Abstract
PURPOSE To accelerate T2 mapping with highly sparse sampling by integrating deep learning image priors with low-rank and sparse modeling. METHODS The proposed method achieves high-speed T2 mapping by highly sparsely sampling (k, TE)-space. Image reconstruction from the undersampled data was done by exploiting the low-rank structure and sparsity in the T2 -weighted image sequence and image priors learned from training data. The image priors for a single TE were generated from the public Human Connectome Project data using a tissue-based deep learning method; the image priors were then transferred to other TEs using a generalized series-based method. With these image priors, the proposed reconstruction method used a low-rank model and a sparse model to capture subject-dependent novel features. RESULTS The proposed method was evaluated using experimental data obtained from both healthy subjects and tumor patients using a turbo spin-echo sequence. High-quality T2 maps at the resolution of 0.9 × 0.9 × 3.0 mm3 were obtained successfully from highly undersampled data with an acceleration factor of 8. Compared with the existing compressed sensing-based methods, the proposed method produced significantly reduced reconstruction errors. Compared with the deep learning-based methods, the proposed method recovered novel features better. CONCLUSION This paper demonstrates the feasibility of learning T2 -weighted image priors for multiple TEs using tissue-based deep learning and generalized series-based learning. A new method was proposed to effectively integrate these image priors with low-rank and sparse modeling to reconstruct high-quality images from highly undersampled data. The proposed method will supplement other acquisition-based methods to achieve high-speed T2 mapping.
Collapse
Affiliation(s)
- Ziyu Meng
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA.,Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Rong Guo
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
| | - Yudu Li
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
| | - Yue Guan
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Tianyao Wang
- Department of Radiology, The Fifth People's Hospital of Shanghai, Fudan University, Shanghai, China
| | - Yibo Zhao
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
| | - Brad Sutton
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA.,Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
| | - Yao Li
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Zhi-Pei Liang
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
| |
Collapse
|
42
|
Xiang L, Chen Y, Chang WT, Zhan Y, Lin W, Wang Q, Shen D. Corrections to “Deep Learning Based Multi-Modal Fusion for Fast MR Reconstruction” [Nov 18 2105-2114]. IEEE Trans Biomed Eng 2020; 67:2705. [DOI: 10.1109/tbme.2020.3005864] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
43
|
Dual-domain cascade of U-nets for multi-channel magnetic resonance image reconstruction. Magn Reson Imaging 2020; 71:140-153. [DOI: 10.1016/j.mri.2020.06.002] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2020] [Revised: 05/20/2020] [Accepted: 06/09/2020] [Indexed: 11/17/2022]
|
44
|
Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020; 14:431-449. [PMID: 32728877 DOI: 10.1007/s11684-020-0761-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2019] [Accepted: 02/14/2020] [Indexed: 12/19/2022]
Abstract
Radiation therapy (RT) is widely used to treat cancer. Technological advances in RT have occurred in the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will possibly play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those with increased complexity because of technological advances. The improvement in efficiency and consistency is important to manage the increasing cancer patient burden to the society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT. The functionalities include superior images for real-time intervention and adaptive and personalized RT. AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that benefit from AI. This review primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Collapse
Affiliation(s)
- Ke Sheng
- Department of Radiation Oncology, University of California, Los Angeles, CA, 90095, USA.
| |
Collapse
|
45
|
Chen XL, Yan TY, Wang N, von Deneen KM. Rising role of artificial intelligence in image reconstruction for biomedical imaging. Artif Intell Med Imaging 2020; 1:1-5. [DOI: 10.35711/aimi.v1.i1.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 06/09/2020] [Accepted: 06/16/2020] [Indexed: 02/06/2023] Open
Abstract
In this editorial, we review recent progress on the applications of artificial intelligence (AI) in image reconstruction for biomedical imaging. Because it abandons prior information of traditional artificial design and adopts a completely data-driven mode to obtain deeper prior information via learning, AI technology plays an increasingly important role in biomedical image reconstruction. The combination of AI technology and the biomedical image reconstruction method has become a hotspot in the field. Favoring AI, the performance of biomedical image reconstruction has been improved in terms of accuracy, resolution, imaging speed, etc. We specifically focus on how to use AI technology to improve the performance of biomedical image reconstruction, and propose possible future directions in this field.
Collapse
Affiliation(s)
- Xue-Li Chen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Tian-Yu Yan
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Nan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Karen M von Deneen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| |
Collapse
|
46
|
Li F, Wu D, Lui S, Gong Q, Sweeney JA. Clinical Strategies and Technical Challenges in Psychoradiology. Neuroimaging Clin N Am 2020; 30:1-13. [PMID: 31759566 DOI: 10.1016/j.nic.2019.09.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Psychoradiology is an emerging discipline at the intersection between radiology and psychiatry. It holds promise for playing a role in clinical diagnosis, evaluation of treatment response and prognosis, and illness risk prediction for patients with psychiatric disorders. Addressing complex issues, such as the biological heterogeneity of psychiatric syndromes and unclear neurobiological mechanisms underpinning radiological abnormalities, is a challenge that needs to be resolved. With the advance of multimodal imaging and more efforts in standardization of image acquisition and analysis, psychoradiology is becoming a promising tool for the future of clinical care for patients with psychiatric disorders.
Collapse
Affiliation(s)
- Fei Li
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, No. 37 Guo Xue Lane, Chengdu 610041, China; Psychoradiology Research Unit of Chinese Academy of Medical Sciences, West China Hospital of Sichuan University, No. 37 Guo Xue Lane, Chengdu 610041, China
| | - Dongsheng Wu
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, No. 37 Guo Xue Lane, Chengdu 610041, China; Psychoradiology Research Unit of Chinese Academy of Medical Sciences, West China Hospital of Sichuan University, No. 37 Guo Xue Lane, Chengdu 610041, China
| | - Su Lui
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, No. 37 Guo Xue Lane, Chengdu 610041, China; Psychoradiology Research Unit of Chinese Academy of Medical Sciences, West China Hospital of Sichuan University, No. 37 Guo Xue Lane, Chengdu 610041, China.
| | - Qiyong Gong
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, No. 37 Guo Xue Lane, Chengdu 610041, China; Psychoradiology Research Unit of Chinese Academy of Medical Sciences, West China Hospital of Sichuan University, No. 37 Guo Xue Lane, Chengdu 610041, China
| | - John A Sweeney
- Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati, Suite 3200, 260 Stetson Street, Cincinnati, OH 45219, USA
| |
Collapse
|