1. Integrating data distribution prior via Langevin dynamics for end-to-end MR reconstruction. Magn Reson Med 2024; 92:202-214. PMID: 38469985. DOI: 10.1002/mrm.30065.
Abstract
PURPOSE To develop a novel deep learning-based method that inherits the advantages of data distribution priors and end-to-end training for accelerating MRI. METHODS Langevin dynamics is used to formulate image reconstruction with a data distribution prior to facilitate image reconstruction. The data distribution prior is learned implicitly through end-to-end adversarial training, which mitigates hyper-parameter selection and shortens testing time compared to traditional probabilistic reconstruction. By seamlessly integrating the deep equilibrium model, the iteration of Langevin dynamics converges to a fixed point, ensuring the stability of the learned distribution. RESULTS The feasibility of the proposed method is evaluated on brain and knee datasets. Retrospective results with uniform and random masks show that the proposed method outperforms the state of the art both quantitatively and qualitatively. CONCLUSION The proposed method, which incorporates Langevin dynamics with end-to-end adversarial training, facilitates efficient and robust MRI reconstruction. Empirical evaluations on brain and knee datasets compellingly demonstrate its superior performance in terms of artifact removal and detail preservation.
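The Langevin update this abstract refers to can be illustrated on a toy linear inverse problem. This is a generic sketch, not the paper's network: a standard-Gaussian prior score stands in for the learned data-distribution prior, and `lam` is an assumed data-fidelity weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: recover x from undersampled linear measurements y = A @ x.
# A standard-Gaussian prior score (-x) stands in for the learned score
# network; lam is an assumed weight on the data-fidelity term.
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true

def langevin_step(x, step, noise_scale, lam=100.0):
    """One Langevin update: prior score + data-consistency gradient + noise."""
    prior_score = -x                        # d/dx log N(x; 0, I)
    data_grad = lam * A.T @ (y - A @ x)     # gradient of -lam/2 * ||y - A x||^2
    noise = rng.standard_normal(x.shape)
    return x + step * (prior_score + data_grad) + np.sqrt(2 * step) * noise_scale * noise

x = np.zeros(n)
for k in range(3000):
    # Anneal the injected noise so the iterates settle toward a fixed point.
    x = langevin_step(x, step=2e-3, noise_scale=0.999 ** k)
```

With the noise annealed away, the iterates approach the regularized least-squares fixed point, so the reconstruction explains the measurements closely.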

2. Accelerated motion correction with deep generative diffusion models. Magn Reson Med 2024. PMID: 38688874. DOI: 10.1002/mrm.30082.
Abstract
PURPOSE The aim of this work is to develop a method that solves the ill-posed inverse problem of accelerated image reconstruction while correcting forward-model imperfections caused by subject motion during MRI examinations. METHODS The proposed solution uses a Bayesian framework based on deep generative diffusion models to jointly estimate a motion-free image and rigid motion estimates from subsampled and motion-corrupted two-dimensional (2D) k-space data. RESULTS We demonstrate the ability to reconstruct motion-free images from accelerated 2D Cartesian and non-Cartesian scans without any external reference signal. We show that our method improves over existing correction techniques on both simulated and prospectively accelerated data. CONCLUSION We propose a flexible framework for retrospective motion correction of accelerated MRI based on deep generative diffusion models, with potential application to other forward-model corruptions.

3. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024. PMID: 38624162. DOI: 10.1002/mrm.30105.
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. This review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions, including learning neural networks and addressing different imaging application scenarios. We also describe the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, we summarize MR vendors' choices of DL reconstruction and discuss open questions and future directions, which are critical for reliable imaging systems.

4. DCT-net: Dual-domain cross-fusion transformer network for MRI reconstruction. Magn Reson Imaging 2024; 107:69-79. PMID: 38237693. DOI: 10.1016/j.mri.2024.01.007.
Abstract
Current challenges in Magnetic Resonance Imaging (MRI) include long acquisition times and motion artifacts. To address these issues, under-sampled k-space acquisition has gained popularity as a fast imaging method. However, recovering fine details from under-sampled data remains challenging. In this study, we introduce a deep learning approach, DCT-Net, designed for dual-domain MRI reconstruction. DCT-Net seamlessly integrates information from the image domain (IRM) and frequency domain (FRM), utilizing a novel Cross Attention Block (CAB) and Fusion Attention Block (FAB). These blocks enable precise feature extraction and adaptive fusion across both domains, significantly enhancing reconstructed image quality. The adaptive interaction and fusion mechanisms of CAB and FAB contribute to the method's effectiveness in capturing distinctive features and optimizing image reconstruction. Comprehensive ablation studies assess the contributions of these modules to reconstruction quality and accuracy. Experimental results on the FastMRI (2023) and Calgary-Campinas (2021) datasets demonstrate the superiority of our MRI reconstruction framework over other typical methods (most published in 2022 or 2023) in both qualitative and quantitative evaluations. This holds for knee and brain datasets under 4× and 8× accelerated imaging scenarios.

5. NPB-REC: A non-parametric Bayesian deep-learning approach for undersampled MRI reconstruction with uncertainty estimation. Artif Intell Med 2024; 149:102798. PMID: 38462289. DOI: 10.1016/j.artmed.2024.102798.
Abstract
The ability to reconstruct high-quality images from undersampled MRI data is vital in improving MRI temporal resolution and reducing acquisition times. Deep learning methods have been proposed for this task, but the lack of verified methods to quantify the uncertainty in the reconstructed images has hampered clinical applicability. We introduce "NPB-REC", a non-parametric fully Bayesian framework for MRI reconstruction from undersampled data with uncertainty estimation. We use Stochastic Gradient Langevin Dynamics during training to characterize the posterior distribution of the network parameters. This enables us both to improve the quality of the reconstructed images and to quantify their uncertainty. We demonstrate the efficacy of our approach on a multi-coil MRI dataset from the fastMRI challenge and compare it to the baseline End-to-End Variational Network (E2E-VarNet). Our approach outperforms the baseline in terms of reconstruction accuracy by means of PSNR and SSIM (34.55, 0.908 vs. 33.08, 0.897, p<0.01, acceleration rate R=8) and provides uncertainty measures that correlate better with the reconstruction error (Pearson correlation, R=0.94 vs. R=0.91). Additionally, our approach exhibits better generalization against anatomical distribution shifts (PSNR and SSIM of 32.38, 0.849 vs. 31.63, 0.836, p<0.01, training on brain data, inference on knee data, acceleration rate R=8). NPB-REC has the potential to facilitate the safe utilization of deep learning-based methods for MRI reconstruction from undersampled data. Code and trained models are available at https://github.com/samahkh/NPB-REC.
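The Stochastic Gradient Langevin Dynamics training used to characterize the posterior over network parameters can be sketched on a toy Bayesian linear model; this is an illustrative stand-in for the reconstruction network, and the minibatch size, step size, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Bayesian linear model y = X @ w + noise; SGLD draws samples of w
# from its posterior, mirroring how NPB-REC samples network weights.
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(n)

def sgld_step(w, Xb, yb, step, n_total):
    """One SGLD update on a minibatch: noisy gradient ascent on the log-posterior."""
    # Minibatch log-likelihood gradient, rescaled to the full dataset (noise var 0.1^2),
    # plus the gradient of a N(0, I) prior.
    grad_ll = (n_total / len(yb)) * Xb.T @ (yb - Xb @ w) / 0.01
    grad_prior = -w
    noise = rng.standard_normal(d)
    return w + 0.5 * step * (grad_ll + grad_prior) + np.sqrt(step) * noise

w = np.zeros(d)
samples = []
for t in range(5000):
    idx = rng.choice(n, size=32, replace=False)
    w = sgld_step(w, X[idx], y[idx], step=1e-5, n_total=n)
    if t >= 2000:                      # discard burn-in
        samples.append(w.copy())

w_mean = np.mean(samples, axis=0)      # posterior-mean estimate
w_std = np.std(samples, axis=0)        # per-parameter uncertainty
```

The retained samples give both a point estimate (the mean) and an uncertainty measure (the spread), which is the same mechanism the paper scales up to network weights and per-pixel reconstruction uncertainty.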

6. Intrafraction Motion Management With MR-Guided Radiation Therapy. Semin Radiat Oncol 2024; 34:92-106. PMID: 38105098. DOI: 10.1016/j.semradonc.2023.10.008.
Abstract
High-quality radiation therapy requires highly accurate and precise dose delivery. MR-guided radiotherapy (MRgRT), integrating an MRI scanner with a linear accelerator, offers excellent-quality images in the treatment room without subjecting the patient to ionizing radiation. MRgRT therefore provides a powerful tool for intrafraction motion management. This paper summarizes the sources of intrafraction motion for different disease sites and describes the MR imaging techniques available to visualize and quantify intrafraction motion. It provides an overview of MR-guided motion management strategies and of the current technical capabilities of the commercially available MRgRT systems. It describes how these motion management capabilities are currently being used in clinical studies and protocols, and provides a future outlook.

7. Subspace Model-Assisted Deep Learning for Improved Image Reconstruction. IEEE Trans Med Imaging 2023; 42:3833-3846. PMID: 37682643. DOI: 10.1109/tmi.2023.3313421.
Abstract
Image reconstruction from limited and/or sparse data is known to be an ill-posed problem and a priori information/constraints have played an important role in solving the problem. Early constrained image reconstruction methods utilize image priors based on general image properties such as sparsity, low-rank structures, spatial support bound, etc. Recent deep learning-based reconstruction methods promise to produce even higher quality reconstructions by utilizing more specific image priors learned from training data. However, learning high-dimensional image priors requires huge amounts of training data that are currently not available in medical imaging applications. As a result, deep learning-based reconstructions often suffer from two known practical issues: a) sensitivity to data perturbations (e.g., changes in data sampling scheme), and b) limited generalization capability (e.g., biased reconstruction of lesions). This paper proposes a new method to address these issues. The proposed method synergistically integrates model-based and data-driven learning in three key components. The first component uses the linear vector space framework to capture global dependence of image features; the second exploits a deep network to learn the mapping from a linear vector space to a nonlinear manifold; the third is an unrolling-based deep network that captures local residual features with the aid of a sparsity model. The proposed method has been evaluated with magnetic resonance imaging data, demonstrating improved reconstruction in the presence of data perturbation and/or novel image features. The method may enhance the practical utility of deep learning-based image reconstruction.

8. Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes. Comput Biol Med 2023; 167:107610. PMID: 37883853. DOI: 10.1016/j.compbiomed.2023.107610.
Abstract
Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information about the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit at the cost of computationally burdensome inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information about the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining inference times competitive with SG methods. PSFNet implements its SG prior with a nonlinear network, yet it forms its SS prior with a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling, which uses serially alternated projections and thus causes error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples than SG methods, and enables an order of magnitude faster inference than SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.

9. One-Shot Generative Prior in Hankel-k-Space for Parallel Imaging Reconstruction. IEEE Trans Med Imaging 2023; 42:3420-3435. PMID: 37342955. DOI: 10.1109/tmi.2023.3288219.
Abstract
Magnetic resonance imaging serves as an essential tool for clinical diagnosis. However, it suffers from long acquisition times. The utilization of deep learning, especially deep generative models, offers aggressive acceleration and better reconstruction in magnetic resonance imaging. Nevertheless, learning the data distribution as prior knowledge and reconstructing the image from limited data remain challenging. In this work, we propose a novel Hankel-k-space generative model (HKGM), which can generate samples from a training set as small as a single k-space dataset. At the prior learning stage, we first construct a large Hankel matrix from k-space data, then extract multiple structured k-space patches from the Hankel matrix to capture the internal distribution among different patches. Extracting patches from a Hankel matrix enables the generative model to be learned from the redundant and low-rank data space. At the iterative reconstruction stage, the desired solution obeys the learned prior knowledge. The intermediate reconstruction is updated by feeding it to the generative model, and the updated result is then alternately refined by imposing a low-rank penalty on its Hankel matrix and a data-consistency constraint on the measured data. Experimental results confirm that the internal statistics of patches within a single k-space dataset carry enough information for learning a powerful generative model and providing state-of-the-art reconstruction.
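The Hankel construction that underlies this patch-based prior learning can be sketched for a single 1-D k-space line; this is a simplified stand-in for the multi-coil 2-D case. Sliding overlapping windows into the rows of a matrix exposes the low-rank redundancy that the abstract mentions: a k-space line composed of a few complex exponentials yields a Hankel matrix of correspondingly low rank.

```python
import numpy as np

def hankel_from_kspace(kspace, win):
    """Stack sliding windows of a 1-D k-space line into a Hankel matrix.

    Overlapping windows make the matrix low-rank when the underlying signal
    is a sum of a few exponentials, which is the redundancy HKGM exploits.
    """
    rows = len(kspace) - win + 1
    return np.array([kspace[i:i + win] for i in range(rows)])

# A k-space line built from two complex exponentials is rank-2 in Hankel form.
t = np.arange(64)
kspace = np.exp(2j * np.pi * 0.11 * t) + 0.5 * np.exp(2j * np.pi * 0.31 * t)
H = hankel_from_kspace(kspace, win=16)
rank = np.linalg.matrix_rank(H, tol=1e-8)
```

Patches extracted from such a matrix share this low-rank structure, which is why a generative model can be trained on them even when only one k-space dataset is available.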

10. WKGM: weighted k-space generative model for parallel imaging reconstruction. NMR Biomed 2023; 36:e5005. PMID: 37547964. DOI: 10.1002/nbm.5005.
Abstract
Deep learning-based parallel imaging (PI) has made great progress in recent years in accelerating MRI. Nevertheless, existing methods still have limitations, for example in robustness and flexibility. In this work, we propose a method to explore k-space domain learning via robust generative modeling for flexible calibrationless PI reconstruction, coined the weighted k-space generative model (WKGM). Specifically, WKGM is a generalized k-space domain model in which k-space weighting technology and a high-dimensional space augmentation design are efficiently incorporated for score-based generative model training, resulting in good and robust reconstructions. In addition, WKGM is flexible and can therefore be synergistically combined with various traditional k-space PI models, making full use of the correlation between multi-coil data and realizing calibrationless PI. Even though our model was trained on only 500 images, experimental results with varying sampling patterns and acceleration factors demonstrate that WKGM can attain state-of-the-art reconstruction results with the well-learned k-space generative prior.

11. Sparse-view reconstruction for photoacoustic tomography combining diffusion model with model-based iteration. Photoacoustics 2023; 33:100558. PMID: 38021282. PMCID: PMC10658608. DOI: 10.1016/j.pacs.2023.100558.
Abstract
As a non-invasive hybrid biomedical imaging technology, photoacoustic tomography combines the high contrast of optical imaging with the high penetration of acoustic imaging. However, conventional reconstruction under sparse views can result in low-quality images in photoacoustic tomography. Here, a novel model-based sparse reconstruction method for photoacoustic tomography via a diffusion model is proposed. A score-based diffusion model is designed to learn the prior information of the data distribution. The learned prior is used as a constraint on the data-consistency term of a least-squares optimization problem in the model-based iterative reconstruction, aiming to achieve the optimal solution. Simulated blood-vessel data and in vivo animal experimental data were used to evaluate the performance of the proposed method. The results demonstrate that the proposed method achieves higher-quality sparse reconstruction than conventional reconstruction methods and U-Net. In particular, under extremely sparse projection (e.g., 32 projections), the proposed method achieves an improvement of ∼260% in structural similarity and ∼30% in peak signal-to-noise ratio for in vivo data, compared with the conventional delay-and-sum method. This method has the potential to reduce the acquisition time and cost of photoacoustic tomography, further expanding its range of application.

12. Emerging Trends in Fast MRI Using Deep-Learning Reconstruction on Undersampled k-Space Data: A Systematic Review. Bioengineering (Basel) 2023; 10:1012. PMID: 37760114. PMCID: PMC10525988. DOI: 10.3390/bioengineering10091012.
Abstract
Magnetic Resonance Imaging (MRI) is an essential medical imaging modality that provides excellent soft-tissue contrast and high-resolution images of the human body, yielding detailed information on morphology, structural integrity, and physiologic processes. However, MRI exams usually require lengthy acquisition times. Methods such as parallel MRI and Compressive Sensing (CS) have significantly reduced MRI acquisition time by acquiring less data through undersampling k-space. The state of the art in fast MRI has recently been redefined by integrating Deep Learning (DL) models with these undersampling approaches. This Systematic Literature Review (SLR) comprehensively analyzes deep MRI reconstruction models, emphasizing the key elements of recently proposed methods and highlighting their strengths and weaknesses. The SLR involves searching and selecting relevant studies from databases including Web of Science and Scopus, followed by a rigorous screening and data extraction process using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It covers techniques such as residual learning, image representation using encoders and decoders, data-consistency layers, unrolled networks, learned activations, attention modules, plug-and-play priors, diffusion models, and Bayesian methods. The SLR also discusses the use of loss functions and training with adversarial networks to enhance deep MRI reconstruction methods. Moreover, we explore MRI reconstruction applications including non-Cartesian reconstruction, super-resolution, dynamic MRI, joint learning of reconstruction with coil sensitivity and sampling, quantitative mapping, and MR fingerprinting. The paper also addresses the research questions, provides insights for future directions, and emphasizes robust generalization and artifact handling. It therefore serves as a valuable resource for advancing fast MRI, guiding research and development efforts toward better image quality and faster data acquisition.

13. Bayesian reconstruction of magnetic resonance images using Gaussian processes. Sci Rep 2023; 13:12527. PMID: 37532743. PMCID: PMC10397278. DOI: 10.1038/s41598-023-39533-4.
Abstract
A central goal of modern magnetic resonance imaging (MRI) is to reduce the time required to produce high-quality images. Efforts have included hardware and software innovations such as parallel imaging, compressed sensing, and deep learning-based reconstruction. Here, we propose and demonstrate a Bayesian method to build statistical libraries of magnetic resonance (MR) images in k-space and use these libraries to identify optimal subsampling paths and reconstruction processes. Specifically, we compute a multivariate normal distribution based on Gaussian processes using a publicly available library of T1-weighted images of healthy brains. We combine this library with physics-informed envelope functions so that only meaningful correlations in k-space are retained. This covariance function is then used to select a series of ring-shaped subsampling paths via Bayesian optimization, such that they optimally explore k-space while remaining practically realizable in commercial MRI systems. Combining optimized subsampling paths found for a range of images, we compute a generalized sampling path that, when applied to novel images, produces superior structural similarity and lower error compared with previously reported reconstruction processes (96.3% structural similarity and <0.003 normalized mean squared error from sampling only 12.5% of the k-space data). Finally, we use this reconstruction process on pathological data without retraining to show that the reconstructed images are clinically useful for stroke identification. Since the model trained on images of healthy brains can be used directly for predictions in pathological brains without retraining, the approach is inherently transferable, opening the door to widespread use.
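The core conditioning step, predicting unsampled k-space points from sampled ones under a multivariate normal model, can be sketched as follows. The squared-exponential covariance here is a hypothetical stand-in for one estimated from an image library, and the 1-D line with every-other-point sampling is a toy version of the ring-shaped 2-D paths.

```python
import numpy as np

rng = np.random.default_rng(2)

# Model a 1-D "k-space line" as a multivariate normal and fill in unsampled
# points by Gaussian conditioning on the sampled ones. The squared-exponential
# covariance below is an assumed stand-in for a library-estimated covariance.
n = 40
idx = np.arange(n)
K = np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2 / 4.0 ** 2)

# Draw a ground-truth signal from the prior (jitter keeps Cholesky stable).
x_full = np.linalg.cholesky(K + 1e-6 * np.eye(n)) @ rng.standard_normal(n)
obs = idx % 2 == 0                     # sample every other point (2x "acceleration")

# Conditional mean of the missing points given the observed ones:
K_oo = K[np.ix_(obs, obs)] + 1e-6 * np.eye(int(obs.sum()))
K_mo = K[np.ix_(~obs, obs)]
x_missing = K_mo @ np.linalg.solve(K_oo, x_full[obs])

err = np.linalg.norm(x_missing - x_full[~obs]) / np.linalg.norm(x_full[~obs])
```

Because the covariance encodes smooth correlations across k-space, the conditional mean interpolates the unsampled points with small relative error, which is the mechanism the paper scales up to full images.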

14. Adaptive diffusion priors for accelerated MRI reconstruction. Med Image Anal 2023; 88:102872. PMID: 37384951. DOI: 10.1016/j.media.2023.102872.
Abstract
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can generalize poorly across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize the data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on-par within-domain performance.

15. Generative AI for brain image computing and brain network computing: a review. Front Neurosci 2023; 17:1203104. PMID: 37383107. PMCID: PMC10293625. DOI: 10.3389/fnins.2023.1203104.
Abstract
Recent years have witnessed significant advancement in brain imaging techniques that offer a non-invasive approach to mapping the structure and function of the brain. Concurrently, generative artificial intelligence (AI) has experienced substantial growth; it uses existing data to create new content with underlying patterns similar to real-world data. The integration of these two domains, generative AI in neuroimaging, presents a promising avenue for exploring various fields of brain imaging and brain network computing, particularly in extracting spatiotemporal brain features and reconstructing the topological connectivity of brain networks. Therefore, this study reviews the advanced models, tasks, challenges, and prospects of brain imaging and brain network computing techniques and aims to provide a comprehensive picture of current generative AI techniques in brain imaging. The review focuses on novel methodological approaches and applications of related new methods. It discusses the fundamental theories and algorithms of four classic generative models and provides a systematic survey and categorization of tasks, including co-registration, super-resolution, enhancement, classification, segmentation, cross-modality, brain network analysis, and brain decoding. The paper also highlights the challenges and future directions of the latest work, with the expectation that future research can benefit from it.

16. K-space and image domain collaborative energy-based model for parallel MRI reconstruction. Magn Reson Imaging 2023; 99:110-122. PMID: 36796460. DOI: 10.1016/j.mri.2023.02.004.
Abstract
Decreasing magnetic resonance (MR) image acquisition times can make MR examinations more accessible. Prior art, including deep learning models, has been devoted to solving the problem of long MRI acquisition times. Recently, deep generative models have exhibited great potential in algorithmic robustness and usage flexibility. Nevertheless, none of the existing schemes can be learned from or applied to k-space measurements directly. Furthermore, how well deep generative models work in the hybrid domain is also worth investigating. In this work, taking advantage of deep energy-based models, we propose a k-space and image domain collaborative generative model to comprehensively estimate MR data from under-sampled measurements. Equipped with parallel and sequential orderings, experimental comparisons with the state of the art demonstrate that the proposed schemes achieve lower reconstruction error and are more stable under different acceleration factors.

17. A distribution information sharing federated learning approach for medical image data. Complex Intell Syst 2023; 9:1-12. PMID: 37361966. PMCID: PMC10052320. DOI: 10.1007/s40747-023-01035-1.
Abstract
In recent years, federated learning has been believed to play a considerable role in cross-silo scenarios (e.g., medical institutions) due to its privacy-preserving properties. However, the non-IID problem in federated learning between medical institutions is common, and it degrades the performance of traditional federated learning algorithms. To overcome this performance degradation, a novel distribution information sharing federated learning approach (FedDIS) for medical image classification is proposed that reduces non-IIDness across clients by generating data locally at each client from medical image data distributions shared by the others, while protecting patient privacy. First, a variational autoencoder (VAE) is federally trained, whose encoder maps the local original medical images into a hidden space; the distribution of the mapped data in the hidden space is estimated and then shared among the clients. Second, each client augments a new set of image data from the received distribution information using the VAE decoder. Finally, the clients use the local dataset along with the augmented dataset to train the final classification model in a federated manner. Experiments on an Alzheimer's disease MRI diagnosis task and the MNIST classification task show that the proposed method significantly improves the performance of federated learning under non-IID conditions.
|
18
|
Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models. Magn Reson Med 2023; 90:295-311. [PMID: 36912453 DOI: 10.1002/mrm.29624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 02/05/2023] [Accepted: 02/08/2023] [Indexed: 03/14/2023]
Abstract
PURPOSE We introduce a framework that enables efficient sampling from learned probability distributions for MRI reconstruction. METHODS Samples are drawn from the posterior distribution given the measured k-space using the Markov chain Monte Carlo (MCMC) method, in contrast to conventional deep learning-based MRI reconstruction techniques. In addition to the maximum a posteriori estimate for the image, which can be obtained by maximizing the log-likelihood indirectly or directly, the minimum mean square error estimate and uncertainty maps can also be computed from the drawn samples. The data-driven Markov chains are constructed with the score-based generative model learned from a given image database and are independent of the forward operator that is used to model the k-space measurement. RESULTS We numerically investigate the framework from these perspectives: (1) the interpretation of the uncertainty of the image reconstructed from undersampled k-space; (2) the effect of the number of noise scales used to train the generative models; (3) using a burn-in phase in MCMC sampling to reduce computation; (4) the comparison to conventional ℓ1-wavelet regularized reconstruction; (5) the transferability of learned information; and (6) the comparison to the fastMRI challenge. CONCLUSION A framework is described that connects the diffusion process and advanced generative models with Markov chains. We demonstrate its flexibility in terms of contrasts and sampling patterns using advanced generative priors and the benefits of also quantifying the uncertainty for every pixel.
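The posterior-sampling machinery described above can be sketched on a scalar toy problem where the posterior is known in closed form. The Gaussian prior, measurement model, step size, and burn-in length below are illustrative assumptions, not the paper's learned score network:

```python
import numpy as np

# Toy posterior sampling: scalar "image" x with prior N(0,1) and a noisy
# measurement y = x + n, n ~ N(0, sigma^2).
# grad log p(x|y) = -x + (y - x)/sigma^2; here the analytic posterior is
# N(y/(1+sigma^2), sigma^2/(1+sigma^2)), so the result can be checked.
sigma2, y = 1.0, 1.6
grad_log_post = lambda x: -x + (y - x) / sigma2

rng = np.random.default_rng(7)
x, step, samples = 0.0, 0.05, []
for k in range(20000):
    x = x + step * grad_log_post(x) + np.sqrt(2 * step) * rng.standard_normal()
    if k > 2000:                       # burn-in phase, as discussed in the abstract
        samples.append(x)

mmse_estimate = np.mean(samples)       # minimum-mean-square-error estimate
uncertainty = np.std(samples)          # per-pixel uncertainty analogue
```

With sigma² = 1 the analytic posterior mean is y/2 = 0.8, which the MMSE estimate recovers from the chain.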
|
19
|
Magnetic resonance imaging reconstruction using a deep energy-based model. NMR IN BIOMEDICINE 2023; 36:e4848. [PMID: 36262093 DOI: 10.1002/nbm.4848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Revised: 09/09/2022] [Accepted: 09/27/2022] [Indexed: 06/16/2023]
Abstract
Although recent deep energy-based generative models (EBMs) have shown encouraging results in many image-generation tasks, how to take advantage of the self-adversarial cogitation of deep EBMs to boost the performance of magnetic resonance imaging (MRI) reconstruction remains an open question. With the successful application of deep learning in a wide range of MRI reconstructions, a line of emerging research involves formulating an optimization-based reconstruction method in the space of a generative model. Leveraging this, a novel regularization strategy is introduced in this article that takes advantage of the self-adversarial cogitation of a deep energy-based model. More precisely, we advocate alternating learning by a more powerful energy-based model with maximum likelihood estimation to obtain the deep energy-based information, represented as a prior image. Simultaneously, implicit inference with Langevin dynamics is a unique property of the reconstruction. In contrast to other generative models for reconstruction, the proposed method utilizes the deep energy-based information as the image prior in reconstruction to improve image quality. Experimental results imply that the proposed technique achieves high reconstruction accuracy competitive with state-of-the-art methods and does not suffer from mode collapse. Algorithmically, an iterative approach is presented to strengthen EBM training with the gradient of the energy network. The robustness and reproducibility of the algorithm were also experimentally validated. More importantly, the proposed reconstruction framework can be generalized to most MRI reconstruction scenarios.
|
20
|
Manifold Learning via Linear Tangent Space Alignment (LTSA) for Accelerated Dynamic MRI With Sparse Sampling. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:158-169. [PMID: 36121938 PMCID: PMC10024645 DOI: 10.1109/tmi.2022.3207774] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The spatial resolution and temporal frame-rate of dynamic magnetic resonance imaging (MRI) can be improved by reconstructing images from sparsely sampled k-space data with mathematical modeling of the underlying spatiotemporal signals. These models include sparsity models, linear subspace models, and non-linear manifold models. This work presents a novel linear tangent space alignment (LTSA) model-based framework that exploits the intrinsic low-dimensional manifold structure of dynamic images for accelerated dynamic MRI. The performance of the proposed method was evaluated and compared to state-of-the-art methods using numerical simulation studies as well as 2D and 3D in vivo cardiac imaging experiments. The proposed method achieved the best performance in image reconstruction among all the compared methods. The proposed method could prove useful for accelerating many MRI applications, including dynamic MRI, multi-parametric MRI, and MR spectroscopic imaging.
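A core ingredient of LTSA is estimating each point's local tangent space by a principal component analysis of its neighborhood. This can be sketched on a toy 1-D manifold; the unit circle, neighborhood size, and sampling density are illustrative, not the dynamic MRI signal model:

```python
import numpy as np

# Sample a 1-D manifold (the unit circle) embedded in 2-D.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
X = np.stack([np.cos(t), np.sin(t)], axis=1)

def local_tangent(X, i, k=10):
    """Leading principal direction of the k nearest neighbours of X[i]."""
    d = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(d)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    # The first right singular vector spans the local tangent space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

tangent = local_tangent(X, 0)   # at (1, 0) the true tangent direction is (0, +/-1)
```

LTSA proper then aligns these per-point tangent coordinates into one global low-dimensional parametrization; the sketch shows only the local estimation step.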
|
21
|
A unified model for reconstruction and R2* mapping of accelerated 7T data using the quantitative recurrent inference machine. Neuroimage 2022; 264:119680. [PMID: 36240989 DOI: 10.1016/j.neuroimage.2022.119680] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 09/16/2022] [Accepted: 10/10/2022] [Indexed: 11/07/2022] Open
Abstract
Quantitative MRI (qMRI) acquired at the ultra-high field of 7 Tesla has been used in visualizing and analyzing subcortical structures. qMRI relies on the acquisition of multiple images with different scan settings, leading to extended scanning times. Data redundancy and prior information from the relaxometry model can be exploited by deep learning to accelerate the imaging process. We propose the quantitative Recurrent Inference Machine (qRIM), with a unified forward model for joint reconstruction and R2* mapping from sparse data, embedded in a Recurrent Inference Machine (RIM), an iterative inverse problem-solving network. To study the dependency of the proposed extension of the unified forward model on network architecture, we implemented and compared a quantitative End-to-End Variational Network (qE2EVN). Experiments were performed with high-resolution multi-echo gradient echo data of the brain at 7T from a cohort study covering the entire adult life span. The error in R2* reconstructed from undersampled data relative to reference data significantly decreased for the unified model compared to sequential image reconstruction and parameter fitting using the RIM. With increasing acceleration factor, an increasing reduction in the reconstruction error was observed, pointing to a larger benefit for sparser data. Qualitatively, this was consistent with an observed reduction of image blurriness in R2* maps. In contrast, when using the U-Net as network architecture, a negative bias in R2* in selected regions of interest was observed. Compressed sensing rendered accurate, but less precise, estimates of R2*. The qE2EVN showed slightly inferior reconstruction quality compared to the qRIM but better quality than the U-Net and compressed sensing. Subcortical maturation over age, measured by a linearly increasing interquartile range of R2* in the striatum, was preserved up to an acceleration factor of 9. With the integrated prior of the unified forward model, the proposed qRIM can exploit the redundancy among repeated measurements and the shared information between tasks, facilitating relaxometry in accelerated MRI.
|
22
|
Pyramid Convolutional RNN for MRI Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2033-2047. [PMID: 35192462 DOI: 10.1109/tmi.2022.3153849] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Fast and accurate MRI image reconstruction from undersampled data is crucial in clinical practice. Deep learning-based reconstruction methods have shown promising advances in recent years. However, recovering fine details from undersampled data is still challenging. In this paper, we introduce a novel deep learning-based method, Pyramid Convolutional RNN (PC-RNN), to reconstruct images from multiple scales. Based on the formulation of MRI reconstruction as an inverse problem, we design the PC-RNN model with three convolutional RNN (ConvRNN) modules to iteratively learn the features at multiple scales. Each ConvRNN module reconstructs images at a different scale, and the reconstructed images are combined by a final CNN module in a pyramid fashion. The multi-scale ConvRNN modules learn a coarse-to-fine image reconstruction. Unlike other common reconstruction methods for parallel imaging, PC-RNN does not employ coil sensitivity maps for multi-coil data and directly models the multiple coils as multi-channel inputs. A coil compression technique is applied to standardize data with various coil numbers, leading to more efficient training. We evaluate our model on the fastMRI knee and brain datasets, and the results show that the proposed model outperforms other methods and can recover more details. The proposed method is one of the winning solutions of the 2019 fastMRI competition.
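Coil compression of the kind mentioned above is commonly implemented with an SVD across the coil dimension, keeping only the dominant virtual coils. A hedged numpy sketch follows; the coil counts and simulated data are illustrative, not the paper's pipeline:

```python
import numpy as np

def compress_coils(kspace, n_virtual):
    """SVD-based coil compression; kspace has shape (n_coils, n_samples)."""
    u, s, vt = np.linalg.svd(kspace, full_matrices=False)
    # Project onto the n_virtual dominant virtual coils.
    compressed = u[:, :n_virtual].conj().T @ kspace
    retained = (s[:n_virtual] ** 2).sum() / (s ** 2).sum()  # energy fraction kept
    return compressed, retained

rng = np.random.default_rng(3)
# 12 physical coils whose signals are mixtures of 4 underlying sources,
# so 4 virtual coils capture essentially all the signal energy.
sources = rng.standard_normal((4, 256)) + 1j * rng.standard_normal((4, 256))
mixing = rng.standard_normal((12, 4))
kspace = mixing @ sources

virtual, retained = compress_coils(kspace, n_virtual=4)
```

Standardizing every scan to the same number of virtual coils is what lets a single network train on data acquired with different coil arrays.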
|
23
|
Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1747-1763. [PMID: 35085076 DOI: 10.1109/tmi.2022.3147426] [Citation(s) in RCA: 48] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.
|
24
|
Sampling Possible Reconstructions of Undersampled Acquisitions in MR Imaging With a Deep Learned Prior. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1885-1896. [PMID: 35143393 DOI: 10.1109/tmi.2022.3150853] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Undersampling the k-space during MR acquisitions saves time; however, it results in an ill-posed inversion problem, leading to an infinite set of images as possible solutions. Traditionally, this is tackled as a reconstruction problem by searching for a single "best" image out of this solution set according to some chosen regularization or prior. This approach, however, misses the possibility of other solutions and hence ignores the uncertainty in the inversion process. In this paper, we propose a method that instead returns multiple images which are possible under the acquisition model and the chosen prior, to capture the uncertainty in the inversion process. To this end, we introduce a low-dimensional latent space and model the posterior distribution of the latent vectors given the acquisition data in k-space, from which we can sample in the latent space and obtain the corresponding images. We use a variational autoencoder for the latent model and the Metropolis-adjusted Langevin algorithm for the sampling. We evaluate our method on two datasets: images from the Human Connectome Project and in-house measured multi-coil images. We compare to five alternative methods. Results indicate that the proposed method produces images that match the measured k-space data better than the alternatives, while showing realistic structural variability. Furthermore, in contrast to the compared methods, the proposed method yields higher uncertainty in the undersampled phase-encoding direction, as expected.
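The Metropolis-adjusted Langevin algorithm (MALA) used for the sampling can be sketched in a few lines on a toy 1-D target; the standard-normal target, step size, and chain length stand in for the learned latent posterior:

```python
import numpy as np

def mala(log_p, grad_log_p, x0, step, n, rng):
    """Metropolis-adjusted Langevin algorithm for a 1-D target density."""
    x, out = x0, []
    for _ in range(n):
        prop = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.standard_normal()
        # Log proposal densities q(prop|x) and q(x|prop) for the MH correction.
        fwd = -((prop - x - step * grad_log_p(x)) ** 2) / (4 * step)
        bwd = -((x - prop - step * grad_log_p(prop)) ** 2) / (4 * step)
        log_alpha = log_p(prop) - log_p(x) + bwd - fwd
        if np.log(rng.random()) < log_alpha:   # accept/reject step
            x = prop
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
# Standard normal target as a stand-in for the latent posterior p(z | k-space data).
chain = mala(lambda x: -0.5 * x * x, lambda x: -x, x0=3.0, step=0.2, n=5000, rng=rng)
```

Unlike unadjusted Langevin dynamics, the accept/reject correction makes the target density exactly stationary, at the cost of occasional rejections.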
|
25
|
A Compressed Reconstruction Network Combining Deep Image Prior and Autoencoding Priors for Single-Pixel Imaging. PHOTONICS 2022. [DOI: 10.3390/photonics9050343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Single-pixel imaging (SPI) is a promising imaging scheme based on compressive sensing. However, its application in high-resolution and real-time scenarios is a great challenge due to the long sampling and reconstruction times required. The Deep Learning Compressed Network (DLCNet) can avoid the long iterative operations required by traditional reconstruction algorithms and can achieve fast, high-quality reconstruction; hence, deep-learning-based SPI has attracted much attention. DLCNets learn prior distributions of real images from massive datasets, while the deep image prior (DIP) uses a neural network's own structural prior to solve inverse problems without requiring a lot of training data. This paper proposes a compressed reconstruction network (DPAP) based on the DIP for single-pixel imaging. DPAP is designed with two learning stages, which enables it to focus on statistical information of the image structure at different scales. To obtain prior information from the dataset, the measurement matrix is jointly optimized by a network, and multiple autoencoders are trained as regularization terms to be added to the loss function. Extensive simulations and practical experiments demonstrate that the proposed network outperforms existing algorithms.
|
26
|
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications. ELECTRONICS 2022. [DOI: 10.3390/electronics11040586] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent works using deep learning methods to solve the CS problem for image reconstruction, including the medical imaging modalities of computed tomography (CT), magnetic resonance imaging (MRI), and positron-emission tomography (PET). We propose a novel framework to unify traditional iterative algorithms and deep learning approaches. In short, we define two projection operators, toward the image prior and toward data consistency, respectively, and any reconstruction algorithm can be decomposed into these two parts. Although deep learning methods can be divided into several categories, they all satisfy the framework. We build the relationships between different deep learning reconstruction methods and connect them to traditional methods through the proposed framework. This also indicates that the key to solving the CS problem and its medical applications is how to depict the image prior. Based on the framework, we analyze the current deep learning methods and point out some important directions for future research.
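The two-operator decomposition described above can be instantiated with a classic iterative scheme: a gradient step toward data consistency alternated with a soft-thresholding step toward a sparsity prior (essentially ISTA). The random sensing matrix, sparsity level, step size, and threshold below are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, lam):
    """Prior projection: proximal map of the l1 sparsity prior."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
n, m = 100, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)   # undersampled sensing operator
x_true = np.zeros(n)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]         # sparse ground truth
y = A @ x_true                                 # compressed measurements

x = np.zeros(n)
step = 0.1
for _ in range(1000):
    x = x + step * A.T @ (y - A @ x)           # projection toward data consistency
    x = soft_threshold(x, lam=0.01)            # projection toward the image prior

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In the unified framework of the review, deep learning methods replace one or both projection operators with learned networks while keeping this alternating structure.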
|
27
|
Posterior temperature optimized Bayesian models for inverse problems in medical imaging. Med Image Anal 2022; 78:102382. [DOI: 10.1016/j.media.2022.102382] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 11/09/2021] [Accepted: 02/01/2022] [Indexed: 11/21/2022]
|
28
|
Iterative Reconstruction for Low-Dose CT using Deep Gradient Priors of Generative Model. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022. [DOI: 10.1109/trpms.2022.3148373] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
29
|
The augmented radiologist: artificial intelligence in the practice of radiology. Pediatr Radiol 2022; 52:2074-2086. [PMID: 34664088 PMCID: PMC9537212 DOI: 10.1007/s00247-021-05177-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 06/03/2021] [Accepted: 08/02/2021] [Indexed: 12/19/2022]
Abstract
In medicine, particularly in radiology, there are great expectations of artificial intelligence (AI), which can "see" more than human radiologists with regard to, for example, tumor size, shape, morphology, texture, and kinetics, thus enabling better care through earlier detection or more precise reports. Another point is that AI can handle large data sets in high-dimensional spaces. But it should not be forgotten that AI is only as good as the training samples available, which should ideally be numerous enough to cover all variants. On the other hand, the main features of human intelligence are content knowledge and the ability to find near-optimal solutions. The purpose of this paper is to review the current complexity of radiology workplaces and to describe their advantages and shortcomings. Further, we give an overview of the different types and features of AI as used so far. We also touch on the differences between AI and human intelligence in problem-solving. We present a new AI type, labeled "explainable AI," which should enable a balance and cooperation between AI and human intelligence, thus bringing both worlds into compliance with legal requirements. To support (pediatric) radiologists, we propose the creation of an AI assistant that augments radiologists and keeps their brains free for generic tasks.
|
30
|
Abstract
We propose a novel unsupervised deep-learning-based algorithm for dynamic magnetic resonance imaging (MRI) reconstruction. Dynamic MRI requires rapid data acquisition for the study of moving organs such as the heart. We introduce a generalized version of the deep-image-prior approach, which optimizes the weights of a reconstruction network to fit a sequence of sparsely acquired dynamic MRI measurements. Our method needs neither prior training nor additional data. In particular, for cardiac images, it does not require the marking of heartbeats or the reordering of spokes. The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space. Our method outperforms the state-of-the-art methods quantitatively and qualitatively in both retrospective and real fetal cardiac datasets. To the best of our knowledge, this is the first unsupervised deep-learning-based method that can reconstruct the continuous variation of dynamic MRI sequences with high spatial resolution.
|
31
|
Homotopic Gradients of Generative Density Priors for MR Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3265-3278. [PMID: 34010128 DOI: 10.1109/tmi.2021.3081677] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Deep learning, particularly generative modeling, has recently demonstrated tremendous potential to significantly speed up image reconstruction from reduced measurements. Rather than optimizing density priors as existing generative models often do, in this work, by taking advantage of denoising score matching, homotopic gradients of generative density priors (HGGDP) are exploited for magnetic resonance imaging (MRI) reconstruction. More precisely, to tackle the low-dimensional manifold and low data density region issues in the generative density prior, we estimate the target gradients in a higher-dimensional space. We train a more powerful noise-conditional score network by forming a high-dimensional tensor as the network input at the training phase, and more artificial noise is also injected in the embedding space. At the reconstruction stage, a homotopy method is employed to pursue the density prior, so as to boost the reconstruction performance. Experimental results imply the remarkable performance of HGGDP in terms of high reconstruction accuracy; with only 10% of the k-space data, it can still generate images of high quality as effectively as standard MRI reconstruction with fully sampled data.
|
32
|
Cast suppression in radiographs by generative adversarial networks. J Am Med Inform Assoc 2021; 28:2687-2694. [PMID: 34613393 DOI: 10.1093/jamia/ocab192] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2021] [Revised: 08/06/2021] [Accepted: 08/30/2021] [Indexed: 11/14/2022] Open
Abstract
Injured extremities commonly need to be immobilized by casts to allow proper healing. We propose a method to suppress cast superimpositions in pediatric wrist radiographs based on the cycle generative adversarial network (CycleGAN) model. We retrospectively reviewed unpaired pediatric wrist radiographs (n = 9672) and sampled them into 2 equal groups, with and without cast. The test subset consisted of 718 radiographs with cast. We evaluated different quadratic input sizes (256, 512, and 1024 pixels) for U-Net and ResNet-based CycleGAN architectures in cast suppression, quantitatively and qualitatively. The mean age was 11 ± 3 years in images containing cast (n = 4836), and 11 ± 4 years in castless samples (n = 4836). A total of 5956 X-rays had been done in males and 3716 in females. A U-Net 512 CycleGAN performed best (P ≤ .001). CycleGAN models successfully suppressed casts in pediatric wrist radiographs, allowing the development of a related software tool for radiology image viewers.
|
33
|
Learning Data Consistency and its Application to Dynamic MR Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3140-3153. [PMID: 34252025 DOI: 10.1109/tmi.2021.3096232] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Magnetic resonance (MR) image reconstruction from undersampled k-space data can be formulated as a minimization problem involving data consistency and an image prior. Existing deep learning (DL)-based methods for MR reconstruction employ deep networks to exploit the prior information and integrate the prior knowledge into the reconstruction under the explicit constraint of data consistency, without considering the real distribution of the noise. In this work, we propose a new DL-based approach termed Learned DC that implicitly learns the data consistency with deep networks, corresponding to the actual probability distribution of system noise. The data consistency term and the prior knowledge are both embedded in the weights of the networks, which provides an utterly implicit manner of learning the reconstruction model. We evaluated the proposed approach with highly undersampled dynamic data, including dynamic cardiac cine data with up to 24-fold acceleration and dynamic rectum data with an acceleration factor equal to the number of phases. Experimental results demonstrate that Learned DC outperforms the state of the art both quantitatively and qualitatively.
|
34
|
Evaluation of Iterative Denoising 3-Dimensional T2-Weighted Turbo Spin Echo for the Diagnosis of Deep Infiltrating Endometriosis. Invest Radiol 2021; 56:637-644. [PMID: 33813570 DOI: 10.1097/rli.0000000000000786] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES The primary end point of this study was to evaluate the image quality and reliability of a highly accelerated 3-dimensional T2 turbo spin echo (3D-T2-TSE) sequence with prototype iterative denoising (ID) reconstruction compared with conventional 2D T2 sequences for the diagnosis of deep infiltrating endometriosis (DIE). The secondary end point was to demonstrate the image quality improvement of the 3D-T2-TSE sequence using ID reconstruction. MATERIALS AND METHODS Patients referred to our institution for pelvic magnetic resonance imaging because of suspected endometriosis were prospectively enrolled over a 4-month period. Both conventional 2D-T2 (sagittal, axial, and coronal T2 oblique to the cervix) and 3D-T2-TSE sequences were performed, with scan times of 7 minutes 43 seconds and 4 minutes 58 seconds, respectively. Reconstructions with prototype ID (3D-T2-denoised) and without it (3D-T2) were generated inline at the end of the acquisition. Two radiologists independently evaluated the image quality of the 3D-T2, 3D-T2-denoised, and 2D-T2 sequences. Diagnostic confidence for DIE was evaluated for both the 3D-T2-denoised and 2D-T2 sequences. Intraobserver and interobserver agreements were calculated using the Cohen κ coefficient. RESULTS Ninety female patients were included. Both readers found that the ID algorithm significantly improved the image quality and decreased the artifacts of the 3D-T2-denoised compared with the 3D-T2 sequences (P < 0.001). A significant image quality improvement was found by one radiologist for the 3D-T2-denoised compared with the 2D-T2 sequences (P = 0.002), whereas the other reader evidenced no significant difference. The interobserver agreement of the 3D-T2-denoised and 2D-T2 sequences was 0.84 (0.73-0.95) and 0.78 (0.65-0.9), respectively, for the diagnosis of DIE. Intraobserver agreement for readers 1 and 2 was 0.86 (0.79-1) and 0.83 (0.76-1), respectively. For all localizations of DIE, interobserver and intraobserver agreements were either almost perfect or substantial for both the 3D-T2-denoised and 2D-T2 sequences. CONCLUSIONS Three-dimensional T2-denoised imaging is a promising tool to replace conventional 2D-T2 sequences, offering a significant scan time reduction without compromising image quality or diagnostic information for the assessment of DIE.
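The Cohen κ coefficient used for the agreement analysis can be computed directly from the two raters' labels as (observed agreement − chance agreement) / (1 − chance agreement). A small self-contained sketch with made-up ratings (not the study's data):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' nominal labels."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                          # observed agreement
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical binary DIE ratings (present=1/absent=0) from two readers.
r1 = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
r2 = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(r1, r2)
```

Here po = 0.8 and pe = 0.52, giving κ ≈ 0.583, which would be read as "moderate to substantial" agreement on the usual scale.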
|
35
|
Complementary time-frequency domain networks for dynamic parallel MR image reconstruction. Magn Reson Med 2021; 86:3274-3291. [PMID: 34254355 DOI: 10.1002/mrm.28917] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Revised: 06/10/2021] [Accepted: 06/14/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE To introduce a novel deep learning-based approach for fast and high-quality dynamic multicoil MR reconstruction by learning a complementary time-frequency domain network that exploits spatiotemporal correlations simultaneously from complementary domains. THEORY AND METHODS Dynamic parallel MR image reconstruction is formulated as a multivariable minimization problem, where the data are regularized in the combined temporal Fourier and spatial (x-f) domain as well as in the spatiotemporal image (x-t) domain. An iterative algorithm based on a variable splitting technique is derived, which alternates among signal de-aliasing steps in x-f and x-t spaces, a closed-form point-wise data consistency step, and a weighted coupling step. The iterative model is embedded into a deep recurrent neural network which learns to recover the image by exploiting spatiotemporal redundancies in complementary domains. RESULTS Experiments were performed on two datasets of highly undersampled multicoil short-axis cardiac cine MRI scans. Results demonstrate that our proposed method outperforms the current state-of-the-art approaches both quantitatively and qualitatively. The proposed model can also generalize well to data acquired from a different scanner and data with pathologies that were not seen in the training set. CONCLUSION The work shows the benefit of reconstructing dynamic parallel MRI in complementary time-frequency domains with deep neural networks. The method can effectively and robustly reconstruct high-quality images from highly undersampled dynamic multicoil data (16× and 24×, yielding 15-s and 10-s scan times, respectively) with fast reconstruction speed (2.8 seconds). This could potentially facilitate achieving fast single-breath-hold clinical 2D cardiac cine imaging.
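The x-t and x-f representations referred to above are related by a Fourier transform along the temporal axis only; because the transform is exactly invertible and norm-preserving, de-aliasing steps can alternate between the two domains without loss. A minimal numpy sketch with illustrative dimensions:

```python
import numpy as np

# A dynamic series: space along axis 0 (x), cardiac phases along axis 1 (t).
rng = np.random.default_rng(0)
x_t = rng.standard_normal((32, 24)) + 1j * rng.standard_normal((32, 24))

# x-f domain: unitary Fourier transform along the temporal axis only.
x_f = np.fft.fft(x_t, axis=1, norm="ortho")

# Regularization can act in either domain; the transform round-trips exactly,
# so alternating de-aliasing steps in x-f and x-t spaces is consistent.
x_t_back = np.fft.ifft(x_f, axis=1, norm="ortho")
```

With `norm="ortho"` the transform is unitary, so signal energy is identical in both domains (Parseval).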
|
36
|
Deep learning for fast MR imaging: A review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102579] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
37
|
A CNN-Based Autoencoder and Machine Learning Model for Identifying Betel-Quid Chewers Using Functional MRI Features. Brain Sci 2021; 11:brainsci11060809. [PMID: 34207169 PMCID: PMC8234239 DOI: 10.3390/brainsci11060809] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 06/11/2021] [Accepted: 06/16/2021] [Indexed: 11/17/2022] Open
Abstract
Betel quid (BQ) is one of the most commonly used psychoactive substances in some parts of Asia and the Pacific. Although some studies have shown brain function alterations in BQ chewers, it is virtually impossible for radiologists to visually distinguish MRI maps of BQ chewers from others. In this study, we aimed to construct autoencoder and machine-learning models to discover brain alterations in BQ chewers based on features of resting-state functional magnetic resonance imaging (rs-fMRI). Resting-state fMRI was obtained from 16 BQ chewers, 15 tobacco- and alcohol-user controls (TA), and 17 healthy controls (HC). A convolutional neural network (CNN)-based autoencoder model and the supervised machine learning algorithm logistic regression (LR) were used to discriminate BQ chewers from TA and HC. Classifying the brain MRIs of HC, TA controls, and BQ chewers using leave-one-out cross-validation (LOOCV) resulted in a highest accuracy of 83%, attained by LR with two rs-fMRI feature sets. In our research, we constructed an autoencoder and machine-learning model able to identify BQ chewers from among TA controls and HC based on rs-fMRI data, which might provide a helpful approach for tracking BQ chewers in the future.
Collapse
|
38
|
Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022] Open
|
39
|
Functional and Structural Connectome Features for Machine Learning Chemo-Brain Prediction in Women Treated for Breast Cancer with Chemotherapy. Brain Sci 2020; 10:brainsci10110851. [PMID: 33198294 PMCID: PMC7696512 DOI: 10.3390/brainsci10110851] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Revised: 11/07/2020] [Accepted: 11/11/2020] [Indexed: 12/12/2022] Open
Abstract
Breast cancer is the leading cancer among women worldwide, and many breast cancer patients struggle with psychological and cognitive disorders. In this study, we aim to use machine learning models to discriminate between chemo-brain participants and healthy controls (HCs) using connectomes (connectivity matrices) and topological coefficients. Nineteen female post-chemotherapy breast cancer (BC) survivors and 20 female HCs were recruited for this study. Participants in both groups received resting-state functional magnetic resonance imaging (rs-fMRI) and generalized q-sampling imaging (GQI). Logistic regression (LR), the decision tree classifier (CART), and XGBoost (XGB) were the models we adopted for classification. In the connectome analysis, LR achieved an accuracy of 79.49% with the functional connectomes and an accuracy of 71.05% with the structural connectomes. In the topological coefficient analysis, accuracies of 87.18%, 82.05%, and 83.78% were obtained by the functional global efficiency with CART, the functional global efficiency with XGB, and the structural transitivity with CART, respectively. The areas under the curve (AUCs) were 0.93, 0.94, 0.87, 0.88, and 0.84, respectively. Our study showed the discriminating ability of functional connectomes, structural connectomes, and global efficiency. We hope our findings can contribute to an understanding of the chemo brain and the establishment of a clinical system for tracking it.
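The global-efficiency coefficient used as a feature above has a standard graph-theoretic definition: the mean inverse shortest-path length over all ordered node pairs. A minimal NumPy sketch, assuming an unweighted adjacency matrix rather than the paper's thresholded connectomes:

```python
import numpy as np

def global_efficiency(adj):
    """Global efficiency of an unweighted graph: mean inverse shortest-path
    length over ordered node pairs (disconnected pairs contribute 0)."""
    n = len(adj)
    d = np.where(adj > 0, 1.0, np.inf)   # one-hop distances
    np.fill_diagonal(d, 0.0)
    for k in range(n):                   # Floyd-Warshall shortest paths
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    inv = np.zeros_like(d)
    off = ~np.eye(n, dtype=bool)
    inv[off] = 1.0 / d[off]              # 1/inf == 0 handles disconnection
    return inv.sum() / (n * (n - 1))
```

For a complete graph this returns 1.0; sparser, less integrated networks score lower, which is what makes it a candidate chemo-brain marker.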
Collapse
|
40
|
Subsampled brain MRI reconstruction by generative adversarial neural networks. Med Image Anal 2020; 65:101747. [PMID: 32593933 DOI: 10.1016/j.media.2020.101747] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 05/10/2020] [Accepted: 06/01/2020] [Indexed: 01/27/2023]
Abstract
A main challenge in magnetic resonance imaging (MRI) is speeding up scan time. Beyond improving patient experience and reducing operational costs, faster scans are essential for time-sensitive imaging, such as fetal, cardiac, or functional MRI, where temporal resolution is important and target movement is unavoidable, yet must be reduced. Current MRI acquisition methods speed up scan time at the expense of lower spatial resolution and costlier hardware. We introduce a practical, software-only framework, based on deep learning, for accelerating MRI acquisition while maintaining anatomically meaningful imaging. This is accomplished by MRI subsampling followed by estimating the missing k-space samples via generative adversarial neural networks. A generator-discriminator interplay enables the introduction of an adversarial cost in addition to the fidelity and image-quality losses used for optimizing the reconstruction. Promising reconstruction results are obtained from feasible sampling patterns of up to a fivefold acceleration on diverse brain MRIs, from a large publicly available dataset of healthy adult scans as well as multimodal acquisitions of multiple sclerosis patients and dynamic contrast-enhanced MRI (DCE-MRI) sequences of stroke and tumor patients. Clinical usability of the reconstructed MRI scans is assessed by performing either lesion or healthy-tissue segmentation and comparing the results to those obtained from the original, fully sampled images. Reconstruction quality and usability of the DCE-MRI sequences are demonstrated by calculating the pharmacokinetic (PK) parameters. The proposed MRI reconstruction approach is shown to outperform state-of-the-art methods on all datasets tested in terms of the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and either the mean squared error (MSE) with respect to the PK parameters calculated for the fully sampled DCE-MRI sequences or the segmentation compatibility, measured in terms of Dice scores and Hausdorff distance. The code is available on GitHub.
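The adversarial cost combined with fidelity and image-quality losses, as described above, can be sketched as one composite generator objective. The weights, the discriminator score, and the specific terms are illustrative assumptions, not the paper's values:

```python
import numpy as np

def generator_loss(recon_k, measured_k, mask, recon_img, ref_img,
                   d_score, w_fid=1.0, w_img=1.0, w_adv=0.01):
    """Composite generator objective: k-space fidelity on the sampled
    entries, an image-domain L1 quality term, and a non-saturating
    adversarial term that rewards fooling the discriminator."""
    fid = np.mean(np.abs(mask * (recon_k - measured_k)) ** 2)
    img = np.mean(np.abs(recon_img - ref_img))
    adv = -np.log(np.clip(d_score, 1e-8, 1.0))  # d_score = D's "real" probability
    return w_fid * fid + w_img * img + w_adv * adv
```

A perfect reconstruction that the discriminator accepts as real drives all three terms, and hence the loss, to zero.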
Collapse
|
41
|
Explainable Anatomical Shape Analysis Through Deep Hierarchical Generative Models. IEEE Trans Med Imaging 2020; 39:2088-2099. [PMID: 31944949 PMCID: PMC7269693 DOI: 10.1109/tmi.2020.2964499] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as on an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three dimensions of both global and regional anatomical features which better discriminate between the conditions under examination. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.
Collapse
|
42
|
On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc Natl Acad Sci U S A 2020; 117:30088-30095. [PMID: 32393633 DOI: 10.1073/pnas.1907377117] [Citation(s) in RCA: 184] [Impact Index Per Article: 46.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023] Open
Abstract
Deep learning, due to its unprecedented success in tasks such as image classification, has emerged as a new tool in image reconstruction with potential to change the field. In this paper, we demonstrate a crucial phenomenon: Deep learning typically yields unstable methods for image reconstruction. The instabilities usually occur in several forms: 1) Certain tiny, almost undetectable perturbations, both in the image and sampling domain, may result in severe artefacts in the reconstruction; 2) a small structural change, for example, a tumor, may not be captured in the reconstructed image; and 3) (a counterintuitive type of instability) more samples may yield poorer performance. Our stability test with algorithms and easy-to-use software detects the instability phenomena. The test is aimed at researchers, to test their networks for instabilities, and for government agencies, such as the Food and Drug Administration (FDA), to secure safe use of deep learning methods.
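The perturbation instability described in point 1 can be probed, in spirit, with a simple random search over the measurement domain; the paper's test uses stronger gradient-based attacks, and the reconstruction map here is a stand-in for a trained network:

```python
import numpy as np

def worst_case_perturbation(recon, y, eps, trials=200, seed=0):
    """Random-search stability probe: find a measurement perturbation r with
    ||r|| <= eps that maximally changes the reconstruction recon(y)."""
    rng = np.random.default_rng(seed)
    base = recon(y)
    best_r, best_gap = np.zeros_like(y), 0.0
    for _ in range(trials):
        r = rng.standard_normal(y.shape)
        r *= eps / np.linalg.norm(r)        # push onto the eps-sphere
        gap = np.linalg.norm(recon(y + r) - base)
        if gap > best_gap:
            best_r, best_gap = r, gap
    return best_r, best_gap
```

A stable (1-Lipschitz) reconstruction map bounds the output gap by eps; an unstable network can amplify it into the severe artefacts the paper documents.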
Collapse
|
43
|
MRI reconstruction using deep Bayesian estimation. Magn Reson Med 2020; 84:2246-2261. [PMID: 32274850 DOI: 10.1002/mrm.28274] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 03/11/2020] [Accepted: 03/11/2020] [Indexed: 11/11/2022]
Abstract
PURPOSE To develop a deep learning-based Bayesian estimation for MRI reconstruction. METHODS We modeled the MRI reconstruction problem with Bayes's theorem, following the recently proposed PixelCNN++ method. The image reconstruction from incomplete k-space measurements was obtained by maximizing the posterior probability. A generative network was utilized as the image prior, which was computationally tractable, and the k-space data fidelity was enforced by using an equality constraint. Stochastic backpropagation was utilized to calculate the descent gradient in the maximum a posteriori estimation, and a projected subgradient method was used to impose the equality constraint. In contrast to other deep learning reconstruction methods, the proposed one used the likelihood of the prior as both the training loss and the objective function in reconstruction to improve the image quality. RESULTS The proposed method showed improved performance in preserving image details and reducing aliasing artifacts, compared with GRAPPA, ℓ1-ESPIRiT, the model-based deep learning architecture for inverse problems (MODL), and the variational network (VN), the last two being state-of-the-art deep learning reconstruction methods. The proposed method generally achieved more than 3 dB peak signal-to-noise ratio improvement over the other methods for compressed sensing and parallel imaging reconstructions. CONCLUSIONS The Bayesian estimation significantly improved the reconstruction performance compared with the conventional ℓ1-sparsity prior in compressed sensing reconstruction tasks. More importantly, the proposed reconstruction framework can be generalized to most MRI reconstruction scenarios.
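The projected-subgradient scheme described above — a gradient step driven by the prior, followed by re-imposition of the k-space equality constraint — can be sketched with NumPy FFTs. This is a schematic under simplifying assumptions: a single-coil Fourier model, and a toy stand-in gradient in place of the generative network's learned prior:

```python
import numpy as np

def project_data_consistency(x, y, mask):
    """Project image x onto {x : mask * F(x) = y} by overwriting the
    sampled k-space entries with the measurements."""
    k = np.fft.fft2(x)
    k = np.where(mask, y, k)
    return np.fft.ifft2(k)

def map_reconstruct(y, mask, prior_grad, step=0.1, iters=50):
    """Projected-(sub)gradient MAP sketch: ascend the prior log-likelihood,
    then re-impose the k-space equality constraint each iteration."""
    x = np.fft.ifft2(np.where(mask, y, 0))   # zero-filled initialization
    for _ in range(iters):
        x = x + step * prior_grad(x)         # prior term (stand-in gradient)
        x = project_data_consistency(x, y, mask)
    return x
```

Because the projection is the last operation of every iteration, the returned image satisfies the measured k-space entries exactly (up to FFT round-off), whatever prior is plugged in.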
Collapse
|
44
|
Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians. J Magn Reson Imaging 2020; 53:1015-1028. [PMID: 32048372 DOI: 10.1002/jmri.27078] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 01/15/2020] [Accepted: 01/17/2020] [Indexed: 12/22/2022] Open
Abstract
Artificial intelligence (AI) shows tremendous promise in the field of medical imaging, with recent breakthroughs applying deep-learning models for data acquisition, classification problems, segmentation, image synthesis, and image reconstruction. With an eye towards clinical applications, we summarize the active field of deep-learning-based MR image reconstruction. We review the basic concepts of how deep-learning algorithms aid in the transformation of raw k-space data to image data, and specifically examine accelerated imaging and artifact suppression. Recent efforts in these areas show that deep-learning-based algorithms can match and, in some cases, eclipse conventional reconstruction methods in terms of image quality and computational efficiency across a host of clinical imaging applications, including musculoskeletal, abdominal, cardiac, and brain imaging. This article is an introductory overview aimed at clinical radiologists with no experience in deep-learning-based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
Collapse
|
45
|
Reference-Driven Compressed Sensing MR Image Reconstruction Using Deep Convolutional Neural Networks without Pre-Training. Sensors (Basel) 2020; 20:E308. [PMID: 31935887 PMCID: PMC6982784 DOI: 10.3390/s20010308] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2019] [Revised: 01/03/2020] [Accepted: 01/04/2020] [Indexed: 01/31/2023]
Abstract
Deep learning has proven able to reduce the scanning time of Magnetic Resonance Imaging (MRI) and to improve image reconstruction quality since it was introduced into Compressed Sensing MRI (CS-MRI). However, the requirement of large, high-quality, patient-based datasets for network training remains a challenge in clinical applications. In this paper, we propose a novel deep learning-based compressed sensing MR image reconstruction method that does not require any pre-training procedure or training dataset, thereby largely reducing clinician dependence on patient-based datasets. The proposed method is based on the Deep Image Prior (DIP) framework and uses a high-resolution reference MR image as the input of the convolutional neural network in order to induce a structural prior in the learning procedure. This reference-driven strategy improves the efficiency and effectiveness of network learning. We then add a k-space data correction step to enforce consistency of the k-space data with the measurements, which further improves the reconstruction accuracy. Experiments on in vivo MR datasets showed that the proposed method achieves more accurate reconstruction results from undersampled k-space data.
Collapse
|
46
|
Prediction of Potential miRNA-Disease Associations Through a Novel Unsupervised Deep Learning Framework with Variational Autoencoder. Cells 2019; 8:cells8091040. [PMID: 31489920 PMCID: PMC6770222 DOI: 10.3390/cells8091040] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2019] [Revised: 08/31/2019] [Accepted: 09/02/2019] [Indexed: 12/22/2022] Open
Abstract
The important role of microRNAs (miRNAs) in the formation, development, diagnosis, and treatment of diseases has recently attracted much attention among researchers. In this study, we present an unsupervised deep learning model, the variational autoencoder for miRNA–disease association prediction (VAEMDA). By combining the integrated miRNA similarity and the integrated disease similarity with the known miRNA–disease associations, we constructed two spliced matrices, each of which was used to train a variational autoencoder (VAE). The final predicted association scores between miRNAs and diseases were obtained by integrating the scores from the two trained VAE models. Unlike previous models, VAEMDA can avoid the noise introduced by the random selection of negative samples and reveal associations between miRNAs and diseases from the perspective of data distribution. Compared with previous methods, VAEMDA obtained higher areas under the receiver operating characteristic curve (AUCs) of 0.9118, 0.8652, and 0.9091 ± 0.0065 in global leave-one-out cross-validation (LOOCV), local LOOCV, and five-fold cross-validation, respectively. Further, the AUCs of VAEMDA were 0.8250 and 0.8237 in global and local leave-one-disease-out cross-validation (LODOCV), respectively. In three types of case studies on three important diseases, most of the top 50 potentially associated miRNAs were verified by databases and the literature.
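The VAE training objective underlying a model like VAEMDA combines a reconstruction term with a closed-form KL divergence to the standard normal prior, with latents sampled via the reparameterization trick. A minimal sketch of those standard pieces, not the paper's architecture:

```python
import numpy as np

def elbo_terms(x, x_hat, mu, logvar):
    """Negative-ELBO pieces for a Gaussian-latent VAE: squared-error
    reconstruction term plus the closed-form KL to the standard normal."""
    recon = np.sum((x - x_hat) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon, kl

def reparameterize(mu, logvar, rng):
    """Sample z ~ N(mu, diag(exp(logvar))) as a differentiable transform
    of standard normal noise (the reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
```

The KL term vanishes exactly when the encoder posterior matches the prior (mu = 0, logvar = 0), which is what regularizes the latent space toward the data distribution the abstract emphasizes.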
Collapse
|
47
|
Highly undersampled magnetic resonance imaging reconstruction using autoencoding priors. Magn Reson Med 2019; 83:322-336. [DOI: 10.1002/mrm.27921] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Revised: 06/14/2019] [Accepted: 07/09/2019] [Indexed: 11/06/2022]
|
48
|
VS-Net: Variable Splitting Network for Accelerated Parallel MRI Reconstruction. Lect Notes Comput Sci 2019. [DOI: 10.1007/978-3-030-32251-9_78] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
|