51. Li S, Wang Z, Ding Z, She H, Du YP. Accelerated four-dimensional free-breathing whole-liver water-fat magnetic resonance imaging with deep dictionary learning and chemical shift modeling. Quant Imaging Med Surg 2024;14:2884-2903. PMID: 38617145; PMCID: PMC11007520; DOI: 10.21037/qims-23-1396.
Abstract
Background Multi-echo chemical-shift-encoded magnetic resonance imaging (MRI) is widely used for fat quantification and fat suppression in clinical liver examinations. Clinical liver water-fat imaging typically requires breath-hold acquisitions; free-breathing acquisition is more comfortable for patients but can take up to several minutes. The purpose of this study is to accelerate four-dimensional free-breathing whole-liver water-fat MRI by jointly using high-dimensional deep dictionary learning and model-guided (MG) reconstruction. Methods A high-dimensional model-guided deep dictionary learning (HMDDL) algorithm is proposed for the acceleration. The HMDDL combines the powers of the high-dimensional dictionary learning neural network (hdDLNN) and the chemical shift model. The neural network exploits the prior information of the dynamic multi-echo data jointly along the spatial, respiratory-motion, and echo dimensions. The chemical shift model guides the reconstruction of field maps, R2* maps, water images, and fat images. Data acquired from ten healthy subjects and ten subjects with clinically diagnosed nonalcoholic fatty liver disease (NAFLD) were selected for training, data from one healthy subject and two NAFLD subjects for validation, and data from five healthy subjects and five NAFLD subjects for testing. A three-dimensional (3D) blipped golden-angle stack-of-stars multi-gradient-echo pulse sequence was designed to accelerate data acquisition. Retrospectively undersampled data were used for training, and prospectively undersampled data were used for testing. The performance of the HMDDL was evaluated in comparison with the compressed sensing-based water-fat separation (CS-WF) algorithm and a parallel non-Cartesian recurrent neural network (PNCRNN) algorithm.
Results Four-dimensional water-fat images with ten motion states for the whole liver are demonstrated at several acceleration factors (R). In comparison with CS-WF and PNCRNN, the HMDDL improved the mean peak signal-to-noise ratio (PSNR) of images by 9.93 and 2.20 dB, respectively, and improved the mean structural similarity (SSIM) by 0.058 and 0.009, respectively, at R=10. A paired t-test showed no significant difference between HMDDL and ground truth for proton-density fat fraction (PDFF) and R2* values at R up to 10. Conclusions The proposed HMDDL exploits features of water and fat images in the highly undersampled multi-echo data along the spatial, respiratory-motion, and echo dimensions to improve the performance of accelerated four-dimensional (4D) free-breathing water-fat imaging.
Affiliation(s)
- Shuo Li, Zhijun Wang, Zekang Ding, Huajun She, Yiping P Du: National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
52. Yarach U, Chatnuntawech I, Setsompop K, Suwannasak A, Angkurawaranon S, Madla C, Hanprasertpong C, Sangpin P. Improved reconstruction for highly accelerated propeller diffusion 1.5 T clinical MRI. MAGMA 2024;37:283-294. PMID: 38386154; DOI: 10.1007/s10334-023-01142-7.
Abstract
PURPOSE PROPELLER fast-spin-echo diffusion magnetic resonance imaging (FSE-dMRI) is essential for the diagnosis of cholesteatoma. However, at clinical 1.5 T MRI, its signal-to-noise ratio (SNR) remains relatively low. To gain sufficient SNR, signal averaging (number of excitations, NEX) is usually used at the cost of prolonged scan time. In this work, we leveraged the benefits of locally low rank (LLR) constrained reconstruction to enhance the SNR, and further improved both speed and SNR by employing convolutional neural networks (CNNs) for accelerated PROPELLER FSE-dMRI on a 1.5 T clinical scanner. METHODS A residual U-Net (RU-Net) was found to be efficient for PROPELLER FSE-dMRI data. It was trained to predict 2-NEX images obtained by LLR constrained reconstruction, using 1-NEX images obtained via simplified reconstruction as the inputs. Brain scans from healthy volunteers and patients with cholesteatoma were acquired for model training and testing. The performance of the trained networks was evaluated with normalized root-mean-square error (NRMSE), structural similarity index measure (SSIM), and peak SNR (PSNR). RESULTS For 4× undersampled data with 7 blades, online reconstruction provided suboptimal images: some small details were missing due to strong noise interference. Offline LLR reconstruction suppressed the noise and recovered some small structures. RU-Net demonstrated further improvement over LLR, increasing PSNR by 18.87% and SSIM by 2.11% and reducing NRMSE by 53.84%. Moreover, RU-Net is about 1500× faster than LLR (0.03 vs. 47.59 s/slice). CONCLUSION LLR remarkably enhances the SNR compared with online reconstruction, and RU-Net further improves PROPELLER FSE-dMRI as reflected in PSNR, SSIM, and NRMSE. It requires only 1-NEX data, which allows a 2× scan-time reduction, and its inference is approximately 1500 times faster than LLR-constrained reconstruction.
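The patch-wise low-rank constraint at the heart of LLR reconstruction can be illustrated in a few lines. The sketch below applies singular-value soft-thresholding (the proximal operator of the nuclear norm) to local Casorati matrices built from a stack of repeated acquisitions; the function names, patch size, and threshold are illustrative choices, not the paper's implementation.

```python
import numpy as np

def svt(mat, tau):
    """Singular-value soft-thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(mat, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def llr_denoise(images, patch=8, tau=0.1):
    """Threshold each spatial patch across the repetition (e.g., NEX) axis.

    images: (n, H, W) stack of mutually consistent acquisitions.
    Each patch is reshaped to a (patch*patch, n) Casorati matrix, which is
    approximately low-rank; edge patches are ignored for brevity.
    """
    n, H, W = images.shape
    out = images.copy()
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            block = images[:, y:y + patch, x:x + patch].reshape(n, -1).T
            out[:, y:y + patch, x:x + patch] = svt(block, tau).T.reshape(n, patch, patch)
    return out
```

With tau=0 the operation is an identity; increasing tau suppresses the small singular values that carry mostly noise while retaining the dominant structure shared across repetitions.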
Affiliation(s)
- Uten Yarach: Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand
- Itthi Chatnuntawech: National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
- Kawin Setsompop: Department of Radiology, Stanford University, Stanford, CA, USA
- Atita Suwannasak: Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand
- Salita Angkurawaranon: Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Chakri Madla: Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Charuk Hanprasertpong: Department of Otolaryngology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
53. Li B, Hu W, Feng CM, Li Y, Liu Z, Xu Y. Multi-Contrast Complementary Learning for Accelerated MR Imaging. IEEE J Biomed Health Inform 2024;28:1436-1447. PMID: 38157466; DOI: 10.1109/jbhi.2023.3348328.
Abstract
Thanks to its ability to depict high-resolution anatomical information, magnetic resonance imaging (MRI) has become an essential non-invasive scanning technique in clinical practice. However, long acquisition times often degrade image quality and cause discomfort for subjects, hindering wider adoption. Besides reconstructing images from the undersampled acquisition itself, multi-contrast MRI protocols offer promising solutions by leveraging additional morphological priors for the target modality. Nevertheless, previous multi-contrast techniques mainly adopt a simple fusion mechanism that inevitably discards valuable knowledge. In this work, we propose a novel multi-contrast complementary information aggregation network, named MCCA, that aims to fully exploit the available complementary representations to reconstruct the undersampled modality. Specifically, a multi-scale feature fusion mechanism is introduced to incorporate complementary, transferable knowledge into the target modality. Moreover, a hybrid convolution-transformer block is developed to extract global and local context dependencies simultaneously, combining the advantages of CNNs with the merits of Transformers. Extensive experiments on different datasets, acceleration factors, and undersampling patterns demonstrate the superiority of the proposed method over existing MRI reconstruction methods.
54. Cao C, Cui ZX, Zhu Q, Liu C, Liang D, Zhu Y. Annihilation-Net: Learned annihilation relation for dynamic MR imaging. Med Phys 2024;51:1883-1898. PMID: 37665786; DOI: 10.1002/mp.16723.
Abstract
BACKGROUND Deep learning methods driven by low-rank regularization have achieved attractive performance in dynamic magnetic resonance (MR) imaging. The effectiveness of existing methods lies mainly in their ability to capture interframe relationships using network modules, which lack interpretability. PURPOSE This study aims to design an interpretable methodology for modeling interframe relationships using convolutional networks, namely Annihilation-Net, and to use it for accelerating dynamic MRI. METHODS Based on the equivalence between Hankel matrix products and convolution, we utilize convolutional networks to learn the null-space transform that characterizes low-rankness. We employ low-rankness to represent interframe correlations in dynamic MR imaging, combined with sparsity constraints in the compressed sensing framework. The corresponding optimization problem is solved iteratively with the half-quadratic splitting (HQS) method, and the iterative steps are unrolled into a network, dubbed Annihilation-Net, in which all regularization parameters and null-space transforms are learnable. RESULTS Experiments on the cardiac cine dataset, with 800 training and 118 test images, show that the proposed model outperforms other competing methods both quantitatively and qualitatively. CONCLUSIONS The proposed Annihilation-Net improves the reconstruction quality of accelerated dynamic MRI with better interpretability.
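The Hankel-convolution equivalence the method builds on is easy to verify numerically: multiplying the Hankel matrix of a signal by a filter is exactly a valid-mode correlation (a convolution with the flipped filter), which is what a convolutional layer computes, so a conv kernel can learn an annihilating (null-space) filter. A toy check with a single-tone signal, whose annihilating filter is known in closed form (the signal and filter here are illustrative):

```python
import numpy as np

def hankel(x, k):
    """Hankel matrix whose rows are the length-k sliding windows of x."""
    return np.stack([x[i:i + k] for i in range(len(x) - k + 1)])

w = 2 * np.pi * 0.1
x = np.cos(w * np.arange(32))                # single-tone signal
h = np.array([1.0, -2.0 * np.cos(w), 1.0])   # annihilates cos(w*n) via its 3-term recurrence

# Hankel-matrix product equals a valid-mode correlation:
assert np.allclose(hankel(x, 3) @ h, np.correlate(x, h, mode="valid"))

# Annihilation relation: h lies in the (numerical) null space of hankel(x, 3).
residual = np.max(np.abs(hankel(x, 3) @ h))
```

The learned null-space transform in Annihilation-Net generalizes this: instead of a closed-form filter, a convolutional network is trained so that its output annihilates the underlying dynamic image series.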
Affiliation(s)
- Chentao Cao: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Zhuo-Xu Cui: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Congcong Liu: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Dong Liang: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
55. Xu J, Zu T, Hsu YC, Wang X, Chan KWY, Zhang Y. Accelerating CEST imaging using a model-based deep neural network with synthetic training data. Magn Reson Med 2024;91:583-599. PMID: 37867413; DOI: 10.1002/mrm.29889.
Abstract
PURPOSE To develop a model-based deep neural network for high-quality image reconstruction of undersampled multi-coil CEST data. THEORY AND METHODS Inspired by the variational network (VN), the CEST image reconstruction equation is unrolled into a deep neural network (CEST-VN) with a k-space data-sharing block that takes advantage of the inherent redundancy of adjacent CEST frames and 3D spatial-frequential convolution kernels that exploit correlations in the x-ω domain. Additionally, a new pipeline based on multi-pool Bloch-McConnell simulations is devised to synthesize multi-coil CEST data from publicly available anatomical MRI data. The proposed network is trained on simulated data with a CEST-specific loss function that jointly measures structural and CEST contrast. The performance of CEST-VN was evaluated on four healthy volunteers and five brain tumor patients using retrospectively or prospectively undersampled data with various acceleration factors, and compared with conventional and state-of-the-art reconstruction methods. RESULTS The proposed CEST-VN method generated high-quality CEST source images and amide proton transfer-weighted maps in healthy and brain tumor subjects, consistently outperforming GRAPPA, blind compressed sensing, and the original VN. As the acceleration factor increased from 3 to 6, CEST-VN with the same hyperparameters yielded similarly accurate reconstructions without apparent loss of detail or increase in artifacts. Ablation studies confirmed the effectiveness of the CEST-specific loss function and the data-sharing block. CONCLUSIONS The proposed CEST-VN method can offer high-quality CEST source images and amide proton transfer-weighted maps from highly undersampled multi-coil data by integrating a deep learning prior with the multi-coil sensitivity encoding model.
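The k-space data-sharing idea, i.e. filling unsampled points of one CEST frame from adjacent frames that are highly redundant, can be sketched as a plain nearest-neighbor copy. CEST-VN's actual block sits inside the learned network, so the function below is only an assumed illustration of the principle:

```python
import numpy as np

def share_kspace(kspace, mask):
    """Fill unsampled k-space points of each frame from the nearest adjacent
    frame that sampled them.

    kspace: (T, H, W) complex array, zero at unsampled points.
    mask:   (T, H, W) boolean sampling pattern.
    """
    out, filled = kspace.copy(), mask.copy()
    for t in range(kspace.shape[0]):
        for s in (t + 1, t - 1):                 # nearest neighbors first
            if 0 <= s < kspace.shape[0]:
                take = ~filled[t] & mask[s]      # missing here, sampled there
                out[t][take] = kspace[s][take]
                filled[t] |= take
    return out

# Two frames with complementary sampling: each frame inherits the other's column.
mask = np.zeros((2, 3, 3), dtype=bool)
mask[0, :, 0], mask[1, :, 1] = True, True
kspace = np.where(mask, np.arange(2)[:, None, None] + 1.0, 0).astype(complex)
shared = share_kspace(kspace, mask)
```

Originally sampled points are left untouched, so the operation only augments the measured data before the unrolled reconstruction consumes it.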
Affiliation(s)
- Jianping Xu: Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
- Tao Zu: Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
- Yi-Cheng Hsu: MR Collaboration, Siemens Healthcare Ltd., Shanghai, People's Republic of China
- Xiaoli Wang: School of Medical Imaging, Weifang Medical University, Weifang, People's Republic of China
- Kannie W Y Chan: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, People's Republic of China
- Yi Zhang: Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
56. Wang Z, Li B, Yu H, Zhang Z, Ran M, Xia W, Yang Z, Lu J, Chen H, Zhou J, Shan H, Zhang Y. Promoting fast MR imaging pipeline by full-stack AI. iScience 2024;27:108608. PMID: 38174317; PMCID: PMC10762466; DOI: 10.1016/j.isci.2023.108608.
Abstract
Magnetic resonance imaging (MRI) is a widely used imaging modality in clinical practice for disease diagnosis, staging, and follow-up. Deep learning has been extensively used to accelerate k-space data acquisition, enhance MR image reconstruction, and automate tissue segmentation. However, these three tasks are usually treated as independent and optimized for evaluation by radiologists, ignoring the strong dependencies among them; this may be suboptimal for downstream intelligent processing. Here, we present a novel paradigm, full-stack learning (FSL), which solves these three tasks simultaneously by considering the overall imaging process and leverages the strong dependence among them to further improve each task, significantly boosting the efficiency and efficacy of practical MRI workflows. Experimental results on multiple open MR datasets validate the superiority of FSL over existing state-of-the-art methods on each task. FSL has great potential to optimize the practical workflow of MRI for medical diagnosis and radiotherapy.
Affiliation(s)
- Zhiwen Wang: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Bowen Li: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Hui Yu: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Zhongzhou Zhang: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Maosong Ran: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Wenjun Xia: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Ziyuan Yang: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Jingfeng Lu: School of Cyber Science and Engineering, Sichuan University, Chengdu, Sichuan, China
- Hu Chen: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Jiliu Zhou: School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Hongming Shan: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China
- Yi Zhang: School of Cyber Science and Engineering, Sichuan University, Chengdu, Sichuan, China
57. Sun K, Wang Q, Shen D. Joint Cross-Attention Network With Deep Modality Prior for Fast MRI Reconstruction. IEEE Trans Med Imaging 2024;43:558-569. PMID: 37695966; DOI: 10.1109/tmi.2023.3314008.
Abstract
Current deep learning-based reconstruction models for accelerated multi-coil magnetic resonance imaging (MRI) mainly focus on subsampled k-space data of a single modality using convolutional neural networks (CNNs). Although dual-domain information and data-consistency constraints are commonly adopted in fast MRI reconstruction, the performance of existing models is still limited by three factors: inaccurate estimation of coil sensitivity, inadequate utilization of structural priors, and the inductive bias of CNNs. To tackle these challenges, we propose an unrolling-based joint cross-attention network, dubbed jCAN, that uses deep guidance from already acquired intra-subject data. In particular, to improve coil sensitivity estimation, we simultaneously optimize the latent MR image and the sensitivity map (SM). We also introduce a gating layer and a Gaussian layer into SM estimation to alleviate the "defocus" and "over-coupling" effects and further improve the estimate. To enhance the representation ability of the model, we deploy a Vision Transformer (ViT) in the image domain and a CNN in the k-space domain. Moreover, we exploit a pre-acquired intra-subject scan as a reference modality to guide the reconstruction of the subsampled target modality via a self- and cross-attention scheme. Experimental results on public knee and in-house brain datasets demonstrate that the proposed jCAN outperforms state-of-the-art methods by a large margin in terms of SSIM and PSNR for different acceleration factors and sampling masks. Our code is publicly available at https://github.com/sunkg/jCAN.
58. Guan Y, Li Y, Liu R, Meng Z, Li Y, Ying L, Du YP, Liang ZP. Subspace Model-Assisted Deep Learning for Improved Image Reconstruction. IEEE Trans Med Imaging 2023;42:3833-3846. PMID: 37682643; DOI: 10.1109/tmi.2023.3313421.
Abstract
Image reconstruction from limited and/or sparse data is known to be an ill-posed problem, and a priori information/constraints have played an important role in solving it. Early constrained reconstruction methods utilize image priors based on general image properties such as sparsity, low-rank structure, and spatial support bounds. Recent deep learning-based reconstruction methods promise even higher quality reconstructions by utilizing more specific image priors learned from training data. However, learning high-dimensional image priors requires huge amounts of training data that are currently not available in medical imaging applications. As a result, deep learning-based reconstructions often suffer from two known practical issues: (a) sensitivity to data perturbations (e.g., changes in the data sampling scheme), and (b) limited generalization capability (e.g., biased reconstruction of lesions). This paper proposes a new method to address these issues by synergistically integrating model-based and data-driven learning in three key components. The first component uses a linear vector space framework to capture the global dependence of image features; the second exploits a deep network to learn the mapping from the linear vector space to a nonlinear manifold; the third is an unrolling-based deep network that captures local residual features with the aid of a sparsity model. The proposed method has been evaluated with magnetic resonance imaging data, demonstrating improved reconstruction in the presence of data perturbations and/or novel image features, and may enhance the practical utility of deep learning-based image reconstruction.
59. Dar SUH, Öztürk Ş, Özbey M, Oguz KK, Çukur T. Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes. Comput Biol Med 2023;167:107610. PMID: 37883853; DOI: 10.1016/j.compbiomed.2023.107610.
Abstract
Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information about the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit at the cost of computationally burdensome inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information about the latent features of MRI images. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes while maintaining inference times competitive with SG methods. PSFNet implements its SG prior with a nonlinear network, yet forms its SS prior with a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling with serially alternated projections, which causes error propagation under low-data regimes. To alleviate this, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes and surpasses SS methods with a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples than SG methods and enables an order of magnitude faster inference than SS methods. The proposed model thus improves deep MRI reconstruction with elevated learning and computational efficiency.
Affiliation(s)
- Salman Ul Hassan Dar: Department of Internal Medicine III, Heidelberg University Hospital, 69120, Heidelberg, Germany; AI Health Innovation Cluster, Heidelberg, Germany
- Şaban Öztürk: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Electrical-Electronics Engineering, Amasya University, Amasya 05100, Turkey
- Muzaffer Özbey: Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, IL 61820, United States
- Kader Karli Oguz: Department of Radiology, University of California, Davis, CA 95616, United States; Department of Radiology, Hacettepe University, Ankara, Turkey
- Tolga Çukur: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Radiology, Hacettepe University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara 06800, Turkey
60. Cui ZX, Jia S, Cheng J, Zhu Q, Liu Y, Zhao K, Ke Z, Huang W, Wang H, Zhu Y, Ying L, Liang D. Equilibrated Zeroth-Order Unrolled Deep Network for Parallel MR Imaging. IEEE Trans Med Imaging 2023;42:3540-3554. PMID: 37428656; DOI: 10.1109/tmi.2023.3293826.
Abstract
Model-driven deep learning evolves an iterative algorithm into a cascade network by replacing the regularizer's first-order information, such as its (sub)gradient or proximal operator, with a network module, offering greater explainability and predictability than typical data-driven networks. However, in theory there is no assurance that a functional regularizer exists whose first-order information matches the substituted network module, which implies that the unrolled network output may not align with the regularization model. Furthermore, few established theories guarantee global convergence and robustness (regularity) of unrolled networks under practical assumptions. To address this gap, we propose a safeguarded methodology for network unrolling. Specifically, for parallel MR imaging, we unroll a zeroth-order algorithm in which the network module serves as the regularizer itself, so that the network output is covered by a regularization model. Additionally, inspired by deep equilibrium models, we run the unrolled network to a fixed point before backpropagation and demonstrate that it can tightly approximate the actual MR image. We also prove that the proposed network is robust to noise in the measurement data. Finally, numerical experiments indicate that the proposed network consistently outperforms state-of-the-art MRI reconstruction methods, including traditional regularization and unrolled deep learning techniques.
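The equilibrium step, running the unrolled iteration to a fixed point before backpropagating, reduces to Picard iteration when the update map is contractive. A toy numpy sketch in which a contractive affine map stands in for one unrolled network iteration (the map, tolerance, and sizes are illustrative assumptions):

```python
import numpy as np

def fixed_point(f, z0, tol=1e-10, max_iter=500):
    """Iterate z <- f(z) until the update stalls, returning the equilibrium z* = f(z*)."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# A contractive affine map z -> A z + b (every row sum < 1, so the
# iteration is a contraction and a unique fixed point exists).
A = np.array([[0.2, 0.1, 0.0],
              [0.0, 0.3, 0.1],
              [0.1, 0.0, 0.2]])
b = np.array([1.0, 2.0, 3.0])
z_star = fixed_point(lambda z: A @ z + b, np.zeros(3))
```

At equilibrium, z* satisfies the linear system (I - A) z* = b, which gives an exact reference to check against; in the deep-equilibrium setting, gradients are then taken at this fixed point rather than through every iteration.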
61. Li Y, Yang J, Yu T, Chi J, Liu F. Global attention-enabled texture enhancement network for MR image reconstruction. Magn Reson Med 2023;90:1919-1931. PMID: 37382206; DOI: 10.1002/mrm.29785.
Abstract
PURPOSE Although recent convolutional neural network (CNN) methodologies have shown promising results in fast MR imaging, there is still a desire to explore how they can be used to learn the frequency characteristics of multicontrast images and reconstruct texture details. METHODS A global attention-enabled texture enhancement network (GATE-Net) with a frequency-dependent feature extraction module (FDFEM) and a convolution-based global attention module (GAM) is proposed to address the highly undersampled MR image reconstruction problem. First, FDFEM enables GATE-Net to effectively extract high-frequency features from the shareable information of multicontrast images to improve the texture details of reconstructed images. Second, GAM has a receptive field covering the entire image at low computational complexity, so it can fully explore beneficial shareable information of multicontrast images and suppress less beneficial information. RESULTS Ablation studies were conducted to evaluate the effectiveness of the proposed FDFEM and GAM. Experimental results under various acceleration rates and datasets consistently demonstrate the superiority of GATE-Net in terms of peak signal-to-noise ratio, structural similarity, and normalized mean square error. CONCLUSION A global attention-enabled texture enhancement network is proposed. It can be applied to multicontrast MR image reconstruction tasks with different acceleration rates and datasets and achieves superior performance in comparison with state-of-the-art methods.
Affiliation(s)
- Yingnan Li: College of Electronics and Information, Qingdao University, Qingdao, Shandong, China
- Jie Yang: College of Mechanical and Electrical Engineering, Qingdao University, Qingdao, Shandong, China
- Teng Yu: College of Electronics and Information, Qingdao University, Qingdao, Shandong, China
- Jieru Chi: College of Electronics and Information, Qingdao University, Qingdao, Shandong, China
- Feng Liu: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
62. Desai AD, Ozturkler BM, Sandino CM, Boutin R, Willis M, Vasanawala S, Hargreaves BA, Ré C, Pauly JM, Chaudhari AS. Noise2Recon: Enabling SNR-robust MRI reconstruction with semi-supervised and self-supervised learning. Magn Reson Med 2023;90:2052-2070. PMID: 37427449; DOI: 10.1002/mrm.29759.
Abstract
PURPOSE To develop a method for building MRI reconstruction neural networks robust to changes in signal-to-noise ratio (SNR) and trainable with a limited number of fully sampled scans. METHODS We propose Noise2Recon, a consistency training method for SNR-robust accelerated MRI reconstruction that can use both fully sampled (labeled) and undersampled (unlabeled) scans. Noise2Recon uses unlabeled data by enforcing consistency between model reconstructions of undersampled scans and their noise-augmented counterparts. Noise2Recon was compared to compressed sensing and both supervised and self-supervised deep learning baselines. Experiments were conducted using retrospectively accelerated data from the mridata three-dimensional fast-spin-echo knee and two-dimensional fastMRI brain datasets. All methods were evaluated in label-limited settings and among out-of-distribution (OOD) shifts, including changes in SNR, acceleration factors, and datasets. An extensive ablation study was conducted to characterize the sensitivity of Noise2Recon to hyperparameter choices. RESULTS In label-limited settings, Noise2Recon achieved better structural similarity, peak signal-to-noise ratio, and normalized-RMS error than all baselines and matched the performance of supervised models, which were trained with 14× more fully sampled scans. Noise2Recon outperformed all baselines, including state-of-the-art fine-tuning and augmentation techniques, among low-SNR scans and when generalizing to OOD acceleration factors. Augmentation extent and loss weighting hyperparameters had negligible impact on Noise2Recon compared to supervised methods, which may indicate increased training stability. CONCLUSION Noise2Recon is a label-efficient reconstruction method that is robust to distribution shifts, such as changes in SNR, acceleration factors, and others, with limited or no fully sampled training data.
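As a rough illustration of the consistency objective described in this abstract, the sketch below combines a supervised loss on labeled scans with a consistency loss between an unlabeled scan and its noise-augmented copy. The `recon` function is a toy stand-in for the paper's network, and all names, shapes, and the loss weighting are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def recon(kspace, w):
    """Stand-in reconstruction model: inverse FFT followed by a learned
    pointwise scaling (a real model would be a deep network)."""
    return np.abs(np.fft.ifft2(kspace)) * w

def noise2recon_loss(k_labeled, target, k_unlabeled, w, sigma=0.05, lam=1.0):
    # Supervised term on the fully sampled (labeled) scan.
    sup = np.mean((recon(k_labeled, w) - target) ** 2)
    # Consistency term: the reconstruction of an unlabeled scan should
    # match the reconstruction of its noise-augmented counterpart.
    noise = sigma * (rng.standard_normal(k_unlabeled.shape)
                     + 1j * rng.standard_normal(k_unlabeled.shape))
    cons = np.mean((recon(k_unlabeled, w) - recon(k_unlabeled + noise, w)) ** 2)
    return sup + lam * cons
```

The consistency term is what lets fully unlabeled k-space data contribute training signal, which is the source of the method's label efficiency.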
Affiliation(s)
- Arjun D Desai, Department of Electrical Engineering, Stanford University, Stanford, California, USA; Department of Radiology, Stanford University, Stanford, California, USA
- Batu M Ozturkler, Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Christopher M Sandino, Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Robert Boutin, Department of Radiology, Stanford University, Stanford, California, USA
- Marc Willis, Department of Radiology, Stanford University, Stanford, California, USA
- Brian A Hargreaves, Department of Electrical Engineering, Stanford University, Stanford, California, USA; Department of Radiology, Stanford University, Stanford, California, USA
- Christopher Ré, Department of Computer Science, Stanford University, Stanford, California, USA
- John M Pauly, Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Akshay S Chaudhari, Department of Radiology, Stanford University, Stanford, California, USA; Department of Biomedical Data Science, Stanford University, Stanford, California, USA

63
Yi Q, Fang F, Zhang G, Zeng T. Frequency Learning via Multi-Scale Fourier Transformer for MRI Reconstruction. IEEE J Biomed Health Inform 2023; 27:5506-5517. [PMID: 37656654 DOI: 10.1109/jbhi.2023.3311189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/03/2023]
Abstract
Since Magnetic Resonance Imaging (MRI) requires a long acquisition time, various methods have been proposed to reduce it; however, these methods ignore frequency information and non-local similarity, and thus fail to reconstruct images with a clear structure. In this article, we propose Frequency Learning via Multi-scale Fourier Transformer for MRI Reconstruction (FMTNet), which focuses on restoring both low-frequency and high-frequency information. Specifically, FMTNet is composed of a high-frequency learning branch (HFLB) and a low-frequency learning branch (LFLB), and we propose a Multi-scale Fourier Transformer (MFT) as the basic module to learn non-local information. Unlike standard Transformers, MFT adopts Fourier convolution in place of self-attention to efficiently learn global information. Moreover, we introduce a multi-scale learning and cross-scale linear fusion strategy in MFT to exchange information between features at different scales and strengthen the feature representation. Compared with standard Transformers, the proposed MFT occupies fewer computing resources. Based on MFT, we design a Residual Multi-scale Fourier Transformer module as the main component of the HFLB and LFLB. We conduct experiments under different acceleration rates and sampling patterns on different datasets, and the results show that our method is superior to previous state-of-the-art methods.
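The Fourier-convolution idea, replacing self-attention with a pointwise filter applied in the frequency domain, can be sketched minimally as follows. The filter here is a plain array standing in for learned weights; this is our reading of the mechanism, not the paper's code:

```python
import numpy as np

def fourier_mix(x, filt):
    """Global mixing via the FFT: a (learnable) pointwise filter in the
    frequency domain gives every output pixel a dependence on every input
    pixel -- the global receptive field MFT uses in place of
    self-attention, at FFT rather than quadratic attention cost."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * filt))
```

Because multiplication in frequency space is circular convolution in image space, a single such layer already has an image-wide receptive field.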
64
Pramanik A, Bhave S, Sajib S, Sharma SD, Jacob M. Adapting model-based deep learning to multiple acquisition conditions: Ada-MoDL. Magn Reson Med 2023; 90:2033-2051. [PMID: 37332189 PMCID: PMC10524947 DOI: 10.1002/mrm.29750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 05/21/2023] [Accepted: 05/22/2023] [Indexed: 06/20/2023]
Abstract
PURPOSE The aim of this work is to introduce a single model-based deep network that can provide high-quality reconstructions from undersampled parallel MRI data acquired with multiple sequences, acquisition settings, and field strengths. METHODS A single unrolled architecture, which offers good reconstructions for multiple acquisition settings, is introduced. The proposed scheme adapts the model to each setting by scaling the convolutional neural network (CNN) features and the regularization parameter with appropriate weights. The scaling weights and regularization parameter are derived using a multilayer perceptron model from conditional vectors that represent the specific acquisition setting. The perceptron parameters and the CNN weights are jointly trained using data from multiple acquisition settings, including differences in field strengths, acceleration, and contrasts. The conditional network is validated using datasets acquired with different acquisition settings. RESULTS The adaptive framework, which trains a single model using the data from all the settings, offers consistently improved performance for each acquisition condition. Compared with networks trained independently for each acquisition setting, the proposed scheme requires less training data per setting to offer good performance. CONCLUSION The Ada-MoDL framework enables the use of a single model-based unrolled network for multiple acquisition settings. In addition to eliminating the need to train and store multiple networks for different acquisition settings, this approach reduces the training data needed for each acquisition setting.
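A minimal sketch of the conditioning mechanism as we read it from the abstract: a small MLP maps an acquisition-condition vector (field strength, acceleration, contrast, ...) to per-channel feature scales plus a positive regularization parameter. Function names and shapes are our assumptions, not the paper's API:

```python
import numpy as np

def conditional_factors(cond, W1, W2):
    """Tiny MLP: maps a condition vector to C per-channel scales and one
    regularization parameter (made positive via exp)."""
    h = np.tanh(cond @ W1)
    out = h @ W2
    return out[:-1], np.exp(out[-1])

def adapt_features(features, cond, W1, W2):
    """Scale CNN feature channels with condition-derived weights; the
    returned lam would weight the data-consistency/regularization
    trade-off in the unrolled iterations."""
    scales, lam = conditional_factors(cond, W1, W2)
    return features * scales[None, None, :], lam
```

The MLP and the CNN would be trained jointly across settings, so one set of CNN weights serves all acquisition conditions.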
Affiliation(s)
- Aniket Pramanik, Department of Electrical and Computer Engineering, University of Iowa, Iowa, USA
- Sampada Bhave, Canon Medical Research USA, Inc., Mayfield Village, Ohio, USA
- Saurav Sajib, Canon Medical Research USA, Inc., Mayfield Village, Ohio, USA
- Samir D. Sharma, Canon Medical Research USA, Inc., Mayfield Village, Ohio, USA
- Mathews Jacob, Department of Electrical and Computer Engineering, University of Iowa, Iowa, USA

65
Mello-Thoms C, Mello CAB. Clinical applications of artificial intelligence in radiology. Br J Radiol 2023; 96:20221031. [PMID: 37099398 PMCID: PMC10546456 DOI: 10.1259/bjr.20221031] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 03/28/2023] [Accepted: 03/28/2023] [Indexed: 04/27/2023] Open
Abstract
The rapid growth of medical imaging has placed increasing demands on radiologists. In this scenario, artificial intelligence (AI) has become an attractive partner, one that may complement case interpretation and aid in various non-interpretive aspects of work in the radiological clinic. In this review, we discuss interpretive and non-interpretive uses of AI in clinical practice, and report on the barriers to AI adoption in the clinic. We show that AI currently has modest to moderate penetration in clinical practice, with many radiologists still unconvinced of its value and the return on investment. Moreover, we discuss radiologists' liability regarding AI decisions, and explain that there is currently no regulation to guide the implementation of explainable AI or of self-learning algorithms.
Affiliation(s)
- Carlos A B Mello, Centro de Informática, Universidade Federal de Pernambuco, Recife, Brazil

66
Xu W, Jia S, Cui ZX, Zhu Q, Liu X, Liang D, Cheng J. Joint Image Reconstruction and Super-Resolution for Accelerated Magnetic Resonance Imaging. Bioengineering (Basel) 2023; 10:1107. [PMID: 37760209 PMCID: PMC10525692 DOI: 10.3390/bioengineering10091107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 09/07/2023] [Accepted: 09/08/2023] [Indexed: 09/29/2023] Open
Abstract
Magnetic resonance (MR) image reconstruction and super-resolution are two prominent techniques for restoring high-quality images from undersampled or low-resolution k-space data to accelerate MR imaging. Combining undersampled and low-resolution acquisition can further improve the acceleration factor. Existing methods often treat image reconstruction and super-resolution separately or combine them sequentially for image recovery, which can result in error propagation and suboptimal results. In this work, we propose a novel framework for joint image reconstruction and super-resolution, aiming at efficient image recovery and fast imaging. Specifically, we designed a framework with a reconstruction module and a super-resolution module to formulate multi-task learning. The reconstruction module utilizes a model-based optimization approach, ensuring data fidelity with the acquired k-space data. Moreover, a deep spatial feature transform is employed to enhance the information transition between the two modules, facilitating better integration of image reconstruction and super-resolution. Experimental evaluations on two datasets demonstrate that the proposed method provides superior performance both quantitatively and qualitatively.
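The data-fidelity step mentioned for the reconstruction module is commonly implemented as a hard data-consistency projection; a generic single-coil sketch (not the paper's exact formulation) is:

```python
import numpy as np

def data_consistency(image, k_acquired, mask):
    """Hard data-consistency step: transform the current image estimate
    to k-space, overwrite the sampled locations with the measured data,
    and transform back."""
    k = np.fft.fft2(image)
    k = np.where(mask, k_acquired, k)
    return np.fft.ifft2(k)
```

Interleaving such projections with learned refinement is the standard way model-based reconstruction modules stay faithful to the acquired k-space data.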
Affiliation(s)
- Wei Xu, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 101408, China
- Sen Jia, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhuo-Xu Cui, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Qingyong Zhu, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xin Liu, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dong Liang, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jing Cheng, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China

67
Wijethilake N, Anandakumar M, Zheng C, So PTC, Yildirim M, Wadduwage DN. DEEP-squared: deep learning powered De-scattering with Excitation Patterning. Light Sci Appl 2023; 12:228. [PMID: 37704619 PMCID: PMC10499829 DOI: 10.1038/s41377-023-01248-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 07/21/2023] [Accepted: 07/29/2023] [Indexed: 09/15/2023]
Abstract
Limited throughput is a key challenge in in vivo deep tissue imaging using nonlinear optical microscopy. Point-scanning multiphoton microscopy, the current gold standard, is slow, especially compared to the widefield imaging modalities used for optically cleared or thin specimens. We recently introduced "De-scattering with Excitation Patterning", or "DEEP", as a widefield alternative to point-scanning geometries. Using patterned multiphoton excitation, DEEP encodes spatial information inside tissue before scattering. However, to de-scatter at typical depths, hundreds of such patterned excitations were needed. In this work, we present DEEP2, a deep learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds. Consequently, we improve DEEP's throughput by almost an order of magnitude. We demonstrate our method in multiple numerical and experimental imaging studies, including in vivo cortical vasculature imaging up to four scattering lengths deep in live mice.
Affiliation(s)
- Navodini Wijethilake, Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA, USA; Department of Electronic and Telecommunication Engineering, University of Moratuwa, Moratuwa, Sri Lanka
- Mithunjha Anandakumar, Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA, USA
- Cheng Zheng, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA; Laser Biomedical Research Center, Massachusetts Institute of Technology, Cambridge, MA, USA
- Peter T C So, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA; Laser Biomedical Research Center, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Murat Yildirim, Laser Biomedical Research Center, Massachusetts Institute of Technology, Cambridge, MA, USA; Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Neuroscience, Cleveland Clinic Lerner Research Institute, Cleveland, OH, USA
- Dushan N Wadduwage, Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA, USA

68
Wang S, Wu R, Li C, Zou J, Zhang Z, Liu Q, Xi Y, Zheng H. PARCEL: Physics-Based Unsupervised Contrastive Representation Learning for Multi-Coil MR Imaging. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:2659-2670. [PMID: 36219669 DOI: 10.1109/tcbb.2022.3213669] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
With the successful application of deep learning to magnetic resonance (MR) imaging, parallel imaging techniques based on neural networks have attracted wide attention. However, in the absence of high-quality, fully sampled datasets for training, the performance of these methods is limited, and their interpretability is not strong. To tackle this issue, this paper proposes a Physics-bAsed unsupeRvised Contrastive rEpresentation Learning (PARCEL) method to speed up parallel MR imaging. Specifically, PARCEL has a parallel framework to contrastively learn two branches of model-based unrolling networks from augmented undersampled multi-coil k-space data. A sophisticated co-training loss with three essential components has been designed to guide the two networks in capturing the inherent features and representations of MR images, and the final MR image is reconstructed with the trained contrastive networks. PARCEL was evaluated on two in vivo datasets and compared to five state-of-the-art methods. The results show that PARCEL is able to learn essential representations for accurate MR reconstruction without relying on fully sampled datasets. The code will be made available at https://github.com/ternencewu123/PARCEL.
69
Wang W, Shen H, Chen J, Xing F. MHAN: Multi-Stage Hybrid Attention Network for MRI reconstruction and super-resolution. Comput Biol Med 2023; 163:107181. [PMID: 37352637 DOI: 10.1016/j.compbiomed.2023.107181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 05/29/2023] [Accepted: 06/13/2023] [Indexed: 06/25/2023]
Abstract
High-quality magnetic resonance imaging (MRI) affords a clear view of body tissue structure for reliable diagnosis. However, there is a fundamental trade-off between acquisition speed and image quality. Image reconstruction and super-resolution are crucial techniques to address this problem, yet most research in MR image restoration focuses on only one of these aspects. In this paper, we propose an efficient model called Multi-Stage Hybrid Attention Network (MHAN) that performs the multi-task of recovering high-resolution (HR) MR images from low-resolution (LR) under-sampled measurements. Our model is highlighted by three major modules: (i) an Amplified Spatial Attention Block (ASAB) capable of enhancing the differences in spatial information, (ii) a Self-Attention Block with a Data-Consistency Layer (DC-SAB), which can improve the accuracy of the extracted feature information, and (iii) an Adaptive Local Residual Attention Block (ALRAB) that focuses on both spatial and channel information. MHAN employs an encoder-decoder architecture to deeply extract contextual information and a pipeline to provide spatial accuracy. Compared with the recent multi-task model T2Net, our MHAN improves PSNR by 2.759 dB and SSIM by 0.026 at scaling factor 2× and acceleration factor 4× on the T2 modality.
Affiliation(s)
- Wanliang Wang, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
- Haoxin Shen, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
- Jiacheng Chen, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
- Fangsen Xing, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China

70
Debs P, Fayad LM. The promise and limitations of artificial intelligence in musculoskeletal imaging. Front Radiol 2023; 3:1242902. [PMID: 37609456 PMCID: PMC10440743 DOI: 10.3389/fradi.2023.1242902] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 07/26/2023] [Indexed: 08/24/2023]
Abstract
With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.
Affiliation(s)
- Patrick Debs, The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Laura M. Fayad, The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States; Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, United States; Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, United States

71
Millard C, Chiew M. A Theoretical Framework for Self-Supervised MR Image Reconstruction Using Sub-Sampling via Variable Density Noisier2Noise. IEEE Trans Comput Imaging 2023; 9:707-720. [PMID: 37600280 PMCID: PMC7614963 DOI: 10.1109/tci.2023.3299212] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/22/2023]
Abstract
In recent years, there has been growing interest in leveraging the statistical modeling capabilities of neural networks for reconstructing sub-sampled Magnetic Resonance Imaging (MRI) data. Most proposed methods assume the existence of a representative fully-sampled dataset and use fully-supervised training. However, for many applications fully sampled training data are not available, and may be highly impractical to acquire. The development and understanding of self-supervised methods, which use only sub-sampled data for training, are therefore highly desirable. This work extends the Noisier2Noise framework, originally constructed for self-supervised denoising tasks, to variable-density sub-sampled MRI data. We use the Noisier2Noise framework to analytically explain the performance of Self-Supervised Learning via Data Undersampling (SSDU), a recently proposed method that performs well in practice but until now lacked theoretical justification. Further, we propose two modifications of SSDU that arise as a consequence of these theoretical developments. First, we propose partitioning the sampling set so that the subsets have the same type of distribution as the original sampling mask. Second, we propose a loss weighting that compensates for the sampling and partitioning densities. On the fastMRI dataset we show that these changes significantly improve SSDU's image restoration quality and robustness to the partitioning parameters.
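The first proposed modification, partitioning the sampling set so each subset follows the same type of density as the original mask, can be sketched as below. This is a simplified version under our own naming; the paper's variable-density construction is more elaborate, but the same mechanics apply if `p_keep` varies with k-space location:

```python
import numpy as np

def partition_mask(mask, p_keep, rng):
    """Split a sampling mask into disjoint input/target subsets for
    self-supervised training: the network sees the input subset and is
    supervised on the held-out target subset. p_keep may be an array to
    mimic the original mask's variable density."""
    keep = rng.random(mask.shape) < p_keep
    return mask & keep, mask & ~keep
```

The second modification, a density-compensating loss weighting, would then reweight the per-location loss on the target subset according to the sampling and partitioning probabilities.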
Affiliation(s)
- Charles Millard, Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, U.K.
- Mark Chiew, Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, U.K.; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada

72
Palla A, Ramanarayanan S, Ram K, Sivaprakasam M. Generalizable Deep Learning Method for Suppressing Unseen and Multiple MRI Artifacts Using Meta-learning. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38082950 DOI: 10.1109/embc40787.2023.10341123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Magnetic Resonance (MR) images suffer from various types of artifacts due to motion, spatial resolution, and under-sampling. Conventional deep learning methods deal with removing a specific type of artifact, leading to separately trained models for each artifact type that lack the shared knowledge generalizable across artifacts. Moreover, training a model for each type and amount of artifact is a tedious process that consumes more training time and model storage. On the other hand, the shared knowledge learned by jointly training the model on multiple artifacts might be inadequate to generalize under deviations in the types and amounts of artifacts. Model-agnostic meta-learning (MAML), a nested bi-level optimization framework, is a promising technique to learn common knowledge across artifacts in the outer level of optimization and artifact-specific restoration in the inner level. We propose curriculum-MAML (CMAML), a learning process that integrates MAML with curriculum learning to impart the knowledge of variable artifact complexity and adaptively learn restoration of multiple artifacts during training. Comparative studies against Stochastic Gradient Descent and MAML, using two cardiac datasets, reveal that CMAML exhibits (i) better generalization, with improved PSNR for 83% of unseen types and amounts of artifacts and improved SSIM in all cases, and (ii) better artifact suppression in 4 out of 5 cases of composite artifacts (scans with multiple artifacts). Clinical relevance: Our results show that CMAML has the potential to minimize the number of artifact-specific models, which is essential for deploying deep learning models in clinical use. Furthermore, in the practical scenario of an image affected by multiple artifacts, our method performs better in 80% of cases.
73
Song M, Hao X, Qi F. CSA: A Channel-Separated Attention Module for Enhancing MRI Reconstruction. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083616 DOI: 10.1109/embc40787.2023.10340098] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Channel attention mechanisms have been proven to effectively enhance network performance in various visual tasks, including Magnetic Resonance Imaging (MRI) reconstruction. They typically involve channel dimensionality reduction and cross-channel interaction operations to reduce complexity and generate more effective channel weights. However, these operations may negatively impact MRI reconstruction performance, since no discernible correlation has been found between adjacent channels and some feature maps carry little information. We therefore propose the Channel-Separated Attention (CSA) module tailored for MRI reconstruction networks. Each layer of the CSA module avoids compressing channels, thereby allowing lossless information transmission. Additionally, we employ the Hadamard product so that each channel's importance weight is generated solely from that channel itself, avoiding cross-channel interaction and reducing computational complexity. We replaced the original channel attention module with the CSA module in an advanced MRI reconstruction network and found that the CSA module achieved superior reconstruction performance with fewer parameters. Furthermore, in comparative experiments with state-of-the-art channel attention modules on an identical network backbone, the CSA module achieved competitive reconstruction outcomes with only approximately 1.036% of the parameters of the Squeeze-and-Excitation (SE) module. Overall, the CSA module makes an optimal trade-off between complexity and reconstruction quality to efficiently and effectively enhance MRI reconstruction. The code is available at https://github.com/smd1997/CSA-Net.
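A minimal sketch of the channel-separated idea as described: each channel's gate is computed from that channel alone (pooled descriptor, per-channel affine, sigmoid) and applied as a Hadamard product, with no channel compression or cross-channel mixing. Parameter names are illustrative, not taken from the released code:

```python
import numpy as np

def csa_gate(features, a, b):
    """Channel-separated attention sketch for an (H, W, C) feature map:
    each channel's importance weight depends only on that channel's own
    pooled descriptor, then gates the channel via Hadamard product."""
    desc = features.mean(axis=(0, 1))             # (C,) per-channel pooling
    gate = 1.0 / (1.0 + np.exp(-(a * desc + b)))  # independent per channel
    return features * gate[None, None, :]
```

Because the gate for channel j never sees channel k, perturbing one channel cannot change how the others are weighted, which is the separation property the module relies on.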
74
Bi W, Xv J, Song M, Hao X, Gao D, Qi F. Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction. Front Neurosci 2023; 17:1202143. [PMID: 37409107 PMCID: PMC10318193 DOI: 10.3389/fnins.2023.1202143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 06/05/2023] [Indexed: 07/07/2023] Open
Abstract
Introduction Fine-tuning (FT) is a generally adopted transfer learning method for deep learning-based magnetic resonance imaging (MRI) reconstruction. In this approach, the reconstruction model is initialized with pre-trained weights derived from a source domain with ample data and subsequently updated with limited data from the target domain. However, the direct full-weight update strategy can pose the risk of "catastrophic forgetting" and overfitting, hindering its effectiveness. The goal of this study is to develop a zero-weight update transfer strategy to preserve pre-trained generic knowledge and reduce overfitting. Methods Based on the commonality between the source and target domains, we assume a linear transformation relationship of the optimal model weights from the source domain to the target domain. Accordingly, we propose a novel transfer strategy, linear fine-tuning (LFT), which introduces scaling and shifting (SS) factors into the pre-trained model. In contrast to FT, LFT only updates SS factors in the transfer phase, while the pre-trained weights remain fixed. Results To evaluate the proposed LFT, we designed three different transfer scenarios and conducted a comparative analysis of FT, LFT, and other methods at various sampling rates and data volumes. In the transfer scenario between different contrasts, LFT outperforms typical transfer strategies at various sampling rates and considerably reduces artifacts on reconstructed images. In transfer scenarios between different slice directions or anatomical structures, LFT surpasses the FT method, particularly when the target domain contains a decreasing number of training images, with a maximum improvement of up to 2.06 dB (5.89%) in peak signal-to-noise ratio. Discussion The LFT strategy shows great potential to address the issues of "catastrophic forgetting" and overfitting in transfer scenarios for MRI reconstruction, while reducing the reliance on the amount of data in the target domain. 
Linear fine-tuning is expected to shorten the development cycle of reconstruction models for adapting complicated clinical scenarios, thereby enhancing the clinical applicability of deep MRI reconstruction.
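The scaling-and-shifting (SS) idea can be pictured with a frozen linear map standing in for a pretrained layer; during transfer only `s` and `b` would be updated while the pretrained weights stay fixed. This is a sketch under our own naming, not the authors' implementation:

```python
import numpy as np

class LinearFineTune:
    """LFT sketch: wrap a frozen pretrained linear map with trainable
    per-output scaling (s) and shifting (b) factors. Initialized to the
    identity transform, so the wrapped layer starts out reproducing the
    pretrained behavior exactly."""
    def __init__(self, W):
        self.W = W                      # frozen pretrained weights
        self.s = np.ones(W.shape[1])    # trainable scale
        self.b = np.zeros(W.shape[1])   # trainable shift
    def __call__(self, x):
        return (x @ self.W) * self.s + self.b
```

Since only `s` and `b` are trainable, the number of updated parameters is tiny compared with the full weight matrix, which is what limits overfitting and preserves the pretrained knowledge.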
Collapse
Affiliation(s)
- Wanqing Bi
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Jianan Xv
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Mengdie Song
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Xiaohan Hao
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Fuqing Medical Co., Ltd., Hefei, Anhui, China
- Dayong Gao
- Department of Mechanical Engineering, University of Washington, Seattle, WA, United States
- Fulang Qi
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
75
Huang P, Li H, Liu R, Zhang X, Li X, Liang D, Ying L. Accelerating MRI Using Vision Transformer with Unpaired Unsupervised Training. PROCEEDINGS OF THE INTERNATIONAL SOCIETY FOR MAGNETIC RESONANCE IN MEDICINE SCIENTIFIC MEETING AND EXHIBITION 2023; 31:2933. [PMID: 37600538 PMCID: PMC10440071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 08/22/2023]
Affiliation(s)
- Peizhou Huang
- Biomedical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
- Hongyu Li
- Electrical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
- Ruiying Liu
- Electrical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
- Xiaoliang Zhang
- Biomedical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
- Xiaojuan Li
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, United States
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT CAS, Shenzhen, China
- Leslie Ying
- Biomedical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
- Electrical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
76
Jin Z, Xiang QS. Improving accelerated MRI by deep learning with sparsified complex data. Magn Reson Med 2023; 89:1825-1838. [PMID: 36480017 DOI: 10.1002/mrm.29556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2022] [Revised: 10/23/2022] [Accepted: 11/22/2022] [Indexed: 12/13/2022]
Abstract
PURPOSE To obtain high-quality accelerated MR images with complex-valued reconstruction from undersampled k-space data. METHODS The MRI scans from human subjects were retrospectively undersampled with a regular pattern using skipped phase encoding, leading to ghosts in zero-filling reconstruction. A complex difference transform along the phase-encoding direction was applied in image domain to yield sparsified complex-valued edge maps. These sparse edge maps were used to train a complex-valued U-type convolutional neural network (SCU-Net) for deghosting. A k-space inverse filtering was performed on the predicted deghosted complex edge maps from SCU-Net to obtain final complex images. The SCU-Net was compared with other algorithms including zero-filling, GRAPPA, RAKI, finite difference complex U-type convolutional neural network (FDCU-Net), and CU-Net, both qualitatively and quantitatively, using such metrics as structural similarity index, peak SNR, and normalized mean square error. RESULTS The SCU-Net was found to be effective in deghosting aliased edge maps even at high acceleration factors. High-quality complex images were obtained by performing an inverse filtering on deghosted edge maps. The SCU-Net compared favorably with other algorithms. CONCLUSION Using sparsified complex data, SCU-Net offers higher reconstruction quality for regularly undersampled k-space data. The proposed method is especially useful for phase-sensitive MRI applications.
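As a rough illustration of the sparsification step described in this abstract, the sketch below implements a complex finite difference along the phase-encoding direction and its inverse. The function names are illustrative assumptions, and the inverse is done in the image domain for simplicity, whereas the paper performs the equivalent inverse filtering in k-space:

```python
import numpy as np

def complex_difference(img, axis=0):
    """Complex finite difference along the phase-encoding axis.

    Yields a sparsified complex edge map; keeping the first sample
    (a difference against zero) makes the transform invertible.
    """
    return np.diff(img, axis=axis, prepend=0)

def inverse_difference(edges, axis=0):
    """Invert the difference with a cumulative sum (image-domain
    equivalent of the paper's k-space inverse filtering)."""
    return np.cumsum(edges, axis=axis)

# Round trip on a random complex-valued "image"
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
edges = complex_difference(img)   # sparse edge map, the network's input
rec = inverse_difference(edges)   # final complex image
```

In the paper the edge maps, not the images, are deghosted by the network; the transform pair above only shows why no information is lost in that detour.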
Affiliation(s)
- Zhaoyang Jin
- Machine Learning and I-health International Cooperation Base of Zhejiang Province, School of Automation, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Qing-San Xiang
- Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
77
Yang J, Li XX, Liu F, Nie D, Lio P, Qi H, Shen D. Fast Multi-Contrast MRI Acquisition by Optimal Sampling of Information Complementary to Pre-Acquired MRI Contrast. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1363-1373. [PMID: 37015608 DOI: 10.1109/tmi.2022.3227262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Recent studies on multi-contrast MRI reconstruction have demonstrated the potential of further accelerating MRI acquisition by exploiting correlation between contrasts. Most state-of-the-art approaches have achieved improvement through the development of network architectures for fixed under-sampling patterns, without considering inter-contrast correlation in the under-sampling pattern design. On the other hand, sampling pattern learning methods have shown better reconstruction performance than those with fixed under-sampling patterns. However, most under-sampling pattern learning algorithms are designed for single-contrast MRI without exploiting complementary information between contrasts. To this end, we propose a framework to optimize the under-sampling pattern of a target MRI contrast that complements an acquired fully-sampled reference contrast. Specifically, a novel image synthesis network is introduced to extract the redundant information contained in the reference contrast, which is exploited in the subsequent joint pattern optimization and reconstruction network. We have demonstrated superior performance of our learned under-sampling patterns on both public and in-house datasets, at up to an 8-fold under-sampling factor, compared to commonly used under-sampling patterns and state-of-the-art methods that jointly optimize the reconstruction network and the under-sampling patterns.
78
Zhou L, Zhu M, Xiong D, Ouyang L, Ouyang Y, Chen Z, Zhang X. RNLFNet: Residual non-local Fourier network for undersampled MRI reconstruction. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
79
Shimron E, Perlman O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering (Basel) 2023; 10:492. [PMID: 37106679 PMCID: PMC10135995 DOI: 10.3390/bioengineering10040492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 04/12/2023] [Accepted: 04/18/2023] [Indexed: 04/29/2023] Open
Abstract
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation [...].
Affiliation(s)
- Efrat Shimron
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
- Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
80
Pal A, Ning L, Rathi Y. A domain-agnostic MR reconstruction framework using a randomly weighted neural network. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.22.533764. [PMID: 36993372 PMCID: PMC10055311 DOI: 10.1101/2023.03.22.533764] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
PURPOSE To design a randomly weighted neural network that performs domain-agnostic MR image reconstruction from undersampled k-space data without the need for ground truth or extensive in-vivo training datasets. The network performance must be similar to that of current state-of-the-art algorithms that require large training datasets. METHODS We propose a Weight Agnostic randomly weighted Network method for MRI reconstruction (termed WAN-MRI) which does not update the weights of the neural network but rather chooses the most appropriate connections of the network to reconstruct the data from undersampled k-space measurements. The network architecture has three components: (1) dimensionality reduction layers comprising 3D convolutions, ReLU, and batch normalization; (2) a reshaping layer, which is a fully connected layer; and (3) upsampling layers that resemble the ConvDecoder architecture. The proposed methodology is validated on the fastMRI knee and brain datasets. RESULTS The proposed method provides a significant boost in structural similarity index measure (SSIM) and root mean squared error (RMSE) scores on the fastMRI knee and brain datasets at undersampling factors of R=4 and R=8, when trained on fractal and natural images and fine-tuned with only 20 samples from the fastMRI training k-space dataset. Qualitatively, we see that classical methods such as GRAPPA and SENSE fail to capture subtle details that are clinically relevant. We either outperform or show comparable performance with several existing deep learning techniques (that require extensive training), such as GrappaNET, VariationNET, J-MoDL, and RAKI. CONCLUSION The proposed algorithm (WAN-MRI) is agnostic to the body organ or MRI modality being reconstructed, provides excellent SSIM, PSNR, and RMSE scores, and generalizes better to out-of-distribution examples.
The methodology does not require ground truth data and can be trained using very few undersampled multi-coil k-space training samples.
81
Luo G, Blumenthal M, Heide M, Uecker M. Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models. Magn Reson Med 2023; 90:295-311. [PMID: 36912453 DOI: 10.1002/mrm.29624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 02/05/2023] [Accepted: 02/08/2023] [Indexed: 03/14/2023]
Abstract
PURPOSE We introduce a framework that enables efficient sampling from learned probability distributions for MRI reconstruction. METHOD Samples are drawn from the posterior distribution given the measured k-space using the Markov chain Monte Carlo (MCMC) method, in contrast to conventional deep learning-based MRI reconstruction techniques. In addition to the maximum a posteriori estimate for the image, which can be obtained by maximizing the log-likelihood indirectly or directly, the minimum mean square error estimate and uncertainty maps can also be computed from the drawn samples. The data-driven Markov chains are constructed with the score-based generative model learned from a given image database and are independent of the forward operator that is used to model the k-space measurement. RESULTS We numerically investigate the framework from these perspectives: (1) the interpretation of the uncertainty of the image reconstructed from undersampled k-space; (2) the effect of the number of noise scales used to train the generative models; (3) using a burn-in phase in MCMC sampling to reduce computation; (4) the comparison to conventional ℓ1-wavelet regularized reconstruction; (5) the transferability of learned information; and (6) the comparison to the fastMRI challenge. CONCLUSION A framework is described that connects the diffusion process and advanced generative models with Markov chains. We demonstrate its flexibility in terms of contrasts and sampling patterns using advanced generative priors and the benefits of also quantifying the uncertainty for every pixel.
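The posterior-sampling idea summarized above can be sketched with a toy unadjusted Langevin sampler. The Gaussian prior, identity forward model, step size, and all names below are illustrative assumptions standing in for the paper's learned score-based prior and k-space forward operator:

```python
import numpy as np

def langevin_posterior_sample(y, A, score, n_steps=1000, step=1e-2,
                              sigma=0.1, seed=0):
    """Unadjusted Langevin sampling from p(x | y), with y = A x + noise.

    `score` stands in for the learned generative prior (∇_x log p(x));
    the data-consistency gradient A^T (y - A x) / sigma^2 comes from the
    forward model. A toy sketch, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for _ in range(n_steps):
        grad = score(x) + A.T @ (y - A @ x) / sigma**2  # prior + likelihood
        x = x + step * grad + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy setup: standard-normal prior (score(x) = -x), identity forward model
A = np.eye(4)
x_true = np.array([1.0, -1.0, 0.5, 0.0])
y = A @ x_true
draws = np.stack([langevin_posterior_sample(y, A, lambda x: -x, seed=s)
                  for s in range(50)])
x_mmse = draws.mean(axis=0)          # MMSE estimate from posterior samples
uncertainty = draws.std(axis=0)      # per-element uncertainty map
```

Averaging the draws gives the minimum-mean-square-error estimate, and their spread gives the per-pixel uncertainty map the abstract refers to.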
Affiliation(s)
- Guanxiong Luo
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Moritz Blumenthal
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
- Martin Heide
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Martin Uecker
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
- German Centre for Cardiovascular Research (DZHK) Partner Site Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
82
Oscanoa JA, Middione MJ, Alkan C, Yurt M, Loecher M, Vasanawala SS, Ennis DB. Deep Learning-Based Reconstruction for Cardiac MRI: A Review. Bioengineering (Basel) 2023; 10:334. [PMID: 36978725 PMCID: PMC10044915 DOI: 10.3390/bioengineering10030334] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 03/03/2023] [Accepted: 03/03/2023] [Indexed: 03/09/2023] Open
Abstract
Cardiac magnetic resonance (CMR) is an essential clinical tool for the assessment of cardiovascular disease. Deep learning (DL) has recently revolutionized the field through image reconstruction techniques that allow unprecedented data undersampling rates. These fast acquisitions have the potential to considerably impact the diagnosis and treatment of cardiovascular disease. Herein, we provide a comprehensive review of DL-based reconstruction methods for CMR. We place special emphasis on state-of-the-art unrolled networks, which are heavily based on a conventional image reconstruction framework. We review the main DL-based methods and connect them to the relevant conventional reconstruction theory. Next, we review several methods developed to tackle specific challenges that arise from the characteristics of CMR data. Then, we focus on DL-based methods developed for specific CMR applications, including flow imaging, late gadolinium enhancement, and quantitative tissue characterization. Finally, we discuss the pitfalls and future outlook of DL-based reconstructions in CMR, focusing on the robustness, interpretability, clinical deployment, and potential for new methods.
Affiliation(s)
- Julio A. Oscanoa
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Cagan Alkan
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Mahmut Yurt
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Michael Loecher
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Daniel B. Ennis
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
83
Yan X, Ran L, Zou L, Luo Y, Yang Z, Zhang S, Zhang S, Xu J, Huang L, Xia L. Dark blood T2-weighted imaging of the human heart with AI-assisted compressed sensing: a patient cohort study. Quant Imaging Med Surg 2023; 13:1699-1710. [PMID: 36915316 PMCID: PMC10006119 DOI: 10.21037/qims-22-607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2022] [Accepted: 12/16/2022] [Indexed: 02/12/2023]
Abstract
Background Dark blood T2-weighted (DB-T2W) imaging is widely used to evaluate myocardial edema in myocarditis and inflammatory cardiomyopathy. However, this technique is sensitive to arrhythmia, tachycardia, and cardiac and respiratory motion due to the long scan time with multiple breath-holds. The application of artificial intelligence (AI)-assisted compressed sensing (ACS) has facilitated significant progress in accelerating medical imaging. However, the effect of ACS on DB-T2W imaging has not been elucidated. This study aimed to examine the effects of ACS on the image quality of single-shot and multi-shot DB-T2W imaging of edema. Methods Thirty-three patients were included in this study and underwent DB-T2W imaging with ACS, including single-shot acquisition (SS-ACS) and multi-shot acquisition (MS-ACS). The resulting images were compared with those of conventional multi-shot DB-T2W imaging with parallel imaging (MS-PI). Quantitative assessments of the signal-to-noise ratio (SNR), tissue contrast ratio (CR), and contrast-to-noise ratio (CNR) were performed. Three radiologists independently evaluated the overall image quality, blood nulling, free wall of the left ventricle, free wall of the right ventricle, and interventricular septum using a 5-point Likert scale. Results The total scan time of DB-T2W imaging with ACS was significantly reduced compared to conventional parallel imaging [number of heartbeats (SS-ACS:MS-ACS:MS-PI) =19:63:99; P<0.001]. The SNRmyocardium and CNRblood-myocardium of MS-ACS and SS-ACS were higher than those of MS-PI (all P values <0.01). Furthermore, the CRblood-myocardium of SS-ACS was also higher than that of MS-PI (P<0.01). There were significant differences in overall image quality, blood nulling, left ventricle free wall visibility, and septum visibility between the MS-PI, MS-ACS, and SS-ACS protocols (all P values <0.05). Moreover, blood in the heart was better nulled using SS-ACS (P<0.01).
Conclusions The ACS method shortens the scan time of DB-T2W imaging and achieves comparable or even better image quality compared to the PI method. Moreover, DB-T2W imaging using the ACS method can reduce the number of breath-holds to 1 with single-shot acquisition.
Affiliation(s)
- Xianghu Yan
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lingping Ran
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lixian Zou
- Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Yi Luo
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhaoxia Yang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jian Xu
- UIH America, Inc., Houston, TX, USA
- Lu Huang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Liming Xia
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
84
Guan Y, Tu Z, Wang S, Wang Y, Liu Q, Liang D. Magnetic resonance imaging reconstruction using a deep energy-based model. NMR IN BIOMEDICINE 2023; 36:e4848. [PMID: 36262093 DOI: 10.1002/nbm.4848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Revised: 09/09/2022] [Accepted: 09/27/2022] [Indexed: 06/16/2023]
Abstract
Although recent deep energy-based generative models (EBMs) have shown encouraging results in many image-generation tasks, how to take advantage of the self-adversarial cogitation of deep EBMs to boost the performance of magnetic resonance imaging (MRI) reconstruction remains an open question. With the successful application of deep learning in a wide range of MRI reconstructions, a line of emerging research involves formulating an optimization-based reconstruction method in the space of a generative model. Leveraging this, a novel regularization strategy is introduced in this article that takes advantage of the self-adversarial cogitation of the deep energy-based model. More precisely, we advocate alternating learning by a more powerful energy-based model with maximum likelihood estimation to obtain the deep energy-based information, represented as a prior image. Simultaneously, implicit inference with Langevin dynamics is a unique property of the reconstruction. In contrast to other generative models for reconstruction, the proposed method utilizes the deep energy-based information as the image prior in reconstruction to improve image quality. Experimental results show that the proposed technique achieves high reconstruction accuracy that is competitive with state-of-the-art methods and does not suffer from mode collapse. Algorithmically, an iterative approach is presented to strengthen EBM training with the gradient of the energy network. The robustness and reproducibility of the algorithm were also experimentally validated. More importantly, the proposed reconstruction framework can be generalized to most MRI reconstruction scenarios.
Affiliation(s)
- Yu Guan
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Zongjiang Tu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yuhao Wang
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Medical AI Research Center, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
85
Dai Y, Wang C, Wang H. Deep compressed sensing MRI via a gradient-enhanced fusion model. Med Phys 2023; 50:1390-1405. [PMID: 36695158 DOI: 10.1002/mp.16164] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 09/16/2022] [Accepted: 12/05/2022] [Indexed: 01/26/2023] Open
Abstract
BACKGROUND Compressed sensing has been employed to accelerate magnetic resonance imaging by sampling fewer measurements. However, conventional iterative optimization-based CS-MRI methods are time-consuming due to their iterative calculations and often generalize poorly on multicontrast datasets. Most deep-learning-based CS-MRI methods focus on learning an end-to-end mapping while ignoring prior knowledge present in MR images. PURPOSE We propose an iterative fusion model to integrate image- and gradient-based priors into reconstruction via convolutional neural network models while maintaining high quality and preserving detailed information. METHODS We propose a gradient-enhanced fusion network (GFN) for fast and accurate MRI reconstruction, in which dense blocks with dilated convolution and dense residual learning are used to capture abundant features with fewer parameters. Meanwhile, decomposed gradient maps containing obvious structural information are introduced into the network to enhance the reconstructed images. Besides this, gradient-based priors along the X and Y directions are exploited to learn adaptive tight frames for reconstructing the desired image contrast and edges by respective gradient fusion networks. After that, both image and gradient priors are fused in the proposed optimization model, in which we employ the l2-norm to promote the sparsity of the gradient priors. The proposed fusion model effectively helps to capture edge structures in the gradient images and to preserve more detailed information in the MR images. RESULTS Experimental results demonstrate that the proposed method outperforms several CS-MRI methods in terms of peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and visualizations on three sampling masks with different rates. Notably, to evaluate the generalization ability, the proposed model was subjected to cross-center training and testing experiments for all three modalities and showed more stable performance than other approaches. In addition, the proposed fusion model was applied to other comparable deep learning methods, and the quantitative results show that their reconstruction results are clearly improved. CONCLUSIONS The gradient-based priors reconstructed from the GFNs can effectively enhance the edges and details of under-sampled data. The proposed fusion model, which integrates image and gradient priors using the l2-norm, can effectively improve the generalization ability on multicontrast dataset reconstruction.
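The idea of fusing an image prior with X/Y gradient priors can be illustrated, under strong simplifying assumptions, by a classical least-squares fusion solved in the Fourier domain. The function name, the circular-difference model, and the closed-form solve are hypothetical stand-ins for the paper's learned fusion networks:

```python
import numpy as np

def fuse_image_and_gradients(x_img, gx, gy, lam=1.0):
    """Closed-form fusion of an image prior with X/Y gradient priors,

        argmin_x ||x - x_img||^2 + lam * (||Dx x - gx||^2 + ||Dy x - gy||^2),

    solved in the Fourier domain with circular finite differences Dx, Dy.
    A generic least-squares sketch, not the paper's trained GFN model.
    """
    h, w = x_img.shape
    dx = np.zeros((h, w)); dx[0, 0] = -1.0; dx[0, 1] = 1.0
    dy = np.zeros((h, w)); dy[0, 0] = -1.0; dy[1, 0] = 1.0
    Dx, Dy = np.fft.fft2(dx), np.fft.fft2(dy)
    num = (np.fft.fft2(x_img)
           + lam * (np.conj(Dx) * np.fft.fft2(gx) + np.conj(Dy) * np.fft.fft2(gy)))
    den = 1.0 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))

# Consistency check: when gx, gy are the circular differences of x_img,
# the fused solution recovers x_img.
rng = np.random.default_rng(1)
x = rng.standard_normal((16, 16))
gx = np.roll(x, 1, axis=1) - x
gy = np.roll(x, 1, axis=0) - x
fused = fuse_image_and_gradients(x, gx, gy, lam=0.5)
```

When the gradient priors carry edge structure absent from a blurry image prior, the same solve transfers that structure into the fused result, which is the role the learned gradient priors play in the paper.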
Affiliation(s)
- Yuxiang Dai
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
- He Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Human Phenome Institute, Fudan University, Shanghai, China
86
Yang Q, Ma L, Zhou Z, Bao J, Yang Q, Huang H, Cai S, He H, Chen Z, Zhong J, Cai C. Rapid high-fidelity T2* mapping using single-shot overlapping-echo acquisition and deep learning reconstruction. Magn Reson Med 2023; 89:2157-2170. [PMID: 36656132 DOI: 10.1002/mrm.29585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 12/07/2022] [Accepted: 12/29/2022] [Indexed: 01/20/2023]
Abstract
PURPOSE To develop and evaluate a single-shot quantitative MRI technique called GRE-MOLED (gradient-echo multiple overlapping-echo detachment) for rapid T2* mapping. METHODS In GRE-MOLED, multiple echoes with different TEs are generated and captured in a single shot of the k-space through MOLED encoding and EPI readout. A deep neural network, trained by synthetic data, was employed for end-to-end parametric mapping from overlapping-echo signals. GRE-MOLED uses pure GRE acquisition with a single echo train to deliver T2* maps in less than 90 ms per slice. The self-registered B0 information modulated in the image phase was utilized for distortion-corrected parametric mapping. The proposed method was evaluated in phantoms, healthy volunteers, and task-based FMRI experiments. RESULTS The quantitative results of GRE-MOLED T2* mapping demonstrated good agreement with those obtained from the multi-echo GRE method (Pearson's correlation coefficient = 0.991 and 0.973 for phantom and in vivo brains, respectively). High intrasubject repeatability (coefficient of variation <1.0%) was also achieved in a scan-rescan test. Enabled by deep learning reconstruction, GRE-MOLED showed excellent robustness to geometric distortion, noise, and random subject motion. Compared to the conventional FMRI approach, GRE-MOLED also achieved a higher temporal SNR and BOLD sensitivity in task-based FMRI. CONCLUSION GRE-MOLED is a new real-time technique for T2* quantification with high efficiency and quality, and it has the potential to be a better quantitative BOLD detection method.
Affiliation(s)
- Qinqin Yang
- Department of Electronic Science, Xiamen University, Xiamen, Fujian, China
- Lingceng Ma
- Department of Electronic Science, Xiamen University, Xiamen, Fujian, China
- Zihan Zhou
- The Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China
- Jianfeng Bao
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou University, Zhengzhou, Henan, China
- Qizhi Yang
- Department of Electronic Science, Xiamen University, Xiamen, Fujian, China
- Haitao Huang
- Department of Electronic Science, Xiamen University, Xiamen, Fujian, China
- Shuhui Cai
- Department of Electronic Science, Xiamen University, Xiamen, Fujian, China
- Hongjian He
- The Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China
- Zhong Chen
- Department of Electronic Science, Xiamen University, Xiamen, Fujian, China
- Jianhui Zhong
- The Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China
- Department of Imaging Sciences, University of Rochester, Rochester, New York, USA
- Congbo Cai
- Department of Electronic Science, Xiamen University, Xiamen, Fujian, China
87
Li H, Yang M, Kim JH, Zhang C, Liu R, Huang P, Liang D, Zhang X, Li X, Ying L. SuperMAP: Deep ultrafast MR relaxometry with joint spatiotemporal undersampling. Magn Reson Med 2023; 89:64-76. [PMID: 36128884 PMCID: PMC9617769 DOI: 10.1002/mrm.29411] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Revised: 07/19/2022] [Accepted: 07/25/2022] [Indexed: 11/09/2022]
Abstract
PURPOSE To develop an ultrafast and robust MR parameter mapping network using deep learning. THEORY AND METHODS We design a deep learning framework called SuperMAP that directly converts a series of undersampled (both in k-space and parameter-space) parameter-weighted images into several quantitative maps, bypassing the conventional exponential fitting procedure. We also present a novel technique to simultaneously reconstruct T1rho and T2 relaxation maps within a single scan. Full data were acquired and retrospectively undersampled for training and testing using traditional and state-of-the-art techniques for comparison. Prospective data were also collected to evaluate the trained network. The performance of all methods is evaluated using the parameter quantification errors and other metrics in the segmented regions of interest. RESULTS SuperMAP achieved accurate T1rho and T2 mapping with high acceleration factors (R = 24 and R = 32). It exploited both spatial and temporal information and yielded low error (normalized mean square error of 2.7% at R = 24 and 2.8% at R = 32) and high resemblance (structural similarity of 97% at R = 24 and 96% at R = 32) to the gold standard. The network trained with retrospectively undersampled data also works well for the prospective data (with a slightly lower acceleration factor). SuperMAP is also superior to conventional methods. CONCLUSION Our results demonstrate the feasibility of generating superfast MR parameter maps from very few undersampled parameter-weighted images. SuperMAP can simultaneously generate T1rho and T2 relaxation maps in a short scan time.
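For context on the conventional exponential fitting that a direct mapping such as SuperMAP bypasses, a minimal per-pixel mono-exponential fit might look like the sketch below; the function name and interface are illustrative, and real pipelines fit every pixel of every slice, which is what makes the conventional route slow:

```python
import numpy as np

def fit_t2_loglinear(te, signal):
    """Conventional mono-exponential fit S(TE) = S0 * exp(-TE / T2),
    estimated by log-linear least squares for one pixel's echo series."""
    slope, intercept = np.polyfit(np.asarray(te, dtype=float),
                                  np.log(signal), 1)
    return -1.0 / slope, np.exp(intercept)  # (T2, S0)

te = np.array([10.0, 20.0, 40.0, 60.0, 80.0])   # echo times in ms
signal = 1000.0 * np.exp(-te / 45.0)            # noiseless decay, T2 = 45 ms
t2, s0 = fit_t2_loglinear(te, signal)
```

A fully sampled multi-echo series is needed at every pixel for this fit, which is exactly the requirement that joint k-space/parameter-space undersampling plus a learned direct mapping removes.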
Affiliation(s)
- Hongyu Li
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Mingrui Yang
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
- Jee Hun Kim
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
- Chaoyi Zhang
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Ruiying Liu
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Peizhou Huang
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Medical AI Research Center, SIAT, CAS, Shenzhen, China
- Xiaoliang Zhang
- Biomedical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Xiaojuan Li
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
- Leslie Ying
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Biomedical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
|
88
|
Takeshima H. [[MRI] 2. Recent Research on MR Image Reconstruction Using Artificial Intelligence]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2023; 79:863-869. [PMID: 37599072 DOI: 10.6009/jjrt.2023-2236] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/22/2023]
Affiliation(s)
- Hidenori Takeshima
- Imaging Modality Group, Advanced Technology Research Department, Research and Development Center, Canon Medical Systems Corporation
|
89
|
Guo D, Zeng G, Fu H, Wang Z, Yang Y, Qu X. A Joint Group Sparsity-based deep learning for multi-contrast MRI reconstruction. J Magn Reson 2023; 346:107354. [PMID: 36527935 DOI: 10.1016/j.jmr.2022.107354] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 11/24/2022] [Accepted: 12/03/2022] [Indexed: 06/17/2023]
Abstract
Multi-contrast magnetic resonance imaging (MRI) can provide richer diagnostic information. The data acquisition time, however, is longer than that of single-contrast imaging. k-Space undersampling is an effective way to reduce this time, but a smart reconstruction algorithm is required to remove undersampling artifacts. Traditional algorithms commonly explore the similarity of multi-contrast images through joint sparsity. However, these algorithms are time-consuming due to the iterative process and require manual tuning of hyperparameters. Recently, data-driven deep learning has successfully overcome these limitations, but the reconstruction error still needs to be further reduced. Here, we propose a Joint Group Sparsity-based Network (JGSN) for multi-contrast MRI reconstruction, which unrolls the iterative process of the joint sparsity algorithm. The designed network includes data consistency modules, learnable sparse transform modules, and joint group sparsity constraint modules. In particular, the weights of different contrasts in the transform module are shared to reduce network parameters without sacrificing reconstruction quality. Experiments were performed on retrospectively undersampled in vivo brain and knee data. The results show that our method consistently outperforms state-of-the-art methods under different sampling patterns.
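The joint group sparsity constraint that such networks unroll amounts to shrinking the joint norm of corresponding transform coefficients across contrasts, so that shared structure survives while incoherent noise is suppressed in all contrasts at once. A minimal sketch of that operation (the function name, threshold, and toy values are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def group_soft_threshold(coeffs, lam):
    """Shrink the joint L2 norm of each coefficient group, where a group
    collects the same transform coefficient across all contrasts.
    coeffs: array of shape (n_contrasts, n_coeffs)."""
    norms = np.linalg.norm(coeffs, axis=0, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return coeffs * scale

# Two contrasts, two coefficient groups: a strong shared edge and weak noise
x = np.array([[3.0, 0.1],
              [4.0, 0.1]])
y = group_soft_threshold(x, lam=1.0)
# The strong shared group is only shrunk; the weak group is zeroed jointly.
```

In an unrolled network the threshold `lam` becomes a learnable parameter inside each joint group sparsity constraint module.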
Affiliation(s)
- Di Guo
- School of Computer and Information Engineering, Fujian Engineering Research Center for Medical Data Mining and Application, Xiamen University of Technology, Xiamen, China
- Gushan Zeng
- School of Computer and Information Engineering, Fujian Engineering Research Center for Medical Data Mining and Application, Xiamen University of Technology, Xiamen, China
- Hao Fu
- School of Computer and Information Engineering, Fujian Engineering Research Center for Medical Data Mining and Application, Xiamen University of Technology, Xiamen, China
- Zi Wang
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, China
- Yonggui Yang
- Department of Radiology, The Second Affiliated Hospital of Xiamen Medical College, Xiamen, China
- Xiaobo Qu
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, China
|
90
|
Wang Z, Qian C, Guo D, Sun H, Li R, Zhao B, Qu X. One-Dimensional Deep Low-Rank and Sparse Network for Accelerated MRI. IEEE Trans Med Imaging 2023; 42:79-90. [PMID: 36044484 DOI: 10.1109/tmi.2022.3203312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Deep learning has shown astonishing performance in accelerated magnetic resonance imaging (MRI). Most state-of-the-art deep learning reconstructions adopt powerful convolutional neural networks and perform 2D convolution, since many magnetic resonance images or their corresponding k-space data are 2D. In this work, we present a new approach that explores 1D convolution, making the deep network much easier to train and generalize. We integrate the 1D convolution into the proposed deep network, named One-dimensional Deep Low-rank and Sparse network (ODLS), which unrolls the iteration procedure of a low-rank and sparse reconstruction model. Extensive results on in vivo knee and brain datasets demonstrate that the proposed ODLS is very suitable for cases with limited training subjects and provides better reconstruction performance than state-of-the-art methods, both visually and quantitatively. Additionally, ODLS shows good robustness to different undersampling scenarios and to some mismatches between the training and test data. In summary, our work demonstrates that the 1D deep learning scheme is memory-efficient and robust in fast MRI.
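As a toy illustration of the 1D scheme (not the authors' code; the kernel and image are made up): a 1D kernel applied line-by-line carries far fewer parameters than a 2D kernel of the same extent, one reason such networks are easier to train with few subjects.

```python
import numpy as np

def conv1d_rows(image, kernel):
    """Apply one 1D kernel to every row of a 2D image (valid mode),
    mimicking line-by-line 1D convolution."""
    return np.stack([np.convolve(row, kernel, mode="valid") for row in image])

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 image
k = np.array([1.0, 0.0, -1.0])        # 3 parameters; a 3x3 2D kernel needs 9
out = conv1d_rows(img, k)             # shape (4, 2) in valid mode
```

In the actual network the 1D filters are learned, but the parameter-count argument is the same.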
|
91
|
Djebra Y, Marin T, Han PK, Bloch I, El Fakhri G, Ma C. Manifold Learning via Linear Tangent Space Alignment (LTSA) for Accelerated Dynamic MRI With Sparse Sampling. IEEE Trans Med Imaging 2023; 42:158-169. [PMID: 36121938 PMCID: PMC10024645 DOI: 10.1109/tmi.2022.3207774] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The spatial resolution and temporal frame rate of dynamic magnetic resonance imaging (MRI) can be improved by reconstructing images from sparsely sampled k-space data with mathematical modeling of the underlying spatiotemporal signals. These models include sparsity models, linear subspace models, and non-linear manifold models. This work presents a novel linear tangent space alignment (LTSA) model-based framework that exploits the intrinsic low-dimensional manifold structure of dynamic images for accelerated dynamic MRI. The performance of the proposed method was evaluated and compared to state-of-the-art methods using numerical simulation studies as well as 2D and 3D in vivo cardiac imaging experiments. The proposed method achieved the best image reconstruction performance among all the compared methods. The proposed method could prove useful for accelerating many MRI applications, including dynamic MRI, multi-parametric MRI, and MR spectroscopic imaging.
Affiliation(s)
- Yanis Djebra
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA and the LTCI, Telecom Paris, Institut Polytechnique de Paris, Paris, France
- Thibault Marin
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA
- Paul K. Han
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA
- Isabelle Bloch
- LIP6, Sorbonne University, CNRS, Paris, France. This work was partly done while I. Bloch was with the LTCI, Telecom Paris, Institut Polytechnique de Paris, Paris, France
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA
- Chao Ma
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA
|
92
|
Cheng HLM. Emerging MRI techniques for molecular and functional phenotyping of the diseased heart. Front Cardiovasc Med 2022; 9:1072828. [PMID: 36545017 PMCID: PMC9760746 DOI: 10.3389/fcvm.2022.1072828] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Accepted: 11/22/2022] [Indexed: 12/12/2022] Open
Abstract
Recent advances in cardiac MRI (CMR) capabilities have truly transformed its potential for deep phenotyping of the diseased heart. Long known for its unparalleled soft tissue contrast and excellent depiction of three-dimensional (3D) structure, CMR now boasts a range of unique capabilities for probing disease at the tissue and molecular level. We can look beyond coronary vessel blockages and detect vessel disease not visible on a structural level. We can assess if early fibrotic tissue is being laid down in between viable cardiac muscle cells. We can measure deformation of the heart wall to determine early presentation of stiffening. We can even assess how cardiomyocytes are utilizing energy, where abnormalities are often precursors to overt structural and functional deficits. Finally, with artificial intelligence gaining traction due to the high computing power available today, deep learning has proven itself a viable contender with traditional acceleration techniques for real-time CMR. In this review, we will survey five key emerging MRI techniques that have the potential to transform the CMR clinic and permit early detection and intervention. The emerging areas are: (1) imaging microvascular dysfunction, (2) imaging fibrosis, (3) imaging strain, (4) imaging early metabolic changes, and (5) deep learning for acceleration. Through a concerted effort to develop and translate these areas into the CMR clinic, we are committing ourselves to actualizing early diagnostics for the most intractable heart disease phenotypes.
Affiliation(s)
- Hai-Ling Margaret Cheng
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Ted Rogers Centre for Heart Research, Translational Biology & Engineering Program, Toronto, ON, Canada
|
93
|
Yaman B, Gu H, Hosseini SAH, Demirel OB, Moeller S, Ellermann J, Uğurbil K, Akçakaya M. Multi-mask self-supervised learning for physics-guided neural networks in highly accelerated magnetic resonance imaging. NMR Biomed 2022; 35:e4798. [PMID: 35789133 PMCID: PMC9669191 DOI: 10.1002/nbm.4798] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 06/30/2022] [Accepted: 07/02/2022] [Indexed: 06/15/2023]
Abstract
Self-supervised learning has shown great promise because of its ability to train deep learning (DL) magnetic resonance imaging (MRI) reconstruction methods without fully sampled data. Current self-supervised learning methods for physics-guided reconstruction networks split acquired undersampled data into two disjoint sets, where one is used for data consistency (DC) in the unrolled network, while the other is used to define the training loss. In this study, we propose an improved self-supervised learning strategy that more efficiently uses the acquired data to train a physics-guided reconstruction network without a database of fully sampled data. The proposed multi-mask self-supervised learning via data undersampling (SSDU) applies a holdout masking operation on the acquired measurements to split them into multiple pairs of disjoint sets for each training sample, while using one of these pairs for DC units and the other for defining loss, thereby more efficiently using the undersampled data. Multi-mask SSDU is applied on fully sampled 3D knee and prospectively undersampled 3D brain MRI datasets, for various acceleration rates and patterns, and compared with the parallel imaging method, CG-SENSE, and single-mask SSDU DL-MRI, as well as supervised DL-MRI when fully sampled data are available. The results on knee MRI show that the proposed multi-mask SSDU outperforms SSDU and performs as well as supervised DL-MRI. A clinical reader study further ranks the multi-mask SSDU higher than supervised DL-MRI in terms of signal-to-noise ratio and aliasing artifacts. Results on brain MRI show that multi-mask SSDU achieves better reconstruction quality compared with SSDU. The reader study demonstrates that multi-mask SSDU at R = 8 significantly improves reconstruction compared with single-mask SSDU at R = 8, as well as CG-SENSE at R = 2.
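The holdout masking operation at the heart of this training scheme splits the acquired k-space samples into disjoint data-consistency and loss sets, repeated with several masks per training sample. A schematic sketch (mask sizes, holdout fraction, and function name are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_mask_split(acquired_mask, n_masks=4, holdout_frac=0.4):
    """Split the acquired k-space sampling mask into several pairs of
    disjoint (data-consistency, loss) masks, one pair per holdout draw."""
    idx = np.flatnonzero(acquired_mask)
    pairs = []
    for _ in range(n_masks):
        holdout = rng.choice(idx, size=int(holdout_frac * idx.size), replace=False)
        loss_mask = np.zeros_like(acquired_mask)
        loss_mask.flat[holdout] = True
        dc_mask = acquired_mask & ~loss_mask  # disjoint complement
        pairs.append((dc_mask, loss_mask))
    return pairs

mask = rng.random((16, 16)) < 0.3  # toy undersampling pattern
pairs = multi_mask_split(mask)
```

Each pair covers all acquired samples, so repeating the split with multiple masks lets every measurement serve both roles across the training epochs.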
Affiliation(s)
- Burhaneddin Yaman
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Hongyi Gu
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Seyed Amir Hossein Hosseini
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Omer Burak Demirel
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Jutta Ellermann
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Kâmil Uğurbil
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Mehmet Akçakaya
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
|
94
|
Yuliansyah DR, Pan MC, Hsu YF. Sensor-to-Image Based Neural Networks: A Reliable Reconstruction Method for Diffuse Optical Imaging of High-Scattering Media. Sensors (Basel) 2022; 22:9096. [PMID: 36501794 PMCID: PMC9741421 DOI: 10.3390/s22239096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 11/16/2022] [Accepted: 11/20/2022] [Indexed: 06/17/2023]
Abstract
Imaging tasks today are increasingly being shifted toward deep learning-based solutions, and biomedical imaging problems are no exception to this trend. It is appealing to consider deep learning as an alternative for such complex imaging tasks. Although research on deep learning-based solutions continues to thrive, challenges remain that limit their availability in clinical practice. Diffuse optical tomography is a particularly challenging field, since the problem is both ill-posed and ill-conditioned. To obtain a reconstructed image, various regularization-based models and procedures have been developed over the last three decades. In this study, a sensor-to-image based neural network for diffuse optical imaging has been developed as an alternative to the existing Tikhonov regularization (TR) method. It also provides a different structure compared with previous neural network approaches. We focus on realizing a complete image reconstruction function approximation (from sensor to image) by combining multiple deep learning architectures known in imaging fields, which gives more capability to learn than fully connected neural network (FCNN) and/or convolutional neural network (CNN) architectures. We use the idea of transformation from sensor- to image-domain, similar to AUTOMAP, together with the concept of an encoder, which learns a compressed representation of the inputs. Further, a U-net with skip connections is proposed and implemented to extract features and obtain the contrast image. We designed a branching-like structure of the network that fully supports the ring-scanning measurement system, meaning it can deal with various types of experimental data. The output images are obtained by multiplying the contrast images with the background coefficients. Our network produces attainable performance in both simulation and experiment cases, and is proven reliable in reconstructing non-synthesized data. Its superior performance was demonstrated in comparison with the results of the TR method and FCNN models. The proposed and implemented model is feasible for localizing inclusions under various conditions. The strategy created in this paper can be a promising alternative solution for clinical breast tumor imaging applications.
Affiliation(s)
- Min-Chun Pan
- Department of Mechanical Engineering, National Central University, Taoyuan City 320, Taiwan
- Ya-Fen Hsu
- Department of Surgery, Landseed International Hospital, Taoyuan City 324, Taiwan
|
95
|
Zou J, Li C, Jia S, Wu R, Pei T, Zheng H, Wang S. SelfCoLearn: Self-Supervised Collaborative Learning for Accelerating Dynamic MR Imaging. Bioengineering (Basel) 2022; 9:650. [PMID: 36354561 PMCID: PMC9687509 DOI: 10.3390/bioengineering9110650] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 10/19/2022] [Accepted: 10/26/2022] [Indexed: 08/22/2023] Open
Abstract
Recently, deep learning has been extensively investigated for accelerating dynamic magnetic resonance (MR) imaging, with encouraging progress achieved. However, without fully sampled reference data for training, current approaches may have limited ability to recover fine details or structures. To address this challenge, this paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction directly from undersampled k-space data. The proposed SelfCoLearn is equipped with three important components: dual-network collaborative learning, re-undersampling data augmentation, and a specially designed co-training loss. The framework is flexible and can be integrated into various model-based iterative unrolled networks. The proposed method was evaluated on an in vivo dataset and compared to four state-of-the-art methods. The results show that the proposed method possesses strong capabilities in capturing essential and inherent representations for direct reconstruction from undersampled k-space data, and thus enables high-quality and fast dynamic MR imaging.
Affiliation(s)
- Juan Zou
- School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Sen Jia
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ruoyou Wu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Tingrui Pei
- School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, China
- College of Information Science and Technology, Jinan University, Guangzhou 510631, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Shenzhen 518055, China
|
96
|
Ouyang B, Yang Q, Wang X, He H, Ma L, Yang Q, Zhou Z, Cai S, Chen Z, Wu Z, Zhong J, Cai C. Single-shot T2 mapping via multi-echo-train multiple overlapping-echo detachment planar imaging and multitask deep learning. Med Phys 2022; 49:7095-7107. [PMID: 35765150 DOI: 10.1002/mp.15820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 05/02/2022] [Accepted: 06/13/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Quantitative magnetic resonance imaging provides robust biomarkers in clinics. Nevertheless, the lengthy scan time reduces imaging throughput and increases the susceptibility of imaging results to motion. In this context, a single-shot T2 mapping method based on multiple overlapping-echo detachment (MOLED) planar imaging was presented, but the relatively small echo time range limits its accuracy, especially in tissues with large T2. PURPOSE In this work we propose a novel single-shot method, Multi-Echo-Train Multiple OverLapping-Echo Detachment (METMOLED) planar imaging, to accommodate a large range of T2 quantification without additional measurements to rectify signal degeneration arising from refocusing pulse imperfection. METHODS Multiple echo-train techniques were integrated into the MOLED sequence to capture larger TE information. Maps of T2, B1, and spin density were reconstructed synchronously from the acquired METMOLED data via multitask deep learning. A typical U-Net was trained with 3000/600 synthetic data with geometric/brain patterns to learn the mapping relationship between METMOLED signals and quantitative maps. The refocusing pulse imperfection was addressed using the inherent information of the METMOLED data and auxiliary tasks. RESULTS Experimental results on the digital brain (structural similarity (SSIM) index = 0.975/0.991/0.988 for MOLED/METMOLED-2/METMOLED-3, where the hyphenated number denotes the number of echo-trains), physical phantom (slope of linear fitting with reference T2 map = 1.047/1.017/1.006 for MOLED/METMOLED-2/METMOLED-3), and human brain (Pearson's correlation coefficient (PCC) = 0.9581/0.9760/0.9900 for MOLED/METMOLED-2/METMOLED-3) demonstrated that METMOLED improved quantitative accuracy and tissue details compared with MOLED. These improvements were more pronounced in tissues with large T2 and in application scenarios with high temporal resolution (PCC = 0.8692/0.9465/0.9743 for MOLED/METMOLED-2/METMOLED-3). Moreover, METMOLED could rectify the signal deviations induced by the non-ideal slice profiles of refocusing pulses without additional measurements. A preliminary measurement also demonstrated that METMOLED is highly repeatable (mean coefficient of variation (CV) = 1.65%). CONCLUSIONS METMOLED breaks the restriction of echo-train length to TE and provides unbiased T2 estimates over an extensive range. Furthermore, it corrects the effect of refocusing pulse inaccuracy without additional measurements or signal post-processing, thus retaining its single-shot characteristic. This technique would be beneficial for accurate T2 quantification.
Affiliation(s)
- Binyu Ouyang
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Qizhi Yang
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Xiaoyin Wang
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Hongjian He
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Lingceng Ma
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Qinqin Yang
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Zihan Zhou
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Shuhui Cai
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Zhong Chen
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Zhigang Wu
- MSC Clinical and Technical Solutions, Philips Healthcare, Shenzhen, Guangdong, 518005, China
- Jianhui Zhong
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, 310058, China; Department of Imaging Sciences, University of Rochester, Rochester, New York, 14642, USA
- Congbo Cai
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
|
97
|
Hou Y, Liu Q, Chen J, Wu B, Zeng F, Yang Z, Song H, Liu Y. Application value of T2 fluid-attenuated inversion recovery sequence based on deep learning in static lacunar infarction. Acta Radiol 2022; 64:1650-1658. [PMID: 36285480 DOI: 10.1177/02841851221134114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Background Regular monitoring of static lacunar infarction (SLI) lesions plays an important role in preventing disease development and managing prognosis. Magnetic resonance imaging is one method used to monitor SLI lesions. Purpose To evaluate the image quality of the T2 fluid-attenuated inversion recovery (T2-FLAIR) sequence using artificial intelligence-assisted compressed sensing (ACS) in detecting SLI lesions and to assess its clinical applicability. Methods A total of 42 patients were prospectively enrolled and scanned by T2-FLAIR. Two independent readers reviewed the images acquired with the accelerated modes 1D (acceleration factor 2) and ACS (acceleration factors 2, 3, and 4). The overall image quality and lesion image quality were analyzed, as were the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and number of lesions between groups. Results The subjective assessment of overall brain image quality and lesion image quality was consistent between the two readers. The lesion display quality and the overall image quality were better with the traditional 1D acceleration method than with the ACS acceleration method. There was no significant difference in the SNR of the lacunar infarction between groups. The CNR of the images with the 1D acceleration mode was significantly lower than that of images with the ACS acceleration mode. Images with the 1D, ACS2, and ACS3 acceleration modes showed no significant differences in lesion detection, while scan time was reduced by 40% (1D vs. ACS3). Conclusion The ACS acceleration mode can greatly reduce scan time. In addition, the images have good SNR, high CNR, and strong SLI lesion detection ability.
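The SNR and CNR figures of merit used in such comparisons are typically computed from region-of-interest statistics. A minimal sketch using common definitions (the exact ROI conventions of this study are not stated, so the formulas and pixel values below are assumptions for illustration):

```python
import numpy as np

def snr_cnr(lesion_roi, background_roi, noise_roi):
    """ROI-based image quality metrics, as commonly defined:
    SNR = mean(lesion) / sd(noise);
    CNR = |mean(lesion) - mean(background)| / sd(noise)."""
    sd_noise = np.std(noise_roi)
    snr = np.mean(lesion_roi) / sd_noise
    cnr = abs(np.mean(lesion_roi) - np.mean(background_roi)) / sd_noise
    return snr, cnr

# Made-up pixel values for three ROIs
lesion = np.array([120.0, 118.0, 122.0])
background = np.array([80.0, 79.0, 81.0])
noise = np.array([2.0, -2.0, 2.0, -2.0])
snr, cnr = snr_cnr(lesion, background, noise)
```

Reporting both metrics separates overall signal fidelity (SNR) from lesion conspicuity against surrounding tissue (CNR).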
Affiliation(s)
- Yanzhen Hou
- Medical Imaging Center, Shenzhen Hospital of Southern Medical University, Shenzhen, Guangdong Province, PR China
- Qian Liu
- Medical Imaging Center, Shenzhen Hospital of Southern Medical University, Shenzhen, Guangdong Province, PR China
- Jialing Chen
- Medical Imaging Center, Shenzhen Hospital of Southern Medical University, Shenzhen, Guangdong Province, PR China
- Bin Wu
- Medical Imaging Center, Shenzhen Hospital of Southern Medical University, Shenzhen, Guangdong Province, PR China
- Feihong Zeng
- Medical Imaging Center, Shenzhen Hospital of Southern Medical University, Shenzhen, Guangdong Province, PR China
- Zhongxian Yang
- Medical Imaging Center, Shenzhen Hospital of Southern Medical University, Shenzhen, Guangdong Province, PR China
- Haiyan Song
- Department of Radiology, Shenzhen Second People's Hospital, Shenzhen, Guangdong Province, PR China
- Yubao Liu
- Medical Imaging Center, Shenzhen Hospital of Southern Medical University, Shenzhen, Guangdong Province, PR China
|
98
|
Mumuni AN, Hasford F, Udeme NI, Dada MO, Awojoyogbe BO. A SWOT analysis of artificial intelligence in diagnostic imaging in the developing world: making a case for a paradigm shift. Phys Sci Rev 2022. [DOI: 10.1515/psr-2022-0121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Diagnostic imaging (DI) refers to techniques and methods of creating images of the body's internal parts and organs, with or without the use of ionizing radiation, for the purposes of diagnosing, monitoring and characterizing diseases. By default, DI equipment is technology-based, and in recent times there has been widespread automation of DI operations in high-income countries, while low- and middle-income countries (LMICs) have yet to gain traction in automated DI. Advanced DI techniques employ artificial intelligence (AI) protocols to enable imaging equipment to perceive data more accurately than humans do and, automatically or under expert evaluation, to make clinical decisions such as diagnosis and characterization of diseases. In this narrative review, SWOT analysis is used to examine the strengths, weaknesses, opportunities and threats associated with the deployment of AI-based DI protocols in LMICs. Drawing from this analysis, a case is then made to justify the need for widespread AI applications in DI in resource-poor settings. Among other strengths discussed, AI-based DI systems could enhance accuracy in the diagnosis, monitoring and characterization of diseases and offer efficient image acquisition, processing, segmentation and analysis, but they have weaknesses regarding the need for big data, high initial and maintenance costs, and inadequate technical expertise among professionals. They present opportunities for synthetic modality transfer, increased access to imaging services, and protocol optimization, and threats of input training data biases, lack of regulatory frameworks, and a perceived fear of job losses among DI professionals. The analysis showed that successful integration of AI in DI procedures could position LMICs towards achievement of universal health coverage by 2030/2035. LMICs will, however, have to learn from the experiences of advanced settings, train critical staff in relevant areas of AI, and proceed to develop in-house AI systems with all relevant stakeholders on board.
Affiliation(s)
- Francis Hasford
- Department of Medical Physics, University of Ghana, Ghana Atomic Energy Commission, Accra, Ghana
|
99
|
Dillman JR, Somasundaram E, Brady SL, He L. Current and emerging artificial intelligence applications for pediatric abdominal imaging. Pediatr Radiol 2022; 52:2139-2148. [PMID: 33844048 DOI: 10.1007/s00247-021-05057-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/27/2020] [Revised: 01/25/2021] [Accepted: 03/16/2021] [Indexed: 12/12/2022]
Abstract
Artificial intelligence (AI) uses computers to mimic cognitive functions of the human brain, allowing inferences to be made from generally large datasets. Traditional machine learning (e.g., decision tree analysis, support vector machines) and deep learning (e.g., convolutional neural networks) are two commonly employed AI approaches both outside and within the field of medicine. Such techniques can be used to evaluate medical images for the purposes of automated detection and segmentation, classification tasks (including diagnosis, lesion or tissue characterization, and prediction), and image reconstruction. In this review article, we highlight recent literature describing current and emerging AI methods applied to abdominal imaging (e.g., CT, MRI and US) and suggest potential future applications of AI in the pediatric population.
Affiliation(s)
- Jonathan R Dillman
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Elan Somasundaram
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Samuel L Brady
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Lili He
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
|
100
|
Artificial Intelligence in Biological Sciences. Life (Basel) 2022; 12:life12091430. [PMID: 36143468 PMCID: PMC9505413 DOI: 10.3390/life12091430] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Revised: 08/25/2022] [Accepted: 09/10/2022] [Indexed: 12/03/2022] Open
Abstract
Artificial intelligence (AI), currently a cutting-edge concept, has the potential to improve the quality of human life. The fields of AI and biological research are becoming increasingly intertwined, and methods for extracting and applying the information stored in living organisms are constantly being refined. As the field of AI matures with more trained algorithms, the potential of its application widens in epidemiology, the study of host–pathogen interactions and drug design. AI is now being applied in several fields, including drug discovery, customized medicine, gene editing, radiography, image processing and medication management. More precise diagnosis and cost-effective treatment will be possible in the near future due to the application of AI-based technologies. In the field of agriculture, farmers have reduced waste, increased output and decreased the time it takes to bring their goods to market through the application of advanced AI-based approaches. Moreover, with the use of AI through machine learning (ML) and deep-learning-based smart programs, one can modify the metabolic pathways of living systems to obtain the best possible outputs with minimal inputs. Such efforts can improve the industrial strains of microbial species to maximize yield in bio-based industrial setups. This article summarizes the potential of AI and its applications in several fields of biology, such as medicine, agriculture and bio-based industry.
|