1. Sarkar A, Das A, Ram K, Ramanarayanan S, Joel SE, Sivaprakasam M. AutoDPS: An unsupervised diffusion model based method for multiple degradation removal in MRI. Comput Methods Programs Biomed 2025; 263:108684. PMID: 40023963. DOI: 10.1016/j.cmpb.2025.108684.
Abstract
BACKGROUND AND OBJECTIVE Diffusion models have demonstrated their ability in image generation and in solving inverse problems such as restoration. Unlike most existing deep-learning based image restoration techniques, which rely on unpaired or paired data for degradation awareness, diffusion models offer an unsupervised, degradation-independent alternative. This is well suited to restoring artifact-corrupted Magnetic Resonance Images (MRI), where it is impractical to model the degradations exactly a priori. In MRI, multiple corruptions arise, for instance, from patient movement compounded by undersampling artifacts from the acquisition settings. METHODS To tackle this scenario, we propose AutoDPS, an unsupervised method for corruption removal in brain MRI based on Diffusion Posterior Sampling. Our method (i) performs motion-related corruption parameter estimation using a blind iterative solver, and (ii) utilizes knowledge of the undersampling pattern when the corruption consists of both motion and undersampling artifacts. We incorporate this corruption operation during sampling to guide the generation toward recovering high-quality images. RESULTS Despite being trained only to denoise and tested on completely unseen corruptions, AutoDPS shows an improvement of ∼1.63 dB in PSNR over baselines for realistic 3D motion restoration and of ∼0.5 dB for random motion with undersampling. Additionally, our experiments demonstrate AutoDPS's resilience to noise and its generalization capability under domain shift, showcasing its robustness and adaptability. CONCLUSION In this paper, we propose an unsupervised method that removes multiple corruptions, mainly motion with undersampling, from MRI images, which is essential for accurate diagnosis. The experiments show promising results on realistic and composite artifacts, with higher improvement margins than other methods. Our code is available at https://github.com/arunima101/AutoDPS/tree/master.
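The core of diffusion posterior sampling is a measurement-consistency gradient applied at every reverse step. The toy below is an illustration only, not the authors' implementation: the corruption operator is a simple linear k-space undersampling, and plain gradient descent stands in for the reverse diffusion of the trained denoiser that AutoDPS would use; all sizes and names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x_true = rng.standard_normal((n, n))      # ground-truth "image" (toy)
mask = rng.random((n, n)) < 0.5           # k-space undersampling pattern
y = mask * np.fft.fft2(x_true)            # corrupted measurement y = A(x)

def fidelity(x):
    """Data-fidelity 0.5 * ||y - A(x)||^2 for the linear corruption A."""
    return 0.5 * np.sum(np.abs(y - mask * np.fft.fft2(x)) ** 2)

def fidelity_grad(x):
    """Analytic gradient of the fidelity term; the adjoint of fft2 is n^2 * ifft2."""
    resid = mask * np.fft.fft2(x) - y
    return x.size * np.real(np.fft.ifft2(mask * resid))

# Guided iterations: in AutoDPS this gradient would steer each reverse
# diffusion step of a pretrained denoiser; here we run only the guidance.
x = np.zeros((n, n))
f0 = fidelity(x)
step = 1.0 / x.size                       # safe step for this operator's norm
for _ in range(300):
    x = x - step * fidelity_grad(x)
f1 = fidelity(x)
```

After the loop the reconstruction agrees with the measurement on all sampled k-space locations; the learned prior (absent in this sketch) is what fills in the unsampled ones.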
Affiliation(s)
- Arunima Sarkar
- Department of Electrical Engineering, Indian Institute of Technology Madras (IITM), Chennai 600036, Tamil Nadu, India
- Ayantika Das
- Department of Electrical Engineering, Indian Institute of Technology Madras (IITM), Chennai 600036, Tamil Nadu, India
- Keerthi Ram
- Healthcare Technology Innovation Centre, IITM, Chennai 600036, Tamil Nadu, India
- Sriprabha Ramanarayanan
- Department of Electrical Engineering, Indian Institute of Technology Madras (IITM), Chennai 600036, Tamil Nadu, India; Healthcare Technology Innovation Centre, IITM, Chennai 600036, Tamil Nadu, India
- Mohanasankar Sivaprakasam
- Department of Electrical Engineering, Indian Institute of Technology Madras (IITM), Chennai 600036, Tamil Nadu, India; Healthcare Technology Innovation Centre, IITM, Chennai 600036, Tamil Nadu, India
2. Zhang R, Zhang Q, Wu Y. A CVAE-based generative model for generalized B1 inhomogeneity corrected chemical exchange saturation transfer MRI at 5 T. Neuroimage 2025; 312:121202. PMID: 40268259. DOI: 10.1016/j.neuroimage.2025.121202.
Abstract
Chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) has emerged as a powerful tool to image endogenous or exogenous macromolecules. CEST contrast depends strongly on the radiofrequency irradiation B1 level, so spatial inhomogeneity of the B1 field biases CEST measurements. Conventional interpolation-based B1 correction methods require CEST dataset acquisition under multiple B1 levels, substantially prolonging scan time. A recently proposed supervised deep learning approach reconstructs the B1 inhomogeneity corrected CEST effect only at the same B1 as the training data, hindering its generalization to other B1 levels. In this study, we proposed a Conditional Variational Autoencoder (CVAE)-based generative model to generate B1 inhomogeneity corrected Z spectra from a single CEST acquisition. The model was trained on pixel-wise source-target paired Z spectra acquired under multiple B1 levels, with the target B1 as a conditional variable. Numerical simulation and healthy human brain imaging at 5 T were performed to evaluate the performance of the proposed model in B1 inhomogeneity corrected CEST MRI. Results showed that the generated B1-corrected Z spectra agreed well with the reference averaged from regions with subtle B1 inhomogeneity. Moreover, the performance of the proposed model in correcting B1 inhomogeneity in the APT CEST effect, as measured by both MTRasym and [Formula: see text] at 3.5 ppm, was superior to conventional Z/contrast-B1-interpolation and other deep learning methods, especially when the target B1 was not included in the sampling or training dataset. In summary, the proposed model allows generalized B1 inhomogeneity correction, benefiting quantitative CEST MRI in clinical routines.
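For context, the conventional interpolation-based correction that the CVAE replaces works per pixel: Z values are acquired at several nominal B1 levels, then interpolated along the locally actual B1 axis (nominal level times the relative B1 from a B1 map) to the target B1. A toy numpy sketch; the saturation curve, levels, and relative B1 value are all illustrative stand-ins, not from the paper:

```python
import numpy as np

def z_value(b1):
    """Toy Z-spectrum value vs. B1 (illustrative stand-in for real CEST physics)."""
    return 1.0 - b1 ** 2 / (b1 ** 2 + 1.0)

nominal_levels = np.array([0.5, 1.0, 1.5])  # acquired B1 levels (a.u.)
rel_b1 = 0.9                                # relative B1 at this pixel (from a B1 map)
target_b1 = 1.0                             # B1 at which we want the corrected Z

# What the scanner actually delivered at this pixel, and the measured Z values
actual_b1 = nominal_levels * rel_b1
measured_z = z_value(actual_b1)

# Pixel-wise linear interpolation along the actual-B1 axis to the target B1
z_corrected = np.interp(target_b1, actual_b1, measured_z)
```

The multi-level acquisition this loop implies is exactly the scan-time cost the generative model is designed to remove.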
Affiliation(s)
- Ruifen Zhang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Xili, Nanshan, Shenzhen, 518055, Guangdong, China
- Qiyang Zhang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Xili, Nanshan, Shenzhen, 518055, Guangdong, China
- Yin Wu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Xili, Nanshan, Shenzhen, 518055, Guangdong, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Xili, Nanshan, Shenzhen, 518055, Guangdong, China; State Key Laboratory of Biomedical Imaging Science and System, 1068 Xueyuan Boulevard, Xili, Nanshan, Shenzhen, 518055, Guangdong, China
3. Melazzini L, Bortolotto C, Brizzi L, Achilli M, Basla N, D'Onorio De Meo A, Gerbasi A, Bottinelli OM, Bellazzi R, Preda L. AI for image quality and patient safety in CT and MRI. Eur Radiol Exp 2025; 9:28. PMID: 39987533. PMCID: PMC11847764. DOI: 10.1186/s41747-025-00562-5.
Abstract
Substantial effort has recently been dedicated to developing artificial intelligence (AI) solutions, especially deep learning-based ones, tailored to enhance radiological procedures, in particular algorithms designed to minimize radiation exposure and enhance image clarity. The goal is thus not only better diagnostic accuracy but also reduced potential harm to patients, exemplifying the intersection of technological innovation and the highest standards of patient care. We provide herein an overview of recent AI developments in computed tomography (CT) and magnetic resonance imaging (MRI). Major AI results in CT concern: optimization of patient positioning, scan range selection (avoiding "overscanning"), and choice of technical parameters; reduction of the amount of injected contrast agent and of the injection flow rate (also avoiding extravasation); and faster and better image reconstruction, reducing noise level and artifacts. Major AI results in MRI concern: reconstruction of undersampled images; artifact removal, including artifacts derived from unintentional patient (or fetal) movement or from heart motion; and up to 80-90% reduction of GBCA dose. Challenges include limited generalizability, lack of external validation, insufficient explainability of models, and opacity of decision-making. Developing explainable AI algorithms that provide transparent and interpretable outputs is essential to enable seamless AI integration into CT and MRI practice. RELEVANCE STATEMENT: This review highlights how AI-driven advancements in CT and MRI improve image quality and enhance patient safety by leveraging AI solutions for dose reduction, contrast optimization, noise reduction, and efficient image reconstruction, paving the way for safer, faster, and more accurate diagnostic imaging practices. KEY POINTS: Advancements in AI are revolutionizing the way radiological images are acquired, reconstructed, and interpreted. AI algorithms can assist in optimizing radiation doses, reducing scan times, and enhancing image quality. AI techniques are paving the way for a future of more efficient, accurate, and safe medical imaging examinations.
Affiliation(s)
- Luca Melazzini
- Department of Clinical, Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Chandra Bortolotto
- Department of Clinical, Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Department of Radiology, IRCCS Policlinico San Matteo, Pavia, Italy
- Leonardo Brizzi
- Department of Clinical, Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Marina Achilli
- Department of Clinical, Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Nicoletta Basla
- Department of Clinical, Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Alessia Gerbasi
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Olivia Maria Bottinelli
- Department of Clinical, Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Riccardo Bellazzi
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Lorenzo Preda
- Department of Clinical, Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Department of Radiology, IRCCS Policlinico San Matteo, Pavia, Italy
4. Safari M, Eidex Z, Pan S, Qiu RLJ, Yang X. Self-supervised adversarial diffusion models for fast MRI reconstruction. Med Phys 2025. PMID: 39924867. DOI: 10.1002/mp.17675.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) offers excellent soft tissue contrast essential for diagnosis and treatment, but its long acquisition times can cause patient discomfort and motion artifacts. PURPOSE To propose a self-supervised deep learning-based compressed sensing MRI method named "Self-Supervised Adversarial Diffusion for MRI Accelerated Reconstruction (SSAD-MRI)" to accelerate data acquisition without requiring fully sampled datasets. MATERIALS AND METHODS We used the fastMRI multi-coil brain axial T2-weighted (T2-w) dataset from 1376 cases and single-coil brain quantitative magnetization-prepared 2 rapid acquisition gradient echoes T1 maps from 318 cases to train and test our model. Robustness against domain shift was evaluated using two out-of-distribution (OOD) datasets: a multi-coil brain axial postcontrast T1-weighted (T1c) dataset from 50 cases and an axial T1-weighted (T1-w) dataset from 50 patients. Data were retrospectively subsampled at acceleration rates R ∈ {2×, 4×, 8×}. SSAD-MRI partitions a random sampling pattern into two disjoint sets, ensuring data consistency during training. We compared our method with the ReconFormer Transformer and SS-MRI, assessing performance using normalized mean squared error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Statistical tests included one-way analysis of variance and multi-comparison Tukey's honestly significant difference (HSD) tests. RESULTS SSAD-MRI preserved fine structures and brain abnormalities visually better than the comparative methods at R = 8× for both multi-coil and single-coil datasets. It achieved the lowest NMSE at R ∈ {4×, 8×}, and the highest PSNR and SSIM values at all acceleration rates for the multi-coil dataset. Similar trends were observed for the single-coil dataset, though SSIM values were comparable to ReconFormer at R ∈ {2×, 8×}. These results were further confirmed by voxel-wise correlation scatter plots. OOD results showed significant (p ≪ 10⁻⁵) improvements in undersampled image quality after reconstruction. CONCLUSIONS SSAD-MRI successfully reconstructs fully sampled images without utilizing them in the training step, potentially reducing imaging costs and enhancing image quality crucial for diagnosis and treatment.
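The self-supervised ingredient is the partition of the acquired sampling pattern into two disjoint sets: the network sees k-space from one set and the training loss is computed on the other, so no fully sampled reference is ever needed. A minimal numpy sketch of such a split (the 40% sampling density and 30% hold-out ratio are arbitrary choices for illustration, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
acquired = rng.random((n, n)) < 0.4        # random undersampling pattern

# Partition acquired locations into two disjoint sets: theta (network input)
# and lam (loss locations); here ~30% of the samples are held out for the loss.
coords = np.argwhere(acquired)
rng.shuffle(coords)                         # shuffle sampled locations in place
n_loss = int(0.3 * len(coords))
theta = np.zeros_like(acquired)
lam = np.zeros_like(acquired)
for r, c in coords[n_loss:]:
    theta[r, c] = True
for r, c in coords[:n_loss]:
    lam[r, c] = True
```

Because the two sets are disjoint, the loss measures genuine prediction of unseen k-space rather than memorization of the input samples.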
Affiliation(s)
- Mojtaba Safari
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
5. Cui ZX, Liu C, Fan X, Cao C, Cheng J, Zhu Q, Liu Y, Jia S, Wang H, Zhu Y, Zhou Y, Zhang J, Liu Q, Liang D. Physics-informed DeepMRI: k-space interpolation meets heat diffusion. IEEE Trans Med Imaging 2024; 43:3503-3520. PMID: 39292579. DOI: 10.1109/tmi.2024.3462988.
Abstract
Recently, diffusion models have shown considerable promise for MRI reconstruction. However, extensive experimentation has revealed that these models are prone to generating artifacts due to the inherent randomness involved in generating images from pure noise. To achieve more controlled image reconstruction, we reexamine the concept of interpolatable physical priors in k-space data, focusing specifically on the interpolation of high-frequency (HF) k-space data from low-frequency (LF) k-space data. Broadly, this insight drives a shift in the generation paradigm from random noise to a more deterministic approach grounded in the existing LF k-space data. Building on this, we first establish a relationship between the interpolation of HF k-space data from LF k-space data and the reverse heat diffusion process, providing a fundamental framework for designing diffusion models that generate the missing HF data. To further improve reconstruction accuracy, we integrate a traditional physics-informed k-space interpolation model into our diffusion framework as a data fidelity term. Experimental validation using publicly available datasets demonstrates that our approach significantly surpasses traditional k-space interpolation methods, deep learning-based k-space interpolation techniques, and conventional diffusion models, particularly in HF regions. Finally, we assess the generalization performance of our model across various out-of-distribution datasets. Our code is available at https://github.com/ZhuoxuCui/Heat-Diffusion.
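The paper's key observation can be demonstrated directly: running heat diffusion on an image multiplies its k-space by exp(-t|k|^2), damping HF data while leaving the DC term untouched, so generating HF from LF amounts to reversing this process. A minimal numpy illustration (the image, grid, and time constant are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
img = rng.standard_normal((n, n))

# Heat diffusion for time t acts in k-space as multiplication by exp(-t*|k|^2)
f = np.fft.fftfreq(n)
fy, fx = np.meshgrid(f, f, indexing="ij")
k2 = (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
t = 5.0
kspace = np.fft.fft2(img)
diffused = kspace * np.exp(-t * k2)

# DC (k = 0) is preserved exactly; high-frequency energy shrinks drastically
hf = k2 > np.median(k2)
hf_before = np.sum(np.abs(kspace[hf]) ** 2)
hf_after = np.sum(np.abs(diffused[hf]) ** 2)
dc_before = kspace[0, 0]
dc_after = diffused[0, 0]
```

Reversing this exponential damping is ill-posed in the HF tail, which is exactly where the paper's learned reverse process takes over from plain deconvolution.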
6. Xue Z, Zhu S, Yang F, Gao J, Peng H, Zou C, Jin H, Hu C. A hybrid deep image prior and compressed sensing reconstruction method for highly accelerated 3D coronary magnetic resonance angiography. Front Cardiovasc Med 2024; 11:1408351. PMID: 39328236. PMCID: PMC11424428. DOI: 10.3389/fcvm.2024.1408351.
Abstract
Introduction High-resolution whole-heart coronary magnetic resonance angiography (CMRA) often suffers from unreasonably long scan times, rendering imaging acceleration highly desirable. Traditional reconstruction methods used in CMRA rely on either hand-crafted priors or supervised learning models. Although the latter often yield superior reconstruction quality, they require a large amount of training data and memory resources, and may encounter generalization issues when dealing with out-of-distribution datasets. Methods To address these challenges, we introduce an unsupervised reconstruction method that combines deep image prior (DIP) with compressed sensing (CS) to accelerate 3D CMRA. This method incorporates a slice-by-slice DIP reconstruction and 3D total variation (TV) regularization, enabling high-quality reconstruction under a significant acceleration while enforcing continuity in the slice direction. We evaluated our method by comparing it to iterative SENSE, CS-TV, CS-wavelet, and other DIP-based variants, using both retrospectively and prospectively undersampled datasets. Results The results demonstrate the superiority of our 3D DIP-CS approach, which improved the reconstruction accuracy relative to the other approaches across both datasets. Ablation studies further reveal the benefits of combining DIP with 3D TV regularization, which leads to significant improvements of image quality over pure DIP-based methods. Evaluation of vessel sharpness and image quality scores shows that DIP-CS improves the quality of reformatted coronary arteries. Discussion The proposed method enables scan-specific reconstruction of high-quality 3D CMRA from a five-minute acquisition, without relying on fully-sampled training data or placing a heavy burden on memory resources.
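The 3D TV regularizer that DIP-CS adds across slices is the summed magnitude of finite differences along each axis; combined with slice-wise k-space data fidelity it gives the objective being minimized. A hedged numpy sketch (the operator, weight `lam`, and test volumes are placeholders, not the paper's implementation):

```python
import numpy as np

def tv3d(vol):
    """Anisotropic 3D total variation: sum of |finite differences| along each axis."""
    return sum(np.sum(np.abs(np.diff(vol, axis=a))) for a in range(3))

def dip_cs_objective(vol, y, mask, lam=0.01):
    """Data fidelity on undersampled slice-by-slice k-space + 3D TV prior."""
    k = np.fft.fft2(vol, axes=(-2, -1))
    fidelity = 0.5 * np.sum(np.abs(mask * k - y) ** 2)
    return fidelity + lam * tv3d(vol)

# Sanity checks on the prior: constant volumes have zero TV, ramps do not
flat = np.ones((4, 8, 8))
ramp = np.cumsum(np.ones((4, 8, 8)), axis=0)
```

In DIP-CS the volume is parameterized by a network fitted per scan, so the TV term is what enforces continuity in the slice direction on top of the per-slice deep image prior.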
Affiliation(s)
- Zhihao Xue
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Sicheng Zhu
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Fan Yang
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Juan Gao
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hao Peng
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Chao Zou
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Hang Jin
- Department of Radiology, Zhongshan Hospital, Fudan University and Shanghai Medical Imaging Institute, Shanghai, China
- Chenxi Hu
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
7. Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. PMID: 38624162. DOI: 10.1002/mrm.30105.
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks, and this domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions spanning the learning of neural networks and different imaging application scenarios. We also outline the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, we summarize MR vendors' choices of DL reconstruction and discuss open questions and future directions, which are critical for reliable imaging systems.
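A recurring knowledge-driven building block in the networks this review covers is hard data consistency: after each network pass, the measured k-space samples are written back over the network's prediction, so the physics of the acquisition is enforced exactly. A minimal numpy sketch (the random "network output" is a stand-in):

```python
import numpy as np

def data_consistency(x_net, y, mask):
    """Replace predicted k-space with measured values at sampled locations."""
    k = np.fft.fft2(x_net)
    k[mask] = y[mask]
    return np.fft.ifft2(k)

rng = np.random.default_rng(3)
n = 32
x_true = rng.standard_normal((n, n))
mask = rng.random((n, n)) < 0.3
y = mask * np.fft.fft2(x_true)            # measured (undersampled) k-space

x_net = rng.standard_normal((n, n))       # stand-in for a network output
x_dc = data_consistency(x_net, y, mask)
k_dc = np.fft.fft2(x_dc)
```

Unrolled architectures interleave this step with learned refinement blocks, which is the main way acquisition physics enters otherwise data-driven reconstruction.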
Affiliation(s)
- Shanshan Wang
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying
- Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
8. Levac B, Kumar S, Jalal A, Tamir JI. Accelerated motion correction with deep generative diffusion models. Magn Reson Med 2024; 92:853-868. PMID: 38688874. DOI: 10.1002/mrm.30082.
Abstract
PURPOSE The aim of this work is to develop a method to solve the ill-posed inverse problem of accelerated image reconstruction while correcting forward model imperfections in the context of subject motion during MRI examinations. METHODS The proposed solution uses a Bayesian framework based on deep generative diffusion models to jointly estimate a motion-free image and rigid motion parameters from subsampled and motion-corrupted two-dimensional (2D) k-space data. RESULTS We demonstrate the ability to reconstruct motion-free images from accelerated 2D Cartesian and non-Cartesian scans without any external reference signal. We show that our method improves over existing correction techniques on both simulated and prospectively accelerated data. CONCLUSION We propose a flexible framework for retrospective motion correction of accelerated MRI based on deep generative diffusion models, with potential application to other forward model corruptions.
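The forward model being jointly estimated here builds on a classical fact: an in-plane translation multiplies the acquired k-space by a linear phase ramp. For integer shifts this reproduces a circular shift exactly, which makes the model easy to sanity check. A toy-sized numpy sketch (the paper's full model additionally handles rotations and per-shot motion states):

```python
import numpy as np

def translate_kspace(img, dy, dx):
    """Apply the k-space phase ramp corresponding to a (dy, dx) pixel shift."""
    n = img.shape[0]
    f = np.fft.fftfreq(n)                      # frequencies in cycles/sample
    fy, fx = np.meshgrid(f, f, indexing="ij")
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

rng = np.random.default_rng(4)
img = rng.standard_normal((16, 16))
shifted = translate_kspace(img, dy=3, dx=5)
```

When different k-space shots carry different (unknown) shifts, the per-shot phase ramps become the motion parameters to infer alongside the image.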
Affiliation(s)
- Brett Levac
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Sidharth Kumar
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Ajil Jalal
- Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- Jonathan I Tamir
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
9. Heckel R, Jacob M, Chaudhari A, Perlman O, Shimron E. Deep learning for accelerated and robust MRI reconstruction. MAGMA 2024; 37:335-368. PMID: 39042206. DOI: 10.1007/s10334-024-01173-8.
Abstract
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review paper provides a comprehensive overview of recent advances in DL for MRI reconstruction, and focuses on various DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. It explores end-to-end neural networks, pre-trained and generative models, and self-supervised methods, and highlights their contributions to overcoming traditional MRI limitations. It also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling biases. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.
Affiliation(s)
- Reinhard Heckel
- Department of Computer Engineering, Technical University of Munich, Munich, Germany
- Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Akshay Chaudhari
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
- Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Efrat Shimron
- Department of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200004, Israel
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa 3200004, Israel
10. Cheng J, Cui ZX, Zhu Q, Wang H, Zhu Y, Liang D. Integrating data distribution prior via Langevin dynamics for end-to-end MR reconstruction. Magn Reson Med 2024; 92:202-214. PMID: 38469985. DOI: 10.1002/mrm.30065.
Abstract
PURPOSE To develop a novel deep learning-based method that inherits the advantages of a data distribution prior and end-to-end training for accelerating MRI. METHODS Langevin dynamics is used to formulate image reconstruction with a data distribution prior to facilitate image reconstruction. The data distribution prior is learned implicitly through end-to-end adversarial training, which mitigates hyper-parameter selection and shortens the testing time compared with traditional probabilistic reconstruction. By seamlessly integrating a deep equilibrium model, the iteration of Langevin dynamics converges to a fixed point, ensuring the stability of the learned distribution. RESULTS The feasibility of the proposed method is evaluated on brain and knee datasets. Retrospective results with uniform and random masks show that the proposed method outperforms the state-of-the-art both quantitatively and qualitatively. CONCLUSION The proposed method, incorporating Langevin dynamics with end-to-end adversarial training, facilitates efficient and robust reconstruction for MRI. Empirical evaluations conducted on brain and knee datasets compellingly demonstrate its superior performance in terms of artifact removal and detail preservation.
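Langevin dynamics draws samples from a distribution using only its score via x_{k+1} = x_k + (eps/2) * score(x_k) + sqrt(eps) * z_k. A self-contained toy with the analytically known score of a standard Gaussian; the step size and chain counts are arbitrary, and in the paper the score role is played by the implicitly (adversarially) learned prior rather than a closed form:

```python
import numpy as np

rng = np.random.default_rng(5)

def score(x):
    """Score (gradient of log-density) of a standard Gaussian target."""
    return -x

eps = 0.1
chains = np.zeros(2000)                    # many independent parallel chains
for _ in range(500):
    noise = rng.standard_normal(chains.shape)
    chains = chains + 0.5 * eps * score(chains) + np.sqrt(eps) * noise

sample_mean = chains.mean()
sample_var = chains.var()
```

After enough steps the chains forget their initialization and their empirical statistics match the target distribution up to discretization error, which is what makes the iteration usable as a reconstruction prior.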
Affiliation(s)
- Jing Cheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Zhuo-Xu Cui
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Haifeng Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
11. Wang B, Lian Y, Xiong X, Zhou H, Liu Z, Zhou X. DCT-Net: Dual-domain cross-fusion transformer network for MRI reconstruction. Magn Reson Imaging 2024; 107:69-79. PMID: 38237693. DOI: 10.1016/j.mri.2024.01.007.
Abstract
Current challenges in Magnetic Resonance Imaging (MRI) include long acquisition times and motion artifacts. To address these issues, under-sampled k-space acquisition has gained popularity as a fast imaging method. However, recovering fine details from under-sampled data remains challenging. In this study, we introduce a deep learning approach, namely DCT-Net, designed for dual-domain MRI reconstruction. DCT-Net seamlessly integrates information from the image domain (IRM) and the frequency domain (FRM), utilizing a novel Cross Attention Block (CAB) and Fusion Attention Block (FAB). These blocks enable precise feature extraction and adaptive fusion across both domains, resulting in a significant enhancement of reconstructed image quality. The adaptive interaction and fusion mechanisms of the CAB and FAB contribute to the method's effectiveness in capturing distinctive features and optimizing image reconstruction. Comprehensive ablation studies assess the contributions of these modules to reconstruction quality and accuracy. Experimental results on the fastMRI (2023) and Calgary-Campinas (2021) datasets demonstrate the superiority of our MRI reconstruction framework over other typical methods (most published in 2022 or 2023) in both qualitative and quantitative evaluations, for knee and brain datasets under 4× and 8× accelerated imaging.
Affiliation(s)
- Bin Wang
- National Institute of Metrology, Beijing 100029, China; Key Laboratory of Metrology Digitalization and Digital Metrology for State Market Regulation, Beijing 100029, China; School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China
- Yusheng Lian
- School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China
- Xingchuang Xiong
- National Institute of Metrology, Beijing 100029, China; Key Laboratory of Metrology Digitalization and Digital Metrology for State Market Regulation, Beijing 100029, China.
- Han Zhou
- School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China
- Zilong Liu
- National Institute of Metrology, Beijing 100029, China; Key Laboratory of Metrology Digitalization and Digital Metrology for State Market Regulation, Beijing 100029, China.
- Xiaohao Zhou
- State Key Laboratory of Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China.
12
Khawaled S, Freiman M. NPB-REC: A non-parametric Bayesian deep-learning approach for undersampled MRI reconstruction with uncertainty estimation. Artif Intell Med 2024; 149:102798. [PMID: 38462289 DOI: 10.1016/j.artmed.2024.102798] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Revised: 12/26/2023] [Accepted: 02/03/2024] [Indexed: 03/12/2024]
Abstract
The ability to reconstruct high-quality images from undersampled MRI data is vital in improving MRI temporal resolution and reducing acquisition times. Deep learning methods have been proposed for this task, but the lack of verified methods to quantify the uncertainty in the reconstructed images has hampered clinical applicability. We introduce "NPB-REC", a non-parametric fully Bayesian framework for MRI reconstruction from undersampled data with uncertainty estimation. We use Stochastic Gradient Langevin Dynamics during training to characterize the posterior distribution of the network parameters. This enables us to both improve the quality of the reconstructed images and quantify their uncertainty. We demonstrate the efficacy of our approach on a multi-coil MRI dataset from the fastMRI challenge and compare it to the baseline End-to-End Variational Network (E2E-VarNet). Our approach outperforms the baseline in terms of reconstruction accuracy by means of PSNR and SSIM (34.55, 0.908 vs. 33.08, 0.897, p<0.01, acceleration rate R=8) and provides uncertainty measures that correlate better with the reconstruction error (Pearson correlation, R=0.94 vs. R=0.91). Additionally, our approach exhibits better generalization capabilities against anatomical distribution shifts (PSNR and SSIM of 32.38, 0.849 vs. 31.63, 0.836, p<0.01, training on brain data, inference on knee data, acceleration rate R=8). NPB-REC has the potential to facilitate the safe utilization of deep learning-based methods for MRI reconstruction from undersampled data. Code and trained models are available at https://github.com/samahkh/NPB-REC.
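The sampler named in the abstract, Stochastic Gradient Langevin Dynamics, can be sketched on a toy 1-D posterior; the step size, iteration counts, and Gaussian target are illustrative assumptions, not the paper's training setup:

```python
import numpy as np

def sgld_step(theta, grad_log_post, step, rng):
    """One SGLD update: theta += (step/2) * grad log p(theta|D) + N(0, step)."""
    noise = rng.normal(0.0, np.sqrt(step), size=theta.shape)
    return theta + 0.5 * step * grad_log_post(theta) + noise

# Toy posterior N(2, 1), so grad log p(theta) = -(theta - 2). Samples drawn
# along the trajectory characterize the posterior, as in the abstract.
rng = np.random.default_rng(0)
grad_log_post = lambda t: -(t - 2.0)

theta = np.zeros(1)
samples = []
for i in range(5000):
    theta = sgld_step(theta, grad_log_post, step=0.1, rng=rng)
    if i >= 1000:                       # discard burn-in
        samples.append(theta[0])

posterior_mean = float(np.mean(samples))   # close to 2
posterior_std = float(np.std(samples))     # close to 1
```

In NPB-REC the gradient would come from mini-batch losses over network weights, and the retained samples give both an averaged reconstruction and a per-pixel uncertainty map.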
Affiliation(s)
- Samah Khawaled
- The Interdisciplinary program in Applied Mathematics, Faculty of Mathematics, Technion - Israel Institute of Technology, Israel.
- Moti Freiman
- The Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Israel.
13
Fast MF, Cao M, Parikh P, Sonke JJ. Intrafraction Motion Management With MR-Guided Radiation Therapy. Semin Radiat Oncol 2024; 34:92-106. [PMID: 38105098 DOI: 10.1016/j.semradonc.2023.10.008] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2023]
Abstract
High quality radiation therapy requires highly accurate and precise dose delivery. MR-guided radiotherapy (MRgRT), integrating an MRI scanner with a linear accelerator, offers excellent quality images in the treatment room without subjecting the patient to ionizing radiation. MRgRT therefore provides a powerful tool for intrafraction motion management. This paper summarizes different sources of intrafraction motion for different disease sites and describes the MR imaging techniques available to visualize and quantify intrafraction motion. It provides an overview of MR guided motion management strategies and of the current technical capabilities of the commercially available MRgRT systems. It describes how these motion management capabilities are currently being used in clinical studies and protocols, and provides a future outlook.
Affiliation(s)
- Martin F Fast
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Minsong Cao
- Department of Radiation Oncology, University of California, Los Angeles, CA
- Parag Parikh
- Department of Radiation Oncology, Henry Ford Health - Cancer, Detroit, MI
- Jan-Jakob Sonke
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands.
14
Guan Y, Li Y, Liu R, Meng Z, Li Y, Ying L, Du YP, Liang ZP. Subspace Model-Assisted Deep Learning for Improved Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3833-3846. [PMID: 37682643 DOI: 10.1109/tmi.2023.3313421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/10/2023]
Abstract
Image reconstruction from limited and/or sparse data is known to be an ill-posed problem and a priori information/constraints have played an important role in solving the problem. Early constrained image reconstruction methods utilize image priors based on general image properties such as sparsity, low-rank structures, spatial support bound, etc. Recent deep learning-based reconstruction methods promise to produce even higher quality reconstructions by utilizing more specific image priors learned from training data. However, learning high-dimensional image priors requires huge amounts of training data that are currently not available in medical imaging applications. As a result, deep learning-based reconstructions often suffer from two known practical issues: a) sensitivity to data perturbations (e.g., changes in data sampling scheme), and b) limited generalization capability (e.g., biased reconstruction of lesions). This paper proposes a new method to address these issues. The proposed method synergistically integrates model-based and data-driven learning in three key components. The first component uses the linear vector space framework to capture global dependence of image features; the second exploits a deep network to learn the mapping from a linear vector space to a nonlinear manifold; the third is an unrolling-based deep network that captures local residual features with the aid of a sparsity model. The proposed method has been evaluated with magnetic resonance imaging data, demonstrating improved reconstruction in the presence of data perturbation and/or novel image features. The method may enhance the practical utility of deep learning-based image reconstruction.
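The first component, a linear vector space capturing global dependence of image features, can be illustrated with a toy SVD-based subspace sketch; the dimensions, noise level, and synthetic "training images" are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training images" that live near a low-dimensional linear subspace
n_train, n_pix, rank = 50, 64, 4
basis_true = rng.standard_normal((n_pix, rank))
train = (basis_true @ rng.standard_normal((rank, n_train))).T
train += 0.01 * rng.standard_normal(train.shape)   # small deviations

# Estimate the subspace (the model-based component) from training data
U, s, Vt = np.linalg.svd(train - train.mean(0), full_matrices=False)
subspace = Vt[:rank].T                 # (n_pix, rank) orthonormal basis

# A new image splits into a subspace part (global structure) and a residual
# (local features that the deep/sparsity components would then capture)
new_img = basis_true @ rng.standard_normal(rank) + 0.01 * rng.standard_normal(n_pix)
centered = new_img - train.mean(0)
subspace_part = train.mean(0) + subspace @ (subspace.T @ centered)
residual = new_img - subspace_part

rel_residual = np.linalg.norm(residual) / np.linalg.norm(new_img)  # small
```

The paper's second and third components (a learned manifold mapping and an unrolled residual network) would then operate on what this linear projection cannot represent.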
15
Dar SUH, Öztürk Ş, Özbey M, Oguz KK, Çukur T. Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes. Comput Biol Med 2023; 167:107610. [PMID: 37883853 DOI: 10.1016/j.compbiomed.2023.107610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 09/20/2023] [Accepted: 10/17/2023] [Indexed: 10/28/2023]
Abstract
Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit they suffer from computationally burdening inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining competitive inference times to SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling that uses serially alternated projections, causing error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples than SG methods, and enables an order of magnitude faster inference compared to SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.
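The parallel-stream idea, two priors applied side by side and blended with a learnable weight rather than alternated serially, can be sketched as follows; the 1-D signal, the smoothing kernel standing in for the linear SS prior, the tanh standing in for the nonlinear SG network, and the fixed alpha are all illustrative assumptions:

```python
import numpy as np

def ss_linear_prior(x, w):
    """Stand-in for the scan-specific linear stream (here a 1-D convolution)."""
    return np.convolve(x, w, mode="same")

def sg_nonlinear_prior(x):
    """Stand-in for the scan-general nonlinear stream (here a fixed tanh)."""
    return np.tanh(x)

def psf_fuse(x, w_ss, alpha):
    """Parallel-stream fusion: both priors see the same input and are
    combined with a learnable weight, instead of serial alternation."""
    return alpha * ss_linear_prior(x, w_ss) + (1.0 - alpha) * sg_nonlinear_prior(x)

x = np.linspace(-1.0, 1.0, 32)
w_ss = np.array([0.25, 0.5, 0.25])   # a smoothing kernel as the linear prior
fused = psf_fuse(x, w_ss, alpha=0.6)
```

Because both streams act on the same input, an error in one stream is not fed into the other, which is the abstract's stated remedy for error propagation in unrolled serial schemes.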
Affiliation(s)
- Salman Ul Hassan Dar
- Department of Internal Medicine III, Heidelberg University Hospital, 69120, Heidelberg, Germany; AI Health Innovation Cluster, Heidelberg, Germany
- Şaban Öztürk
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Electrical-Electronics Engineering, Amasya University, Amasya 05100, Turkey
- Muzaffer Özbey
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, IL 61820, United States
- Kader Karli Oguz
- Department of Radiology, University of California, Davis, CA 95616, United States; Department of Radiology, Hacettepe University, Ankara, Turkey
- Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Radiology, Hacettepe University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara 06800, Turkey.
16
Peng H, Jiang C, Cheng J, Zhang M, Wang S, Liang D, Liu Q. One-Shot Generative Prior in Hankel-k-Space for Parallel Imaging Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3420-3435. [PMID: 37342955 DOI: 10.1109/tmi.2023.3288219] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/23/2023]
Abstract
Magnetic resonance imaging serves as an essential tool for clinical diagnosis. However, it suffers from a long acquisition time. The utilization of deep learning, especially deep generative models, offers aggressive acceleration and better reconstruction in magnetic resonance imaging. Nevertheless, learning the data distribution as prior knowledge and reconstructing the image from limited data remains challenging. In this work, we propose a novel Hankel-k-space generative model (HKGM), which can be trained on as little as a single k-space dataset. At the prior learning stage, we first construct a large Hankel matrix from k-space data, then extract multiple structured k-space patches from the Hankel matrix to capture the internal distribution among different patches. Extracting patches from a Hankel matrix enables the generative model to be learned from a redundant and low-rank data space. At the iterative reconstruction stage, the desired solution obeys the learned prior knowledge. The intermediate reconstruction is updated by taking it as the input of the generative model; the updated result is then alternately refined by imposing a low-rank penalty on its Hankel matrix and a data-consistency constraint on the measurement data. Experimental results confirmed that the internal statistics of patches within a single k-space dataset carry enough information for learning a powerful generative model and providing state-of-the-art reconstruction.
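The Hankel construction that underlies the prior learning stage can be sketched in 1-D; the window length, two-exponential signal model, and variable names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def hankel_from_kspace(kspace, win):
    """Slide a window over 1-D k-space; row i holds kspace[i : i + win].
    For signals that are sums of a few exponentials this matrix is low-rank."""
    n = len(kspace)
    return np.array([kspace[i:i + win] for i in range(n - win + 1)])

# k-space of a sum of two complex exponentials -> Hankel rank is exactly 2
n, win = 64, 8
t = np.arange(n)
kspace = np.exp(2j * np.pi * 0.10 * t) + 0.5 * np.exp(2j * np.pi * 0.31 * t)
H = hankel_from_kspace(kspace, win)         # each row is a structured patch

rank = np.linalg.matrix_rank(H, tol=1e-8)   # redundancy a prior can exploit
```

The rows of `H` are the structured, mutually redundant patches from which, per the abstract, a generative model can be trained using a single k-space dataset.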
17
Tu Z, Liu D, Wang X, Jiang C, Zhu P, Zhang M, Wang S, Liang D, Liu Q. WKGM: weighted k-space generative model for parallel imaging reconstruction. NMR IN BIOMEDICINE 2023; 36:e5005. [PMID: 37547964 DOI: 10.1002/nbm.5005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Revised: 06/12/2023] [Accepted: 06/24/2023] [Indexed: 08/08/2023]
Abstract
Deep learning based parallel imaging (PI) has made great progress in recent years to accelerate MRI. Nevertheless, it still has limitations: for example, existing methods often lack robustness and flexibility. In this work, we propose a method to explore k-space domain learning via robust generative modeling for flexible calibrationless PI reconstruction, coined the weighted k-space generative model (WKGM). Specifically, WKGM is a generalized k-space domain model in which k-space weighting technology and a high-dimensional space augmentation design are efficiently incorporated for score-based generative model training, resulting in accurate and robust reconstructions. In addition, WKGM is flexible and can therefore be synergistically combined with various traditional k-space PI models, making full use of the correlations among multi-coil data and realizing calibrationless PI. Even though our model was trained on only 500 images, experimental results with varying sampling patterns and acceleration factors demonstrate that WKGM can attain state-of-the-art reconstruction results with the well-learned k-space generative prior.
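The k-space weighting idea, suppressing the dominant low-frequency energy so a generative prior sees a flatter dynamic range and dividing the weight back out afterwards, might be sketched as follows; the radial weight, toy image, and energy metric are illustrative assumptions, not the paper's exact weighting scheme:

```python
import numpy as np

# Toy 2-D k-space: energy is concentrated near the center (low frequencies)
img = np.outer(np.hanning(32), np.hanning(32))
kspace = np.fft.fftshift(np.fft.fft2(img))

# A radial weight that suppresses the dominant low frequencies
ky, kx = np.meshgrid(np.arange(32) - 16, np.arange(32) - 16, indexing="ij")
radius = np.sqrt(kx ** 2 + ky ** 2)
weight = radius / radius.max() + 1e-3   # small offset keeps the DC term invertible

weighted = weight * kspace              # flattened data for prior training
recovered = weighted / weight           # the weighting is exactly invertible

def center_energy_fraction(k, half=3):
    c = k.shape[0] // 2
    num = np.sum(np.abs(k[c - half:c + half, c - half:c + half]) ** 2)
    return num / np.sum(np.abs(k) ** 2)

frac_before = center_energy_fraction(kspace)     # nearly all energy central
frac_after = center_energy_fraction(weighted)    # energy spread outward
```

Training the score model on `weighted` rather than `kspace` avoids the extreme dynamic range that otherwise dominates the learning signal; the weight is removed before data consistency.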
Affiliation(s)
- Zongjiang Tu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Die Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Xiaoqing Wang
- Department of Biomedical Imaging, Graz University of Technology, Graz, Austria
- Chen Jiang
- Department of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
- Pengwen Zhu
- Department of Engineering, Pennsylvania State University, State College, Pennsylvania, USA
- Minghui Zhang
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, CAS, Shenzhen, China
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, CAS, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
18
Song X, Wang G, Zhong W, Guo K, Li Z, Liu X, Dong J, Liu Q. Sparse-view reconstruction for photoacoustic tomography combining diffusion model with model-based iteration. PHOTOACOUSTICS 2023; 33:100558. [PMID: 38021282 PMCID: PMC10658608 DOI: 10.1016/j.pacs.2023.100558] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Revised: 08/14/2023] [Accepted: 09/16/2023] [Indexed: 12/01/2023]
Abstract
As a non-invasive hybrid biomedical imaging technology, photoacoustic tomography combines the high contrast of optical imaging with the high penetration of acoustic imaging. However, conventional reconstruction under sparse-view sampling can result in low-quality images in photoacoustic tomography. Here, a novel model-based sparse reconstruction method for photoacoustic tomography using a diffusion model is proposed. A score-based diffusion model is designed to learn the prior information of the data distribution. The learned prior is then used as a constraint on the data-consistency term of a least-squares optimization problem in model-based iterative reconstruction, aiming to reach the optimal solution. Simulated blood-vessel data and in vivo animal data were used to evaluate the performance of the proposed method. The results demonstrate that the proposed method achieves higher-quality sparse reconstruction compared with conventional reconstruction methods and U-Net. In particular, under extremely sparse projection (e.g., 32 projections), the proposed method achieves an improvement of ∼260% in structural similarity and ∼30% in peak signal-to-noise ratio for in vivo data, compared with the conventional delay-and-sum method. This method has the potential to reduce the acquisition time and cost of photoacoustic tomography, which would further expand its range of applications.
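The interplay between a learned prior and the least-squares data term in model-based iteration can be sketched with a stand-in denoiser; the random projection matrix, the smoothing "prior", and the iteration count are illustrative assumptions, whereas the paper uses a trained score-based diffusion model and a physical forward operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-view forward model: 20 random "projections" of a 64-sample signal
n, m = 64, 20
A = rng.standard_normal((m, n)) / np.sqrt(n)
t = np.arange(n)
x_true = np.exp(-((t - 32.0) ** 2) / 50.0)     # a smooth "vessel" profile
y = A @ x_true

def prior_step(x):
    """Stand-in for the learned diffusion prior: a mild smoothing denoiser."""
    return np.convolve(x, np.array([0.1, 0.8, 0.1]), mode="same")

def data_consistency(x):
    """Exact projection onto {x : Ax = y}, i.e. the least-squares data term."""
    return x + A.T @ np.linalg.solve(A @ A.T, y - A @ x)

x = np.zeros(n)
for _ in range(50):
    x = prior_step(x)          # impose the (stand-in) learned prior
    x = data_consistency(x)    # enforce the sparse-view measurements

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The alternation mirrors the abstract's scheme: the prior constrains the solution while the least-squares step keeps it consistent with the sparse measurements.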
Affiliation(s)
- Wenhua Zhong
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Kangjun Guo
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Zilong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xuan Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiaqing Dong
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
19
Singh D, Monga A, de Moura HL, Zhang X, Zibetti MVW, Regatte RR. Emerging Trends in Fast MRI Using Deep-Learning Reconstruction on Undersampled k-Space Data: A Systematic Review. Bioengineering (Basel) 2023; 10:1012. [PMID: 37760114 PMCID: PMC10525988 DOI: 10.3390/bioengineering10091012] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Revised: 08/22/2023] [Accepted: 08/24/2023] [Indexed: 09/29/2023] Open
Abstract
Magnetic Resonance Imaging (MRI) is an essential medical imaging modality that provides excellent soft-tissue contrast and high-resolution images of the human body, allowing us to understand detailed information on morphology, structural integrity, and physiologic processes. However, MRI exams usually require lengthy acquisition times. Methods such as parallel MRI and Compressive Sensing (CS) have significantly reduced the MRI acquisition time by acquiring less data through undersampling k-space. The state-of-the-art of fast MRI has recently been redefined by integrating Deep Learning (DL) models with these undersampled approaches. This Systematic Literature Review (SLR) comprehensively analyzes deep MRI reconstruction models, emphasizing the key elements of recently proposed methods and highlighting their strengths and weaknesses. This SLR involves searching and selecting relevant studies from various databases, including Web of Science and Scopus, followed by a rigorous screening and data extraction process using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It focuses on various techniques, such as residual learning, image representation using encoders and decoders, data-consistency layers, unrolled networks, learned activations, attention modules, plug-and-play priors, diffusion models, and Bayesian methods. This SLR also discusses the use of loss functions and training with adversarial networks to enhance deep MRI reconstruction methods. Moreover, we explore various MRI reconstruction applications, including non-Cartesian reconstruction, super-resolution, dynamic MRI, joint learning of reconstruction with coil sensitivity and sampling, quantitative mapping, and MR fingerprinting. This paper also addresses research questions, provides insights for future directions, and emphasizes robust generalization and artifact handling. Therefore, this SLR serves as a valuable resource for advancing fast MRI, guiding research and development efforts of MRI reconstruction for better image quality and faster data acquisition.
Affiliation(s)
- Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA; (A.M.); (H.L.d.M.); (X.Z.); (M.V.W.Z.)
- Ravinder R. Regatte
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA; (A.M.); (H.L.d.M.); (X.Z.); (M.V.W.Z.)
20
Xu Y, Farris CW, Anderson SW, Zhang X, Brown KA. Bayesian reconstruction of magnetic resonance images using Gaussian processes. Sci Rep 2023; 13:12527. [PMID: 37532743 PMCID: PMC10397278 DOI: 10.1038/s41598-023-39533-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 07/26/2023] [Indexed: 08/04/2023] Open
Abstract
A central goal of modern magnetic resonance imaging (MRI) is to reduce the time required to produce high-quality images. Efforts have included hardware and software innovations such as parallel imaging, compressed sensing, and deep learning-based reconstruction. Here, we propose and demonstrate a Bayesian method to build statistical libraries of magnetic resonance (MR) images in k-space and use these libraries to identify optimal subsampling paths and reconstruction processes. Specifically, we compute a multivariate normal distribution based upon Gaussian processes using a publicly available library of T1-weighted images of healthy brains. We combine this library with physics-informed envelope functions to retain only meaningful correlations in k-space. This covariance function is then used to select a series of ring-shaped subsampling paths using Bayesian optimization such that they optimally explore k-space while remaining practically realizable in commercial MRI systems. Combining optimized subsampling paths found for a range of images, we compute a generalized sampling path that, when used for novel images, produces superior structural similarity and error compared with previously reported reconstruction processes (i.e., 96.3% structural similarity and < 0.003 normalized mean squared error from sampling only 12.5% of the k-space data). Finally, we use this reconstruction process on pathological data without retraining to show that the reconstructed images are clinically useful for stroke identification. Since the model trained on images of healthy brains could be used directly for predictions in pathological brains without retraining, it shows the inherent transferability of this approach and opens the door to its widespread use.
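The core operation, conditioning a multivariate normal built from a k-space library on the acquired samples, is standard Gaussian-process posterior inference and can be sketched as follows; the toy covariance, coordinates, and acquired values are illustrative assumptions, not the paper's learned library:

```python
import numpy as np

# Toy covariance over 6 k-space locations (stand-in for the covariance
# learned from a library of T1-weighted brain images)
coords = np.linspace(0.0, 1.0, 6)
cov = np.exp(-((coords[:, None] - coords[None, :]) ** 2) / (2 * 0.2 ** 2))
cov += 1e-8 * np.eye(6)                 # jitter for numerical stability
mean = np.zeros(6)

sampled = [0, 2, 4]                     # locations hit by the subsampling path
missing = [1, 3, 5]
x_s = np.array([1.0, 0.3, -0.5])        # acquired values (toy, real-valued)

# Multivariate-normal conditioning gives the posterior over unsampled points
S_ss = cov[np.ix_(sampled, sampled)]
S_ms = cov[np.ix_(missing, sampled)]
S_mm = cov[np.ix_(missing, missing)]

post_mean = mean[missing] + S_ms @ np.linalg.solve(S_ss, x_s - mean[sampled])
post_cov = S_mm - S_ms @ np.linalg.solve(S_ss, S_ms.T)
post_var = np.diag(post_cov)            # per-location uncertainty
```

The posterior mean fills in the unsampled k-space, and the posterior variance is the quantity Bayesian optimization can use to choose where the next subsampling ring should go.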
Affiliation(s)
- Yihong Xu
- Department of Physics, Boston University, Boston, MA, 02215, USA
- Chad W Farris
- Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, 02118, USA
- Stephan W Anderson
- Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, 02118, USA
- Xin Zhang
- Department of Mechanical Engineering, Boston University, Boston, MA, 02215, USA
- Department of Electrical & Computer Engineering, Boston University, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Division of Materials Science & Engineering, Boston University, Boston, MA, 02215, USA
- Keith A Brown
- Department of Physics, Boston University, Boston, MA, 02215, USA.
- Department of Mechanical Engineering, Boston University, Boston, MA, 02215, USA.
- Division of Materials Science & Engineering, Boston University, Boston, MA, 02215, USA.
21
Güngör A, Dar SU, Öztürk Ş, Korkmaz Y, Bedel HA, Elmas G, Ozbey M, Çukur T. Adaptive diffusion priors for accelerated MRI reconstruction. Med Image Anal 2023; 88:102872. [PMID: 37384951 DOI: 10.1016/j.media.2023.102872] [Citation(s) in RCA: 41] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 04/13/2023] [Accepted: 06/12/2023] [Indexed: 07/01/2023]
Abstract
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on par within-domain performance.
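The two-phase reconstruction can be sketched with a stand-in for the trained prior; the toy 1-D signal, the blurred zero-filled initialization standing in for the rapid-diffusion phase, and the plain gradient steps on the data-consistency loss are illustrative assumptions (AdaDiff adapts the prior network itself rather than the image alone):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undersampled Fourier measurements of a 1-D "image"
n = 32
x_true = np.where((np.arange(n) >= 8) & (np.arange(n) < 20), 1.0, 0.0)
mask = rng.random(n) < 0.5                  # random k-space undersampling
y = mask * np.fft.fft(x_true)

def dc_grad(x):
    """Gradient direction of the data-consistency loss ||M F x - y||^2."""
    resid = mask * np.fft.fft(x) - y
    return np.real(np.fft.ifft(mask * resid))

# Phase 1 (stand-in): a rapid initial reconstruction from the prior.
# Here a blurred zero-filled recon replaces the trained diffusion prior.
x = np.convolve(np.real(np.fft.ifft(y)), np.ones(3) / 3, mode="same")
loss_init = np.linalg.norm(mask * np.fft.fft(x) - y)

# Phase 2 (adaptation): refine by minimizing the data-consistency loss
for _ in range(200):
    x = x - dc_grad(x)

loss_final = np.linalg.norm(mask * np.fft.fft(x) - y)
```

The adaptation phase drives the data-consistency loss toward zero, which is the mechanism the abstract credits for reliability under imaging-operator domain shifts.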
Affiliation(s)
- Alper Güngör
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; ASELSAN Research Center, Ankara 06200, Turkey
- Salman Uh Dar
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Internal Medicine III, Heidelberg University Hospital, Heidelberg 69120, Germany
- Şaban Öztürk
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Electrical and Electronics Engineering, Amasya University, Amasya 05100, Turkey
- Yilmaz Korkmaz
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Hasan A Bedel
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Gokberk Elmas
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Muzaffer Ozbey
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Program, Bilkent University, Ankara 06800, Turkey.
22
Gong C, Jing C, Chen X, Pun CM, Huang G, Saha A, Nieuwoudt M, Li HX, Hu Y, Wang S. Generative AI for brain image computing and brain network computing: a review. Front Neurosci 2023; 17:1203104. [PMID: 37383107 PMCID: PMC10293625 DOI: 10.3389/fnins.2023.1203104] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Accepted: 05/22/2023] [Indexed: 06/30/2023] Open
Abstract
Recent years have witnessed a significant advancement in brain imaging techniques that offer a non-invasive approach to mapping the structure and function of the brain. Concurrently, generative artificial intelligence (AI) has experienced substantial growth, using existing data to create new content with underlying patterns similar to real-world data. The integration of these two domains, generative AI in neuroimaging, presents a promising avenue for exploring various fields of brain imaging and brain network computing, particularly in the areas of extracting spatiotemporal brain features and reconstructing the topological connectivity of brain networks. Therefore, this study reviews the advanced models, tasks, challenges, and prospects of brain imaging and brain network computing techniques, and intends to provide a comprehensive picture of current generative AI techniques in brain imaging. This review focuses on novel methodological approaches and applications of related new methods. It discusses the fundamental theories and algorithms of four classic generative models and provides a systematic survey and categorization of tasks, including co-registration, super-resolution, enhancement, classification, segmentation, cross-modality, brain network analysis, and brain decoding. The paper also highlights the challenges and future directions of the latest work, with the expectation that future research can benefit from it.
Affiliation(s)
- Changwei Gong
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
- Changhong Jing
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
- Xuhang Chen
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer and Information Science, University of Macau, Macau, China
- Chi Man Pun
- Department of Computer and Information Science, University of Macau, Macau, China
- Guoli Huang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ashirbani Saha
- Department of Oncology and School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada
- Martin Nieuwoudt
- Institute for Biomedical Engineering, Stellenbosch University, Stellenbosch, South Africa
- Han-Xiong Li
- Department of Systems Engineering, City University of Hong Kong, Hong Kong, China
- Yong Hu
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong, China
- Shuqiang Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
Collapse
|
23
|
Tu Z, Jiang C, Guan Y, Liu J, Liu Q. K-space and image domain collaborative energy-based model for parallel MRI reconstruction. Magn Reson Imaging 2023; 99:110-122. [PMID: 36796460 DOI: 10.1016/j.mri.2023.02.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 02/08/2023] [Accepted: 02/10/2023] [Indexed: 02/17/2023]
Abstract
Decreasing magnetic resonance (MR) image acquisition times can potentially make MR examinations more accessible. Prior work, including deep learning models, has been devoted to solving the problem of long MRI acquisition times. Recently, deep generative models have exhibited great potential in algorithm robustness and usage flexibility. Nevertheless, none of the existing schemes can be learned from, or applied to, k-space measurements directly, and how deep generative models can perform well in a hybrid domain also merits investigation. In this work, by taking advantage of deep energy-based models, we propose a k-space and image domain collaborative generative model to comprehensively estimate MR data from under-sampled measurements. Equipped with parallel and sequential orders, experimental comparisons with state-of-the-art methods demonstrated that the proposed models involve less reconstruction error and are more stable under different acceleration factors.
Affiliation(s)
- Zongjiang Tu
  - Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Chen Jiang
  - Department of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China
- Yu Guan
  - Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Jijun Liu
  - Department of Mathematics, Southeast University, Nanjing 210096, China; Nanjing Center for Applied Mathematics, Nanjing 211135, China
- Qiegen Liu
  - Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China

24
Zhao L, Huang J. A distribution information sharing federated learning approach for medical image data. COMPLEX INTELL SYST 2023; 9:1-12. [PMID: 37361966 PMCID: PMC10052320 DOI: 10.1007/s40747-023-01035-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Accepted: 03/09/2023] [Indexed: 03/31/2023]
Abstract
In recent years, federated learning has been believed to play a considerable role in cross-silo scenarios (e.g., medical institutions) due to its privacy-preserving properties. However, the non-IID problem in federated learning between medical institutions is common and degrades the performance of traditional federated learning algorithms. To overcome this performance degradation, a novel distribution-information-sharing federated learning approach (FedDIS) for medical image classification is proposed that reduces non-IIDness across clients by generating data locally at each client from medical image data distributions shared by the others, while protecting patient privacy. First, a variational autoencoder (VAE) is federally trained, whose encoder maps the local original medical images into a hidden space; the distribution of the mapped data in that space is estimated and then shared among the clients. Second, each client augments a new set of image data from the received distribution information using the VAE decoder. Finally, the clients use the local dataset along with the augmented dataset to train the final classification model in a federated learning manner. Experiments on an Alzheimer's disease MRI diagnosis task and the MNIST classification task show that the proposed method can significantly improve the performance of federated learning under non-IID conditions.
Affiliation(s)
- Leiyang Zhao
  - Guangdong Key Laboratory of Intelligent Information Processing, College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China
- Jianjun Huang
  - Guangdong Key Laboratory of Intelligent Information Processing, College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China

25
Luo G, Blumenthal M, Heide M, Uecker M. Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models. Magn Reson Med 2023; 90:295-311. [PMID: 36912453 DOI: 10.1002/mrm.29624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 02/05/2023] [Accepted: 02/08/2023] [Indexed: 03/14/2023]
Abstract
PURPOSE We introduce a framework that enables efficient sampling from learned probability distributions for MRI reconstruction. METHOD Samples are drawn from the posterior distribution given the measured k-space using the Markov chain Monte Carlo (MCMC) method, in contrast to conventional deep learning-based MRI reconstruction techniques. In addition to the maximum a posteriori estimate for the image, which can be obtained by maximizing the log-likelihood indirectly or directly, the minimum mean square error estimate and uncertainty maps can also be computed from the drawn samples. The data-driven Markov chains are constructed with a score-based generative model learned from a given image database and are independent of the forward operator that is used to model the k-space measurement. RESULTS We numerically investigate the framework from these perspectives: (1) the interpretation of the uncertainty of the image reconstructed from undersampled k-space; (2) the effect of the number of noise scales used to train the generative models; (3) the use of a burn-in phase in MCMC sampling to reduce computation; (4) the comparison to conventional ℓ1-wavelet regularized reconstruction; (5) the transferability of learned information; and (6) the comparison to the fastMRI challenge. CONCLUSION A framework is described that connects the diffusion process and advanced generative models with Markov chains. We demonstrate its flexibility in terms of contrasts and sampling patterns using advanced generative priors, as well as the benefit of quantifying the uncertainty for every pixel.
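The posterior-sampling idea behind this entry can be illustrated with a toy linear-Gaussian sketch (an assumption for illustration only, not the paper's implementation): the learned score network is replaced by the analytic score of a standard Gaussian prior, and a random fat matrix `A` stands in for the undersampled k-space operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x + noise; A is a random fat matrix
# standing in for an undersampled k-space measurement operator.
n, m, sigma = 8, 4, 0.3
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + sigma * rng.standard_normal(m)

def posterior_score(x):
    # grad log p(x|y) = grad log p(y|x) + grad log p(x); a standard Gaussian
    # prior replaces the learned score-based generative model of the paper
    return A.T @ (y - A @ x) / sigma**2 - x

# Unadjusted Langevin dynamics targeting the posterior
eps, burn_in, n_iter = 2e-3, 10000, 40000
x = np.zeros(n)
samples = []
for k in range(n_iter):
    x = x + eps * posterior_score(x) + np.sqrt(2 * eps) * rng.standard_normal(n)
    if k >= burn_in:
        samples.append(x.copy())

x_mmse = np.mean(samples, axis=0)   # minimum mean square error estimate
x_std = np.std(samples, axis=0)     # per-pixel uncertainty map

# For this Gaussian stand-in the posterior mean is available in closed form:
H = A.T @ A / sigma**2 + np.eye(n)
x_post = np.linalg.solve(H, A.T @ y / sigma**2)
```

Averaging the chain gives the MMSE estimate and the sample spread gives an uncertainty map, mirroring the quantities the abstract describes; the closed-form `x_post` is only a sanity check available because the sketch is linear-Gaussian.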
Affiliation(s)
- Guanxiong Luo
  - Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Moritz Blumenthal
  - Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
  - Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
- Martin Heide
  - Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Martin Uecker
  - Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
  - Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
  - German Centre for Cardiovascular Research (DZHK) Partner Site Göttingen, Göttingen, Germany
  - Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany

26
Guan Y, Tu Z, Wang S, Wang Y, Liu Q, Liang D. Magnetic resonance imaging reconstruction using a deep energy-based model. NMR IN BIOMEDICINE 2023; 36:e4848. [PMID: 36262093 DOI: 10.1002/nbm.4848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Revised: 09/09/2022] [Accepted: 09/27/2022] [Indexed: 06/16/2023]
Abstract
Although recent deep energy-based generative models (EBMs) have shown encouraging results in many image-generation tasks, how to take advantage of the self-adversarial cogitation of deep EBMs to boost magnetic resonance imaging (MRI) reconstruction remains an open question. With the successful application of deep learning across a wide range of MRI reconstruction problems, an emerging line of research formulates optimization-based reconstruction in the space of a generative model. Leveraging this, a novel regularization strategy is introduced in this article that exploits the self-adversarial cogitation of a deep energy-based model. More precisely, we advocate alternating learning with a more powerful energy-based model trained by maximum likelihood estimation to obtain deep energy-based information, represented as a prior image; implicit inference with Langevin dynamics is then a unique property of the reconstruction. In contrast to other generative models used for reconstruction, the proposed method uses this deep energy-based information as an image prior to improve image quality. Experimental results show that the proposed technique achieves high reconstruction accuracy competitive with state-of-the-art methods and does not suffer from mode collapse. Algorithmically, an iterative approach is presented to strengthen EBM training with the gradient of the energy network. The robustness and reproducibility of the algorithm were also experimentally validated. More importantly, the proposed reconstruction framework can be generalized to most MRI reconstruction scenarios.
Affiliation(s)
- Yu Guan
  - Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Zongjiang Tu
  - Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Shanshan Wang
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yuhao Wang
  - Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Qiegen Liu
  - Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Dong Liang
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  - Medical AI Research Center, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

27
Djebra Y, Marin T, Han PK, Bloch I, El Fakhri G, Ma C. Manifold Learning via Linear Tangent Space Alignment (LTSA) for Accelerated Dynamic MRI With Sparse Sampling. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:158-169. [PMID: 36121938 PMCID: PMC10024645 DOI: 10.1109/tmi.2022.3207774] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The spatial resolution and temporal frame-rate of dynamic magnetic resonance imaging (MRI) can be improved by reconstructing images from sparsely sampled k-space data with mathematical modeling of the underlying spatiotemporal signals. These models include sparsity models, linear subspace models, and non-linear manifold models. This work presents a novel linear tangent space alignment (LTSA) model-based framework that exploits the intrinsic low-dimensional manifold structure of dynamic images for accelerated dynamic MRI. The performance of the proposed method was evaluated and compared to state-of-the-art methods using numerical simulation studies as well as 2D and 3D in vivo cardiac imaging experiments. The proposed method achieved the best performance in image reconstruction among all the compared methods. The proposed method could prove useful for accelerating many MRI applications, including dynamic MRI, multi-parametric MRI, and MR spectroscopic imaging.
Affiliation(s)
- Yanis Djebra
  - Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA
  - LTCI, Telecom Paris, Institut Polytechnique de Paris, Paris, France
- Thibault Marin
  - Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA
- Paul K. Han
  - Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA
- Isabelle Bloch
  - LIP6, Sorbonne University, CNRS, Paris, France (this work was partly done while I. Bloch was with the LTCI, Telecom Paris, Institut Polytechnique de Paris, Paris, France)
- Georges El Fakhri
  - Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA
- Chao Ma
  - Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA

28
A unified model for reconstruction and R 2* mapping of accelerated 7T data using the quantitative recurrent inference machine. Neuroimage 2022; 264:119680. [PMID: 36240989 DOI: 10.1016/j.neuroimage.2022.119680] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 09/16/2022] [Accepted: 10/10/2022] [Indexed: 11/07/2022] Open
Abstract
Quantitative MRI (qMRI) acquired at the ultra-high field of 7 Tesla has been used in visualizing and analyzing subcortical structures. qMRI relies on the acquisition of multiple images with different scan settings, leading to extended scanning times. Data redundancy and prior information from the relaxometry model can be exploited by deep learning to accelerate the imaging process. We propose the quantitative Recurrent Inference Machine (qRIM), with a unified forward model for joint reconstruction and R2*-mapping from sparse data, embedded in a Recurrent Inference Machine (RIM), an iterative inverse-problem-solving network. To study the dependency of the proposed unified forward model on network architecture, we implemented and compared a quantitative End-to-End Variational Network (qE2EVN). Experiments were performed with high-resolution multi-echo gradient-echo brain data acquired at 7T in a cohort study covering the entire adult life span. The error in R2* reconstructed from undersampled data relative to reference data decreased significantly for the unified model compared to sequential image reconstruction and parameter fitting using the RIM. With increasing acceleration factor, an increasing reduction in reconstruction error was observed, pointing to a larger benefit for sparser data. Qualitatively, this corresponded to reduced image blurriness in the R2*-maps. In contrast, when using a U-Net as the network architecture, a negative bias in R2* was observed in selected regions of interest. Compressed Sensing rendered accurate but less precise estimates of R2*. The qE2EVN showed slightly inferior reconstruction quality compared to the qRIM, but better quality than the U-Net and Compressed Sensing. Subcortical maturation over age, measured by a linearly increasing interquartile range of R2* in the striatum, was preserved up to an acceleration factor of 9. With the integrated prior of the unified forward model, the proposed qRIM can exploit the redundancy among repeated measurements and the information shared between tasks, facilitating relaxometry in accelerated MRI.
29
Sorantin E, Grasser MG, Hemmelmayr A, Tschauner S, Hrzic F, Weiss V, Lacekova J, Holzinger A. The augmented radiologist: artificial intelligence in the practice of radiology. Pediatr Radiol 2022; 52:2074-2086. [PMID: 34664088 PMCID: PMC9537212 DOI: 10.1007/s00247-021-05177-7] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 06/03/2021] [Accepted: 08/02/2021] [Indexed: 12/19/2022]
Abstract
In medicine, particularly in radiology, there are great expectations of artificial intelligence (AI), which can "see" more than human radiologists with regard to, for example, tumor size, shape, morphology, texture and kinetics, thus enabling better care through earlier detection or more precise reports. Another point is that AI can handle large data sets in high-dimensional spaces. But it should not be forgotten that AI is only as good as the training samples available, which should ideally be numerous enough to cover all variants. On the other hand, the main feature of human intelligence is content knowledge and the ability to find near-optimal solutions. The purpose of this paper is to review the current complexity of radiology workplaces and to describe their advantages and shortcomings. Further, we give an overview of the different types and features of AI as used so far. We also touch on the differences between AI and human intelligence in problem-solving. We present a new AI type, labeled "explainable AI," which should enable a balance and cooperation between AI and human intelligence, thus bringing both worlds into compliance with legal requirements. To support (pediatric) radiologists, we propose the creation of an AI assistant that augments radiologists and keeps their brains free for generic tasks.
Affiliation(s)
- Erich Sorantin
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036 Graz, Austria
- Michael G Grasser
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036 Graz, Austria
- Ariane Hemmelmayr
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036 Graz, Austria
- Sebastian Tschauner
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036 Graz, Austria
- Franko Hrzic
  - Faculty of Engineering, Department of Computer Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Veronika Weiss
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036 Graz, Austria
- Jana Lacekova
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036 Graz, Austria
- Andreas Holzinger
  - Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria

30
Chen EZ, Wang P, Chen X, Chen T, Sun S. Pyramid Convolutional RNN for MRI Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2033-2047. [PMID: 35192462 DOI: 10.1109/tmi.2022.3153849] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Fast and accurate MRI image reconstruction from undersampled data is crucial in clinical practice. Deep learning based reconstruction methods have shown promising advances in recent years. However, recovering fine details from undersampled data is still challenging. In this paper, we introduce a novel deep learning based method, Pyramid Convolutional RNN (PC-RNN), to reconstruct images from multiple scales. Based on the formulation of MRI reconstruction as an inverse problem, we design the PC-RNN model with three convolutional RNN (ConvRNN) modules to iteratively learn features at multiple scales. Each ConvRNN module reconstructs images at a different scale, and the reconstructed images are combined by a final CNN module in a pyramid fashion. The multi-scale ConvRNN modules learn a coarse-to-fine image reconstruction. Unlike other common reconstruction methods for parallel imaging, PC-RNN does not employ coil sensitivity maps for multi-coil data and directly models the multiple coils as multi-channel inputs. A coil compression technique is applied to standardize data with various coil numbers, leading to more efficient training. We evaluate our model on the fastMRI knee and brain datasets, and the results show that the proposed model outperforms other methods and can recover more details. The proposed method is one of the winning solutions in the 2019 fastMRI competition.
31
Korkmaz Y, Dar SUH, Yurt M, Ozbey M, Cukur T. Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1747-1763. [PMID: 35085076 DOI: 10.1109/tmi.2022.3147426] [Citation(s) in RCA: 88] [Impact Index Per Article: 29.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.
32
Tezcan KC, Karani N, Baumgartner CF, Konukoglu E. Sampling Possible Reconstructions of Undersampled Acquisitions in MR Imaging With a Deep Learned Prior. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1885-1896. [PMID: 35143393 DOI: 10.1109/tmi.2022.3150853] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Undersampling the k-space during MR acquisitions saves time but results in an ill-posed inversion problem, leading to an infinite set of images as possible solutions. Traditionally, this is tackled as a reconstruction problem by searching for a single "best" image out of this solution set according to some chosen regularization or prior. This approach, however, misses the possibility of other solutions and hence ignores the uncertainty in the inversion process. In this paper, we propose a method that instead returns multiple images which are possible under the acquisition model and the chosen prior, to capture the uncertainty in the inversion process. To this end, we introduce a low-dimensional latent space and model the posterior distribution of the latent vectors given the acquisition data in k-space, from which we can sample in the latent space and obtain the corresponding images. We use a variational autoencoder for the latent model and the Metropolis-adjusted Langevin algorithm for the sampling. We evaluate our method on two datasets, with images from the Human Connectome Project and in-house measured multi-coil images, and compare against five alternative methods. Results indicate that the proposed method produces images that match the measured k-space data better than the alternatives, while showing realistic structural variability. Furthermore, in contrast to the compared methods, the proposed method yields higher uncertainty in the undersampled phase-encoding direction, as expected.
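The latent-posterior sampling this entry describes can be sketched in miniature (all matrices here are illustrative assumptions, not the paper's trained VAE): a linear "decoder" `D` maps a 2-D latent `z` to an 8-pixel image, a matrix `A` mimics the undersampled acquisition, and a Metropolis-adjusted Langevin algorithm (MALA) samples `z` from its posterior given the measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins: D plays the role of the VAE decoder, A the role of
# the undersampled acquisition; the resulting posterior over z is Gaussian,
# so the sampler can be checked against a closed-form answer.
D = rng.standard_normal((8, 2))      # "decoder": latent -> image
A = rng.standard_normal((6, 8))      # measurement operator ("k-space")
sigma = 0.2
z_true = np.array([1.0, -0.5])
y = A @ D @ z_true + sigma * rng.standard_normal(6)

def log_post(z):
    r = y - A @ D @ z
    return -0.5 * (r @ r) / sigma**2 - 0.5 * (z @ z)   # standard Gaussian prior

def grad_log_post(z):
    return D.T @ A.T @ (y - A @ D @ z) / sigma**2 - z

# Metropolis-adjusted Langevin algorithm in the latent space
eps = 1e-4
z = np.zeros(2)
samples, accepted = [], 0
for k in range(10000):
    fwd = z + eps * grad_log_post(z)                    # Langevin proposal mean
    prop = fwd + np.sqrt(2 * eps) * rng.standard_normal(2)
    bwd = prop + eps * grad_log_post(prop)
    log_q_fwd = -((prop - fwd) @ (prop - fwd)) / (4 * eps)
    log_q_bwd = -((z - bwd) @ (z - bwd)) / (4 * eps)
    # Metropolis-Hastings accept/reject correction
    if np.log(rng.uniform()) < log_post(prop) - log_post(z) + log_q_bwd - log_q_fwd:
        z, accepted = prop, accepted + 1
    if k >= 2000:                                       # discard burn-in
        samples.append(z.copy())

z_samples = np.array(samples)
recons = z_samples @ D.T             # each latent sample decodes to an image
z_mean = z_samples.mean(axis=0)

# Closed-form posterior mean for this linear-Gaussian stand-in, as a check:
M = A @ D
H = M.T @ M / sigma**2 + np.eye(2)
z_post = np.linalg.solve(H, M.T @ y / sigma**2)
```

Decoding every accepted latent sample yields a set of plausible reconstructions rather than a single "best" image, which is the central point of the abstract; their per-pixel spread is the uncertainty estimate.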
33
A Compressed Reconstruction Network Combining Deep Image Prior and Autoencoding Priors for Single-Pixel Imaging. PHOTONICS 2022. [DOI: 10.3390/photonics9050343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Single-pixel imaging (SPI) is a promising imaging scheme based on compressive sensing. However, its application in high-resolution and real-time scenarios is a great challenge due to the long sampling and reconstruction times required. Deep Learning Compressed Networks (DLCNets) can avoid the long iterative operations required by traditional reconstruction algorithms and can achieve fast, high-quality reconstruction; hence, deep-learning-based SPI has attracted much attention. DLCNets learn prior distributions of real pictures from massive datasets, while the Deep Image Prior (DIP) uses a neural network's own structural prior to solve inverse problems without requiring a lot of training data. This paper proposes a compressed reconstruction network (DPAP) based on DIP for single-pixel imaging. DPAP is designed with two learning stages, which enables it to focus on statistical information of the image structure at different scales. To obtain prior information from the dataset, the measurement matrix is jointly optimized by a network, and multiple autoencoders are trained as regularization terms added to the loss function. Extensive simulations and practical experiments demonstrate that the proposed network outperforms existing algorithms.
34
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications. ELECTRONICS 2022. [DOI: 10.3390/electronics11040586] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent work using deep learning methods to solve the CS problem for image or medical imaging reconstruction, including computed tomography (CT), magnetic resonance imaging (MRI) and positron-emission tomography (PET). We propose a novel framework to unify traditional iterative algorithms and deep learning approaches. In short, we define two projection operators, toward the image prior and toward data consistency respectively, and any reconstruction algorithm can be decomposed into these two parts. Though deep learning methods can be divided into several categories, they all fit within this framework. We establish the relationships between different deep learning reconstruction methods and connect them to traditional methods through the proposed framework. This also indicates that the key to solving the CS problem and its medical applications is how to model the image prior. Based on the framework, we analyze current deep learning methods and point out some important directions for future research.
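The two-operator decomposition this review describes (one operator toward the image prior, one toward data consistency) can be sketched with hand-crafted choices for both operators; everything below is an illustrative assumption, with soft-thresholding as the prior operator where a deep method would use a learned network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy CS problem: recover a k-sparse signal from m < n random measurements.
n, m, k = 64, 32, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(2.0, 4.0, size=k)
y = A @ x_true

def proj_prior(x, tau=0.1):
    # operator toward the image prior: soft-thresholding, the prox of
    # tau*||.||_1 (a sparsity prior standing in for a learned network)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

AAt_inv = np.linalg.inv(A @ A.T)
def proj_data(x):
    # operator toward data consistency: exact projection onto {x : A x = y}
    return x + A.T @ (AAt_inv @ (y - A @ x))

# Any reconstruction in the framework alternates the two operators:
x = np.zeros(n)
for _ in range(500):
    x = proj_prior(proj_data(x))
x = proj_data(x)   # finish on the data-consistency set
```

Swapping `proj_prior` for a denoising network while keeping `proj_data` fixed turns this same loop into the plug-and-play style of deep reconstruction the review categorizes.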
35
Posterior temperature optimized Bayesian models for inverse problems in medical imaging. Med Image Anal 2022; 78:102382. [DOI: 10.1016/j.media.2022.102382] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 11/09/2021] [Accepted: 02/01/2022] [Indexed: 11/21/2022]
36
Iterative Reconstruction for Low-Dose CT using Deep Gradient Priors of Generative Model. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022. [DOI: 10.1109/trpms.2022.3148373] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
37
Yoo J, Jin KH, Gupta H, Yerly J, Stuber M, Unser M. Time-Dependent Deep Image Prior for Dynamic MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3337-3348. [PMID: 34043506 DOI: 10.1109/tmi.2021.3084288] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
We propose a novel unsupervised deep-learning-based algorithm for dynamic magnetic resonance imaging (MRI) reconstruction. Dynamic MRI requires rapid data acquisition for the study of moving organs such as the heart. We introduce a generalized version of the deep-image-prior approach, which optimizes the weights of a reconstruction network to fit a sequence of sparsely acquired dynamic MRI measurements. Our method needs neither prior training nor additional data. In particular, for cardiac images, it does not require the marking of heartbeats or the reordering of spokes. The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space. Our method outperforms the state-of-the-art methods quantitatively and qualitatively in both retrospective and real fetal cardiac datasets. To the best of our knowledge, this is the first unsupervised deep-learning-based method that can reconstruct the continuous variation of dynamic MRI sequences with high spatial resolution.
38
Quan C, Zhou J, Zhu Y, Chen Y, Wang S, Liang D, Liu Q. Homotopic Gradients of Generative Density Priors for MR Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3265-3278. [PMID: 34010128 DOI: 10.1109/tmi.2021.3081677] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Deep learning, particularly the generative model, has recently demonstrated tremendous potential to significantly speed up image reconstruction with reduced measurements. Rather than optimizing density priors as existing generative models often do, in this work we exploit homotopic gradients of generative density priors (HGGDP) for magnetic resonance imaging (MRI) reconstruction by taking advantage of denoising score matching. More precisely, to tackle the low-dimensional manifold and low-data-density region issues in generative density priors, we estimate the target gradients in a higher-dimensional space. We train a more powerful noise-conditional score network by forming a high-dimensional tensor as the network input at the training phase, and more artificial noise is also injected in the embedding space. At the reconstruction stage, a homotopy method is employed to pursue the density prior so as to boost reconstruction performance. Experimental results demonstrate the remarkable performance of HGGDP in terms of reconstruction accuracy: with only 10% of the k-space data, images of high quality can still be generated, as effectively as standard MRI reconstructions from fully sampled data.
39
Hržić F, Žužić I, Tschauner S, Štajduhar I. Cast suppression in radiographs by generative adversarial networks. J Am Med Inform Assoc 2021;28:2687-2694. [PMID: 34613393] [DOI: 10.1093/jamia/ocab192]
Abstract
Injured extremities commonly need to be immobilized by casts to allow proper healing. We propose a method to suppress cast superimpositions in pediatric wrist radiographs based on the cycle generative adversarial network (CycleGAN) model. We retrospectively reviewed unpaired pediatric wrist radiographs (n = 9672) and sampled them into 2 equal groups, with and without cast. The test subset consisted of 718 radiographs with cast. We quantitatively and qualitatively evaluated different square input sizes (256, 512, and 1024 pixels) for U-Net and ResNet-based CycleGAN architectures in cast suppression. The mean age was 11 ± 3 years in images containing cast (n = 4836) and 11 ± 4 years in castless samples (n = 4836). A total of 5956 radiographs had been acquired from males and 3716 from females. A U-Net 512 CycleGAN performed best (P ≤ .001). CycleGAN models successfully suppressed casts in pediatric wrist radiographs, allowing the development of a related software tool for radiology image viewers.
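The CycleGAN objective used here pairs adversarial losses with a cycle-consistency term that asks the cast-to-castless mapping and its inverse to compose to the identity. A minimal numpy sketch of just the cycle term, with toy stand-in generators (the shift maps and 1-D "radiograph" are assumptions for illustration):

```python
import numpy as np

def cycle_loss(G, F, x):
    """L1 cycle-consistency term: F(G(x)) should recover x."""
    return np.mean(np.abs(F(G(x)) - x))

x = np.linspace(0.0, 1.0, 10)   # toy 1-D "radiograph"
G = lambda v: v + 2.0           # toy cast-suppression generator
F = lambda v: v - 2.0           # its toy inverse (cast re-synthesis)
assert cycle_loss(G, F, x) < 1e-12
```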
Affiliation(s)
- Franko Hržić
- Department of Computer Engineering, Faculty of Engineering, University of Rijeka, Rijeka, Croatia; Center for Artificial Intelligence and Cybersecurity, University of Rijeka, Rijeka, Croatia
- Ivana Žužić
- Department of Informatics, Technical University of Munich, Munich, Germany
- Sebastian Tschauner
- Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Ivan Štajduhar
- Department of Computer Engineering, Faculty of Engineering, University of Rijeka, Rijeka, Croatia; Center for Artificial Intelligence and Cybersecurity, University of Rijeka, Rijeka, Croatia
40
Cheng J, Cui ZX, Huang W, Ke Z, Ying L, Wang H, Zhu Y, Liang D. Learning Data Consistency and its Application to Dynamic MR Imaging. IEEE Trans Med Imaging 2021;40:3140-3153. [PMID: 34252025] [DOI: 10.1109/tmi.2021.3096232]
Abstract
Magnetic resonance (MR) image reconstruction from undersampled k-space data can be formulated as a minimization problem involving data consistency and an image prior. Existing deep learning (DL)-based methods for MR reconstruction employ deep networks to exploit the prior information and integrate that knowledge into the reconstruction under an explicit data-consistency constraint, without considering the real distribution of the noise. In this work, we propose a new DL-based approach, termed Learned DC, that implicitly learns data consistency with deep networks, corresponding to the actual probability distribution of the system noise. The data-consistency term and the prior knowledge are both embedded in the weights of the networks, providing a fully implicit way of learning the reconstruction model. We evaluated the proposed approach on highly undersampled dynamic data, including dynamic cardiac cine data with up to 24-fold acceleration and dynamic rectum data with an acceleration factor equal to the number of phases. Experimental results demonstrate the superior performance of Learned DC, both quantitatively and qualitatively, compared with the state-of-the-art.
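The explicit data-consistency constraint that Learned DC replaces with a learned one can be written as a simple k-space overwrite: wherever a sample was acquired, force the estimate's spectrum to match it. A noise-free numpy sketch (mask, sizes, and FFT convention are assumptions for illustration):

```python
import numpy as np

def hard_data_consistency(x, y, mask):
    """Explicit DC step: overwrite the estimate's k-space with the acquired
    samples wherever the mask sampled them (noise-free assumption)."""
    k = np.fft.fft2(x)
    k = np.where(mask, y, k)
    return np.fft.ifft2(k)

rng = np.random.default_rng(1)
truth = rng.standard_normal((8, 8))
mask = rng.random((8, 8)) < 0.5          # toy undersampling pattern
y = mask * np.fft.fft2(truth)            # acquired k-space samples

x_dc = hard_data_consistency(np.zeros((8, 8)), y, mask)
# Sampled k-space locations of the corrected estimate now match the data.
assert np.allclose(np.fft.fft2(x_dc)[mask], y[mask])
```

With noisy data this hard overwrite is suboptimal, which is the motivation the abstract gives for learning the consistency term instead.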
41
Florin M, Vaussy A, Macron L, Bazot M, Stemmer A, Pinar U, Jarboui L. Evaluation of Iterative Denoising 3-Dimensional T2-Weighted Turbo Spin Echo for the Diagnosis of Deep Infiltrating Endometriosis. Invest Radiol 2021;56:637-644. [PMID: 33813570] [DOI: 10.1097/rli.0000000000000786]
Abstract
OBJECTIVES The primary end point of this study was to evaluate the image quality and reliability of a highly accelerated 3-dimensional T2 turbo spin echo (3D-T2-TSE) sequence with prototype iterative denoising (ID) reconstruction, compared with conventional 2D T2 sequences, for the diagnosis of deep infiltrating endometriosis (DIE). The secondary end point was to demonstrate the image quality improvement of the 3D-T2-TSE sequence with ID reconstruction. MATERIALS AND METHODS Patients referred to our institution for pelvic magnetic resonance imaging for suspected endometriosis were prospectively enrolled over a 4-month period. Both conventional 2D-T2 (sagittal, axial, and coronal T2 oblique to the cervix) and 3D-T2-TSE sequences were performed, with scan times of 7 minutes 43 seconds and 4 minutes 58 seconds, respectively. Reconstructions with prototype ID (3D-T2-denoised) and without it (3D-T2) were generated inline at the end of the acquisition. Two radiologists independently evaluated the image quality of the 3D-T2, 3D-T2-denoised, and 2D-T2 sequences. Diagnostic confidence for DIE was evaluated for both the 3D-T2-denoised and 2D-T2 sequences. Intraobserver and interobserver agreements were calculated using the Cohen κ coefficient. RESULTS Ninety female patients were included. Both readers found that the ID algorithm significantly improved image quality and decreased artifacts in the 3D-T2-denoised compared with the 3D-T2 sequences (P < 0.001). One radiologist found a significant image quality improvement for 3D-T2-denoised compared with 2D-T2 sequences (P = 0.002), whereas the other reader found no significant difference. The interobserver agreement of the 3D-T2-denoised and 2D-T2 sequences was 0.84 (0.73-0.95) and 0.78 (0.65-0.9), respectively, for the diagnosis of DIE. Intraobserver agreement for readers 1 and 2 was 0.86 (0.79-1) and 0.83 (0.76-1), respectively. For all localizations of DIE, interobserver and intraobserver agreements were almost perfect or substantial for both the 3D-T2-denoised and 2D-T2 sequences. CONCLUSIONS Three-dimensional T2-denoised imaging is a promising replacement for conventional 2D-T2 sequences, offering a significant scan-time reduction without compromising image quality or diagnostic information in the assessment of DIE.
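The agreement values quoted above are Cohen κ coefficients, which discount the agreement two readers would reach by chance. A dependency-free sketch with toy binary ratings (the example reader calls are invented for illustration):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                          # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in labels)   # chance agreement
    return (po - pe) / (1 - pe)

reader1 = [1, 1, 0, 1, 0, 0, 1, 0]   # toy DIE-positive/negative calls
reader2 = [1, 1, 0, 1, 0, 1, 1, 0]
assert abs(cohens_kappa(reader1, reader2) - 0.75) < 1e-12
```

Here the readers agree on 7 of 8 cases (observed agreement 0.875) against a chance agreement of 0.5, giving κ = 0.75 — "substantial" on the usual Landis–Koch scale.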
Affiliation(s)
- Marie Florin
- Centre Imagerie du Nord, Clinique du Landy, Radiology Department, Ramsay-Générale de Santé, Saint-Ouen, France
- Laurent Macron
- Centre Imagerie du Nord, Clinique du Landy, Radiology Department, Ramsay-Générale de Santé, Saint-Ouen, France
- Marc Bazot
- Department of Radiology, Hôpital Tenon, Paris, France
- Ugo Pinar
- Sorbonne University, APHP, Hôpital la Pitié-Salpêtrière, Urology and Renal Transplantation Department, Paris, France
- Lamia Jarboui
- Centre Imagerie du Nord, Clinique du Landy, Radiology Department, Ramsay-Générale de Santé, Saint-Ouen, France
42
Qin C, Duan J, Hammernik K, Schlemper J, Küstner T, Botnar R, Prieto C, Price AN, Hajnal JV, Rueckert D. Complementary time-frequency domain networks for dynamic parallel MR image reconstruction. Magn Reson Med 2021;86:3274-3291. [PMID: 34254355] [DOI: 10.1002/mrm.28917]
Abstract
PURPOSE To introduce a novel deep learning-based approach for fast and high-quality dynamic multicoil MR reconstruction by learning a complementary time-frequency domain network that exploits spatiotemporal correlations simultaneously from complementary domains. THEORY AND METHODS Dynamic parallel MR image reconstruction is formulated as a multivariable minimization problem, where the data are regularized in the combined temporal Fourier and spatial (x-f) domain as well as in the spatiotemporal image (x-t) domain. An iterative algorithm based on the variable splitting technique is derived, which alternates among signal de-aliasing steps in x-f and x-t spaces, a closed-form point-wise data consistency step, and a weighted coupling step. The iterative model is embedded into a deep recurrent neural network which learns to recover the image via exploiting spatiotemporal redundancies in complementary domains. RESULTS Experiments were performed on two datasets of highly undersampled multicoil short-axis cardiac cine MRI scans. Results demonstrate that our proposed method outperforms the current state-of-the-art approaches both quantitatively and qualitatively. The proposed model can also generalize well to data acquired from a different scanner and data with pathologies that were not seen in the training set. CONCLUSION The work shows the benefit of reconstructing dynamic parallel MRI in complementary time-frequency domains with deep neural networks. The method can effectively and robustly reconstruct high-quality images from highly undersampled dynamic multicoil data (16× and 24×, yielding 15-s and 10-s scan times, respectively) with fast reconstruction speed (2.8 seconds). This could potentially facilitate achieving fast single-breath-hold clinical 2D cardiac cine imaging.
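The two complementary domains the network alternates between are related by a temporal Fourier transform: the x-f representation is simply the FFT of the dynamic series along the time axis, so switching domains is lossless. A minimal numpy illustration (array sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
x_t = rng.standard_normal((16, 12))    # dynamic series: 16 spatial points, 12 frames

x_f = np.fft.fft(x_t, axis=-1)         # x-t -> x-f: FFT along the time axis
back = np.fft.ifft(x_f, axis=-1).real  # x-f -> x-t: inverse temporal FFT

assert np.allclose(back, x_t)          # the two domains carry the same information
```

De-aliasing in x-f is attractive because periodic cardiac motion concentrates energy at a few temporal frequencies, while x-t regularization captures spatial structure; the method exploits both.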
Affiliation(s)
- Chen Qin
- Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, UK; Department of Computing, Imperial College London, London, UK
- Jinming Duan
- School of Computer Science, University of Birmingham, Birmingham, UK
- Kerstin Hammernik
- Department of Computing, Imperial College London, London, UK; Institute for AI and Informatics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Jo Schlemper
- Department of Computing, Imperial College London, London, UK; Hyperfine Research Inc., Guilford, CT, USA
- Thomas Küstner
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK; Department of Diagnostic and Interventional Radiology, Medical Image and Data Analysis, University Hospital of Tuebingen, Tuebingen, Germany
- René Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Anthony N Price
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Joseph V Hajnal
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Daniel Rueckert
- Department of Computing, Imperial College London, London, UK; Institute for AI and Informatics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
43
Wang S, Xiao T, Liu Q, Zheng H. Deep learning for fast MR imaging: A review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102579]
44
A CNN-Based Autoencoder and Machine Learning Model for Identifying Betel-Quid Chewers Using Functional MRI Features. Brain Sci 2021;11:809. [PMID: 34207169] [PMCID: PMC8234239] [DOI: 10.3390/brainsci11060809]
Abstract
Betel quid (BQ) is one of the most commonly used psychoactive substances in parts of Asia and the Pacific. Although some studies have shown brain-function alterations in BQ chewers, it is virtually impossible for radiologists to visually distinguish the MRI maps of BQ chewers from those of others. In this study, we aimed to construct autoencoder and machine-learning models to discover brain alterations in BQ chewers based on features of resting-state functional magnetic resonance imaging (rs-fMRI). rs-fMRI was obtained from 16 BQ chewers, 15 tobacco- and alcohol-user controls (TA), and 17 healthy controls (HC). A convolutional neural network (CNN)-based autoencoder model and a supervised machine-learning algorithm, logistic regression (LR), were used to discriminate BQ chewers from TA and HC. Classifying the brain MRIs of HC, TA controls, and BQ chewers with leave-one-out cross-validation (LOOCV) resulted in a highest accuracy of 83%, attained by LR with two rs-fMRI feature sets. The resulting models were able to identify BQ chewers among TA controls and HC from rs-fMRI data, which might provide a helpful approach for tracking BQ chewers in the future.
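Leave-one-out cross-validation, as used for the 83% accuracy figure, trains on all samples but one and tests on the held-out sample, cycling through the dataset — a sensible choice for cohorts this small. A dependency-free sketch, with a nearest-centroid classifier standing in for the paper's logistic regression and synthetic features in place of rs-fMRI data:

```python
import numpy as np

def loocv_accuracy(X, y, fit_predict):
    """Leave-one-out CV: train on n-1 samples, test on the held-out one,
    and average the hits over all n splits."""
    hits = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        hits += (fit_predict(X[train], y[train], X[i]) == y[i])
    return hits / len(y)

def nearest_centroid(X_train, y_train, x):
    # Stand-in for the paper's logistic regression, kept dependency-free:
    # assign x to the class whose mean feature vector is closest.
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 5)),    # synthetic "controls"
               rng.normal(3, 1, (20, 5))])   # synthetic "chewers"
y = np.array([0] * 20 + [1] * 20)
acc = loocv_accuracy(X, y, nearest_centroid)
assert acc > 0.9   # well-separated toy classes classify reliably
```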
45
Lin DJ, Johnson PM, Knoll F, Lui YW. Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians. J Magn Reson Imaging 2021;53:1015-1028. [PMID: 32048372] [PMCID: PMC7423636] [DOI: 10.1002/jmri.27078]
Abstract
Artificial intelligence (AI) shows tremendous promise in the field of medical imaging, with recent breakthroughs applying deep-learning models for data acquisition, classification problems, segmentation, image synthesis, and image reconstruction. With an eye towards clinical applications, we summarize the active field of deep-learning-based MR image reconstruction. We review the basic concepts of how deep-learning algorithms aid in the transformation of raw k-space data to image data, and specifically examine accelerated imaging and artifact suppression. Recent efforts in these areas show that deep-learning-based algorithms can match and, in some cases, eclipse conventional reconstruction methods in terms of image quality and computational efficiency across a host of clinical imaging applications, including musculoskeletal, abdominal, cardiac, and brain imaging. This article is an introductory overview aimed at clinical radiologists with no experience in deep-learning-based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
Affiliation(s)
- Dana J. Lin
- Department of Radiology, NYU School of Medicine / NYU Langone Health
- Florian Knoll
- New York University School of Medicine, Center for Biomedical Imaging
- Yvonne W. Lui
- Department of Radiology, NYU School of Medicine / NYU Langone Health
46
Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0]
47
Functional and Structural Connectome Features for Machine Learning Chemo-Brain Prediction in Women Treated for Breast Cancer with Chemotherapy. Brain Sci 2020;10:851. [PMID: 33198294] [PMCID: PMC7696512] [DOI: 10.3390/brainsci10110851]
Abstract
Breast cancer is the leading cancer among women worldwide, and many breast cancer patients struggle with psychological and cognitive disorders. In this study, we aim to use machine learning models to discriminate between chemo-brain participants and healthy controls (HCs) using connectomes (connectivity matrices) and topological coefficients. Nineteen female post-chemotherapy breast cancer (BC) survivors and 20 female HCs were recruited for this study. Participants in both groups received resting-state functional magnetic resonance imaging (rs-fMRI) and generalized q-sampling imaging (GQI). Logistic regression (LR), decision tree classifier (CART), and XGBoost (XGB) were the models we adopted for classification. In connectome analysis, LR achieved an accuracy of 79.49% with the functional connectomes and an accuracy of 71.05% with the structural connectomes. In the topological coefficient analysis, accuracies of 87.18%, 82.05%, and 83.78% were obtained by the functional global efficiency with CART, the functional global efficiency with XGB, and the structural transitivity with CART, respectively. The areas under the curves (AUCs) were 0.93, 0.94, 0.87, 0.88, and 0.84, respectively. Our study showed the discriminating ability of functional connectomes, structural connectomes, and global efficiency. We hope our findings can contribute to an understanding of the chemo-brain and the establishment of a clinical system for tracking it.
48
Shaul R, David I, Shitrit O, Riklin Raviv T. Subsampled brain MRI reconstruction by generative adversarial neural networks. Med Image Anal 2020;65:101747. [PMID: 32593933] [DOI: 10.1016/j.media.2020.101747]
Abstract
A main challenge in magnetic resonance imaging (MRI) is speeding up scan time. Beyond improving patient experience and reducing operational costs, faster scans are essential for time-sensitive imaging, such as fetal, cardiac, or functional MRI, where temporal resolution is important and target movement is unavoidable, yet must be reduced. Current MRI acquisition methods speed up scan time at the expense of lower spatial resolution and costlier hardware. We introduce a practical, software-only framework, based on deep learning, for accelerating MRI acquisition while maintaining anatomically meaningful imaging. This is accomplished by MRI subsampling followed by estimating the missing k-space samples via generative adversarial neural networks. A generator-discriminator interplay enables the introduction of an adversarial cost in addition to the fidelity and image-quality losses used for optimizing the reconstruction. Promising reconstruction results are obtained from feasible sampling patterns of up to fivefold acceleration of diverse brain MRIs, from a large publicly available dataset of healthy adult scans as well as multimodal acquisitions of multiple sclerosis patients and dynamic contrast-enhanced MRI (DCE-MRI) sequences of stroke and tumor patients. Clinical usability of the reconstructed MRI scans is assessed by performing either lesion or healthy-tissue segmentation and comparing the results to those obtained from the original, fully sampled images. Reconstruction quality and usability of the DCE-MRI sequences are demonstrated by calculating the pharmacokinetic (PK) parameters.
The proposed MRI reconstruction approach is shown to outperform state-of-the-art methods for all datasets tested in terms of the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), as well as either the mean squared error (MSE) with respect to the PK parameters, calculated for the fully sampled DCE-MRI sequences, or the segmentation compatibility, measured in terms of Dice scores and Hausdorff distance. The code is available on GitHub.
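The PSNR figure of merit used in this comparison is a log-scaled inverse of the mean squared error between the reconstruction and the fully sampled reference. A small numpy sketch (the unit data range and toy images are assumptions):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(range^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range**2 / mse)

ref = np.linspace(0, 1, 64).reshape(8, 8)   # toy "fully sampled" image
noisy = ref + 0.01                          # uniform error of 0.01
assert abs(psnr(ref, noisy) - 40.0) < 1e-6  # MSE = 1e-4 -> 40 dB
```

Higher is better: every 10 dB corresponds to a tenfold reduction in mean squared error.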
Affiliation(s)
- Roy Shaul
- The School of Electrical and Computer Engineering, The Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Israel
- Itamar David
- The School of Electrical and Computer Engineering, The Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Israel
- Ohad Shitrit
- The School of Electrical and Computer Engineering, The Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Israel
- Tammy Riklin Raviv
- The School of Electrical and Computer Engineering, The Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Israel
49
Biffi C, Cerrolaza JJ, Tarroni G, Bai W, de Marvao A, Oktay O, Ledig C, Le Folgoc L, Kamnitsas K, Doumou G, Duan J, Prasad SK, Cook SA, O'Regan DP, Rueckert D. Explainable Anatomical Shape Analysis Through Deep Hierarchical Generative Models. IEEE Trans Med Imaging 2020;39:2088-2099. [PMID: 31944949] [PMCID: PMC7269693] [DOI: 10.1109/tmi.2020.2964499]
Abstract
Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three dimensions of both global and regional anatomical features which better discriminate between the conditions under examination. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.
50
On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc Natl Acad Sci U S A 2020;117:30088-30095. [PMID: 32393633] [DOI: 10.1073/pnas.1907377117]
Abstract
Deep learning, due to its unprecedented success in tasks such as image classification, has emerged as a new tool in image reconstruction with potential to change the field. In this paper, we demonstrate a crucial phenomenon: Deep learning typically yields unstable methods for image reconstruction. The instabilities usually occur in several forms: 1) certain tiny, almost undetectable perturbations, in both the image and the sampling domain, may result in severe artefacts in the reconstruction; 2) a small structural change, for example a tumor, may not be captured in the reconstructed image; and 3) (a counterintuitive type of instability) more samples may yield poorer performance. We provide a stability test, with algorithms and easy-to-use software, that detects these instability phenomena. The test is aimed at researchers, so they can probe their networks for instabilities, and at government agencies, such as the Food and Drug Administration (FDA), to help secure the safe use of deep learning methods.
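The flavor of such a stability test can be sketched as a random-perturbation probe that measures how much a reconstruction map amplifies tiny input perturbations. This is a crude stand-in for the paper's adversarial test (which searches for worst-case perturbations by optimization); the toy linear map below has a known worst-case ratio:

```python
import numpy as np

def worst_case_ratio(recon, x, trials=200, eps=1e-3, seed=4):
    """Crude stability probe: largest observed ratio between the change in
    the reconstruction and the size of a tiny random input perturbation."""
    rng = np.random.default_rng(seed)
    base = recon(x)
    worst = 0.0
    for _ in range(trials):
        d = rng.standard_normal(x.shape)
        d *= eps / np.linalg.norm(d)          # perturbation of norm eps
        worst = max(worst, np.linalg.norm(recon(x + d) - base) / eps)
    return worst

x = np.ones(16)
stable = lambda v: 0.5 * v                    # toy Lipschitz-1/2 "reconstructor"
assert worst_case_ratio(stable, x) <= 0.5 + 1e-9
```

An unstable network would show ratios orders of magnitude above 1, which is exactly the artefact-amplification behavior the abstract describes.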