101
Beracha I, Seginer A, Tal A. Adaptive model-based Magnetic Resonance. Magn Reson Med 2023. [PMID: 37154407] [DOI: 10.1002/mrm.29688]
Abstract
PURPOSE Conventional sequences are static in nature, fixing measurement parameters in advance in anticipation of a wide range of expected tissue parameter values. We set out to design and benchmark a new, personalized approach, termed adaptive MR, in which incoming subject data is used to update and fine-tune the pulse sequence parameters in real time. METHODS We implemented an adaptive, real-time multi-echo (ME) experiment for estimating T2s. Our approach combined a Bayesian framework with model-based reconstruction. It maintained and continuously updated a prior distribution of the desired tissue parameters, including T2, which was used to guide the selection of sequence parameters in real time. RESULTS Computer simulations predicted accelerations between 1.7- and 3.3-fold for adaptive multi-echo sequences relative to static ones. These predictions were corroborated in phantom experiments. In healthy volunteers, our adaptive framework accelerated the measurement of T2 for N-acetyl-aspartate by a factor of 2.5. CONCLUSION Adaptive pulse sequences that alter their excitations in real time could provide substantial reductions in acquisition times. Given the generality of our proposed framework, our results motivate further research into other adaptive model-based approaches to MRI and MRS.
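To make the adaptive idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of Bayesian sequence adaptation for mono-exponential T2 estimation: a grid prior over T2 is updated after each echo, and the next echo time is chosen greedily to minimize the expected posterior entropy. The signal model, noise level, grids, and candidate echo times are assumptions for illustration only; the paper additionally couples this updating with model-based reconstruction on the scanner.

```python
# Illustrative sketch: Bayesian adaptation of echo times for T2 estimation.
import numpy as np

def signal(te, t2, s0=1.0):
    """Mono-exponential decay model S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

def update_posterior(prior, t2_grid, te, measurement, sigma=0.02):
    """Multiply the prior by a Gaussian likelihood for the new echo and renormalize."""
    likelihood = np.exp(-(measurement - signal(te, t2_grid)) ** 2 / (2 * sigma ** 2))
    posterior = prior * likelihood
    return posterior / (posterior.sum() + 1e-30)

def next_echo_time(prior, t2_grid, te_candidates, sigma=0.02, n_mc=64, rng=None):
    """Greedy adaptive step: choose the TE with the lowest expected posterior entropy."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_te, best_h = None, np.inf
    for te in te_candidates:
        h = 0.0
        for _ in range(n_mc):
            t2 = rng.choice(t2_grid, p=prior)            # plausible T2 under the prior
            y = signal(te, t2) + rng.normal(0.0, sigma)  # simulated noisy echo
            post = update_posterior(prior, t2_grid, te, y, sigma)
            h -= (post * np.log(post + 1e-12)).sum() / n_mc
        if h < best_h:
            best_te, best_h = te, h
    return best_te

# Toy acquisition loop: true T2 = 80 ms, uniform prior on a 1-300 ms grid.
t2_grid = np.linspace(1.0, 300.0, 300)
prior = np.ones_like(t2_grid) / t2_grid.size
rng = np.random.default_rng(1)
for _ in range(5):
    te = next_echo_time(prior, t2_grid, te_candidates=np.arange(10, 201, 10), rng=rng)
    y = signal(te, 80.0) + rng.normal(0.0, 0.02)
    prior = update_posterior(prior, t2_grid, te, y)
print("MAP T2 estimate (ms):", t2_grid[np.argmax(prior)])
```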
Affiliation(s)
- Inbal Beracha
- Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot, Israel
- Assaf Tal
- Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot, Israel
102
Jafari M, Shoeibi A, Khodatars M, Ghassemi N, Moridian P, Alizadehsani R, Khosravi A, Ling SH, Delfan N, Zhang YD, Wang SH, Gorriz JM, Alinejad-Rokny H, Acharya UR. Automated diagnosis of cardiovascular diseases from cardiac magnetic resonance imaging using deep learning models: A review. Comput Biol Med 2023; 160:106998. [PMID: 37182422] [DOI: 10.1016/j.compbiomed.2023.106998]
Abstract
In recent years, cardiovascular diseases (CVDs) have become one of the leading causes of mortality globally. At early stages, CVDs appear with minor symptoms and progressively worsen. Most people experience symptoms such as exhaustion, shortness of breath, ankle swelling, and fluid retention at the onset of CVD. Coronary artery disease (CAD), arrhythmia, cardiomyopathy, congenital heart defect (CHD), mitral regurgitation, and angina are the most common CVDs. Clinical methods such as blood tests, electrocardiography (ECG) signals, and medical imaging are the most effective methods used for the detection of CVDs. Among these diagnostic methods, cardiac magnetic resonance imaging (CMRI) is increasingly used to diagnose and monitor disease, plan treatment, and predict CVDs. Despite the advantages of CMR data, CVD diagnosis remains challenging for physicians because each scan contains many slices and its contrast may be low. To address these issues, deep learning (DL) techniques have been employed in the diagnosis of CVDs using CMR data, and much research is currently being conducted in this field. This review provides an overview of studies on CVD detection using CMR images and DL techniques. The introduction examines CVD types, diagnostic methods, and the most important medical imaging techniques. The following section presents research on detecting CVDs from CMR images and the most significant DL methods. Another section discusses the challenges in diagnosing CVDs from CMRI data. The discussion section then summarizes the results of this review and outlines future work in CVD diagnosis from CMR images with DL techniques. Finally, the most important findings of this study are presented in the conclusion.
Affiliation(s)
- Mahboobeh Jafari
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia
- Afshin Shoeibi
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia; Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Navid Ghassemi
- Internship in BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia
- Parisa Moridian
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Australia
- Abbas Khosravi
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Australia
- Sai Ho Ling
- Faculty of Engineering and IT, University of Technology Sydney (UTS), Australia
- Niloufar Delfan
- Faculty of Computer Engineering, Dept. of Artificial Intelligence Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Shui-Hua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Juan M Gorriz
- Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- Hamid Alinejad-Rokny
- BioMedical Machine Learning Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, NSW, 2052, Australia; UNSW Data Science Hub, The University of New South Wales, Sydney, NSW, 2052, Australia; Health Data Analytics Program, Centre for Applied Artificial Intelligence, Macquarie University, Sydney, 2109, Australia
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Dept. of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
103
Qiu D, Cheng Y, Wang X. Medical image super-resolution reconstruction algorithms based on deep learning: A survey. Comput Methods Programs Biomed 2023; 238:107590. [PMID: 37201252] [DOI: 10.1016/j.cmpb.2023.107590]
Abstract
BACKGROUND AND OBJECTIVE With the high-resolution (HR) requirements of medical images in clinical practice, super-resolution (SR) reconstruction algorithms based on low-resolution (LR) medical images have become a research hotspot. Such methods can significantly improve image resolution without upgrading hardware, so a review of them is of great value. METHODS We focus on the SR reconstruction algorithms specific to medical imaging, organized by subfield, including magnetic resonance (MR) images, computed tomography (CT) images, and ultrasound images. First, we analyze the research progress of SR reconstruction algorithms and summarize and compare the different types of algorithms. Second, we introduce the evaluation metrics used for SR reconstruction. Finally, we discuss the development trends of SR reconstruction technology in the medical field. RESULTS Deep learning-based medical image SR reconstruction can provide richer lesion information, relieve experts' diagnostic burden, and improve diagnostic efficiency and accuracy. CONCLUSION Deep learning-based medical image SR reconstruction helps improve image quality, supports expert diagnosis, and lays a solid foundation for subsequent computer-based analysis and recognition tasks, which is of great significance for improving diagnostic efficiency and realizing intelligent medical care.
Affiliation(s)
- Defu Qiu
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Yuhu Cheng
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Xuesong Wang
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
104
Yousef R, Khan S, Gupta G, Siddiqui T, Albahlal BM, Alajlan SA, Haq MA. U-Net-Based Models towards Optimal MR Brain Image Segmentation. Diagnostics (Basel) 2023; 13:1624. [PMID: 37175015] [PMCID: PMC10178263] [DOI: 10.3390/diagnostics13091624]
Abstract
Brain tumor segmentation from MRIs has always been a challenging task for radiologists; therefore, an automatic and generalized system to address this task is needed. Among the deep learning techniques used in medical imaging, U-Net-based variants are the most widely used models in the literature for segmenting medical images across different modalities. The goal of this paper is therefore to examine the numerous advancements and innovations in the U-Net architecture, as well as recent trends, with the aim of highlighting the ongoing potential of U-Net to improve the performance of brain tumor segmentation. Furthermore, we provide a quantitative comparison of different U-Net architectures to highlight the performance and evolution of this network from an optimization perspective. In addition, we have experimented with four U-Net architectures (3D U-Net, Attention U-Net, R2 Attention U-Net, and modified 3D U-Net) on the BraTS 2020 dataset for brain tumor segmentation to provide a better overview of this architecture's performance in terms of Dice score and 95% Hausdorff distance. Finally, we analyze the limitations and challenges of medical image analysis to provide a critical discussion about the importance of developing new architectures in terms of optimization.
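The two reported metrics, Dice score and 95% Hausdorff distance, can be computed as in the following hedged sketch (NumPy/SciPy on binary masks, assumed non-empty; the HD95 variant shown is the common symmetric 95th-percentile surface-distance definition, not code from the reviewed paper):

```python
# Dice score and 95th-percentile Hausdorff distance on binary segmentation masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred, target, eps=1e-8):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def hausdorff95(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    # Distance of every voxel to the nearest foreground voxel of the other mask.
    dist_to_target = distance_transform_edt(~target)
    dist_to_pred = distance_transform_edt(~pred)
    d = np.hstack([dist_to_target[pred], dist_to_pred[target]])
    return np.percentile(d, 95)

# Toy example with two overlapping squares.
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[15:45, 12:42] = True
print("Dice:", dice(a, b), "HD95:", hausdorff95(a, b))
```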
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI, Computers and Data Sciences, Shoolini University, Solan 173229, India
- Shakir Khan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Department of Computer Science and Engineering, University Centre for Research and Development, Chandigarh University, Mohali 140413, India
- Gaurav Gupta
- Yogananda School of AI, Computers and Data Sciences, Shoolini University, Solan 173229, India
- Tamanna Siddiqui
- Department of Computer Science, Aligarh Muslim University, Aligarh 202001, India
- Bader M Albahlal
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Saad Abdullah Alajlan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Mohd Anul Haq
- Department of Computer Science, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
105
Yang J, Li XX, Liu F, Nie D, Lio P, Qi H, Shen D. Fast Multi-Contrast MRI Acquisition by Optimal Sampling of Information Complementary to Pre-Acquired MRI Contrast. IEEE Trans Med Imaging 2023; 42:1363-1373. [PMID: 37015608] [DOI: 10.1109/tmi.2022.3227262]
Abstract
Recent studies on multi-contrast MRI reconstruction have demonstrated the potential of further accelerating MRI acquisition by exploiting correlation between contrasts. Most state-of-the-art approaches have achieved improvement through the development of network architectures for fixed under-sampling patterns, without considering inter-contrast correlation in the under-sampling pattern design. On the other hand, sampling pattern learning methods have shown better reconstruction performance than those with fixed under-sampling patterns. However, most under-sampling pattern learning algorithms are designed for single-contrast MRI without exploiting complementary information between contrasts. To this end, we propose a framework to optimize the under-sampling pattern of a target MRI contrast so that it complements the acquired fully-sampled reference contrast. Specifically, a novel image synthesis network is introduced to extract the redundant information contained in the reference contrast, which is exploited in the subsequent joint pattern optimization and reconstruction network. We have demonstrated superior performance of our learned under-sampling patterns on both public and in-house datasets, compared to commonly used under-sampling patterns and state-of-the-art methods that jointly optimize the reconstruction network and the under-sampling patterns, at under-sampling factors of up to 8.
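For readers unfamiliar with retrospective under-sampling, the minimal sketch below applies a binary Cartesian k-space mask and forms the zero-filled reconstruction. The mask here is random with a fully sampled centre; the paper's contribution, learning the mask from a reference contrast, is not reproduced.

```python
# Retrospective Cartesian under-sampling and zero-filled reconstruction.
import numpy as np

def undersample(image, accel=4, center_lines=16, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    ny, _ = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image, norm="ortho"))
    mask = rng.random(ny) < 1.0 / accel                       # random phase-encode lines
    mask[ny // 2 - center_lines // 2: ny // 2 + center_lines // 2] = True
    masked = kspace * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(masked), norm="ortho"))
    return masked, mask, zero_filled

img = np.random.rand(128, 128)
kspace_us, mask, zf = undersample(img, accel=4)
print("sampled fraction:", mask.mean())
```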
106
Zhou L, Zhu M, Xiong D, Ouyang L, Ouyang Y, Chen Z, Zhang X. RNLFNet: Residual non-local Fourier network for undersampled MRI reconstruction. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104632]
107
Qiao X, Huang Y, Li W. MEDL-Net: A model-based neural network for MRI reconstruction with enhanced deep learned regularizers. Magn Reson Med 2023; 89:2062-2075. [PMID: 36656129] [DOI: 10.1002/mrm.29575]
Abstract
PURPOSE To improve the MRI reconstruction performance of model-based networks and to alleviate their large demand for GPU memory. METHODS A model-based neural network with enhanced deep learned regularizers (MEDL-Net) was proposed. The MEDL-Net is separated into several submodules, each of which consists of several cascades to mimic the optimization steps in conventional MRI reconstruction algorithms. Information from shallow cascades is densely connected to later ones to enrich their inputs in each submodule, and additional revising blocks (RBs) are stacked at the end of the submodules to add flexibility. Moreover, a composition loss function was designed to explicitly supervise the RBs. RESULTS Network performance was evaluated on a publicly available dataset. The MEDL-Net quantitatively outperforms the state-of-the-art methods on different MR image sequences at different acceleration rates (four-fold and six-fold). Moreover, the reconstructed images showed that detailed textures are better preserved. In addition, fewer cascades are required to achieve the same reconstruction results compared with other model-based networks. CONCLUSION In this study, a more efficient model-based deep network was proposed to reconstruct MR images. The experimental results indicate that the proposed method improves reconstruction performance with fewer cascades, which alleviates the large demand for GPU memory.
Affiliation(s)
- Xiaoyu Qiao
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Yuping Huang
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Weisheng Li
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
108
Lou W, Li H, Li G, Han X, Wan X. Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework. IEEE Trans Med Imaging 2023; 42:947-958. [PMID: 36355729] [DOI: 10.1109/tmi.2022.3221666]
Abstract
Recently, deep neural networks, which require large amounts of annotated samples, have been widely applied to nuclei instance segmentation of H&E stained pathology images. However, it is inefficient and unnecessary to label all pixels for a dataset of nuclei images, which usually contain similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to training. Then we introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our proposed framework trains an existing segmentation model with the above augmented samples. The experimental results show that our proposed method can reach the same level of performance as a fully-supervised baseline by annotating less than 5% of the pixels on some benchmarks.
109
Geng C, Jiang M, Fang X, Li Y, Jin G, Chen A, Liu F. HFIST-Net: High-throughput fast iterative shrinkage thresholding network for accelerating MR image reconstruction. Comput Methods Programs Biomed 2023; 232:107440. [PMID: 36881983] [DOI: 10.1016/j.cmpb.2023.107440]
Abstract
BACKGROUND AND OBJECTIVES Compressed sensing (CS) is often used to accelerate magnetic resonance image (MRI) reconstruction from undersampled k-space data. Deeply unfolded network (DUN)-based methods, designed by unfolding a traditional CS-MRI optimization algorithm into a deep network, can provide significantly faster reconstruction than traditional CS-MRI methods while improving image quality. METHODS In this paper, we propose a High-Throughput Fast Iterative Shrinkage Thresholding Network (HFIST-Net) for reconstructing MR images from sparse measurements by combining traditional model-based CS techniques and data-driven deep learning methods. Specifically, the conventional Fast Iterative Shrinkage Thresholding Algorithm (FISTA) is expanded into a deep network. To break the bottleneck of information transmission, a multi-channel fusion mechanism is proposed to improve the efficiency of information transmission between adjacent network stages. Moreover, a simple yet efficient channel attention block, called the Gaussian context transformer (GCT), is proposed to improve the representational capability of the deep convolutional neural network (CNN); it uses Gaussian functions that satisfy preset relationships to achieve context feature excitation. RESULTS T1- and T2-weighted brain MR images from the fastMRI dataset were used to validate the performance of the proposed HFIST-Net. The qualitative and quantitative results showed that our method is superior to the compared state-of-the-art unfolded deep learning networks. CONCLUSIONS The proposed HFIST-Net can reconstruct more accurate MR image details from highly undersampled k-space data while maintaining fast computational speed.
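For orientation, the sketch below is the plain (non-learned) FISTA iteration for single-coil CS-MRI that networks such as HFIST-Net unfold; image-domain soft-thresholding stands in for the learned fusion and attention modules, and the sampling mask and data are synthetic assumptions.

```python
# Classical FISTA for single-coil CS-MRI with an image-domain L1 prior.
import numpy as np

def fista_csmri(kspace, mask, lam=0.01, n_iter=50):
    A = lambda x: mask * np.fft.fft2(x, norm="ortho")        # forward: masked FFT
    At = lambda y: np.fft.ifft2(mask * y, norm="ortho")      # adjoint
    def soft(z, t):                                          # complex soft-thresholding
        mag = np.abs(z)
        return z / np.maximum(mag, 1e-12) * np.maximum(mag - t, 0)
    x = At(kspace)                                           # zero-filled start
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = At(A(z) - kspace)                             # gradient of 0.5*||Az - y||^2 (Lipschitz constant 1)
        x_new = soft(z - grad, lam)                          # proximal (shrinkage) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)        # momentum extrapolation
        x, t = x_new, t_new
    return np.abs(x)

# Toy usage: retrospectively under-sample a random "image" by roughly a factor of 4.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
mask = (rng.random((1, 128)) < 0.25) | (np.abs(np.arange(128) - 64) < 8)
mask = np.repeat(mask, 128, axis=0).astype(float)
recon = fista_csmri(mask * np.fft.fft2(img, norm="ortho"), mask)
```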
Affiliation(s)
- Chenghu Geng
- Department of Physics, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Mingfeng Jiang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Xian Fang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Yang Li
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Guangri Jin
- Department of Physics, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Aixi Chen
- Department of Physics, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Feng Liu
- The School of Information Technology & Electrical Engineering, The University of Queensland, St. Lucia, Brisbane, Queensland 4072, Australia
110
Chun IY, Huang Z, Lim H, Fessler JA. Momentum-Net: Fast and Convergent Iterative Neural Network for Inverse Problems. IEEE Trans Pattern Anal Mach Intell 2023; 45:4915-4931. [PMID: 32750839] [PMCID: PMC8011286] [DOI: 10.1109/tpami.2020.3012955]
Abstract
Iterative neural networks (INNs) are rapidly gaining attention for solving inverse problems in imaging, image processing, and computer vision. INNs combine regression NNs and an iterative model-based image reconstruction (MBIR) algorithm, often leading to both good generalization capability and better reconstruction quality than existing MBIR optimization models. This paper proposes the first fast and convergent INN architecture, Momentum-Net, by generalizing a block-wise MBIR algorithm that uses momentum and majorizers with regression NNs. For fast MBIR, Momentum-Net uses momentum terms in extrapolation modules and, via majorizers, noniterative MBIR modules at each iteration, where each iteration of Momentum-Net consists of three core modules: image refining, extrapolation, and MBIR. Momentum-Net guarantees convergence to a fixed point for general differentiable (non)convex MBIR functions (or data-fit terms) and convex feasible sets, under two asymptotic conditions. To account for data-fit variations across training and testing samples, we also propose a regularization parameter selection scheme based on the "spectral spread" of majorization matrices. Numerical experiments for light-field photography using a focal stack and sparse-view computed tomography demonstrate that, given identical regression NN architectures, Momentum-Net significantly improves MBIR speed and accuracy over several existing INNs; it significantly improves reconstruction quality compared to a state-of-the-art MBIR method in each application.
111
Zhang Z, Du H, Qiu B. FFVN: An explicit feature fusion-based variational network for accelerated multi-coil MRI reconstruction. Magn Reson Imaging 2023; 97:31-45. [PMID: 36586627] [DOI: 10.1016/j.mri.2022.12.018]
Abstract
Magnetic Resonance Imaging (MRI) is a leading diagnostic imaging modality that provides high soft-tissue contrast without invasiveness or ionizing radiation. Nonetheless, it suffers from long scan times owing to the physics inherent in its data acquisition process, hampering its development and applications. Traditional strategies such as Compressed Sensing (CS) and Parallel Imaging (PI) accelerate MRI via sub-sampling strategies and multiple coils, respectively. With Deep Learning (DL), both strategies have been revitalized to achieve even faster reconstruction in a variety of reconstruction methods, among which the variational network is a previously proposed method that combines the mathematical structure of variational models with DL for fast MRI reconstruction. However, in our study we observe that the information in MR features is not efficiently or explicitly exploited in previous works based on the variational network. We therefore introduce a variational network with explicit feature fusion that combines CS, PI, and DL for accelerated multi-coil MRI reconstruction. By explicitly leveraging the extra information via feature fusion following feature extraction, our proposed method achieves performance comparable to state-of-the-art methods without much computational overhead on a public multi-coil brain dataset under 5-fold and 10-fold acceleration.
Affiliation(s)
- Zhenxi Zhang
- Biomedical Engineering Center, University of Science and Technology of China, Hefei, Anhui 230026, China
- Hongwei Du
- Biomedical Engineering Center, University of Science and Technology of China, Hefei, Anhui 230026, China
- Bensheng Qiu
- Biomedical Engineering Center, University of Science and Technology of China, Hefei, Anhui 230026, China
112
Aggarwal HK, Pramanik A, John M, Jacob M. ENSURE: A General Approach for Unsupervised Training of Deep Image Reconstruction Algorithms. IEEE Trans Med Imaging 2023; 42:1133-1144. [PMID: 36417742] [PMCID: PMC10210546] [DOI: 10.1109/tmi.2022.3224359]
Abstract
Image reconstruction using deep learning algorithms offers improved reconstruction quality and lower reconstruction time than classical compressed sensing and model-based algorithms. Unfortunately, clean and fully sampled ground-truth data to train the deep networks are often unavailable in several applications, restricting the applicability of the above methods. We introduce the ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework, which can be used to train deep image reconstruction algorithms without fully sampled and noise-free images. The proposed framework generalizes the classical SURE and GSURE formulations to the setting where the images are sampled by different measurement operators, chosen randomly from a set. We evaluate the expectation of the GSURE loss functions over the sampling patterns to obtain the ENSURE loss function. We show that this loss is an unbiased estimate of the true mean-square error, making it a better alternative to GSURE, which only offers an unbiased estimate of the projected error. Our experiments show that networks trained with this loss function can offer reconstructions comparable to the supervised setting. While we demonstrate this framework in the context of MR image recovery, the ENSURE framework is generally applicable to arbitrary inverse problems.
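As background, the sketch below shows the classical single-operator SURE loss with a Monte-Carlo divergence estimate, which is the quantity that ENSURE generalizes over an ensemble of random sampling operators. The tiny denoiser, noise level, and probing scheme are illustrative assumptions, not the authors' code.

```python
# Classical SURE loss with a Hutchinson-style Monte-Carlo divergence estimate.
import torch
import torch.nn as nn

def sure_loss(f, y, sigma, eps=1e-3):
    """SURE(y) = ||f(y) - y||^2 - n*sigma^2 + 2*sigma^2 * div_y f(y)."""
    n = y.numel()
    fy = f(y)
    b = torch.randn_like(y)                          # random probe vector
    div = (b * (f(y + eps * b) - fy)).sum() / eps    # Monte-Carlo divergence estimate
    return ((fy - y) ** 2).sum() - n * sigma ** 2 + 2 * sigma ** 2 * div

# Toy usage: a training-step style call on a small denoiser with noisy data only.
denoiser = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))
y = torch.randn(1, 1, 32, 32)                        # stands in for a noisy observation
loss = sure_loss(denoiser, y, sigma=0.1)
loss.backward()                                      # gradients w.r.t. denoiser weights
```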
113
Hossain MB, Kwon KC, Shinde RK, Imtiaz SM, Kim N. A Hybrid Residual Attention Convolutional Neural Network for Compressed Sensing Magnetic Resonance Image Reconstruction. Diagnostics (Basel) 2023; 13:1306. [PMID: 37046524] [PMCID: PMC10093476] [DOI: 10.3390/diagnostics13071306]
Abstract
We propose a dual-domain deep learning technique for accelerating compressed sensing magnetic resonance image reconstruction. An advanced convolutional neural network with residual connectivity and an attention mechanism was developed for the frequency and image domains. First, the sensor-domain subnetwork estimates the unmeasured frequencies of k-space to reduce aliasing artifacts. Second, the image-domain subnetwork performs a pixel-wise operation to remove blur and noise artifacts. The skip connections efficiently concatenate the feature maps to alleviate the vanishing gradient problem. An attention gate in each decoder layer enhances network generalizability and speeds up image reconstruction by suppressing irrelevant activations. The proposed technique reconstructs, from sparsely sampled k-space data, real-valued clinical images that closely match the reference images. The performance of this novel approach was compared with state-of-the-art direct mapping, single-domain, and multi-domain methods. With acceleration factors (AFs) of 4 and 5, our method improved the mean peak signal-to-noise ratio (PSNR) by 8.67 and 9.23 dB, respectively, compared with the single-domain U-Net model; similarly, our approach increased the average PSNR by 3.72 and 4.61 dB, respectively, compared with the multi-domain W-Net. Remarkably, using an AF of 6, it enhanced the PSNR by 9.87 ± 1.55 and 6.60 ± 0.38 dB compared with U-Net and W-Net, respectively.
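A skeletal dual-domain pipeline in the spirit of the description above is sketched below: a k-space subnetwork fills unmeasured frequencies (with a data-consistency step that keeps the measured samples), and an image-domain subnetwork removes residual artifacts after the inverse FFT. The layer sizes and the two-channel real/imaginary representation are illustrative assumptions, not the paper's architecture.

```python
# Minimal dual-domain (k-space + image) reconstruction skeleton in PyTorch.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, ch=2):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, ch, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)                       # residual connection

class DualDomainNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.kspace_net = SmallCNN()
        self.image_net = SmallCNN()
    def forward(self, kspace, mask):
        # kspace: (B, 2, H, W) real/imag channels; mask: (B, 1, H, W) binary
        k = self.kspace_net(kspace)
        k = mask * kspace + (1 - mask) * k           # keep measured k-space samples
        k_complex = torch.complex(k[:, 0], k[:, 1])
        img = torch.fft.ifft2(k_complex, norm="ortho")
        img_ri = torch.stack([img.real, img.imag], dim=1)
        return self.image_net(img_ri)

net = DualDomainNet()
out = net(torch.randn(1, 2, 64, 64), torch.ones(1, 1, 64, 64))
print(out.shape)
```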
114
Ouchi S, Ito S. Efficient complex-valued image reconstruction for compressed sensing MRI using single real-valued convolutional neural network. Magn Reson Imaging 2023; 101:13-24. [PMID: 36965835] [DOI: 10.1016/j.mri.2023.03.011]
Affiliation(s)
- Shohei Ouchi
- Department of Information and Control Systems Science, Graduate School of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan; Japan Society for the Promotion of Science, Japan.
- Satoshi Ito
- Department of Information and Control Systems Science, Graduate School of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan
115
Jiang Z, Polf JC, Barajas CA, Gobbert MK, Ren L. A feasibility study of enhanced prompt gamma imaging for range verification in proton therapy using deep learning. Phys Med Biol 2023; 68. [PMID: 36848674] [PMCID: PMC10173868] [DOI: 10.1088/1361-6560/acbf9a]
Abstract
BACKGROUND AND OBJECTIVE Range uncertainty is a major concern affecting the delivery precision in proton therapy. Compton camera (CC)-based prompt-gamma (PG) imaging is a promising technique to provide 3D in vivo range verification. However, conventional back-projected PG images suffer from severe distortions due to the limited view of the CC, significantly limiting its clinical utility. Deep learning has demonstrated effectiveness in enhancing medical images from limited-view measurements. But unlike other medical images with abundant anatomical structures, the PGs emitted along the path of a proton pencil beam take up an extremely low portion of the 3D image space, presenting both an attention and an imbalance challenge for deep learning. To solve these issues, we proposed a two-tier deep learning-based method with a novel weighted axis-projection loss to generate precise 3D PG images for accurate proton range verification. MATERIALS AND METHODS The proposed method consists of two models: first, a localization model is trained to define a region-of-interest (ROI) in the distorted back-projected PG image that contains the proton pencil beam; second, an enhancement model is trained to restore the true PG emissions with additional attention on the ROI. In this study, we simulated 54 proton pencil beams (energy range: 75-125 MeV, dose levels: 1 × 10^9 protons/beam and 3 × 10^8 protons/beam) delivered at clinical dose rates (20 kMU/min and 180 kMU/min) in a tissue-equivalent phantom using Monte Carlo (MC). PG detection with a CC was simulated using the MC-Plus-Detector-Effects model. Images were reconstructed using the kernel-weighted-back-projection algorithm and were then enhanced by the proposed method. RESULTS The method effectively restored the 3D shape of the PG images, with the proton pencil beam range clearly visible in all testing cases. Range errors were within 2 pixels (4 mm) in all directions in most cases at the higher dose level. The proposed method is fully automatic, and the enhancement takes only ~0.26 s. SIGNIFICANCE Overall, this preliminary study demonstrated the feasibility of the proposed method to generate accurate 3D PG images using a deep learning framework, providing a powerful tool for high-precision in vivo range verification of proton therapy.
Affiliation(s)
- Zhuoran Jiang
- Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, 27710, USA
- Jerimy C. Polf
- Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD, 21201, USA
- Carlos A. Barajas
- Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD, 21250, USA
- Matthias K. Gobbert
- Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD, 21250, USA
- Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, 27710, USA
- Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD, 21201, USA
116
Wu Y, Jiang X, Chen Y, Liu T, Ni Z, Yi H, Lu R. Rapid estimation approach for glycosylated serum protein of human serum based on the combination of deep learning and TD-NMR technology. Anal Sci 2023; 39:957-968. [PMID: 36897540] [DOI: 10.1007/s44211-023-00303-x]
Abstract
Rapid and precise estimation of glycosylated serum protein (GSP) in human serum is of great importance for the diagnosis and treatment of diabetes mellitus. In this study, we propose a novel method for estimating GSP levels based on the combination of deep learning and the time-domain nuclear magnetic resonance (TD-NMR) transverse relaxation signal of human serum. Specifically, a principal component analysis (PCA)-enhanced one-dimensional convolutional neural network (1D-CNN) is proposed to analyze the TD-NMR transverse relaxation signal of human serum. The proposed algorithm is validated by accurate estimation of GSP levels for the collected serum samples. Furthermore, the proposed algorithm is compared with a 1D-CNN without PCA, a long short-term memory network (LSTM), and several conventional machine learning algorithms. The results indicate that the PCA-enhanced 1D-CNN (PC-1D-CNN) has the minimum error. This study shows that the proposed method is feasible and superior for estimating GSP levels in human serum from TD-NMR transverse relaxation signals.
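The PCA-then-1D-CNN pipeline can be illustrated with the hedged sketch below: PCA compresses synthetic relaxation decays, and a small Conv1d network regresses a fake GSP value. Signal length, component count, layer sizes, and the synthetic data are all assumptions; this is not the paper's PC-1D-CNN.

```python
# Sketch: PCA compression of relaxation curves followed by a 1D-CNN regressor.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
decays = (np.exp(-t[None, :] / rng.uniform(0.05, 0.5, (200, 1)))
          + 0.01 * rng.standard_normal((200, 512)))           # synthetic relaxation curves
gsp = rng.uniform(1.0, 4.0, 200).astype(np.float32)           # fake GSP targets

x = PCA(n_components=32).fit_transform(decays).astype(np.float32)   # PCA compression

model = nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(8), nn.Flatten(),
                      nn.Linear(16 * 8, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.from_numpy(x).unsqueeze(1)                         # (N, 1, 32) for Conv1d
yb = torch.from_numpy(gsp).unsqueeze(1)
for _ in range(200):                                          # brief training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xb), yb)
    loss.backward()
    opt.step()
print("training MSE:", float(loss))
```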
Affiliation(s)
- Yuchen Wu
- Jiangsu Key Laboratory for Design and Manufacture of Micro Nano Biomedical Instruments, Southeast University, Nanjing, 211189, China
- Xiaowen Jiang
- Jiangsu Key Laboratory for Design and Manufacture of Micro Nano Biomedical Instruments, Southeast University, Nanjing, 211189, China
- Yi Chen
- Jiangsu Key Laboratory for Design and Manufacture of Micro Nano Biomedical Instruments, Southeast University, Nanjing, 211189, China
- Tingyu Liu
- School of Mechanical Engineering, Southeast University, Nanjing, 211189, China
- Zhonghua Ni
- Jiangsu Key Laboratory for Design and Manufacture of Micro Nano Biomedical Instruments, Southeast University, Nanjing, 211189, China
- Hong Yi
- Jiangsu Key Laboratory for Design and Manufacture of Micro Nano Biomedical Instruments, Southeast University, Nanjing, 211189, China
- Rongsheng Lu
- Jiangsu Key Laboratory for Design and Manufacture of Micro Nano Biomedical Instruments, Southeast University, Nanjing, 211189, China
117
Oscanoa JA, Middione MJ, Alkan C, Yurt M, Loecher M, Vasanawala SS, Ennis DB. Deep Learning-Based Reconstruction for Cardiac MRI: A Review. Bioengineering (Basel) 2023; 10:334. [PMID: 36978725] [PMCID: PMC10044915] [DOI: 10.3390/bioengineering10030334]
Abstract
Cardiac magnetic resonance (CMR) is an essential clinical tool for the assessment of cardiovascular disease. Deep learning (DL) has recently revolutionized the field through image reconstruction techniques that allow unprecedented data undersampling rates. These fast acquisitions have the potential to considerably impact the diagnosis and treatment of cardiovascular disease. Herein, we provide a comprehensive review of DL-based reconstruction methods for CMR. We place special emphasis on state-of-the-art unrolled networks, which are heavily based on a conventional image reconstruction framework. We review the main DL-based methods and connect them to the relevant conventional reconstruction theory. Next, we review several methods developed to tackle specific challenges that arise from the characteristics of CMR data. Then, we focus on DL-based methods developed for specific CMR applications, including flow imaging, late gadolinium enhancement, and quantitative tissue characterization. Finally, we discuss the pitfalls and future outlook of DL-based reconstructions in CMR, focusing on the robustness, interpretability, clinical deployment, and potential for new methods.
Affiliation(s)
- Julio A. Oscanoa
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Cagan Alkan
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Mahmut Yurt
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Michael Loecher
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Daniel B. Ennis
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
118
Chen Z, Xiang Y, Zhang P, Hu J. Robust compressed sensing MRI based on combined nonconvex regularization. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110466]
119
Wang Y, Pang Y, Tong C. DSMENet: Detail and Structure Mutually Enhancing Network for under-sampled MRI reconstruction. Comput Biol Med 2023; 154:106204. [PMID: 36716684] [DOI: 10.1016/j.compbiomed.2022.106204]
Abstract
Reconstructing MR images from zero-filled (ZF) inputs obtained from partial k-space using convolutional neural networks (CNNs) is an important way to accelerate MRI. However, because the different components of the ZF input receive no differentiated attention, it is challenging to learn the mapping from ZF inputs to targets effectively. To ameliorate this issue, we propose a Detail and Structure Mutually Enhancing Network (DSMENet), which benefits from the complementarity of the Structure Reconstruction UNet (SRUN) and the Detail Feature Refinement Module (DFRM). The SRUN learns structure-dominated information at multiple scales, and the DFRM enriches detail-dominated information from coarse to fine. Bidirectional alternate connections then exchange information between them. Moreover, the Detail Representation Construction Module (DRCM) extracts a valuable initial detail representation for the DFRM, and the Detail Guided Fusion Module (DGFM) facilitates the deep fusion of this complementary information. With their help, the various components of the ZF input receive discriminative attention and are mutually enhanced. In addition, performance can be further improved by Deep Enhanced Restoration (DER), a strategy based on recursion and constraint. Extensive experiments on the fastMRI and CC-359 datasets demonstrate that DSMENet is robust across body parts, under-sampling rates, and masks. Furthermore, DSMENet achieves promising qualitative and quantitative results, notably a competitive NMSE of 0.0268, PSNR of 33.7, and SSIM of 0.7808 on the fastMRI 4× single-coil knee leaderboard.
Affiliation(s)
- Yueze Wang
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Yanwei Pang
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Chuan Tong
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
120
Zhang Q, Liang Y, Zhang Y, Tao Z, Li R, Bi H. A comparative study of attention mechanism based deep learning methods for bladder tumor segmentation. Int J Med Inform 2023; 171:104984. [PMID: 36634475] [DOI: 10.1016/j.ijmedinf.2023.104984]
Abstract
BACKGROUND Artificial intelligence-aided tumor segmentation has been applied in various medical scenarios and has shown effectiveness in helping physicians observe potentially malignant tissues. However, little research has been conducted on the cystoscopic image segmentation problem. METHODS This paper provides a comprehensive comparison of various attention modules for improving bladder tumor segmentation performance, utilizing cystoscopic images collected at Peking University Third Hospital between 2017 and 2022. Furthermore, this paper presents an attention mechanism based cystoscopic image segmentation (ACS) model, which features the following: (1) A mixed attention module, including both channel and spatial attention, is integrated into the encoder-decoder path, which helps exploit the global information of the tumor area more effectively. (2) A guidance and fusion attention module is introduced in the skip connections, facilitating the integration of high-level semantic features with low-level fine-grained features and the discarding of irrelevant features. (3) An inception attention module is added to enhance feature expression at the pixel level, so as to better discriminate multi-scale targets. RESULTS The proposed ACS model showed clearly better tumor segmentation performance than the compared models, achieving a Dice of 82.7% and an MIoU of 69%. CONCLUSIONS The proposed ACS model achieved significantly better diagnostic performance than the previous bladder tumor segmentation method based on U-Net. Our ACS model is expected to be a useful support tool for tumor segmentation under cystoscopy.
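For readers unfamiliar with the building blocks compared here, the sketch below shows a generic mixed attention block (channel attention followed by spatial attention, in the spirit of CBAM) in PyTorch; it illustrates the mechanism only and does not reproduce the ACS model's exact modules.

```python
# Generic channel + spatial (mixed) attention block.
import torch
import torch.nn as nn

class MixedAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # Channel attention from globally pooled features.
        w = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * w
        # Spatial attention from channel-wise max and mean maps.
        s = torch.cat([x.max(dim=1, keepdim=True).values,
                       x.mean(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

features = torch.randn(2, 32, 64, 64)
print(MixedAttention(32)(features).shape)            # unchanged shape: (2, 32, 64, 64)
```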
Affiliation(s)
- Qi Zhang
- School of Information Technology & Management, University of International Business & Economics, Beijing 100029, China
- Yinglu Liang
- School of Information Technology & Management, University of International Business & Economics, Beijing 100029, China
- Yi Zhang
- School of Information Technology & Management, University of International Business & Economics, Beijing 100029, China
- Zihao Tao
- Department of Urology, Peking University Third Hospital, Beijing 100191, China
- Rui Li
- School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
- Hai Bi
- Department of Urology, Peking University Third Hospital, Beijing 100191, China
121
Lyu J, Li Y, Yan F, Chen W, Wang C, Li R. Multi-channel GAN-based calibration-free diffusion-weighted liver imaging with simultaneous coil sensitivity estimation and reconstruction. Front Oncol 2023; 13:1095637. [PMID: 36845688] [PMCID: PMC9945270] [DOI: 10.3389/fonc.2023.1095637]
Abstract
INTRODUCTION Diffusion-weighted imaging (DWI) with parallel reconstruction may suffer from a mismatch between the coil calibration scan and the imaging scan due to motion, especially in abdominal imaging. METHODS This study aimed to construct an iterative multichannel generative adversarial network (iMCGAN)-based framework for simultaneous sensitivity map estimation and calibration-free image reconstruction. The study included 106 healthy volunteers and 10 patients with tumors. RESULTS The performance of iMCGAN was evaluated in healthy participants and patients and compared with SAKE, ALOHA-net, and DeepcomplexMRI reconstructions. The peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), root mean squared error (RMSE), and histograms of apparent diffusion coefficient (ADC) maps were calculated to assess image quality. The proposed iMCGAN outperformed the other methods in terms of PSNR (iMCGAN: 41.82 ± 2.14; SAKE: 17.38 ± 1.78; ALOHA-net: 20.43 ± 2.11; DeepcomplexMRI: 39.78 ± 2.78) for b = 800 DWI with an acceleration factor of 4. In addition, the ghosting artifacts seen in SENSE reconstructions due to the mismatch between the DW image and the sensitivity maps were avoided with the iMCGAN model. DISCUSSION The proposed model iteratively refines the sensitivity maps and the reconstructed images without additional acquisitions. Thus, the quality of the reconstructed image is improved and aliasing artifacts are alleviated when motion occurs during the imaging procedure.
Affiliation(s)
- Jun Lyu
- School of Computer and Control Engineering, Yantai University, Yantai, Shandong, China
- Yan Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fuhua Yan
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Weibo Chen
- Philips Healthcare (China), Shanghai, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
- Ruokun Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
122
Hu W, Wang T, Chu F. A Wasserstein generative digital twin model in health monitoring of rotating machines. Comput Ind 2023. [DOI: 10.1016/j.compind.2022.103807]
123
A densely interconnected network for deep learning accelerated MRI. MAGMA 2023; 36:65-77. [PMID: 36103029] [PMCID: PMC9992260] [DOI: 10.1007/s10334-022-01041-3]
Abstract
OBJECTIVE To improve accelerated MRI reconstruction through a densely connected cascading deep learning reconstruction framework. MATERIALS AND METHODS A cascading deep learning reconstruction framework (reference model) was modified by applying three architectural modifications: input-level dense connections between cascade inputs and outputs, an improved deep learning sub-network, and long-range skip connections between subsequent deep learning networks. An ablation study was performed in which five model configurations were trained end-to-end on the NYU fastMRI neuro dataset at four- and eightfold acceleration. The trained models were evaluated by comparing their respective structural similarity index measure (SSIM), normalized mean square error (NMSE), and peak signal-to-noise ratio (PSNR). RESULTS The proposed densely interconnected residual cascading network (DIRCN), utilizing all three suggested modifications, achieved an SSIM improvement of 8% and 11%, an NMSE improvement of 14% and 23%, and a PSNR improvement of 2% and 3% for four- and eightfold acceleration, respectively. In the ablation study, the individual architectural modifications all contributed to this improvement for both acceleration factors, improving the SSIM, NMSE, and PSNR by approximately 2-4%, 4-9%, and 0.5-1%, respectively. CONCLUSION The proposed architectural modifications allow for simple adjustments to an already existing cascading framework to further improve the resulting reconstructions.
124
Zhao X, Yang T, Li B, Zhang X. SwinGAN: A dual-domain Swin Transformer-based generative adversarial network for MRI reconstruction. Comput Biol Med 2023; 153:106513. [PMID: 36603439] [DOI: 10.1016/j.compbiomed.2022.106513]
Abstract
Magnetic resonance imaging (MRI) is one of the most important modalities for clinical diagnosis. However, its main disadvantages are the long scanning time and the motion artifacts caused by patient movement during prolonged imaging, which can also lead to patient anxiety and discomfort; accelerated imaging is therefore indispensable for MRI. Convolutional neural network (CNN)-based methods have become the de facto standard for medical image reconstruction, and generative adversarial networks (GANs) have also been widely used. Nevertheless, due to the limited ability of CNNs to capture long-distance information, they may produce structural defects in the reconstructed images, such as blurred contours. In this paper, we propose a novel Swin Transformer-based dual-domain generative adversarial network (SwinGAN) for accelerated MRI reconstruction. The SwinGAN consists of two generators: a frequency-domain generator and an image-domain generator. Both generators use a Swin Transformer backbone for effectively capturing long-distance dependencies. A contextual image relative position encoder (ciRPE) is designed to enhance the ability to capture local information. We extensively evaluate the method on the IXI brain dataset, the MICCAI 2013 dataset, and the MRNet knee dataset. Compared with KIGAN, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) are improved by 6.1% and 1.49%, to 37.64 dB and 0.98 on the IXI dataset, respectively, which demonstrates that our model can effectively utilize the local and global information of the image. The model shows promising performance and robustness under different undersampling masks, acceleration rates, and datasets, although its hardware requirements grow as the number of network parameters increases. The code is available at: https://github.com/learnerzx/SwinGAN.
Affiliation(s)
- Xiang Zhao
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
- Tiejun Yang
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, China; Key Laboratory of Grain Information Processing and Control (HAUT), Ministry of Education, Zhengzhou, China; Henan Key Laboratory of Grain Photoelectric Detection and Control (HAUT), Zhengzhou, Henan, China
- Bingjie Li
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
- Xin Zhang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
125
Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414] [DOI: 10.1016/j.media.2022.102704]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
126
|
Blumenthal M, Luo G, Schilling M, Holme HCM, Uecker M. Deep, deep learning with BART. Magn Reson Med 2023; 89:678-693. [PMID: 36254526 PMCID: PMC10898647 DOI: 10.1002/mrm.29485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 08/26/2022] [Accepted: 09/20/2022] [Indexed: 12/13/2022]
Abstract
PURPOSE To develop a deep-learning-based image reconstruction framework for reproducible research in MRI. METHODS The BART toolbox offers a rich set of implementations of calibration and reconstruction algorithms for parallel imaging and compressed sensing. In this work, BART was extended by a nonlinear operator framework that provides automatic differentiation to allow computation of gradients. Existing MRI-specific operators of BART, such as the nonuniform fast Fourier transform, are directly integrated into this framework and are complemented by common building blocks used in neural networks. To evaluate the use of the framework for advanced deep-learning-based reconstruction, two state-of-the-art unrolled reconstruction networks, namely the Variational Network and MoDL, were implemented. RESULTS State-of-the-art deep image-reconstruction networks can be constructed and trained using BART's gradient-based optimization algorithms. The BART implementation achieves a similar performance in terms of training time and reconstruction quality compared to the original implementations based on TensorFlow. CONCLUSION By integrating nonlinear operators and neural networks into BART, we provide a general framework for deep-learning-based reconstruction in MRI.
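A minimal PyTorch sketch of the core capability the framework adds, automatic differentiation through an MRI forward operator so that the gradient of a data-fidelity term can be used inside unrolled reconstructions; this is not BART code, and the single-coil Cartesian operator and variable names are assumptions.

```python
import torch

def forward_op(x, mask):
    """Single-coil Cartesian MRI forward model: 2D FFT followed by undersampling."""
    return torch.fft.fft2(x, norm="ortho") * mask

torch.manual_seed(0)
mask = (torch.rand(64, 64) < 0.3).float()                        # sampling pattern
x_true = torch.randn(64, 64, dtype=torch.complex64)
y = forward_op(x_true, mask)                                     # "acquired" k-space data

x = torch.zeros(64, 64, dtype=torch.complex64, requires_grad=True)
loss = 0.5 * torch.sum(torch.abs(forward_op(x, mask) - y) ** 2)  # data-fidelity term
loss.backward()                                                  # autodiff through the FFT
grad = x.grad  # equals A^H(Ax - y) under PyTorch's complex-gradient convention
```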
Collapse
Affiliation(s)
- Moritz Blumenthal
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
| | - Guanxiong Luo
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
| | - Martin Schilling
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
| | | | - Martin Uecker
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
- German Centre for Cardiovascular Research (DZHK), Partner Site Göttingen, Göttingen, Germany
- Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), University of Göttingen, Germany
- BioTechMed-Graz, Graz, Austria
| |
Collapse
|
127
|
Zhang Q, Liang C, Tang M, Yang X, Lin M, Han Y, Liu X, Yang Q. Alternative deep learning method for fast spatial-frequency shift imaging microscopy. OPTICS EXPRESS 2023; 31:3719-3730. [PMID: 36785358 DOI: 10.1364/oe.482062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 12/28/2022] [Indexed: 06/18/2023]
Abstract
Spatial-frequency shift (SFS) imaging microscopy can break the diffraction limit for fluorescently labeled and label-free samples by transferring high spatial-frequency information into the passband of the microscope. However, the resolution improvement comes at the cost of decreased temporal resolution, since dozens of raw SFS images are needed to expand the frequency spectrum. Although some deep learning methods have been proposed to solve this problem, no neural network compatible with both labeled and label-free SFS imaging has been proposed. Here, we propose the joint spatial-Fourier channel attention network (JSFCAN), which learns the general connection between the spatial domain and the Fourier frequency domain from complex samples. We demonstrate that JSFCAN can achieve a resolution similar to the traditional algorithm using nearly a quarter of the raw images and increase the reconstruction speed by two orders of magnitude. Subsequently, we prove that JSFCAN can be applied to both fluorescently labeled and label-free samples without architecture changes. We also demonstrate that, compared with the typical spatial-domain optimization network U-net, JSFCAN is more robust when dealing with deep-SFS images and noisy images. The proposed JSFCAN provides an alternative route for fast SFS imaging reconstruction, enabling future applications in real-time living cell research.
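As a rough sketch of what a joint spatial-Fourier channel attention block can look like, the following PyTorch module computes squeeze-and-excitation style channel weights from the magnitude of each feature map's 2D spectrum; the module structure and names are assumptions, not the authors' JSFCAN code.

```python
import torch
import torch.nn as nn

class FourierChannelAttention(nn.Module):
    """Channel attention driven by the Fourier magnitude of each feature map."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W) real-valued features
        spec = torch.fft.fft2(x, norm="ortho")  # per-channel 2D spectrum
        energy = spec.abs().mean(dim=(-2, -1))  # (B, C) global frequency energy
        w = self.mlp(energy).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # reweight channels

feats = torch.randn(2, 16, 64, 64)
out = FourierChannelAttention(16)(feats)        # same shape, channel-reweighted
```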
Collapse
|
128
|
Moya-Sáez E, de Luis-García R, Alberola-López C. Toward deep learning replacement of gadolinium in neuro-oncology: A review of contrast-enhanced synthetic MRI. FRONTIERS IN NEUROIMAGING 2023; 2:1055463. [PMID: 37554645 PMCID: PMC10406200 DOI: 10.3389/fnimg.2023.1055463] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 01/04/2023] [Indexed: 08/10/2023]
Abstract
Gadolinium-based contrast agents (GBCAs) have become a crucial part of MRI acquisitions in neuro-oncology for the detection, characterization and monitoring of brain tumors. However, contrast-enhanced (CE) acquisitions not only raise safety concerns, but also lead to patient discomfort, the need for more skilled staff, and increased costs. Recently, several deep learning works have aimed to reduce, or even eliminate, the need for GBCAs. This study reviews the published works related to the synthesis of CE images from low-dose and/or native (non-CE) counterparts. The data, type of neural network, and number of input modalities for each method are summarized, as well as the evaluation methods. Based on this analysis, we discuss the main issues that these methods need to overcome in order to become suitable for clinical use. We also hypothesize some future trends that research on this topic may follow.
Collapse
Affiliation(s)
- Elisa Moya-Sáez
- Laboratorio de Procesado de Imagen, ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain
| | | | | |
Collapse
|
129
|
Basty N, Thanaj M, Cule M, Sorokin EP, Liu Y, Thomas EL, Bell JD, Whitcher B. Artifact-free fat-water separation in Dixon MRI using deep learning. JOURNAL OF BIG DATA 2023; 10:4. [PMID: 36686622 PMCID: PMC9835035 DOI: 10.1186/s40537-022-00677-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 12/25/2022] [Indexed: 06/17/2023]
Abstract
Chemical-shift encoded MRI (CSE-MRI) is a widely used technique for the study of body composition and metabolic disorders, where derived fat and water signals enable the quantification of adipose tissue and muscle. The UK Biobank is acquiring whole-body Dixon MRI (a specific implementation of CSE-MRI) for over 100,000 participants. Current processing methods associated with large whole-body volumes are time intensive and prone to artifacts during fat-water separation performed by the scanner, making quantitative analysis challenging. The most common artifacts are fat-water swaps, where the labels are inverted at the voxel level. It is common for researchers to discard swapped data (generally around 10%), which is wasteful and may lead to unintended biases. Given the large number of whole-body Dixon MRI acquisitions in the UK Biobank, thousands of swaps are expected to be present in the fat and water volumes from image reconstruction performed on the scanner. If they go undetected, errors will propagate into processes such as organ segmentation, and dilute the results in population-based analyses. There is a clear need for a robust method to accurately separate fat and water volumes in big data collections like the UK Biobank. We formulate fat-water separation as a style transfer problem, where swap-free fat and water volumes are predicted from the acquired Dixon MRI data using a conditional generative adversarial network, and introduce a new loss function for the generator model. Our method is able to predict highly accurate fat and water volumes free from artifacts in the UK Biobank. We show that our model separates fat and water volumes using either single input (in-phase only) or dual input (in-phase and opposed-phase) data, with the latter producing superior results. Our proposed method enables faster and more accurate downstream analysis of body composition from Dixon MRI in population studies by eliminating the need for visual inspection or discarding data due to fat-water swaps. Supplementary Information The online version contains supplementary material available at 10.1186/s40537-022-00677-1.
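For context, the idealized two-point Dixon relation behind the in-phase/opposed-phase inputs is shown below as a small NumPy sketch; it ignores the B0 field-map estimation step, which is where the sign ambiguity, and hence the fat-water swaps, actually arises.

```python
import numpy as np

def two_point_dixon(in_phase, opposed_phase):
    """Idealised two-point Dixon separation (no field-map correction):
    IP = W + F and OP = W - F, so W = (IP + OP) / 2 and F = (IP - OP) / 2."""
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat

# toy example with known water/fat content
water_true = np.full((4, 4), 80.0)
fat_true = np.full((4, 4), 20.0)
ip, op = water_true + fat_true, water_true - fat_true
water, fat = two_point_dixon(ip, op)    # recovers 80 / 20 exactly in this ideal case
```

In practice, phase errors from B0 inhomogeneity make the sign of the opposed-phase term ambiguous at some voxels, which is exactly what produces the voxel-level label inversions that the conditional GAN described above is trained to avoid.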
Collapse
Affiliation(s)
- Nicolas Basty
- Research Centre for Optimal Health, University of Westminster, London, UK
| | - Marjola Thanaj
- Research Centre for Optimal Health, University of Westminster, London, UK
| | | | | | - Yi Liu
- Calico Life Sciences LLC, South San Francisco, USA
| | - E. Louise Thomas
- Research Centre for Optimal Health, University of Westminster, London, UK
| | - Jimmy D. Bell
- Research Centre for Optimal Health, University of Westminster, London, UK
| | - Brandon Whitcher
- Research Centre for Optimal Health, University of Westminster, London, UK
| |
Collapse
|
130
|
Hammernik K, Küstner T, Yaman B, Huang Z, Rueckert D, Knoll F, Akçakaya M. Physics-Driven Deep Learning for Computational Magnetic Resonance Imaging: Combining physics and machine learning for improved medical imaging. IEEE SIGNAL PROCESSING MAGAZINE 2023; 40:98-114. [PMID: 37304755 PMCID: PMC10249732 DOI: 10.1109/msp.2022.3215288] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Physics-driven deep learning methods have emerged as a powerful tool for computational magnetic resonance imaging (MRI) problems, pushing reconstruction performance to new limits. This article provides an overview of the recent developments in incorporating physics information into learning-based MRI reconstruction. We consider inverse problems with both linear and non-linear forward models for computational MRI, and review the classical approaches for solving these. We then focus on physics-driven deep learning approaches, covering physics-driven loss functions, plug-and-play methods, generative models, and unrolled networks. We highlight domain-specific challenges such as real- and complex-valued building blocks of neural networks, and translational applications in MRI with linear and non-linear forward models. Finally, we discuss common issues and open challenges, and draw connections to the importance of physics-driven learning when combined with other downstream tasks in the medical imaging pipeline.
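A minimal NumPy sketch of the linear multi-coil (SENSE-type) forward model E x = M F (S x) and its adjoint, the building blocks that physics-driven networks embed in their unrolled iterations; shapes, sampling density, and names are illustrative.

```python
import numpy as np

def encode(image, coil_sens, mask):
    """E x = M F (S x): coil-weight the image, Fourier transform, undersample."""
    coil_images = coil_sens * image[None, :, :]                   # (C, H, W)
    kspace = np.fft.fft2(coil_images, norm="ortho")               # per-coil k-space
    return kspace * mask[None, :, :]

def encode_adjoint(kspace, coil_sens, mask):
    """E^H y: re-apply the mask, inverse transform, combine with conjugate coil maps."""
    coil_images = np.fft.ifft2(kspace * mask[None, :, :], norm="ortho")
    return np.sum(np.conj(coil_sens) * coil_images, axis=0)

rng = np.random.default_rng(1)
H, W, C = 64, 64, 4
image = rng.standard_normal((H, W)) + 1j * rng.standard_normal((H, W))
coil_sens = rng.standard_normal((C, H, W)) + 1j * rng.standard_normal((C, H, W))
mask = rng.random((H, W)) < 0.25                                  # ~4x undersampling
y = encode(image, coil_sens, mask)
x_zero_filled = encode_adjoint(y, coil_sens, mask)  # typical input to an unrolled network
```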
Collapse
Affiliation(s)
- Kerstin Hammernik
- Institute of AI and Informatics in Medicine, Technical University of Munich and the Department of Computing, Imperial College London
| | - Thomas Küstner
- Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen
| | - Burhaneddin Yaman
- Department of Electrical and Computer Engineering, and Center for Magnetic Resonance Research, University of Minnesota, USA
| | - Zhengnan Huang
- Center for Biomedical Imaging, Department of Radiology, New York University
| | - Daniel Rueckert
- Institute of AI and Informatics in Medicine, Technical University of Munich and the Department of Computing, Imperial College London
| | - Florian Knoll
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen
| | - Mehmet Akçakaya
- Department of Electrical and Computer Engineering, and Center for Magnetic Resonance Research, University of Minnesota, USA
| |
Collapse
|
131
|
Blons M, Deffieux T, Osmanski BF, Tanter M, Berthon B. PerceptFlow: Real-Time Ultrafast Doppler Image Enhancement Using Deep Convolutional Neural Network and Perceptual Loss. ULTRASOUND IN MEDICINE & BIOLOGY 2023; 49:225-236. [PMID: 36244920 DOI: 10.1016/j.ultrasmedbio.2022.08.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 08/24/2022] [Accepted: 08/28/2022] [Indexed: 06/16/2023]
Abstract
Ultrafast ultrasound is an emerging imaging modality derived from standard medical ultrasound. It allows for a high spatial resolution of 100 μm and a temporal resolution in the millisecond range with techniques such as ultrafast Doppler imaging. Ultrafast Doppler imaging has become an invaluable tool for neuroscience, especially for visualizing functional vascular structures and navigating the brain in real time. Yet, the quality of a Doppler image strongly depends on experimental conditions and is easily subject to artifacts and deterioration, especially with transcranial imaging, which often comes at the cost of higher noise and lower sensitivity to small blood vessels. A common solution for better visualizing brain vasculature is either to accumulate more information by integrating the image over several seconds or to use standard filter-based enhancement techniques, which often over-smooth the image, failing both to preserve sharp details and to improve our perception of the vasculature. In this study we propose combining the standard Doppler accumulation process with a real-time enhancement strategy based on deep-learning techniques using perceptual loss (PerceptFlow). With our perceptual approach, we bypass the need for long integration times to enhance Doppler images. We applied and evaluated our proposed method on transcranial Doppler images of mouse brains, outperforming state-of-the-art filters. We found that, in comparison to standard filters such as the Gaussian filter (GF) and block-matching and 3-D filtering (BM3D), PerceptFlow was capable of reducing background noise with a significant increase in contrast and contrast-to-noise ratio, as well as better preserving details without compromising spatial resolution.
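A hedged PyTorch sketch of a perceptual (feature-space) loss of the kind described, using early VGG16 layers; the layer cut-off and single-channel handling are assumptions, and pretrained ImageNet weights would need to be loaded for the loss to be meaningful.

```python
import torch
import torch.nn as nn
import torchvision

class PerceptualLoss(nn.Module):
    """Feature-space loss computed on early VGG16 layers.
    Note: load ImageNet-pretrained weights in practice; the default here is
    untrained, which keeps the sketch runnable without a download."""
    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16()
        self.features = vgg.features[:16].eval()        # up to relu3_3
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.criterion = nn.L1Loss()

    def forward(self, pred, target):
        # Doppler images are single-channel; repeat to the 3 channels VGG expects
        pred3 = pred.repeat(1, 3, 1, 1)
        target3 = target.repeat(1, 3, 1, 1)
        return self.criterion(self.features(pred3), self.features(target3))

loss_fn = PerceptualLoss()
pred = torch.rand(1, 1, 128, 128)
target = torch.rand(1, 1, 128, 128)
loss = loss_fn(pred, target)
```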
Collapse
Affiliation(s)
- Matthieu Blons
- Physics for Medicine Paris, INSERM U1273, ESPCI Paris, PSL University, and CNRS 8063, Paris, France.
| | - Thomas Deffieux
- Physics for Medicine Paris, INSERM U1273, ESPCI Paris, PSL University, and CNRS 8063, Paris, France
| | | | - Mickaël Tanter
- Physics for Medicine Paris, INSERM U1273, ESPCI Paris, PSL University, and CNRS 8063, Paris, France
| | - Béatrice Berthon
- Physics for Medicine Paris, INSERM U1273, ESPCI Paris, PSL University, and CNRS 8063, Paris, France
| |
Collapse
|
132
|
Gao C, Ghodrati V, Shih SF, Wu HH, Liu Y, Nickel MD, Vahle T, Dale B, Sai V, Felker E, Surawech C, Miao Q, Finn JP, Zhong X, Hu P. Undersampling artifact reduction for free-breathing 3D stack-of-radial MRI based on a deep adversarial learning network. Magn Reson Imaging 2023; 95:70-79. [PMID: 36270417 PMCID: PMC10163826 DOI: 10.1016/j.mri.2022.10.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 10/06/2022] [Accepted: 10/14/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE Stack-of-radial MRI allows free-breathing abdominal scans; however, it requires relatively long acquisition times. Undersampling reduces scan time but can cause streaking artifacts and degrade image quality. This study developed deep learning networks with adversarial loss and evaluated their performance in reducing streaking artifacts and preserving perceptual image sharpness. METHODS A 3D generative adversarial network (GAN) was developed for reducing streaking artifacts in stack-of-radial abdominal scans. Training and validation datasets were self-gated to 5 respiratory states to reduce motion artifacts and to effectively augment the data. The network used a combination of three loss functions to constrain the anatomy and preserve image quality: adversarial loss, mean-squared-error loss and structural similarity index loss. The performance of the network was investigated for 3-5 times undersampled data from 2 institutions. The performance of the GAN for 5 times accelerated images was compared with a 3D U-Net and evaluated using quantitative NMSE, SSIM and region of interest (ROI) measurements as well as qualitative scores of radiologists. RESULTS The 3D GAN showed similar NMSE (0.0657 vs. 0.0559, p = 0.5217) and significantly higher SSIM (0.841 vs. 0.798, p < 0.0001) compared to U-Net. ROI analysis showed that the GAN removed streaks in both the background air and the tissue, and its values were not significantly different from the reference means and variations. Radiologists' scores showed that the GAN achieved a significant improvement of 1.6 points (p = 0.004) on a 4-point scale for the streaking score, with no significant difference in the sharpness score compared to the input. CONCLUSION The 3D GAN removes streaking artifacts and preserves perceptual image details.
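A schematic PyTorch sketch of the three-term generator objective described above (adversarial + mean-squared-error + SSIM); the SSIM here is a simplified global version and the weights are placeholders, not the study's values.

```python
import torch
import torch.nn.functional as F

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM (no sliding window), enough to build the loss term."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(unbiased=False), y.var(unbiased=False)
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def generator_loss(d_fake_logits, fake, reference, w_adv=0.01, w_mse=1.0, w_ssim=1.0):
    """Adversarial + MSE + (1 - SSIM); the weights are illustrative placeholders."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    mse = F.mse_loss(fake, reference)
    ssim_term = 1.0 - global_ssim(fake, reference)
    return w_adv * adv + w_mse * mse + w_ssim * ssim_term

fake = torch.rand(1, 1, 32, 32, 32)     # generator output (3D volume)
ref = torch.rand(1, 1, 32, 32, 32)      # reference volume
d_logits = torch.randn(1, 1)            # discriminator output on the fake volume
loss = generator_loss(d_logits, fake, ref)
```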
Collapse
Affiliation(s)
- Chang Gao
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
| | - Vahid Ghodrati
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
| | - Shu-Fu Shih
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
| | - Holden H Wu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
| | - Yongkai Liu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
| | | | - Thomas Vahle
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
| | - Brian Dale
- MR R&D Collaborations, Siemens Medical Solutions USA, Inc., Cary, NC, United States
| | - Victor Sai
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
| | - Ely Felker
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
| | - Chuthaporn Surawech
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Radiology, Division of Diagnostic Radiology, Faculty of Medicine, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Bangkok, Thailand
| | - Qi Miao
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning Province, China
| | - J Paul Finn
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
| | - Xiaodong Zhong
- MR R&D Collaborations, Siemens Medical Solutions USA, Inc., Los Angeles, CA, United States
| | - Peng Hu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States.
| |
Collapse
|
133
|
Djebra Y, Marin T, Han PK, Bloch I, El Fakhri G, Ma C. Manifold Learning via Linear Tangent Space Alignment (LTSA) for Accelerated Dynamic MRI With Sparse Sampling. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:158-169. [PMID: 36121938 PMCID: PMC10024645 DOI: 10.1109/tmi.2022.3207774] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The spatial resolution and temporal frame-rate of dynamic magnetic resonance imaging (MRI) can be improved by reconstructing images from sparsely sampled k-space data with mathematical modeling of the underlying spatiotemporal signals. These models include sparsity models, linear subspace models, and non-linear manifold models. This work presents a novel linear tangent space alignment (LTSA) model-based framework that exploits the intrinsic low-dimensional manifold structure of dynamic images for accelerated dynamic MRI. The performance of the proposed method was evaluated and compared to state-of-the-art methods using numerical simulation studies as well as 2D and 3D in vivo cardiac imaging experiments. The proposed method achieved the best performance in image reconstruction among all the compared methods. The proposed method could prove useful for accelerating many MRI applications, including dynamic MRI, multi-parametric MRI, and MR spectroscopic imaging.
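For readers unfamiliar with LTSA itself, the scikit-learn call below recovers 2D coordinates for points sampled from a 2D manifold embedded in 3D; it only illustrates the manifold-learning step, not the authors' reconstruction framework, and the toy S-curve data are a stand-in for dynamic image frames.

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

# 3D points sampled from a 2D manifold, standing in for high-dimensional dynamic frames
X, _ = make_s_curve(n_samples=1500, random_state=0)

# LTSA: local PCA tangent spaces are aligned into one global low-dimensional coordinate system
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
Z = ltsa.fit_transform(X)          # (1500, 2) manifold coordinates

print(Z.shape)
```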
Collapse
Affiliation(s)
- Yanis Djebra
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA and the LTCI, Telecom Paris, Institut Polytechnique de Paris, Paris, France
| | - Thibault Marin
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA
| | - Paul K. Han
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA
| | - Isabelle Bloch
- LIP6, Sorbonne University, CNRS Paris, France. This work was partly done while I. Bloch was with the LTCI, Telecom Paris, Institut Polytechnique de Paris, Paris, France
| | - Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA
| | - Chao Ma
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129 USA
| |
Collapse
|
134
|
Artificial Intelligence-Driven Ultra-Fast Superresolution MRI: 10-Fold Accelerated Musculoskeletal Turbo Spin Echo MRI Within Reach. Invest Radiol 2023; 58:28-42. [PMID: 36355637 DOI: 10.1097/rli.0000000000000928] [Citation(s) in RCA: 48] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
ABSTRACT Magnetic resonance imaging (MRI) is the keystone of modern musculoskeletal imaging; however, long pulse sequence acquisition times may restrict patient tolerability and access. Advances in MRI scanners, coil technology, and innovative pulse sequence acceleration methods enable 4-fold turbo spin echo pulse sequence acceleration in clinical practice; however, at this speed, conventional image reconstruction approaches the signal-to-noise limits of temporal, spatial, and contrast resolution. Novel deep learning image reconstruction methods can minimize signal-to-noise interdependencies to better advantage than conventional image reconstruction, leading to unparalleled gains in image speed and quality when combined with parallel imaging and simultaneous multislice acquisition. The enormous potential of deep learning-based image reconstruction promises to facilitate the 10-fold acceleration of the turbo spin echo pulse sequence, equating to a total acquisition time of 2-3 minutes for entire MRI examinations of joints without sacrificing spatial resolution or image quality. Current investigations aim for a better understanding of stability and failure modes of image reconstruction networks, validation of network reconstruction performance with external data sets, determination of diagnostic performances with independent reference standards, establishing generalizability to other centers, scanners, field strengths, coils, and anatomy, and building publicly available benchmark data sets to compare methods and foster innovation and collaboration between the clinical and image processing community. In this article, we review basic concepts of deep learning-based acquisition and image reconstruction techniques for accelerating and improving the quality of musculoskeletal MRI, commercially available and developing deep learning-based MRI solutions, superresolution, denoising, generative adversarial networks, and combined strategies for deep learning-driven ultra-fast superresolution musculoskeletal MRI. This article aims to equip radiologists and imaging scientists with the necessary practical knowledge and enthusiasm to meet this exciting new era of musculoskeletal MRI.
Collapse
|
135
|
Nepal P, Bagga B, Feng L, Chandarana H. Respiratory Motion Management in Abdominal MRI: Radiology In Training. Radiology 2023; 306:47-53. [PMID: 35997609 PMCID: PMC9792710 DOI: 10.1148/radiol.220448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
A 96-year-old woman had a suboptimal evaluation of liver observations at abdominal MRI due to significant respiratory motion. State-of-the-art strategies to minimize respiratory motion during clinical abdominal MRI are discussed.
Collapse
Affiliation(s)
- Pankaj Nepal
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
| | - Barun Bagga
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
| | - Li Feng
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
| | - Hersh Chandarana
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
| |
Collapse
|
136
|
Hossain MB, Kwon KC, Imtiaz SM, Nam OS, Jeon SH, Kim N. De-Aliasing and Accelerated Sparse Magnetic Resonance Image Reconstruction Using Fully Dense CNN with Attention Gates. Bioengineering (Basel) 2022; 10:22. [PMID: 36671594 PMCID: PMC9854709 DOI: 10.3390/bioengineering10010022] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 12/19/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
When sparsely sampled data are used to accelerate magnetic resonance imaging (MRI), conventional reconstruction approaches produce significant artifacts that obscure the content of the image. To remove these aliasing artifacts, we propose an advanced convolutional neural network (CNN) called fully dense attention CNN (FDA-CNN). We updated the U-Net model with fully dense connectivity and an attention mechanism for MRI reconstruction. The main benefit of FDA-CNN is that an attention gate in each decoder layer improves learning by focusing on the relevant image features and provides better generalization of the network by suppressing irrelevant activations. Moreover, densely interconnected convolutional layers reuse the feature maps and prevent the vanishing gradient problem. Additionally, we implement a new, efficient under-sampling pattern in the phase-encoding direction that samples low and high k-space frequencies both randomly and non-randomly. The performance of FDA-CNN was evaluated quantitatively and qualitatively with three different sub-sampling masks and datasets. Compared with five current deep learning-based and two compressed sensing MRI reconstruction techniques, the proposed method performed better, reconstructing smoother and brighter images. Furthermore, FDA-CNN improved the mean PSNR by 2 dB, SSIM by 0.35, and VIFP by 0.37 compared with U-Net for an acceleration factor of 5.
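A minimal PyTorch sketch of an additive attention gate of the kind placed in each decoder layer (in the spirit of Attention U-Net); channel sizes and names are assumptions rather than the FDA-CNN architecture.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: the decoder (gating) signal suppresses irrelevant
    encoder activations before the skip connection is used."""
    def __init__(self, enc_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(enc_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, enc_feat, gate):
        # enc_feat: skip-connection features; gate: decoder features (same H, W here)
        attn = self.sigmoid(self.psi(self.relu(self.theta(enc_feat) + self.phi(gate))))
        return enc_feat * attn       # (B, enc_ch, H, W), irrelevant regions down-weighted

skip = torch.randn(1, 64, 32, 32)
gating = torch.randn(1, 128, 32, 32)
out = AttentionGate(64, 128, 32)(skip, gating)
```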
Collapse
Affiliation(s)
- Md. Biddut Hossain
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| | - Ki-Chul Kwon
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| | - Shariar Md Imtiaz
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| | - Oh-Seung Nam
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| | - Seok-Hee Jeon
- Department of Electronics Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Gyeonggi-do, Republic of Korea
| | - Nam Kim
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| |
Collapse
|
137
|
Kwak K, Stanford W, Dayan E. Identifying the regional substrates predictive of Alzheimer's disease progression through a convolutional neural network model and occlusion. Hum Brain Mapp 2022; 43:5509-5519. [PMID: 35904092 PMCID: PMC9704798 DOI: 10.1002/hbm.26026] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 06/02/2022] [Accepted: 07/08/2022] [Indexed: 01/15/2023] Open
Abstract
Progressive brain atrophy is a key neuropathological hallmark of Alzheimer's disease (AD) dementia. However, atrophy patterns along the progression of AD dementia are diffuse and variable and are often missed by univariate methods. Consequently, identifying the major regional atrophy patterns underlying AD dementia progression is challenging. In the current study, we propose a method that evaluates the degree to which specific regional atrophy patterns are predictive of AD dementia progression, while holding all other atrophy changes constant, using a total sample of 334 subjects. We first trained a dense convolutional neural network model to differentiate individuals with mild cognitive impairment (MCI) who progress to AD dementia from those with a stable MCI diagnosis. Then, we retested the model multiple times, each time occluding different regions of interest (ROIs) from the input of the model's test set. We also validated this approach by occluding ROIs based on Braak's staging scheme. We found that the hippocampus, fusiform, and inferior temporal gyri were the strongest predictors of AD dementia progression, in agreement with established staging models. We also found that occlusion of limbic ROIs defined according to Braak stage III had the largest impact on the performance of the model. Our predictive model reveals the major regional patterns of atrophy predictive of AD dementia progression. These results highlight the potential for early diagnosis and stratification of individuals with prodromal AD dementia based on patterns of cortical atrophy, prior to interventional clinical trials.
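The occlusion procedure reduces to a simple loop: zero out one ROI in the test volumes, rerun the trained classifier, and record the performance drop. The NumPy sketch below illustrates the mechanics with a dummy model and a made-up ROI; it is not the study's pipeline.

```python
import numpy as np

def occlusion_importance(model_predict, volumes, labels, roi_mask):
    """Drop in accuracy when one region of interest is zeroed out.
    model_predict: callable mapping (N, X, Y, Z) volumes -> (N,) predicted labels."""
    baseline_acc = np.mean(model_predict(volumes) == labels)
    occluded = volumes.copy()
    occluded[:, roi_mask] = 0.0                       # occlude the ROI in every subject
    occluded_acc = np.mean(model_predict(occluded) == labels)
    return baseline_acc - occluded_acc                # larger drop = more predictive ROI

# toy stand-in for a trained progressive-vs-stable MCI classifier
rng = np.random.default_rng(0)
volumes = rng.standard_normal((20, 16, 16, 16))
labels = rng.integers(0, 2, size=20)
roi_mask = np.zeros((16, 16, 16), dtype=bool)
roi_mask[4:8, 4:8, 4:8] = True                        # e.g. a "hippocampus" ROI
dummy_model = lambda v: (v[:, roi_mask].mean(axis=1) > 0).astype(int)
print(occlusion_importance(dummy_model, volumes, labels, roi_mask))
```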
Collapse
Affiliation(s)
- Kichang Kwak
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - William Stanford
- Neuroscience Curriculum, Biological and Biomedical Sciences Program, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - Eran Dayan
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Neuroscience Curriculum, Biological and Biomedical Sciences Program, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | | |
Collapse
|
138
|
Nath R, Callahan S, Stoddard M, Amini AA. FlowRAU-Net: Accelerated 4D Flow MRI of Aortic Valvular Flows With a Deep 2D Residual Attention Network. IEEE Trans Biomed Eng 2022; 69:3812-3824. [PMID: 35675233 PMCID: PMC10577002 DOI: 10.1109/tbme.2022.3180691] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In this work, we propose a novel deep learning reconstruction framework for rapid and accurate reconstruction of 4D flow MRI data. Reconstruction is performed on a slice-by-slice basis by reducing artifacts in zero-filled reconstructed complex images obtained from undersampled k-space. A deep residual attention network FlowRAU-Net is proposed, trained separately for each encoding direction with 2D complex image slices extracted from complex 4D images at each temporal frame and slice position. The network was trained and tested on 4D flow MRI data of aortic valvular flow in 18 human subjects. Performance of the reconstructions was measured in terms of image quality, 3-D velocity vector accuracy, and accuracy in hemodynamic parameters. Reconstruction performance was measured for three different k-space undersamplings and compared with one state of the art compressed sensing reconstruction method and three deep learning-based reconstruction methods. The proposed method outperforms state of the art methods in all performance measures for all three different k-space undersamplings. Hemodynamic parameters such as blood flow rate and peak velocity from the proposed technique show good agreement with reference flow parameters. Visualization of the reconstructed image and velocity magnitude also shows excellent agreement with the fully sampled reference dataset. Moreover, the proposed method is computationally fast. Total 4D flow data (including all slices in space and time) for a subject can be reconstructed in 69 seconds on a single GPU. Although the proposed method has been applied to 4D flow MRI of aortic valvular flows, given a sufficient number of training samples, it should be applicable to other arterial flows.
Collapse
|
139
|
Lee C, Ha EG, Choi YJ, Jeon KJ, Han SS. Synthesis of T2-weighted images from proton density images using a generative adversarial network in a temporomandibular joint magnetic resonance imaging protocol. Imaging Sci Dent 2022; 52:393-398. [PMID: 36605858 PMCID: PMC9807788 DOI: 10.5624/isd.20220125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 09/02/2022] [Accepted: 09/24/2022] [Indexed: 11/07/2022] Open
Abstract
Purpose This study proposed a generative adversarial network (GAN) model for T2-weighted image (WI) synthesis from proton density (PD)-WI in a temporomandibular joint (TMJ) magnetic resonance imaging (MRI) protocol. Materials and Methods From January to November 2019, MRI scans for TMJ were reviewed and 308 imaging sets were collected. For training, 277 pairs of PD- and T2-WI sagittal TMJ images were used. Transfer learning of the pix2pix GAN model was utilized to generate T2-WI from PD-WI. Model performance was evaluated with the structural similarity index map (SSIM) and peak signal-to-noise ratio (PSNR) indices for 31 predicted T2-WI (pT2). The disc position was clinically diagnosed as anterior disc displacement with or without reduction, and joint effusion as present or absent. The true T2-WI-based diagnosis was regarded as the gold standard, to which pT2-based diagnoses were compared using Cohen's ĸ coefficient. Results The mean SSIM and PSNR values were 0.4781 (±0.0522) and 21.30 (±1.51) dB, respectively. The pT2 protocol showed almost perfect agreement (ĸ=0.81) with the gold standard for disc position. The number of discordant cases was higher for normal disc position (17%) than for anterior displacement with reduction (2%) or without reduction (10%). The effusion diagnosis also showed almost perfect agreement (ĸ=0.88), with higher concordance for the presence (85%) than for the absence (77%) of effusion. Conclusion The application of pT2 images in a TMJ MRI protocol appears useful for diagnosis, although the image quality of pT2 was not fully satisfactory. Further research is expected to enhance pT2 quality.
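For reference, the SSIM and PSNR figures quoted above are the standard scikit-image metrics; the sketch below shows how they are typically computed, with random arrays standing in for the true and predicted T2-WI.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
true_t2 = rng.random((256, 256))                                           # stand-in for the real T2-WI
pred_t2 = np.clip(true_t2 + 0.05 * rng.standard_normal((256, 256)), 0, 1)  # stand-in for the synthetic T2

psnr = peak_signal_noise_ratio(true_t2, pred_t2, data_range=1.0)
ssim = structural_similarity(true_t2, pred_t2, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```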
Collapse
Affiliation(s)
- Chena Lee
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
| | - Eun-Gyu Ha
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
| | - Yoon Joo Choi
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
| | - Kug Jin Jeon
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
| | - Sang-Sun Han
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
| |
Collapse
|
140
|
Penso M, Babbaro M, Moccia S, Guglielmo M, Carerj ML, Giacari CM, Chiesa M, Maragna R, Rabbat MG, Barison A, Martini N, Pepi M, Caiani EG, Pontone G. Cardiovascular magnetic resonance images with susceptibility artifacts: artificial intelligence with spatial-attention for ventricular volumes and mass assessment. J Cardiovasc Magn Reson 2022; 24:62. [PMID: 36437452 PMCID: PMC9703740 DOI: 10.1186/s12968-022-00899-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 11/02/2022] [Indexed: 11/29/2022] Open
Abstract
BACKGROUND Segmentation of cardiovascular magnetic resonance (CMR) images is an essential step for evaluating dimensional and functional ventricular parameters such as ejection fraction (EF) but may be limited by artifacts, which represent the major challenge to automatically deriving clinical information. The aim of this study is to investigate the accuracy of a deep learning (DL) approach for automatic segmentation of cardiac structures from CMR images characterized by magnetic susceptibility artifacts in patients with cardiac implanted electronic devices (CIED). METHODS In this retrospective study, 230 patients (100 with CIED) who underwent clinically indicated CMR were used to develop and test a DL model. A novel convolutional neural network was proposed to extract the left ventricular (LV) and right ventricular (RV) endocardium and the LV epicardium. In order to perform a successful segmentation, it is important that the network learns to identify salient image regions even in the presence of local magnetic field inhomogeneities. The proposed network takes advantage of a spatial attention module to selectively process the most relevant information and focus on the structures of interest. To improve segmentation, especially for images with artifacts, multiple loss functions were minimized in unison. Segmentation results were assessed against manual tracings and the commercial CMR analysis software cvi42 (Circle Cardiovascular Imaging, Calgary, Alberta, Canada). An external dataset of 56 patients with CIED was used to assess model generalizability. RESULTS In the internal datasets, on images with artifacts, the median Dice coefficients for the LV cavity, LV myocardium and RV cavity were 0.93, 0.77 and 0.87 at end-diastole and 0.91, 0.82, and 0.83 at end-systole, respectively. The proposed method reached higher segmentation accuracy than commercial software, with performance comparable to expert inter-observer variability (bias ± 95%LoA): LVEF 1 ± 8% vs 3 ± 9%, RVEF -2 ± 15% vs 3 ± 21%. In the external cohort, EF correlated well with manual tracing (intraclass correlation coefficient: LVEF 0.98, RVEF 0.93). The automatic approach was significantly faster than manual segmentation in providing cardiac parameters (approximately 1.5 s vs 450 s). CONCLUSIONS Experimental results show that the proposed method reached promising performance in cardiac segmentation from CMR images with susceptibility artifacts and alleviates time-consuming expert physician contour segmentation.
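A minimal PyTorch sketch of a spatial attention module of the kind described, following the common design in which channel-pooled descriptors are passed through a convolution to produce a per-pixel weight map; this is a generic illustration, not the authors' exact block.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Per-pixel attention map from channel-wise average and max pooling."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                               # x: (B, C, H, W)
        avg_pool = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        max_pool = x.max(dim=1, keepdim=True).values    # (B, 1, H, W)
        attn = self.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                 # emphasise the structures of interest

feats = torch.randn(2, 32, 96, 96)
out = SpatialAttention()(feats)
```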
Collapse
Affiliation(s)
- Marco Penso
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
- Department of Electronics, Information and Biomedical Engineering, Politecnico di Milano, Milan, Italy
| | - Mario Babbaro
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
| | - Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
| | - Marco Guglielmo
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
| | - Maria Ludovica Carerj
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
- Department of Biomedical Sciences and Morphological and Functional Imaging, “G. Martino” University Hospital Messina, Messina, Italy
| | - Carlo Maria Giacari
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
| | - Mattia Chiesa
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
- Department of Electronics, Information and Biomedical Engineering, Politecnico di Milano, Milan, Italy
| | - Riccardo Maragna
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
| | - Mark G. Rabbat
- Loyola University of Chicago, Chicago, IL USA
- Edward Hines Jr. VA Hospital, Hines, IL USA
| | | | | | - Mauro Pepi
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
| | - Enrico G. Caiani
- Department of Electronics, Information and Biomedical Engineering, Politecnico di Milano, Milan, Italy
- Istituto di Elettronica e di Ingegneria dell’Informazione e delle Telecomunicazioni, Consiglio Nazionale delle Ricerche, Milan, Italy
| | - Gianluca Pontone
- Cardiovascular Imaging Department, Centro Cardiologico Monzino IRCCS, Via C. Parea 4, 20138 Milan, Italy
| |
Collapse
|
141
|
Xu L, Zhu S, Wen N. Deep reinforcement learning and its applications in medical imaging and radiation therapy: a survey. Phys Med Biol 2022; 67. [PMID: 36270582 DOI: 10.1088/1361-6560/ac9cb3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Accepted: 10/21/2022] [Indexed: 11/07/2022]
Abstract
Reinforcement learning takes a sequential decision-making approach, learning a policy through trial and error based on interaction with the environment. Combining deep learning and reinforcement learning can empower the agent to learn the interactions and the distribution of rewards from state-action pairs to achieve effective and efficient solutions in more complex and dynamic environments. Deep reinforcement learning (DRL) has demonstrated astonishing performance, surpassing human-level performance in the game domain and many other simulated environments. This paper introduces the basics of reinforcement learning and reviews various categories of DRL algorithms and DRL models developed for medical image analysis and radiation treatment planning optimization. We will also discuss the current challenges of DRL and approaches proposed to make DRL more generalizable and robust in a real-world environment. DRL algorithms, by fostering the designs of the reward function, agent interactions and environment models, can resolve the challenges arising from scarce and heterogeneous annotated medical image data, which has been a major obstacle to implementing deep learning models in the clinic. DRL is an active research area with enormous potential to improve deep learning applications in medical imaging and radiation therapy planning.
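As a reminder of the trial-and-error policy learning that DRL builds on, here is a minimal tabular Q-learning sketch on a toy five-state chain; the environment and hyperparameters are illustrative and unrelated to any medical-imaging task.

```python
import numpy as np

# Toy 5-state chain: move left/right, reward 1 only when reaching the last state.
n_states, n_actions = 5, 2
q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.3
rng = np.random.default_rng(0)

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for _ in range(2000):                      # episodes of trial-and-error interaction
    s = 0
    for _ in range(200):                   # cap episode length
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next
        if done:
            break

print(np.argmax(q[:-1], axis=1))           # greedy policy for non-terminal states: move right
```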
Collapse
Affiliation(s)
- Lanyu Xu
- Department of Computer Science and Engineering, Oakland University, Rochester, MI, United States of America
| | - Simeng Zhu
- Department of Radiation Oncology, Henry Ford Health Systems, Detroit, MI, United States of America
| | - Ning Wen
- Department of Radiology/The Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, People's Republic of China; The Global Institute of Future Technology, Shanghai Jiaotong University, Shanghai, People's Republic of China
| |
Collapse
|
142
|
Wang NC, Noll DC, Srinivasan A, Gagnon-Bartsch J, Kim MM, Rao A. Simulated MRI Artifacts: Testing Machine Learning Failure Modes. BME FRONTIERS 2022; 2022:9807590. [PMID: 37850164 PMCID: PMC10521705 DOI: 10.34133/2022/9807590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 09/08/2022] [Indexed: 10/19/2023] Open
Abstract
Objective. Seven types of MRI artifacts, including acquisition and preprocessing errors, were simulated to test a machine learning brain tumor segmentation model for potential failure modes. Introduction. Real-world medical deployments of machine learning algorithms remain far less common than medical research papers using machine learning. Part of the gap between the performance of models in research and deployment comes from a lack of hard test cases in the data used to train a model. Methods. These failure modes were simulated for a pretrained brain tumor segmentation model that utilizes standard MRI, and were used to evaluate the performance of the model under duress. These simulated MRI artifacts consisted of motion, susceptibility-induced signal loss, aliasing, field inhomogeneity, sequence mislabeling, sequence misalignment, and skull-stripping failures. Results. The artifact with the largest effect was the simplest, sequence mislabeling, though motion, field inhomogeneity, and sequence misalignment also caused significant performance decreases. The model was most susceptible to artifacts affecting the FLAIR (fluid-attenuated inversion recovery) sequence. Conclusion. Overall, these simulated artifacts could be used to test other brain MRI models, and the approach could be applied across medical imaging applications.
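One of the listed failure modes, aliasing, can be simulated with a few lines of NumPy by keeping only every other k-space line, which folds the anatomy onto itself; the square phantom and skip factor below are illustrative, not the paper's simulation code.

```python
import numpy as np

def simulate_aliasing(image, skip=2):
    """Discard all but every `skip`-th k-space line to create wrap-around ghosts."""
    k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
    mask = np.zeros_like(k, dtype=bool)
    mask[::skip, :] = True                           # regular undersampling in one direction
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k * mask))))

# simple square "phantom"
phantom = np.zeros((128, 128))
phantom[40:88, 40:88] = 1.0
aliased = simulate_aliasing(phantom, skip=2)         # shows fold-over ghosts
```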
Collapse
Affiliation(s)
- Nicholas C. Wang
- Department of Computational Medicine and Bioinformatics, University of Michigan, USA
| | - Douglas C. Noll
- Department of Biomedical Engineering, University of Michigan, USA
- Department of Radiology, University of Michigan, USA
| | - Ashok Srinivasan
- Department of Radiology, Division of Neuroradiology, University of Michigan, USA
- Rogel Cancer Center, University of Michigan, USA
- Frankel Cardiovascular Center, University of Michigan, USA
| | | | - Michelle M. Kim
- Department of Radiation Oncology, University of Michigan, USA
| | - Arvind Rao
- Department of Computational Medicine and Bioinformatics, University of Michigan, USA
- Department of Radiation Oncology, University of Michigan, USA
| |
Collapse
|
143
|
You SH, Cho Y, Kim B, Yang KS, Kim BK, Park SE. Synthetic Time of Flight Magnetic Resonance Angiography Generation Model Based on Cycle-Consistent Generative Adversarial Network Using PETRA-MRA in the Patients With Treated Intracranial Aneurysm. J Magn Reson Imaging 2022; 56:1513-1528. [PMID: 35142407 DOI: 10.1002/jmri.28114] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 02/02/2022] [Accepted: 02/03/2022] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND Pointwise encoding time reduction with radial acquisition (PETRA) magnetic resonance angiography (MRA) is useful for evaluating intracranial aneurysm recurrence, but the problem of severe background noise and low peripheral signal-to-noise ratio (SNR) remain. Deep learning could reduce noise using high- and low-quality images. PURPOSE To develop a cycle-consistent generative adversarial network (cycleGAN)-based deep learning model to generate synthetic TOF (synTOF) using PETRA. STUDY TYPE Retrospective. POPULATION A total of 377 patients (mean age: 60 ± 11; 293 females) with treated intracranial aneurysms who underwent both PETRA and TOF from October 2017 to January 2021. Data were randomly divided into training (49.9%, 188/377) and validation (50.1%, 189/377) groups. FIELD STRENGTH/SEQUENCE Ultra-short echo time and TOF-MRA on a 3-T MR system. ASSESSMENT For the cycleGAN model, the peak SNR (PSNR) and structural similarity (SSIM) were evaluated. Image quality was compared qualitatively (5-point Likert scale) and quantitatively (SNR). A multireader diagnostic optimality evaluation was performed with 17 radiologists (experience of 1-18 years). STATISTICAL TESTS Generalized estimating equation analysis, Friedman's test, McNemar test, and Spearman's rank correlation. P < 0.05 indicated statistical significance. RESULTS The PSNR and SSIM between synTOF and TOF were 17.51 [16.76; 18.31] dB and 0.71 ± 0.02. The median values of overall image quality, noise, sharpness, and vascular conspicuity were significantly higher for synTOF than for PETRA (4.00 [4.00; 5.00] vs. 4.00 [3.00; 4.00]; 5.00 [4.00; 5.00] vs. 3.00 [2.00; 4.00]; 4.00 [4.00; 4.00] vs. 4.00 [3.00; 4.00]; 3.00 [3.00; 4.00] vs. 3.00 [2.00; 3.00]). The SNRs of the middle cerebral arteries were the highest for synTOF (synTOF vs. TOF vs. PETRA; 63.67 [43.25; 105.00] vs. 52.42 [32.88; 74.67] vs. 21.05 [12.34; 37.88]). In the multireader evaluation, there was no significant difference in diagnostic optimality or preference between synTOF and TOF (19.00 [18.00; 19.00] vs. 20.00 [18.00; 20.00], P = 0.510; 8.00 [6.00; 11.00] vs. 11.00 [9.00, 14.00], P = 1.000). DATA CONCLUSION The cycleGAN-based deep learning model provided synTOF free from background artifact. The synTOF could be a versatile alternative to TOF in patients who have undergone PETRA for evaluating treated aneurysms. EVIDENCE LEVEL 4 TECHNICAL EFFICACY: Stage 1.
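A schematic PyTorch sketch of the cycle-consistency term at the heart of cycleGAN training (PETRA to synTOF and back, and TOF to synthetic PETRA and back); the two generators here are placeholder convolutions and the weight lambda is an assumption, not the study's configuration.

```python
import torch
import torch.nn as nn

# placeholder generators; the real models are full image-to-image networks
g_petra_to_tof = nn.Conv2d(1, 1, kernel_size=3, padding=1)
g_tof_to_petra = nn.Conv2d(1, 1, kernel_size=3, padding=1)
l1 = nn.L1Loss()

def cycle_loss(petra, tof, lam=10.0):
    """||G_BA(G_AB(A)) - A||_1 + ||G_AB(G_BA(B)) - B||_1, weighted by lambda."""
    petra_cycled = g_tof_to_petra(g_petra_to_tof(petra))
    tof_cycled = g_petra_to_tof(g_tof_to_petra(tof))
    return lam * (l1(petra_cycled, petra) + l1(tof_cycled, tof))

petra_batch = torch.rand(2, 1, 64, 64)
tof_batch = torch.rand(2, 1, 64, 64)
loss = cycle_loss(petra_batch, tof_batch)     # added to the adversarial losses during training
```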
Collapse
Affiliation(s)
- Sung-Hye You
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Korea
| | - Yongwon Cho
- Biomedical Research Center, Korea University College of Medicine, Korea
| | - Byungjun Kim
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Korea
| | - Kyung-Sook Yang
- Department of Biostatistics, Korea University College of Medicine, Seoul, Korea
| | - Bo Kyu Kim
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Korea
| | - Sang Eun Park
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Korea
| |
Collapse
|
144
|
Singh NM, Harrod JB, Subramanian S, Robinson M, Chang K, Cetin-Karayumak S, Dalca AV, Eickhoff S, Fox M, Franke L, Golland P, Haehn D, Iglesias JE, O'Donnell LJ, Ou Y, Rathi Y, Siddiqi SH, Sun H, Westover MB, Whitfield-Gabrieli S, Gollub RL. How Machine Learning is Powering Neuroimaging to Improve Brain Health. Neuroinformatics 2022; 20:943-964. [PMID: 35347570 PMCID: PMC9515245 DOI: 10.1007/s12021-022-09572-9] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/07/2022] [Indexed: 12/31/2022]
Abstract
This report presents an overview of how machine learning is rapidly advancing clinical translational imaging in ways that will aid in the early detection, prediction, and treatment of diseases that threaten brain health. Towards this goal, we are sharing the information presented at a symposium, "Neuroimaging Indicators of Brain Structure and Function - Closing the Gap Between Research and Clinical Application", co-hosted by the McCance Center for Brain Health at Mass General Hospital and the MIT HST Neuroimaging Training Program on February 12, 2021. The symposium focused on the potential for machine learning approaches, applied to increasingly large-scale neuroimaging datasets, to transform healthcare delivery and change the trajectory of brain health by addressing brain care earlier in the lifespan. While not exhaustive, this overview uniquely addresses many of the technical challenges from image formation, to analysis and visualization, to synthesis and incorporation into the clinical workflow. Some of the ethical challenges inherent to this work are also explored, as are some of the regulatory requirements for implementation. We seek to educate, motivate, and inspire graduate students, postdoctoral fellows, and early career investigators to contribute to a future where neuroimaging meaningfully contributes to the maintenance of brain health.
Collapse
Affiliation(s)
- Nalini M Singh
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Jordan B Harrod
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Sandya Subramanian
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Mitchell Robinson
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Ken Chang
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Suheyla Cetin-Karayumak
- Department of Psychiatry, Brigham and Women's Hospital and Harvard Medical School, Boston, 02115, USA
| | | | - Simon Eickhoff
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7) Research Centre Jülich, Jülich, Germany
| | - Michael Fox
- Center for Brain Circuit Therapeutics, Department of Neurology, Psychiatry, and Radiology, Brigham and Women's Hospital and Harvard Medical School, 02115, Boston, USA
| | - Loraine Franke
- University of Massachusetts Boston, Boston, MA, 02125, USA
| | - Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Daniel Haehn
- University of Massachusetts Boston, Boston, MA, 02125, USA
| | - Juan Eugenio Iglesias
- Centre for Medical Image Computing, University College London, London, UK
- Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Lauren J O'Donnell
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, 02115, USA
| | - Yangming Ou
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, 02115, USA
| | - Yogesh Rathi
- Department of Psychiatry, Brigham and Women's Hospital and Harvard Medical School, Boston, 02115, USA
| | - Shan H Siddiqi
- Department of Psychiatry, Brigham and Women's Hospital and Harvard Medical School, Boston, 02115, USA
| | - Haoqi Sun
- Department of Neurology and McCance Center for Brain Health / Harvard Medical School, Massachusetts General Hospital, Boston, 02114, USA
| | - M Brandon Westover
- Department of Neurology and McCance Center for Brain Health / Harvard Medical School, Massachusetts General Hospital, Boston, 02114, USA
| | | | - Randy L Gollub
- Department of Psychiatry and Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA.
| |
Collapse
|
145
|
Kim M, Chung W. A cascade of preconditioned conjugate gradient networks for accelerated magnetic resonance imaging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 225:107090. [PMID: 36067702 DOI: 10.1016/j.cmpb.2022.107090] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 08/10/2022] [Accepted: 08/25/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Recent unfolding-based compressed sensing magnetic resonance imaging (CS-MRI) methods only reinterpret conventional CS-MRI optimization algorithms and, consequently, inherit the weaknesses of the alternating optimization strategy. In order to avoid the structural complexity of the alternating optimization strategy and achieve better reconstruction performance, we propose to directly optimize the ℓ1-regularized convex optimization problem using a deep learning approach. METHOD In order to achieve direct optimization, a system of equations solving the ℓ1-regularized optimization problem is constructed from the optimality conditions of a novel primal-dual form proposed for the effective training of the sparsifying transform. The optimal solution is obtained by a cascade of unfolding networks of the preconditioned conjugate gradient (PCG) algorithm trained to minimize the mean element-wise absolute difference (ℓ1 loss) between the terminal output and ground truth image in an end-to-end manner. The performance of the proposed method was compared with that of U-Net, PD-Net, ISTA-Net+, and the recently proposed projection-based cascaded U-Net, using single-coil knee MR images of the fastMRI dataset. RESULTS In our experiment, the proposed network outperformed existing unfolding-based networks and the complex version of U-Net in several subsampling scenarios. In particular, when using the random Cartesian subsampling mask with a 25% sampling rate, the proposed model outperformed PD-Net by 0.76 dB, ISTA-Net+ by 0.43 dB, and U-Net by 1.21 dB on the proton density without fat suppression (PD) dataset in terms of peak signal-to-noise ratio. In comparison with the projection-based cascaded U-Net, the proposed algorithm achieved approximately the same performance when the sampling rate was 25% with only 1.62% of the network parameters, at the cost of approximately twice the reconstruction time. CONCLUSION A cascade of unfolding networks of the PCG algorithm was proposed to directly optimize the ℓ1-regularized CS-MRI optimization problem. The proposed network achieved improved reconstruction performance compared with U-Net, PD-Net, and ISTA-Net+, and achieved approximately the same performance as the projection-based cascaded U-Net while using significantly fewer network parameters.
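The paper's trained primal-dual cascade is not reproduced here; as a hedged illustration of the kind of linear subproblem that PCG unrolling builds on, the sketch below runs conjugate gradients on Tikhonov-regularized normal equations for a single-coil, masked-FFT forward model. The forward operator, the simple ℓ2 regularizer standing in for the ℓ1 term, and parameters such as `lam` and `n_iter` are illustrative assumptions, not the authors' formulation.
```python
# Minimal sketch (not the paper's trained network): conjugate-gradient solution
# of the regularized normal equations
#     (A^H A + lam*I) x = A^H y,   A = sampling mask * orthonormal 2-D FFT,
# i.e. the type of linear solve that a PCG unrolling cascade repeats with
# learned components.
import numpy as np

def A(x, mask):                      # forward model: image -> undersampled k-space
    return mask * np.fft.fft2(x, norm="ortho")

def AH(k, mask):                     # adjoint: undersampled k-space -> image
    return np.fft.ifft2(mask * k, norm="ortho")

def cg_recon(y, mask, lam=1e-2, n_iter=20, precond=lambda r: r):
    """CG on the normal equations; `precond` is an optional preconditioner M^{-1}."""
    normal = lambda v: AH(A(v, mask), mask) + lam * v
    b = AH(y, mask)
    x = np.zeros_like(b)
    r = b - normal(x)
    z = precond(r)
    p = z.copy()
    rz = np.vdot(r, z).real
    for _ in range(n_iter):
        Ap = normal(p)
        alpha = rz / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        z = precond(r)
        rz_new = np.vdot(r, z).real
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Usage sketch: x_hat = cg_recon(y, mask, lam=1e-2) for measured k-space y and a
# 0/1 sampling mask. A k-space-diagonal preconditioner for this toy operator is
#   precond = lambda r: np.fft.ifft2(np.fft.fft2(r, norm="ortho") / (mask + lam), norm="ortho")
```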
Collapse
Affiliation(s)
- Moogyeong Kim
- Department of Artificial Intelligence, Korea University, Seoul 02841, South Korea
| | - Wonzoo Chung
- Department of Artificial Intelligence, Korea University, Seoul 02841, South Korea.
| |
Collapse
|
146
|
Zhang X, Cao X, Zhang P, Song F, Zhang J, Zhang L, Zhang G. Self-Training Strategy Based on Finite Element Method for Adaptive Bioluminescence Tomography Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2629-2643. [PMID: 35436185 DOI: 10.1109/tmi.2022.3167809] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Bioluminescence tomography (BLT) is a promising pre-clinical imaging technique for a wide variety of biomedical applications, which can non-invasively reveal functional activities inside living animal bodies through the detection of visible or near-infrared light produced by bioluminescent reactions. Recently, reconstruction approaches based on deep learning have shown great potential in optical tomography modalities. However, these reports generate training data only with stationary patterns of fixed target number, shape, and size, and the neural networks trained on such data sets struggle to reconstruct patterns outside them. This severely restricts the application of deep learning to optical tomography reconstruction. To address this problem, a self-training strategy is proposed for BLT reconstruction in this paper. The proposed strategy can rapidly generate large-scale BLT data sets with random target numbers, shapes, and sizes through a random seed growth algorithm, and the neural network is then automatically self-trained. In addition, the proposed strategy uses the neural network to build a map between the photon densities on the surface of and inside the imaged object, rather than an end-to-end network that directly infers the source distribution from the surface photon density. The mapped internal photon density is then converted into the source distribution through multiplication with the stiffness matrix. Simulation, phantom, and mouse studies are carried out. Results demonstrate the feasibility of the proposed self-training strategy.
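The random seed growth algorithm is only named in the abstract; as a hedged illustration, the sketch below shows one plausible reading of such a generator, namely random region growing on a voxel grid that produces binary source maps with a random number of targets of random shape and size. The function name, size ranges, and grid dimensions are hypothetical, not taken from the paper.
```python
# Sketch (assumption): a "random seed growth"-style generator implemented as
# random region growing, yielding binary source maps with random target
# number, shape, and size for self-training data generation.
import numpy as np

def random_seed_growth(shape=(32, 32, 32), max_voxels=200, seed=None):
    rng = np.random.default_rng(seed)
    vol = np.zeros(shape, dtype=bool)
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(int(rng.integers(1, 4))):             # random number of targets (1-3)
        start = tuple(int(rng.integers(0, s)) for s in shape)
        region, frontier = {start}, [start]
        target_size = int(rng.integers(20, max_voxels))  # random target size
        while frontier and len(region) < target_size:
            voxel = frontier.pop(int(rng.integers(0, len(frontier))))  # random growth order -> random shape
            for d in neighbours:
                nb = tuple(v + o for v, o in zip(voxel, d))
                if all(0 <= c < s for c, s in zip(nb, shape)) and nb not in region:
                    region.add(nb)
                    frontier.append(nb)
        for voxel in region:
            vol[voxel] = True
    return vol

# Usage sketch: labels = random_seed_growth(seed=0); a finite element forward
# model would then map each random source map to surface photon densities to
# form training pairs.
```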
Collapse
|
147
|
Tang Y, Gao X, Wang W, Dan Y, Zhou L, Su S, Wu J, Lv H, He Y. Automated Detection of Epiretinal Membranes in OCT Images Using Deep Learning. Ophthalmic Res 2022; 66:238-246. [PMID: 36170844 DOI: 10.1159/000525929] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Accepted: 06/08/2022] [Indexed: 11/19/2022]
Abstract
INTRODUCTION Development and validation of a deep learning algorithm to automatically identify and locate epiretinal membrane (ERM) regions in OCT images. METHODS OCT images of 468 eyes were retrospectively collected from a total of 404 ERM patients. One expert manually annotated the ERM regions for all images. A total of 422 images (90%) were used as the training dataset and the remaining 46 images (10%) as the validation dataset for deep learning algorithm training and validation. One senior and one junior clinician read the images, and the diagnostic results were compared. RESULTS The algorithm accurately segmented and located the ERM regions in OCT images, with an image-level accuracy of 95.65% and an ERM region-level accuracy of 90.14%. In comparison experiments, the accuracies of the junior clinician improved from 85.00% and 61.29% without the assistance of the algorithm to 100.00% and 90.32% with the assistance of the algorithm. The corresponding results for the senior clinician were 96.15% and 95.00% without the assistance of the algorithm, and 96.15% and 97.50% with the assistance of the algorithm. CONCLUSIONS The developed deep learning algorithm can accurately segment ERM regions in OCT images. This deep learning approach may help clinicians reach diagnoses with better accuracy and efficiency.
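As a small worked check of the reported figures, the image-level accuracy is a simple proportion of correctly read scans: 44 of the 46 validation images corresponds to 44/46 ≈ 95.65%, matching the reported value. The exact correct/total counts behind the region-level and clinician figures are not given, so the helper below is illustrative only.
```python
# Illustrative helper: accuracy as a percentage of correct readings.
def accuracy_pct(correct: int, total: int) -> float:
    return 100.0 * correct / total

print(f"{accuracy_pct(44, 46):.2f}%")  # 95.65% -- consistent with the reported image-level accuracy
```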
Collapse
Affiliation(s)
- Yong Tang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Xiaorong Gao
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Weijia Wang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Yujiao Dan
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Linjing Zhou
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Song Su
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Jiali Wu
- Department of Anesthesiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Hongbin Lv
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Yue He
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| |
Collapse
|
148
|
Oh C, Chung JY, Han Y. An End-to-End Recurrent Neural Network for Radial MR Image Reconstruction. SENSORS (BASEL, SWITZERLAND) 2022; 22:7277. [PMID: 36236376 PMCID: PMC9572393 DOI: 10.3390/s22197277] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 09/20/2022] [Accepted: 09/23/2022] [Indexed: 06/16/2023]
Abstract
Recent advances in deep learning have contributed greatly to the field of parallel MR imaging, where a reduced amount of k-space data is acquired to accelerate imaging time. In our previous work, we proposed a deep learning method to reconstruct MR images directly from k-space data acquired with Cartesian trajectories. However, MRI utilizes various non-Cartesian trajectories, such as radial trajectories, with various numbers of multi-channel RF coils according to the purpose of an MRI scan. Thus, it is important for a reconstruction network to efficiently unfold aliasing artifacts due to undersampling and to combine multi-channel k-space data into single-channel data. In this work, a neural network named 'ETER-net' is utilized to reconstruct an MR image directly from k-space data acquired with Cartesian and non-Cartesian trajectories and multi-channel RF coils. In the proposed image reconstruction network, a domain-transform network converts k-space data into a rough image, which is then refined by a following network to reconstruct the final image. We also analyze loss functions, including adversarial and perceptual losses, to improve the network performance. For experiments, we acquired k-space data on a 3T MRI scanner with Cartesian and radial trajectories to show that the proposed network learns a direct mapping between k-space and the corresponding image, and to demonstrate its practical applications. In our experiments, the proposed method showed satisfactory performance in reconstructing images from undersampled single- or multi-channel k-space data with reduced image artifacts. In conclusion, the proposed method is a deep-learning-based MR reconstruction network that can be used as a unified solution for parallel MRI, where k-space data are acquired with various scanning trajectories.
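As a hedged stand-in for the two-stage idea (a learned domain transform from k-space to a rough image, followed by CNN refinement), the PyTorch sketch below replaces ETER-net's recurrent domain-transform network with a single fully connected layer. All shapes, layer sizes, and names are illustrative assumptions, not the published architecture.
```python
# Sketch (assumptions): "domain transform + refinement" reconstruction directly
# from multi-channel k-space. A linear layer stands in for the recurrent domain
# transform; sizes are illustrative only.
import torch
import torch.nn as nn

class DirectRecon(nn.Module):
    def __init__(self, n_coils=4, n_readout=32, n_spokes=32, img=32):
        super().__init__()
        in_dim = 2 * n_coils * n_readout * n_spokes          # real/imag parts of multi-coil k-space
        self.img = img
        self.domain_transform = nn.Linear(in_dim, img * img)  # k-space -> rough image
        self.refine = nn.Sequential(                           # CNN refinement stage
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, kspace):            # kspace: (B, n_coils, n_readout, n_spokes), complex dtype
        x = torch.view_as_real(kspace).flatten(1)               # -> (B, in_dim)
        rough = self.domain_transform(x).view(-1, 1, self.img, self.img)
        return rough + self.refine(rough)                       # residual refinement

# Usage sketch: out = DirectRecon()(kspace) for complex kspace of shape (B, 4, 32, 32);
# train with an L1 loss (optionally plus adversarial/perceptual terms) against the
# fully sampled reference image.
```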
Collapse
Affiliation(s)
- Changheun Oh
- Neuroscience Research Institute, Gachon University, Incheon 21565, Korea
| | - Jun-Young Chung
- Department of Neuroscience, College of Medicine, Gachon University, Incheon 21565, Korea
| | - Yeji Han
- Department of Biomedical Engineering, Gachon University, Incheon 21936, Korea
| |
Collapse
|
149
|
Soleymani F, Paquet E, Viktor H, Michalowski W, Spinello D. Protein-protein interaction prediction with deep learning: A comprehensive review. Comput Struct Biotechnol J 2022; 20:5316-5341. [PMID: 36212542 PMCID: PMC9520216 DOI: 10.1016/j.csbj.2022.08.070] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 08/29/2022] [Accepted: 08/30/2022] [Indexed: 11/15/2022] Open
Abstract
Most proteins perform their biological function by interacting with themselves or other molecules. Thus, one may obtain biological insights into protein functions, disease prevalence, and therapy development by identifying protein-protein interactions (PPI). However, finding the interacting and non-interacting protein pairs through experimental approaches is labour-intensive and time-consuming, owing to the variety of proteins. Hence, protein-protein interaction and protein-ligand binding problems have drawn attention in the fields of bioinformatics and computer-aided drug discovery. Deep learning methods paved the way for scientists to predict the 3-D structure of proteins from genomes, predict the functions and attributes of a protein, and modify and design new proteins to provide desired functions. This review focuses on recent deep learning methods applied to problems including predicting protein functions, protein-protein interaction and their sites, protein-ligand binding, and protein design.
Collapse
Affiliation(s)
- Farzan Soleymani
- Department of Mechanical Engineering, University of Ottawa, Ottawa, ON, Canada
| | - Eric Paquet
- National Research Council, 1200 Montreal Road, Ottawa, ON K1A 0R6, Canada
| | - Herna Viktor
- School of Electrical Engineering and Computer Science, University of Ottawa, ON, Canada
| | | | - Davide Spinello
- Department of Mechanical Engineering, University of Ottawa, Ottawa, ON, Canada
| |
Collapse
|
150
|
Braunstorfer L, Romanowicz J, Powell AJ, Pattee J, Browne LP, van der Geest RJ, Moghari MH. Non-contrast free-breathing whole-heart 3D cine cardiovascular magnetic resonance with a novel 3D radial leaf trajectory. Magn Reson Imaging 2022; 94:64-72. [PMID: 36122675 DOI: 10.1016/j.mri.2022.09.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 08/18/2022] [Accepted: 09/13/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE To develop and validate a non-contrast free-breathing whole-heart 3D cine steady-state free precession (SSFP) sequence with a novel 3D radial leaf trajectory. METHODS We used a respiratory navigator to trigger acquisition of 3D cine data at end-expiration to minimize respiratory motion in our 3D cine SSFP sequence. We developed a novel 3D radial leaf trajectory to reduce gradient jumps and associated eddy-current artifacts. We then reconstructed the 3D cine images with a resolution of 2.0 mm³ using an iterative nonlinear optimization algorithm. Prospective validation was performed by comparing ventricular volumetric measurements from a conventional breath-hold 2D cine ventricular short-axis stack against the non-contrast free-breathing whole-heart 3D cine dataset in each patient (n = 13). RESULTS All 3D cine SSFP acquisitions were successful and mean scan time was 07:09 ± 01:31 min. End-diastolic ventricular volumes for the left ventricle (LV) and right ventricle (RV) measured from the 3D datasets were smaller than those from 2D (LV: 159.99 ± 42.99 vs. 173.16 ± 47.42; RV: 180.35 ± 46.08 vs. 193.13 ± 49.38; p-value ≤ 0.044; bias < 8%), whereas ventricular end-systolic volumes were more comparable (LV: 79.12 ± 26.78 vs. 78.46 ± 25.35; RV: 97.18 ± 32.35 vs. 102.42 ± 32.53; p-value ≥ 0.190; bias < 6%). The 3D cine data had a lower subjective image quality score. CONCLUSION Our non-contrast free-breathing whole-heart 3D cine sequence with a novel leaf trajectory was robust and yielded smaller ventricular end-diastolic volumes compared to 2D cine imaging. It has the potential to make examinations easier and more comfortable for patients.
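The reported end-diastolic bias of under 8% can be checked directly from the quoted mean volumes; the small sketch below assumes the bias is expressed relative to the 2D cine reference, a convention not stated explicitly in the abstract.
```python
# Sketch: percent bias of 3D cine volumes relative to the 2D cine reference,
# using the mean end-diastolic volumes quoted in the abstract.
def percent_bias(v3d, v2d):
    return 100.0 * (v3d - v2d) / v2d

print(f"LV EDV bias: {percent_bias(159.99, 173.16):+.1f}%")  # about -7.6%, within the reported <8%
print(f"RV EDV bias: {percent_bias(180.35, 193.13):+.1f}%")  # about -6.6%, within the reported <8%
```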
Collapse
Affiliation(s)
- Lukas Braunstorfer
- Department of Cardiology, Boston Children's Hospital, Department of Pediatrics, Harvard Medical School, Boston, MA, USA; Department of Informatics, Technical University of Munich, Munich, BY, Germany.
| | - Jennifer Romanowicz
- Department of Cardiology, Boston Children's Hospital, Department of Pediatrics, Harvard Medical School, Boston, MA, USA; Department of Pediatrics, Section of Cardiology, Children's Hospital Colorado, School of Medicine, The University of Colorado, CO, USA
| | - Andrew J Powell
- Department of Cardiology, Boston Children's Hospital, Department of Pediatrics, Harvard Medical School, Boston, MA, USA
| | - Jack Pattee
- Department of Biostatistics and Informatics, Colorado School of Public Health, CO, USA
| | - Lorna P Browne
- Department of Radiology, Children's Hospital Colorado, and School of Medicine, The University of Colorado, CO, USA
| | - Rob J van der Geest
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Mehdi H Moghari
- Department of Radiology, Children's Hospital Colorado, and School of Medicine, The University of Colorado, CO, USA
| |
Collapse
|