201
Calivà F, Namiri NK, Dubreuil M, Pedoia V, Ozhinsky E, Majumdar S. Studying osteoarthritis with artificial intelligence applied to magnetic resonance imaging. Nat Rev Rheumatol 2022; 18:112-121. [PMID: 34848883] [DOI: 10.1038/s41584-021-00719-7]
Abstract
The 3D nature and soft-tissue contrast of MRI make it an invaluable tool for osteoarthritis research, by facilitating the elucidation of disease pathogenesis and progression. The recent increasing employment of MRI has certainly been stimulated by major advances that are due to considerable investment in research, particularly related to artificial intelligence (AI). These AI-related advances are revolutionizing the use of MRI in clinical research by augmenting activities ranging from image acquisition to post-processing. Automation is key to reducing the long acquisition times of MRI, conducting large-scale longitudinal studies and quantitatively defining morphometric and other important clinical features of both soft and hard tissues in various anatomical joints. Deep learning methods have been used recently for multiple applications in the musculoskeletal field to improve understanding of osteoarthritis. Compared with labour-intensive human efforts, AI-based methods have advantages and potential in all stages of imaging, as well as post-processing steps, including aiding diagnosis and prognosis. However, AI-based methods also have limitations, including the arguably limited interpretability of AI models. Given that the AI community is highly invested in uncovering uncertainties associated with model predictions and improving their interpretability, we envision future clinical translation and progressive increase in the use of AI algorithms to support clinicians in optimizing patient care.
Affiliation(s)
- Francesco Calivà
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Nikan K Namiri
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Maureen Dubreuil
- Section of Rheumatology, Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Eugene Ozhinsky
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
202
Dinh TQ, Xiong Y, Huang Z, Vo T, Mishra A, Kim WH, Ravi SN, Singh V. Performing Group Difference Testing on Graph Structured Data From GANs: Analysis and Applications in Neuroimaging. IEEE Trans Pattern Anal Mach Intell 2022; 44:877-889. [PMID: 32763848] [PMCID: PMC7867665] [DOI: 10.1109/tpami.2020.3013433]
Abstract
Generative adversarial networks (GANs) have emerged as a powerful generative model in computer vision. Given their impressive abilities in generating highly realistic images, they are also being used in novel ways in applications in the life sciences. This raises an interesting question when GANs are used in scientific or biomedical studies. Consider the setting where we are restricted to only using the samples from a trained GAN for downstream group difference analysis (and do not have direct access to the real data). Will we obtain similar conclusions? In this work, we explore if "generated" data, i.e., sampled from such GANs can be used for performing statistical group difference tests in cases versus controls studies, common across many scientific disciplines. We provide a detailed analysis describing regimes where this may be feasible. We complement the technical results with an empirical study focused on the analysis of cortical thickness on brain mesh surfaces in an Alzheimer's disease dataset. To exploit the geometric nature of the data, we use simple ideas from spectral graph theory to show how adjustments to existing GANs can yield improvements. We also give a generalization error bound by extending recent results on Neural Network Distance. To our knowledge, our work offers the first analysis assessing whether the Null distribution in "healthy versus diseased subjects" type statistical testing using data generated from the GANs coincides with the one obtained from the same analysis with real data. The code is available at https://github.com/yyxiongzju/GLapGAN.
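For readers unfamiliar with the downstream analysis in question: a cases-versus-controls group difference test of this kind reduces, per vertex, to a two-sample statistic. The NumPy sketch below runs Welch's t on synthetic stand-ins for (GAN-sampled) cortical-thickness maps; the helper, group sizes, and effect size are illustrative choices, not taken from the paper:

```python
import numpy as np

def welch_t(x, y):
    """Welch's two-sample t statistic, computed independently per feature
    (here, per mesh vertex)."""
    nx, ny = x.shape[0], y.shape[0]
    num = x.mean(axis=0) - y.mean(axis=0)
    den = np.sqrt(x.var(axis=0, ddof=1) / nx + y.var(axis=0, ddof=1) / ny)
    return num / den

rng = np.random.default_rng(0)
# 100 "cases" and 100 "controls" over 5 vertices, with a true group
# effect injected at vertex 0 only.
controls = rng.normal(2.5, 0.3, size=(100, 5))
cases = rng.normal(2.5, 0.3, size=(100, 5))
cases[:, 0] -= 0.4  # simulated atrophy at one vertex

t = welch_t(cases, controls)  # |t[0]| is large; null vertices stay small
```

Whether running the same test on generated rather than real samples reproduces the null distribution is precisely the question the paper formalizes.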
203
|
Huang J, Ding W, Lv J, Yang J, Dong H, Del Ser J, Xia J, Ren T, Wong ST, Yang G. Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information. APPL INTELL 2022; 52:14693-14710. [PMID: 36199853 PMCID: PMC9526695 DOI: 10.1007/s10489-021-03092-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/09/2021] [Indexed: 12/24/2022]
Abstract
In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, triage, prognosis, and treatment planning. However, MRI suffers from an inherent slow data acquisition process because data is collected sequentially in k-space. In recent years, most MRI reconstruction methods proposed in the literature focus on holistic image reconstruction rather than enhancing the edge information. This work steps aside this general trend by elaborating on the enhancement of edge information. Specifically, we introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction by incorporating multi-view information. The dual discriminator design aims to improve the edge information in MRI reconstruction. One discriminator is used for holistic image reconstruction, whereas the other one is responsible for enhancing edge information. An improved U-Net with local and global residual learning is proposed for the generator. Frequency channel attention blocks (FCA Blocks) are embedded in the generator for incorporating attention mechanisms. Content loss is introduced to train the generator for better reconstruction quality. We performed comprehensive experiments on Calgary-Campinas public brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies of residual learning were conducted on the MICCAI13 dataset to validate the proposed modules. Results show that our PIDD-GAN provides high-quality reconstructed MR images, with well-preserved edge information. The time of single-image reconstruction is below 5ms, which meets the demand of faster processing.
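An edge-focused discriminator in such a design consumes an explicit edge map of the image. A minimal Sobel gradient-magnitude extractor (plain NumPy, valid region only) illustrates the kind of input such a branch would see; this is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Accumulate the correlation of each kernel tap with the shifted image.
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# A toy "MR slice": flat background containing one bright square.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = sobel_edges(img)  # nonzero only along the square's boundary
```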
Affiliation(s)
- Jiahao Huang
- College of Information Science and Technology, Zhejiang Shuren University, 310015 Hangzhou, China
- National Heart and Lung Institute, Imperial College London, London, UK
- Weiping Ding
- School of Information Science and Technology, Nantong University, 226019 Nantong, China
- Jun Lv
- School of Computer and Control Engineering, Yantai University, 264005 Yantai, China
- Jingwen Yang
- Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China
- Hao Dong
- Center on Frontiers of Computing Studies, Peking University, Beijing, China
- Javier Del Ser
- TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
- University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain
- Jun Xia
- Department of Radiology, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University Health Science Center, Shenzhen, China
- Tiaojuan Ren
- College of Information Science and Technology, Zhejiang Shuren University, 310015 Hangzhou, China
- Stephen T. Wong
- Systems Medicine and Bioengineering Department, Departments of Radiology and Pathology, Houston Methodist Cancer Center, Houston Methodist Hospital, Weill Cornell Medicine, 77030 Houston, TX, USA
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
204
Wu X, Li C, Zeng X, Wei H, Deng HW, Zhang J, Xu M. CryoETGAN: Cryo-Electron Tomography Image Synthesis via Unpaired Image Translation. Front Physiol 2022; 13:760404. [PMID: 35370760] [PMCID: PMC8970048] [DOI: 10.3389/fphys.2022.760404]
Abstract
Cryo-electron tomography (Cryo-ET) has been regarded as a revolution in structural biology and can reveal molecular sociology. Its unprecedented quality enables it to visualize cellular organelles and macromolecular complexes at nanometer resolution with native conformations. Motivated by developments in nanotechnology and machine learning, establishing machine learning approaches such as classification, detection and averaging for Cryo-ET image analysis has inspired broad interest. Yet, deep learning-based methods for biomedical imaging typically require large labeled datasets for good results, which can be a great challenge due to the expense of obtaining and labeling training data. To deal with this problem, we propose a generative model to simulate Cryo-ET images efficiently and reliably: CryoETGAN. This cycle-consistent and Wasserstein generative adversarial network (GAN) is able to generate images with an appearance similar to the original experimental data. Quantitative and visual grading results on generated images are provided to show that the results of our proposed method achieve better performance compared to the previous state-of-the-art simulation methods. Moreover, CryoETGAN is stable to train and capable of generating plausibly diverse image samples.
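The unpaired-translation machinery rests on a cycle-consistency penalty: mapping a sample to the other domain and back should recover the original. The NumPy sketch below uses toy affine "generators" purely to illustrate the loss; the functions, shapes, and λ = 10 are placeholders, not CryoETGAN's actual networks or settings:

```python
import numpy as np

def l1(a, b):
    return np.mean(np.abs(a - b))

def cycle_loss(x, y, G, F, lam=10.0):
    """CycleGAN-style cycle-consistency term:
    lam * ( ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 )."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

# Toy stand-ins for the two image domains and the two generators.
G = lambda x: 2.0 * x + 1.0      # "density map" -> "tomogram"
F = lambda y: (y - 1.0) / 2.0    # exact inverse mapping back

rng = np.random.default_rng(0)
x = rng.random((4, 8, 8))
y = rng.random((4, 8, 8))
loss = cycle_loss(x, y, G, F)    # near zero: the toy generators invert exactly
```

In the full model this term is added to adversarial (here, Wasserstein) losses; a generator pair that fails to invert pays a large cycle penalty.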
Affiliation(s)
- Xindi Wu
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
- Chengkun Li
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Xiangrui Zeng
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
- Haocheng Wei
- Department of Electrical & Computer Engineering, University of Toronto, Toronto, ON, Canada
- Hong-Wen Deng
- Center for Biomedical Informatics & Genomics, Tulane University, New Orleans, LA, United States
- Jing Zhang
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Min Xu
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, United States
205
Accelerate gas diffusion-weighted MRI for lung morphometry with deep learning. Eur Radiol 2022; 32:702-713. [PMID: 34255160] [PMCID: PMC8276538] [DOI: 10.1007/s00330-021-08126-y]
Abstract
OBJECTIVES Multiple b-value gas diffusion-weighted MRI (DW-MRI) enables non-invasive and quantitative assessment of lung morphometry, but its long acquisition time is not well-tolerated by patients. We aimed to accelerate multiple b-value gas DW-MRI for lung morphometry using deep learning. METHODS A deep cascade of residual dense network (DC-RDN) was developed to reconstruct high-quality DW images from highly undersampled k-space data. Hyperpolarized 129Xe lung ventilation images were acquired from 101 participants and were retrospectively collected to generate synthetic DW-MRI data to train the DC-RDN. Afterwards, the performance of the DC-RDN was evaluated on retrospectively and prospectively undersampled multiple b-value 129Xe MRI datasets. RESULTS Each slice with a size of 64 × 64 × 5 could be reconstructed within 7.2 ms. For the retrospective test data, the DC-RDN showed significant improvement on all quantitative metrics compared with the conventional reconstruction methods (p < 0.05). The apparent diffusion coefficient (ADC) and morphometry parameters were not significantly different between the fully sampled and DC-RDN reconstructed images (p > 0.05). For the prospectively accelerated acquisition, the required breath-holding time was reduced from 17.8 to 4.7 s with an acceleration factor of 4. Meanwhile, the prospectively reconstructed results showed good agreement with the fully sampled images, with mean differences of -0.72% and -0.74% in global mean ADC and mean linear intercept (Lm) values, respectively. CONCLUSIONS The DC-RDN is effective in accelerating multiple b-value gas DW-MRI while maintaining accurate estimation of lung microstructural morphometry, facilitating the clinical potential of studying lung diseases with hyperpolarized DW-MRI. KEY POINTS
• The deep cascade of residual dense network allowed fast and high-quality reconstruction of multiple b-value gas diffusion-weighted MRI at an acceleration factor of 4.
• The apparent diffusion coefficient and morphometry parameters were not significantly different between the fully sampled images and the reconstructed results (p > 0.05).
• The required breath-holding time was reduced from 17.8 to 4.7 s, and each slice with a size of 64 × 64 × 5 could be reconstructed within 7.2 ms.
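The morphometry pipeline being accelerated rests on fitting the multi-b-value signal decay; under the simplest mono-exponential model, the ADC is the (negated) slope of ln S against b. A toy NumPy sketch follows; the b-values, units, and ADC value are illustrative, not the study's acquisition settings, and the study's morphometry parameters (e.g. Lm) come from a more detailed decay model of which this log-linear fit is the simplest special case:

```python
import numpy as np

# Mono-exponential diffusion decay: S(b) = S0 * exp(-b * ADC).
b = np.array([0.0, 10.0, 20.0, 30.0, 40.0])  # b-values, s/cm^2 (illustrative)
true_adc = 0.035                              # cm^2/s (illustrative)
s0 = 1.0
signal = s0 * np.exp(-b * true_adc)

# ADC from a linear least-squares fit of ln(S) against b.
slope, intercept = np.polyfit(b, np.log(signal), 1)
adc = -slope                                  # recovers true_adc
```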
206
Khor HG, Ning G, Zhang X, Liao H. Ultrasound Speckle Reduction using Wavelet-based Generative Adversarial Network. IEEE J Biomed Health Inform 2022; 26:3080-3091. [DOI: 10.1109/jbhi.2022.3144628]
207
Li Y, Yang H, Xie D, Dreizin D, Zhou F, Wang Z. POCS-Augmented CycleGAN for MR Image Reconstruction. Appl Sci (Basel) 2022; 12:114. [PMID: 37465648] [PMCID: PMC10353773] [DOI: 10.3390/app12010114]
Abstract
Recent years have seen increased research interest in replacing the computationally intensive magnetic resonance (MR) image reconstruction process with deep neural networks. We claim in this paper that the traditional image reconstruction methods and deep learning (DL) are mutually complementary and can be combined to achieve better image reconstruction quality. To test this hypothesis, a hybrid DL image reconstruction method was proposed by combining a state-of-the-art deep learning network, namely a generative adversarial network with cycle loss (CycleGAN), with a traditional data reconstruction algorithm: Projection Onto Convex Sets (POCS). The output of the first training iteration of the CycleGAN was updated by POCS and used as extra training data for the second training iteration of the CycleGAN. The method was validated using sub-sampled magnetic resonance imaging data. Compared with other state-of-the-art DL-based methods (e.g., U-Net, GAN, and RefineGAN) and a traditional method (compressed sensing), our method showed the best reconstruction results.
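The POCS update used to refresh the training data is, at its core, a data-consistency projection: wherever k-space was actually sampled, the network output's spectrum is overwritten with the measured values. A minimal NumPy sketch with a toy image size and a random mask (not the paper's sampling scheme):

```python
import numpy as np

def pocs_data_consistency(img, k_measured, mask):
    """Project an image estimate onto the set of images consistent with the
    measured k-space: keep predicted spectrum where unsampled, replace it
    with the acquired samples where sampled."""
    k_pred = np.fft.fft2(img)
    k_dc = np.where(mask, k_measured, k_pred)
    return np.fft.ifft2(k_dc).real

rng = np.random.default_rng(0)
truth = rng.random((16, 16))
k_full = np.fft.fft2(truth)
mask = rng.random((16, 16)) < 0.3          # ~30% of k-space acquired
k_measured = np.where(mask, k_full, 0)

estimate = rng.random((16, 16))            # stand-in for a CycleGAN output
projected = pocs_data_consistency(estimate, k_measured, mask)
```

Projecting the ground-truth image is a no-op (its spectrum already matches the measurements), while any inconsistent estimate is moved toward the data.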
Affiliation(s)
- Yiran Li
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD 21201, USA
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19121, USA
- Hanlu Yang
- Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore, MD 21250, USA
- Danfeng Xie
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD 21201, USA
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19121, USA
- David Dreizin
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD 21201, USA
- Fuqing Zhou
- Department of Radiology, The First Affiliated Hospital of Nanchang University, Nanchang 330209, China
- Ze Wang
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD 21201, USA
208
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Inform Med Unlocked 2022. [DOI: 10.1016/j.imu.2022.100911]
209
Evaluation on the generalization of a learned convolutional neural network for MRI reconstruction. Magn Reson Imaging 2021; 87:38-46. [PMID: 34968699] [DOI: 10.1016/j.mri.2021.12.003]
Abstract
Recently, deep learning approaches with various network architectures have drawn significant attention from the magnetic resonance imaging (MRI) community because of their great potential for image reconstruction from undersampled k-space data in fast MRI. However, the robustness of a trained network when applied to test data that deviate from the training data is still an important open question. In this work, we focus on quantitatively evaluating the influence of image contrast, human anatomy, sampling pattern, undersampling factor, and noise level on the generalization of a trained network composed of a cascade of several CNNs and a data consistency layer, called a deep cascade of convolutional neural network (DC-CNN). The DC-CNN is trained on datasets with different image contrast, human anatomy, sampling pattern, undersampling factor, and noise level, and then applied to test datasets consistent or inconsistent with the training datasets to assess the generalizability of the learned DC-CNN network. The results of our experiments show that reconstruction quality from the DC-CNN network is highly sensitive to sampling pattern, undersampling factor, and noise level, which are closely related to signal-to-noise ratio (SNR), and is relatively less sensitive to image contrast. We also show that a deviation of human anatomy between training and test data leads to a substantial reduction of image quality for the brain dataset, whereas performance is comparable for the chest and knee datasets, which have fewer anatomical details than brain images. This work provides some empirical understanding of the generalizability of trained networks when there are deviations between training and test data. It also demonstrates the potential of transfer learning for image reconstruction from datasets different from those used in training the network.
210
Li Z, Tian Q, Ngamsombat C, Cartmell S, Conklin J, Filho ALMG, Lo WC, Wang G, Ying K, Setsompop K, Fan Q, Bilgic B, Cauley S, Huang SY. High-fidelity fast volumetric brain MRI using synergistic wave-controlled aliasing in parallel imaging and a hybrid denoising generative adversarial network (HDnGAN). Med Phys 2021; 49:1000-1014. [PMID: 34961944] [DOI: 10.1002/mp.15427]
Abstract
PURPOSE The goal of this study is to leverage an advanced fast imaging technique, wave-controlled aliasing in parallel imaging (Wave-CAIPI), and a generative adversarial network (GAN) for denoising to achieve accelerated high-quality high-signal-to-noise-ratio (SNR) volumetric MRI. METHODS Three-dimensional (3D) T2-weighted fluid-attenuated inversion recovery (FLAIR) image data were acquired on 33 multiple sclerosis (MS) patients using a prototype Wave-CAIPI sequence (acceleration factor R = 3×2, 2.75 minutes) and a standard T2-SPACE FLAIR sequence (R = 2, 7.25 minutes). A hybrid denoising GAN entitled "HDnGAN", consisting of a 3D generator and a 2D discriminator, was proposed to denoise highly accelerated Wave-CAIPI images. HDnGAN benefits from the improved image synthesis performance provided by the 3D generator and increased training samples from a limited number of patients for training the 2D discriminator. HDnGAN was trained and validated on data from 25 MS patients with the standard FLAIR images as the target and evaluated on data from 8 MS patients not seen during training. HDnGAN was compared to other denoising methods including AONLM, BM4D, MU-Net, and 3D GAN in qualitative and quantitative analysis of output images using the mean squared error (MSE) and VGG perceptual loss compared to standard FLAIR images, and a reader assessment by two neuroradiologists regarding sharpness, SNR, lesion conspicuity, and overall quality. Finally, the performance of these denoising methods was compared at higher noise levels using simulated data with added Rician noise. RESULTS HDnGAN effectively denoised low-SNR Wave-CAIPI images with sharpness and rich textural details, which could be adjusted by controlling the contribution of the adversarial loss to the total loss when training the generator. Quantitatively, HDnGAN (λ = 10⁻³) achieved low MSE and the lowest VGG perceptual loss. The reader study showed that HDnGAN (λ = 10⁻³) significantly improved the SNR of Wave-CAIPI images (P < 0.001), outperformed AONLM (P = 0.015), BM4D (P < 0.001), MU-Net (P < 0.001) and 3D GAN (λ = 10⁻³) (P < 0.001) regarding image sharpness, and outperformed MU-Net (P < 0.001) and 3D GAN (λ = 10⁻³) (P = 0.001) regarding lesion conspicuity. The overall quality score of HDnGAN (λ = 10⁻³) (4.25 ± 0.43) was significantly higher than those of Wave-CAIPI (3.69 ± 0.46, P = 0.003), BM4D (3.50 ± 0.71, P = 0.001), MU-Net (3.25 ± 0.75, P < 0.001), and 3D GAN (λ = 10⁻³) (3.50 ± 0.50, P < 0.001), with no significant difference compared to standard FLAIR images (4.38 ± 0.48, P = 0.333). The advantages of HDnGAN over other methods were more obvious at higher noise levels. CONCLUSION HDnGAN provides robust and feasible denoising while preserving rich textural detail in empirical volumetric MRI data. Our study using empirical patient data and systematic evaluation supports the use of HDnGAN in combination with modern fast imaging techniques such as Wave-CAIPI to achieve high-fidelity fast volumetric MRI and represents an important step toward the clinical translation of GANs.
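The higher-noise comparison relies on the standard Rician model for magnitude MR data, in which independent Gaussian noise corrupts the real and imaginary channels before the magnitude is taken. A NumPy sketch, with σ and the image values as arbitrary placeholders rather than the study's settings:

```python
import numpy as np

def add_rician_noise(img, sigma, rng):
    """Rician noise model for magnitude MRI: independent Gaussian noise on
    the real and imaginary channels, followed by the magnitude operation."""
    n_re = rng.normal(0.0, sigma, img.shape)
    n_im = rng.normal(0.0, sigma, img.shape)
    return np.sqrt((img + n_re) ** 2 + n_im ** 2)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)           # flat toy image
noisy = add_rician_noise(clean, sigma=5.0, rng=rng)
```

Unlike additive Gaussian noise, the result is non-negative and carries a small positive signal-dependent bias, which is why denoisers are often evaluated against this model specifically.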
Affiliation(s)
- Ziyu Li
- Department of Biomedical Engineering, Tsinghua University, Beijing, P.R. China
- Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Chanon Ngamsombat
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Thailand
- Samuel Cartmell
- Department of Radiology, Massachusetts General Hospital, Boston, USA
- John Conklin
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Department of Radiology, Massachusetts General Hospital, Boston, USA
- Augusto Lio M Gonçalves Filho
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Massachusetts General Hospital, Boston, USA
- Guangzhi Wang
- Department of Biomedical Engineering, Tsinghua University, Beijing, P.R. China
- Kui Ying
- Department of Engineering Physics, Tsinghua University, Beijing, P.R. China
- Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Qiuyun Fan
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Stephen Cauley
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
211
Kwak K, Giovanello KS, Bozoki A, Styner M, Dayan E. Subtyping of mild cognitive impairment using a deep learning model based on brain atrophy patterns. Cell Rep Med 2021; 2:100467. [PMID: 35028609] [PMCID: PMC8714856] [DOI: 10.1016/j.xcrm.2021.100467]
Abstract
Trajectories of cognitive decline vary considerably among individuals with mild cognitive impairment (MCI). To address this heterogeneity, subtyping approaches have been developed, with the objective of identifying more homogeneous subgroups. To date, subtyping of MCI has been based primarily on cognitive measures, often resulting in indistinct boundaries between subgroups and limited validity. Here, we introduce a subtyping method for MCI based solely upon brain atrophy. We train a deep learning model to differentiate between Alzheimer's disease (AD) and cognitively normal (CN) subjects based on whole-brain MRI features. We then deploy the trained model to classify MCI subjects based on whole-brain gray matter resemblance to AD-like or CN-like patterns. We subsequently validate the subtyping approach using cognitive, clinical, fluid biomarker, and molecular imaging data. Overall, the results suggest that atrophy patterns in MCI are sufficiently heterogeneous and can thus be used to subtype individuals into biologically and clinically meaningful subgroups.
Affiliation(s)
- Kichang Kwak
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Kelly S. Giovanello
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Andrea Bozoki
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Martin Styner
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Eran Dayan
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- for the Alzheimer's Disease Neuroimaging Initiative
212
Mohammad-Djafari A. Regularization, Bayesian Inference, and Machine Learning Methods for Inverse Problems. Entropy 2021; 23:1673. [PMID: 34945979] [PMCID: PMC8699938] [DOI: 10.3390/e23121673]
Abstract
Classical methods for inverse problems are mainly based on regularization theory, in particular those that are based on optimization of a criterion with two parts: a data-model matching term and a regularization term. Different choices for these two terms and a great number of optimization algorithms have been proposed. When these two terms are distance or divergence measures, they can have a Bayesian maximum a posteriori (MAP) interpretation, where the two terms correspond to the likelihood and prior-probability models, respectively. The Bayesian approach gives more flexibility in choosing these terms and, in particular, the prior term via hierarchical models and hidden variables. However, the Bayesian computations can become computationally very heavy. Machine learning (ML) methods such as classification, clustering, segmentation, and regression, based on neural networks (NN) and particularly convolutional NN, deep NN, physics-informed neural networks, etc., can be helpful for obtaining approximate practical solutions to inverse problems. In this tutorial article, particular examples of image denoising, image restoration, and computed-tomography (CT) image reconstruction illustrate this cooperation between ML and inversion.
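The two-term criterion the abstract refers to, and its MAP reading, can be made explicit in standard notation (not specific to this paper):

```latex
\hat{f} = \arg\min_{f} J(f), \qquad
J(f) = \Delta\bigl(g,\, H f\bigr) + \lambda\, R(f),
```

where \(g\) is the data, \(H\) the forward operator, \(\Delta\) the data-model matching term, \(R\) the regularizer, and \(\lambda > 0\) its weight. Choosing \(\Delta(g, Hf) = -\ln p(g \mid f)\) and \(\lambda R(f) = -\ln p(f)\) (up to additive constants) makes \(\hat{f}\) the Bayesian MAP estimate, since \(\ln p(f \mid g) = \ln p(g \mid f) + \ln p(f) - \ln p(g)\) and the evidence term \(\ln p(g)\) does not depend on \(f\).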
Affiliation(s)
- Ali Mohammad-Djafari
- Laboratoire des Signaux et Systèmes, CNRS, CentraleSupélec-University Paris Saclay, 91192 Gif-sur-Yvette, France
- International Science Consulting and Training (ISCT), 91440 Bures-sur-Yvette, France
- Scientific Leader of Shanfeng Company, Shaoxing 312352, China
213
Zhang Q, Du Q, Liu G. A whole-process interpretable and multi-modal deep reinforcement learning for diagnosis and analysis of Alzheimer's disease. J Neural Eng 2021; 18:066032. [PMID: 34753116] [DOI: 10.1088/1741-2552/ac37cc]
Abstract
Objective. Alzheimer's disease (AD), a common disease of the elderly with unknown etiology, adversely affects many people, especially with the aging of the population and the trend toward younger onset. Current artificial intelligence (AI) methods based on individual information or magnetic resonance imaging (MRI) can achieve good diagnostic sensitivity and specificity, but still face challenges of interpretability and clinical feasibility. In this study, we propose an interpretable multimodal deep reinforcement learning model for inferring pathological features and diagnosing AD. Approach. First, for better clinical feasibility, the compressed-sensing MRI image is reconstructed using an interpretable deep reinforcement learning model. The reconstructed MRI is then input into a fully convolutional neural network to generate a pixel-level disease probability risk map (DPM) of the whole brain for AD. The DPM of important brain regions and individual information are then input into an attention-based deep neural network to obtain the diagnosis and analyze biomarkers. We used 1349 multi-center samples to construct and test the model. Main results. The model obtained areas under the curve of 99.6% ± 0.2%, 97.9% ± 0.2%, and 96.1% ± 0.3% in ADNI, AIBL, and NACC, respectively. The model also provides an effective analysis of multimodal pathology, predicting imaging biomarkers in MRI and the weight of each item of individual information. The resulting model can not only accurately diagnose AD but also analyze potential biomarkers. Significance. The model builds a bridge between clinical practice and AI diagnosis and offers a viewpoint on the interpretability of AI technology.
Affiliation(s)
- Quan Zhang
- College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, People's Republic of China
- Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, Nankai University, Tianjin 300350, People's Republic of China
- Qian Du
- College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, People's Republic of China
- Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, Nankai University, Tianjin 300350, People's Republic of China
- Guohua Liu
- College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, People's Republic of China
- Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, Nankai University, Tianjin 300350, People's Republic of China
- Engineering Research Center of Thin Film Optoelectronics Technology, Ministry of Education, Nankai University, Tianjin 300350, People's Republic of China
214
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Front Radiol 2021; 1:781868. [PMID: 37492170] [PMCID: PMC10365109] [DOI: 10.3389/fradi.2021.781868]
Abstract
Artificial intelligence (AI), as an emerging technology, is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, with potential applications ranging from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based reconstruction methods are emphasized, organized by their methodological designs and their performance in handling volumetric imaging data. We expect that this review will help researchers understand how to adapt AI for medical imaging and what advantages can be achieved with the assistance of AI.
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Pengcheng Laboratory, Shenzhen, China
- Guohua Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Shu Liao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
215
Anisotropic neural deblurring for MRI acceleration. Int J Comput Assist Radiol Surg 2021; 17:315-327. [PMID: 34859362] [DOI: 10.1007/s11548-021-02535-6]
Abstract
PURPOSE MRI has become the tool of choice for brain imaging, providing unrivalled contrast between soft tissues, as well as a wealth of information about anatomy, function, and neurochemistry. Image quality, in terms of spatial resolution and noise, is strongly dependent on acquisition duration. A typical brain MRI scan may last several minutes, with total protocol duration often exceeding 30 minutes. Long scan duration leads to poor patient experience, long waiting times for appointments, and high costs. Therefore, shortening MRI scans is crucial. In this paper, we investigate the enhancement of low-resolution (LR) brain MRI scans, to enable shorter acquisition times without compromising the diagnostic value of the images. METHODS We propose a novel fully convolutional neural enhancement approach, optimized for accelerated LR MRI acquisitions obtained by reducing the acquisition matrix size only along the phase-encoding direction. The network is trained to transform the LR acquisitions into their high-resolution (HR) counterparts in an end-to-end manner. In contrast to previous neural MRI enhancement algorithms, such as DAGAN, the LR images used for training are real acquisitions rather than smoothed, downsampled versions of the HR images. RESULTS The proposed method is validated qualitatively and quantitatively for an acceleration factor of 4. Favourable comparisons are demonstrated against the state-of-the-art DeblurGAN and DAGAN algorithms in terms of PSNR and SSIM scores. The result was further confirmed by an image quality rating experiment performed by four senior neuroradiologists. CONCLUSIONS The proposed method may become a valuable tool for scan time reduction in brain MRI. In continuation of this research, the validation should be extended to larger datasets acquired with different imaging protocols and on MRI machines from several vendors.
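For reference, the PSNR metric used in the quantitative comparison above can be computed as in the sketch below. This is the standard definition; the `data_range` handling is a common convention, not a detail taken from this paper.

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio (dB) between a reference and a test image.
    If data_range is not given, the dynamic range of the reference is used."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    # higher PSNR means the test image is closer to the reference
    return 10.0 * np.log10(data_range ** 2 / mse)
```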
216
Li X, Jiang Y, Rodriguez-Andina JJ, Luo H, Yin S, Kaynak O. When medical images meet generative adversarial network: recent development and research opportunities. Discov Artif Intell 2021; 1:5. [DOI: 10.1007/s44163-021-00006-0]
Abstract
Deep learning techniques have promoted the rise of artificial intelligence (AI) and performed well in computer vision. Medical image analysis is an important application of deep learning that is expected to greatly reduce the workload of doctors, contributing to more sustainable health systems. However, most current AI methods for medical image analysis are based on supervised learning, which requires a lot of annotated data. The number of medical images available is usually small, and the acquisition of medical image annotations is an expensive process. The generative adversarial network (GAN), an unsupervised method that has become very popular in recent years, can simulate the distribution of real data and generate data that approximate real data. GANs open exciting new ways for medical image generation, expanding the number of medical images available for deep learning methods. Generated data can mitigate the problem of insufficient data or imbalanced data categories. Adversarial training is another contribution of GANs to medical imaging that has been applied to many tasks, such as classification, segmentation, and detection. This paper surveys the research status of GANs in medical imaging and analyzes several GAN methods commonly applied in this area, addressing GAN applications for both medical image synthesis and adversarial learning for other medical image tasks. Open challenges and future research directions are also discussed.
217
Shen D, Pathrose A, Sarnari R, Blake A, Berhane H, Baraboo JJ, Carr JC, Markl M, Kim D. Automated segmentation of biventricular contours in tissue phase mapping using deep learning. NMR Biomed 2021; 34:e4606. [PMID: 34476863] [PMCID: PMC8795858] [DOI: 10.1002/nbm.4606]
Abstract
Tissue phase mapping (TPM) is an MRI technique for quantification of regional biventricular myocardial velocities. Despite its potential, clinical use is limited by the labor-intensive manual segmentation of cardiac contours for all time frames. The purpose of this study was to develop a deep learning (DL) network for automated segmentation of TPM images without significant loss in segmentation and myocardial velocity quantification accuracy compared with manual segmentation. We implemented a multi-channel 3D (three-dimensional; 2D + time) dense U-Net trained on magnitude and phase images, combining cross-entropy, Dice, and Hausdorff distance loss terms to improve segmentation accuracy and suppress unnatural boundaries. The dense U-Net was trained and tested with 150 multi-slice, multi-phase TPM scans (114 for training, 36 for testing) from 99 heart transplant patients (44 females, 1-4 scans/patient), where the magnitude and velocity-encoded (Vx, Vy, Vz) images were used as input and the corresponding manual segmentation masks were used as reference. The accuracy of DL segmentation was evaluated using quantitative metrics (Dice scores, Hausdorff distance) and linear regression and Bland-Altman analyses of the resulting peak radial and longitudinal velocities (Vr and Vz). The mean segmentation time was about 2 h per patient for manual analysis and 1.9 ± 0.3 s for DL. Our network produced good accuracy (median Dice = 0.85 for the left ventricle (LV), 0.64 for the right ventricle (RV); Hausdorff distance = 3.17 pixels) compared with manual segmentation. Peak Vr and Vz measured from manual and DL segmentations were strongly correlated (R ≥ 0.88) and in good agreement (mean difference and limits of agreement for Vz and Vr were -0.05 ± 0.98 cm/s and -0.06 ± 1.18 cm/s for the LV, and -0.21 ± 2.33 cm/s and 0.46 ± 4.00 cm/s for the RV, respectively). The proposed multi-channel 3D dense U-Net reduced the segmentation time roughly 3,600-fold without significant loss of accuracy in tissue velocity measurements.
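The Dice and cross-entropy loss terms combined in the network above can be sketched as follows. This is a generic illustration of those two terms only (the paper additionally uses a Hausdorff distance term, which is omitted here); the weights and function names are our own hypothetical choices.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a probability map `pred` and a binary mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, clipped for numerical stability."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def combined_loss(pred, target, w_dice=0.5, w_bce=0.5):
    """Weighted sum of Dice and cross-entropy terms (weights are illustrative)."""
    return w_dice * dice_loss(pred, target) + w_bce * bce_loss(pred, target)
```

Dice measures region overlap (helpful for class imbalance), while cross-entropy supervises every pixel; combining them is a common design choice in segmentation.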
Affiliation(s)
- Daming Shen
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Ashitha Pathrose
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Roberto Sarnari
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Allison Blake
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Haben Berhane
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Justin J Baraboo
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- James C Carr
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Michael Markl
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Daniel Kim
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
218
Quan C, Zhou J, Zhu Y, Chen Y, Wang S, Liang D, Liu Q. Homotopic Gradients of Generative Density Priors for MR Image Reconstruction. IEEE Trans Med Imaging 2021; 40:3265-3278. [PMID: 34010128] [DOI: 10.1109/tmi.2021.3081677]
Abstract
Deep learning, particularly generative modeling, has recently demonstrated tremendous potential to speed up image reconstruction from reduced measurements. Rather than optimizing density priors directly, as existing generative models often do, this work exploits homotopic gradients of generative density priors (HGGDP) for magnetic resonance imaging (MRI) reconstruction by taking advantage of denoising score matching. More precisely, to tackle the low-dimensional-manifold and low-data-density-region issues of generative density priors, the target gradients are estimated in a higher-dimensional space: a more powerful noise-conditional score network is trained by forming high-dimensional tensors as the network input during training, and additional artificial noise is injected in the embedding space. At the reconstruction stage, a homotopy method is employed to pursue the density prior and boost reconstruction performance. Experimental results demonstrate the remarkable performance of HGGDP in terms of reconstruction accuracy; with only 10% of the k-space data, it can still generate images of high quality, as effectively as standard MRI reconstructions with fully sampled data.
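The score-based denoising idea underlying this approach can be illustrated in the simplest analytic setting, where the score is known in closed form. The 1D Gaussian setup and function names below are our own illustration (Tweedie's formula), not the paper's method: for noisy data y = x + n with n ~ N(0, sigma2), the posterior mean denoiser is y plus sigma2 times the score of the noise-smoothed density.

```python
import numpy as np

def gaussian_score(y, mu, var):
    """Score (gradient of the log-density) of N(mu, var), evaluated at y."""
    return -(y - mu) / var

def tweedie_denoise(y, mu, s2, sigma2):
    """Denoise y = x + n, with x ~ N(mu, s2) and n ~ N(0, sigma2), via
    Tweedie's formula: E[x | y] = y + sigma2 * score of the smoothed
    density N(mu, s2 + sigma2). A learned score network replaces the
    analytic score in score-matching reconstruction methods."""
    return y + sigma2 * gaussian_score(y, mu, s2 + sigma2)
```

Expanding the formula recovers the classical posterior mean (s2 * y + sigma2 * mu) / (s2 + sigma2), which confirms the identity in this toy case.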
219
Kavitha TS, Prasad KS. Hybridizing ant lion with whale optimization algorithm for compressed sensing MR image reconstruction via l1 minimization: an ALWOA strategy. Evol Intell 2021. [DOI: 10.1007/s12065-020-00475-9]
220
Ji S, Jeong J, Oh SH, Nam Y, Choi SH, Shin HG, Shin D, Jung W, Lee J. Quad-Contrast Imaging: Simultaneous Acquisition of Four Contrast-Weighted Images (PD-Weighted, T₂-Weighted, PD-FLAIR and T₂-FLAIR Images) With Synthetic T₁-Weighted Image, T₁- and T₂-Maps. IEEE Trans Med Imaging 2021; 40:3617-3626. [PMID: 34191724] [DOI: 10.1109/tmi.2021.3093617]
Abstract
Magnetic resonance imaging (MRI) can provide multiple contrast-weighted images using different pulse sequences and protocols. However, the long acquisition time of these images is a major challenge. To address this limitation, a new pulse sequence referred to as quad-contrast imaging is presented. The quad-contrast sequence enables the simultaneous acquisition of four contrast-weighted images (proton density (PD)-weighted, T2-weighted, PD-fluid attenuated inversion recovery (FLAIR), and T2-FLAIR), and the synthesis of T1-weighted images and T1- and T2-maps, in a single scan. The scan time is less than 6 min and is further reduced to 2 min 50 s using a deep learning-based parallel imaging reconstruction. The natively acquired quad contrasts demonstrate high image quality, comparable to that of conventional scans. The deep learning-based reconstruction successfully reconstructed highly accelerated data (acceleration factor 6), reporting smaller normalized root mean squared errors (NRMSEs) and higher structural similarities (SSIMs) than conventional generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction (mean NRMSE of 4.36% vs. 10.54%; mean SSIM of 0.990 vs. 0.953). In particular, the FLAIR contrast is natively acquired and does not suffer from lesion-like artifacts at the boundary of tissue and cerebrospinal fluid, differentiating the proposed method from synthetic imaging methods. The quad-contrast imaging method has the potential to be used in clinical routine as a rapid diagnostic tool.
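The NRMSE figures quoted above can be reproduced with a metric like the one below. Note that NRMSE normalization conventions vary (by the reference's dynamic range, its mean, or its norm); the paper's exact convention is not stated here, so this sketch uses range normalization as one common choice.

```python
import numpy as np

def nrmse(ref, test):
    """Root-mean-square error normalized by the reference's dynamic range.
    Returns a fraction (multiply by 100 for a percentage)."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    rmse = np.sqrt(np.mean((ref - test) ** 2))
    return rmse / (ref.max() - ref.min())
```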
221
Peng X, Sutton BP, Lam F, Liang ZP. DeepSENSE: Learning coil sensitivity functions for SENSE reconstruction using deep learning. Magn Reson Med 2021; 87:1894-1902. [PMID: 34825732] [DOI: 10.1002/mrm.29085]
Abstract
PURPOSE To improve the estimation of coil sensitivity functions from limited auto-calibration signal (ACS) data in SENSE-based reconstruction for brain imaging. METHODS We propose to use deep learning to estimate coil sensitivity functions by leveraging information from previous scans obtained with the same RF receiver system. Specifically, deep convolutional neural networks were designed to learn an end-to-end mapping from an initial sensitivity estimate to its high-resolution counterpart. Sensitivity alignment was further proposed to reduce the geometric variation caused by different subject positions and imaging FOVs. Cross-validation with a small set of datasets was performed to validate the learned neural network. Iterative SENSE reconstruction was adopted to evaluate the utility of the sensitivity functions from the proposed and conventional methods. RESULTS The proposed method produced improved sensitivity estimates and SENSE reconstructions compared with conventional methods, in terms of aliasing and noise suppression, with very limited ACS data. Cross-validation demonstrated the feasibility of learning coil sensitivity functions for brain imaging; the network trained on spoiled GRE data could also predict sensitivity functions for spin-echo and MPRAGE datasets. CONCLUSION A deep learning-based method has been proposed for improving the estimation of coil sensitivity functions. Experimental results demonstrate the feasibility and potential of the proposed method for improving SENSE-based reconstructions, especially when ACS data are limited.
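To see where the estimated sensitivity maps are used, here is a toy sketch of classical SENSE unaliasing for acceleration R = 2: each aliased pixel is modeled as a sensitivity-weighted sum of two true pixels, solved per pixel pair by least squares. This is the textbook SENSE model, not the paper's network; the aliasing convention below omits FFT scaling factors and is our own simplification.

```python
import numpy as np

def sense_unalias_r2(aliased, sens):
    """SENSE unaliasing for acceleration R = 2 along the first (phase) axis.
    aliased: (ncoil, ny//2, nx) aliased coil images
    sens:    (ncoil, ny, nx) coil sensitivity maps
    Returns the (ny, nx) unaliased image via per-pixel least squares."""
    ncoil, ny2, nx = aliased.shape
    ny = 2 * ny2
    out = np.zeros((ny, nx), dtype=complex)
    for y in range(ny2):
        for x in range(nx):
            # each aliased pixel mixes true pixels at rows y and y + ny2
            S = np.stack([sens[:, y, x], sens[:, y + ny2, x]], axis=1)
            b = aliased[:, y, x]
            rho, *_ = np.linalg.lstsq(S, b, rcond=None)
            out[y, x], out[y + ny2, x] = rho
    return out
```

Errors in the sensitivity maps propagate directly into this solve, which is why better sensitivity estimation (the paper's contribution) improves the final reconstruction.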
Affiliation(s)
- Xi Peng
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Bradley P Sutton
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Fan Lam
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Cancer Center at Illinois, Urbana, Illinois, USA
- Zhi-Pei Liang
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
222
Gassenmaier S, Küstner T, Nickel D, Herrmann J, Hoffmann R, Almansour H, Afat S, Nikolaou K, Othman AE. Deep Learning Applications in Magnetic Resonance Imaging: Has the Future Become Present? Diagnostics (Basel) 2021; 11:2181. [PMID: 34943418] [PMCID: PMC8700442] [DOI: 10.3390/diagnostics11122181]
Abstract
Deep learning technologies and applications represent one of the most important upcoming developments in radiology. The impact of these technologies on image acquisition and reporting may change daily clinical practice. The aim of this review is to present current deep learning technologies, with a focus on magnetic resonance image reconstruction. The first part of this manuscript concentrates on the basic technical principles that underlie deep learning image reconstruction. The second part highlights the translation of these techniques into clinical practice. The third part outlines the different aspects of image reconstruction techniques and reviews the current literature regarding image reconstruction and image post-processing in MRI. The promising results of the most recent studies indicate that deep learning will be a major player in radiology in the coming years. Apart from decision and diagnosis support, the major advantages of deep learning-based magnetic resonance image reconstruction are acquisition time reduction and improved image quality. The implementation of these techniques may help alleviate limited scanner availability via workflow acceleration. It can be assumed that this disruptive technology will permanently change daily routines and workflows.
Affiliation(s)
- Sebastian Gassenmaier
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Thomas Küstner
- Department of Diagnostic and Interventional Radiology, Medical Image and Data Analysis (MIDAS.lab), Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Dominik Nickel
- MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany
- Judith Herrmann
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Rüdiger Hoffmann
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Haidara Almansour
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Saif Afat
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Konstantin Nikolaou
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Ahmed E. Othman
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Department of Neuroradiology, University Medical Center, 55131 Mainz, Germany
223
Hu R, Yang R, Liu Y, Li X. Simulation and Mitigation of the Wrap-Around Artifact in the MRI Image. Front Comput Neurosci 2021; 15:746549. [PMID: 34744675] [PMCID: PMC8566355] [DOI: 10.3389/fncom.2021.746549]
Abstract
Magnetic resonance imaging (MRI) is an essential clinical imaging modality for diagnosis and medical research, but various artifacts arise during image acquisition, severely degrading perceptual quality and diagnostic efficacy. This study addresses one of the most frequent artifact sources, the wrap-around artifact. In particular, given that MRI data are limited and difficult to access, we first propose a method to simulate the wrap-around artifact on artifact-free MRI images, increasing the quantity of available MRI data. Then, an image restoration technique based on deep neural networks is proposed for wrap-around artifact reduction and overall perceptual quality improvement. This study presents a comprehensive analysis of both the occurrence and the reduction of the wrap-around artifact, with the aim of facilitating the detection and mitigation of MRI artifacts in clinical situations.
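Wrap-around (fold-over) occurs when the prescribed field of view is smaller than the object along the phase-encoding axis, so tissue outside the FOV folds back onto the image. One simple image-domain way to mimic this for data augmentation is sketched below; the paper's actual simulation pipeline may differ, and the function name is ours.

```python
import numpy as np

def simulate_wraparound(img, fov):
    """Simulate the wrap-around artifact along axis 0: rows beyond the
    prescribed field of view `fov` (fov < img.shape[0]) fold back and
    are added on top of the in-FOV rows, producing the aliased image."""
    ny = img.shape[0]
    out = np.zeros((fov,) + img.shape[1:], dtype=img.dtype)
    for start in range(0, ny, fov):
        chunk = img[start:start + fov]
        # each out-of-FOV segment superimposes from the top of the image
        out[:chunk.shape[0]] += chunk
    return out
```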
Affiliation(s)
- Runze Hu
- Department of Information Science and Technology, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Rui Yang
- Department of Information Science and Technology, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Yutao Liu
- School of Computer Science and Technology, Ocean University of China, Qingdao, China
- Xiu Li
- Department of Information Science and Technology, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
224
Cheng J, Cui ZX, Huang W, Ke Z, Ying L, Wang H, Zhu Y, Liang D. Learning Data Consistency and its Application to Dynamic MR Imaging. IEEE Trans Med Imaging 2021; 40:3140-3153. [PMID: 34252025] [DOI: 10.1109/tmi.2021.3096232]
Abstract
Magnetic resonance (MR) image reconstruction from undersampled k-space data can be formulated as a minimization problem involving data consistency and an image prior. Existing deep learning (DL)-based methods for MR reconstruction employ deep networks to exploit the prior information and integrate it into the reconstruction under an explicit data-consistency constraint, without considering the real distribution of the noise. In this work, we propose a new DL-based approach, termed Learned DC, that implicitly learns data consistency with deep networks, corresponding to the actual probability distribution of system noise. The data-consistency term and the prior knowledge are both embedded in the weights of the networks, providing an utterly implicit way of learning the reconstruction model. We evaluated the proposed approach on highly undersampled dynamic data, including dynamic cardiac cine data with up to 24-fold acceleration and dynamic rectum data with an acceleration factor equal to the number of phases. Experimental results demonstrate the superior performance of Learned DC compared with the state of the art, both quantitatively and qualitatively.
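The explicit data-consistency constraint that this paper replaces with a learned one can be written as a simple hard projection: overwrite the k-space values of the current image estimate at sampled locations with the measured data. The sketch below shows that classical baseline (2D Cartesian sampling assumed; the function name is ours).

```python
import numpy as np

def hard_data_consistency(x, y_meas, mask):
    """Hard data-consistency step for undersampled Cartesian MRI.
    x:      current image estimate (e.g., a network's output)
    y_meas: measured k-space (valid where mask == 1)
    mask:   binary sampling mask in k-space
    Returns the image whose k-space equals y_meas at sampled locations
    and the estimate's k-space elsewhere."""
    k = np.fft.fft2(x)
    k = np.where(mask.astype(bool), y_meas, k)
    return np.fft.ifft2(k)
```

In unrolled reconstruction networks this step typically alternates with a learned denoising/prior step; Learned DC's point is that a fixed rule like this ignores the actual noise distribution.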
225
Wu W, Hu D, Niu C, Yu H, Vardhanabhuti V, Wang G. DRONE: Dual-Domain Residual-based Optimization NEtwork for Sparse-View CT Reconstruction. IEEE Trans Med Imaging 2021; 40:3002-3014. [PMID: 33956627] [PMCID: PMC8591633] [DOI: 10.1109/tmi.2021.3078067]
Abstract
Deep learning has attracted rapidly increasing attention in the field of tomographic image reconstruction, especially for CT, MRI, PET/SPECT, ultrasound, and optical imaging. Among various topics, sparse-view CT remains a challenge: it targets a decent image reconstruction from very few projections. To address this challenge, in this article we propose a Dual-domain Residual-based Optimization NEtwork (DRONE). DRONE consists of three modules, respectively for embedding, refinement, and awareness. In the embedding module, a sparse sinogram is first extended; then, sparse-view artifacts are effectively suppressed in the image domain. After that, the refinement module recovers image details in the residual data and image domains synergistically. Finally, the results from the embedding and refinement modules in the data and image domains are regularized for optimized image quality in the awareness module, which ensures consistency between measurements and images with the kernel awareness of compressed sensing. The DRONE network is trained, validated, and tested on preclinical and clinical datasets, demonstrating its merits in edge preservation, feature recovery, and reconstruction accuracy.
226
Sandino CM, Cole EK, Alkan C, Chaudhari AS, Loening AM, Hyun D, Dahl J, Imran AAZ, Wang AS, Vasanawala SS. Upstream Machine Learning in Radiology. Radiol Clin North Am 2021; 59:967-985. [PMID: 34689881] [PMCID: PMC8549864] [DOI: 10.1016/j.rcl.2021.07.009]
Abstract
Machine learning (ML) and artificial intelligence (AI) have the potential to dramatically improve radiology practice at multiple stages of the imaging pipeline. Most of the attention has been garnered by applications focused on the end of the pipeline: image interpretation. However, this article reviews how AI/ML can be applied to improve upstream components of the imaging pipeline, including exam modality selection, hardware design, exam protocol selection, data acquisition, image reconstruction, and image processing. A breadth of applications and their potential for impact are shown across multiple imaging modalities, including ultrasound, computed tomography, and MRI.
Affiliation(s)
- Christopher M Sandino
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Elizabeth K Cole
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Cagan Alkan
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Akshay S Chaudhari
- Department of Biomedical Data Science, 1201 Welch Road, Stanford, CA 94305, USA; Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Andreas M Loening
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Dongwoon Hyun
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Jeremy Dahl
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Adam S Wang
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Shreyas S Vasanawala
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| |
Collapse
|
227
|
Küstner T, Munoz C, Psenicny A, Bustin A, Fuin N, Qi H, Neji R, Kunze K, Hajhosseiny R, Prieto C, Botnar R. Deep-learning based super-resolution for 3D isotropic coronary MR angiography in less than a minute. Magn Reson Med 2021; 86:2837-2852. [PMID: 34240753 DOI: 10.1002/mrm.28911] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 06/08/2021] [Accepted: 06/11/2021] [Indexed: 01/21/2023]
Abstract
PURPOSE To develop and evaluate a novel and generalizable super-resolution (SR) deep-learning framework for motion-compensated isotropic 3D coronary MR angiography (CMRA), which allows free-breathing acquisitions in less than a minute. METHODS Undersampled motion-corrected reconstructions have enabled free-breathing isotropic 3D CMRA in ~5-10 min acquisition times. In this work, we propose a deep-learning-based SR framework, combined with non-rigid respiratory motion compensation, to shorten the acquisition time to less than 1 min. A generative adversarial network (GAN) is proposed, consisting of two cascaded Enhanced Deep Residual Network generators, a trainable discriminator, and a perceptual loss network. A 16-fold increase in spatial resolution is achieved by reconstructing a high-resolution (HR) isotropic CMRA (0.9 mm3 or 1.2 mm3) from a low-resolution (LR) anisotropic CMRA (0.9 × 3.6 × 3.6 mm3 or 1.2 × 4.8 × 4.8 mm3). The impact and generalization of the proposed SRGAN approach to different input resolutions and operation at the image and patch level are investigated. SRGAN was evaluated on a retrospective downsampled cohort of 50 patients and on 16 prospective patients who were scanned with LR-CMRA in ~50 s under free-breathing. Vessel sharpness and length of the coronary arteries from the SR-CMRA were compared against the HR-CMRA. RESULTS SR-CMRA showed statistically significant (P < .001) improvements in vessel sharpness (34.1% ± 12.3%) and length (41.5% ± 8.1%) compared with LR-CMRA. Good generalization to input resolution and image/patch-level processing was found. SR-CMRA enabled recovery of coronary stenosis similar to HR-CMRA with comparable qualitative performance. CONCLUSION The proposed SR-CMRA provides a 16-fold increase in spatial resolution with comparable image quality to HR-CMRA while reducing the predictable scan time to <1 min.
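The 16-fold figure follows directly from the reported voxel sizes (a factor of 4 along each of two axes). A quick check, with a nearest-neighbour upsample of a toy volume to show the grid change (the paper's SR step is a trained GAN, not interpolation):

```python
import numpy as np

lr_voxel = (0.9, 3.6, 3.6)                        # acquired anisotropic voxel (mm)
hr_voxel = (0.9, 0.9, 0.9)                        # target isotropic voxel (mm)
factors = [int(round(l / h)) for l, h in zip(lr_voxel, hr_voxel)]
total = factors[0] * factors[1] * factors[2]      # 1 * 4 * 4 = 16-fold

# Nearest-neighbour upsample of a toy LR volume, just to show the grid change.
vol_lr = np.zeros((32, 8, 8))
vol_hr = (vol_lr.repeat(factors[0], axis=0)
                .repeat(factors[1], axis=1)
                .repeat(factors[2], axis=2))      # isotropic grid
```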
Collapse
Affiliation(s)
- Thomas Küstner
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Medical Image and Data Analysis, Department of Interventional and Diagnostic Radiology, University Hospital of Tübingen, Tübingen, Germany
| | - Camila Munoz
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
| | - Alina Psenicny
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
| | - Aurelien Bustin
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Centre de recherche Cardio-Thoracique de Bordeaux, IHU LIRYC, Electrophysiology and Heart Modeling Institute, Université de Bordeaux, INSERM, Bordeaux, France
| | - Niccolo Fuin
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
| | - Haikun Qi
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
| | - Radhouene Neji
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- MR Research Collaborations, Siemens Healthcare Limited, Frimley, United Kingdom
| | - Karl Kunze
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- MR Research Collaborations, Siemens Healthcare Limited, Frimley, United Kingdom
| | - Reza Hajhosseiny
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
| | - Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - René Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
| |
Collapse
|
228
|
Lahiri A, Wang G, Ravishankar S, Fessler JA. Blind Primed Supervised (BLIPS) Learning for MR Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3113-3124. [PMID: 34191725 PMCID: PMC8672324 DOI: 10.1109/tmi.2021.3093770] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
This paper examines a combined supervised-unsupervised framework involving dictionary-based blind learning and deep supervised learning for MR image reconstruction from under-sampled k-space data. A major focus of the work is to investigate the possible synergy of learned features in traditional shallow reconstruction using adaptive sparsity-based priors and deep prior-based reconstruction. Specifically, we propose a framework that uses an unrolled network to refine a blind dictionary learning-based reconstruction. We compare the proposed method with strictly supervised deep learning-based reconstruction approaches on several datasets of varying sizes and anatomies. We also compare the proposed method to alternative approaches for combining dictionary-based methods with supervised learning in MR image reconstruction. The improvements yielded by the proposed framework suggest that the blind dictionary-based approach preserves fine image details that the supervised approach can iteratively refine, indicating that the features learned by the two methods are complementary.
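The two-stage structure (a shallow, prior-based first pass whose output a supervised model then refines) can be sketched with stand-ins: a fixed blur plays the role of the artifact-laden stage-1 reconstruction, and a ridge-regularized linear map plays the role of the trained refinement network. Everything here is illustrative; BLIPS itself uses blind dictionary learning and an unrolled network:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_train = 32, 200

# Stand-in stage 1: a fixed circular blur corrupts each ground-truth signal,
# playing the role of an artifact-laden shallow reconstruction.
k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
k /= k.sum()
Hf = np.fft.fft(np.roll(k, -n // 2))              # blur frequency response

def stage1(x):
    return np.real(np.fft.ifft(np.fft.fft(x) * Hf))

X_true = rng.standard_normal((n_train, n))        # toy ground-truth training set
X_s1 = np.array([stage1(x) for x in X_true])

# Stand-in stage 2: a ridge-regularized linear map fit to refine stage-1 outputs
# toward ground truth, playing the role of the trained (supervised) network.
lam = 1e-3
W = np.linalg.solve(X_s1.T @ X_s1 + lam * np.eye(n), X_s1.T @ X_true)
X_refined = X_s1 @ W

err_stage1 = np.mean((X_s1 - X_true) ** 2)
err_refined = np.mean((X_refined - X_true) ** 2)  # refinement reduces stage-1 error
```

The point of the sketch is the division of labour: stage 1 produces a reasonable but imperfect reconstruction, and the supervised stage learns only the residual correction, mirroring the complementarity the abstract reports.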
Collapse
|
229
|
Agarwal D, Marques G, de la Torre-Díez I, Franco Martin MA, García Zapiraín B, Martín Rodríguez F. Transfer Learning for Alzheimer's Disease through Neuroimaging Biomarkers: A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2021; 21:7259. [PMID: 34770565 PMCID: PMC8587338 DOI: 10.3390/s21217259] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Revised: 10/27/2021] [Accepted: 10/27/2021] [Indexed: 11/16/2022]
Abstract
Alzheimer's disease (AD) is a remarkable challenge for healthcare in the 21st century. Since 2017, deep learning models with transfer learning approaches have been gaining recognition in AD detection and progression prediction by using neuroimaging biomarkers. This paper presents a systematic review of the current state of early AD detection by using deep learning models with transfer learning and neuroimaging biomarkers. Five databases were used, and the results before screening report 215 studies published between 2010 and 2020. After screening, 13 studies met the inclusion criteria. We noted that the maximum accuracy achieved to date for AD classification is 98.20%, by using a combination of 3D convolutional networks and local transfer learning, and that for the prognostic prediction of AD is 87.78%, by using pre-trained 3D convolutional network-based architectures. The results show that transfer learning helps researchers in developing a more accurate system for the early diagnosis of AD. However, there is a need to consider some points in future research, such as improving the accuracy of the prognostic prediction of AD, exploring additional biomarkers such as tau-PET and amyloid-PET to understand highly discriminative feature representations that separate similar brain patterns, and managing dataset size given the limited availability of data.
Collapse
Affiliation(s)
- Deevyankar Agarwal
- Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain; (G.M.); (I.d.l.T.-D.)
| | - Gonçalo Marques
- Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain; (G.M.); (I.d.l.T.-D.)
- Polytechnic of Coimbra, ESTGOH, Rua General Santos Costa, 3400-124 Oliveira do Hospital, Portugal
| | - Isabel de la Torre-Díez
- Department of Signal Theory and Communications and Telematics Engineering, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain; (G.M.); (I.d.l.T.-D.)
| | - Manuel A. Franco Martin
- Psychiatric Department, University Rio Hortega Hospital–Valladolid, 47011 Valladolid, Spain;
| | - Begoña García Zapiraín
- eVIDA Laboratory, University of Deusto, Avenida de las Universidades 24, 48007 Bilbao, Spain;
| | - Francisco Martín Rodríguez
- Advanced Clinical Simulation Center, School of Medicine, University of Valladolid, 47011 Valladolid, Spain;
| |
Collapse
|
230
|
Lu H, Zou X, Liao L, Li K, Liu J. Deep Convolutional Neural Network for Compressive Sensing of Magnetic Resonance Images. INT J PATTERN RECOGN 2021. [DOI: 10.1142/s0218001421520194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Compressive Sensing for Magnetic Resonance Imaging (CS-MRI) aims to reconstruct Magnetic Resonance (MR) images from under-sampled raw data. There are two challenges to improve CS-MRI methods, i.e. designing an under-sampling algorithm to achieve optimal sampling, as well as designing fast and small deep neural networks to obtain reconstructed MR images with superior quality. To improve the reconstruction quality of MR images, we propose a novel deep convolutional neural network architecture for CS-MRI named MRCSNet. The MRCSNet consists of three sub-networks, a compressive sensing sampling sub-network, an initial reconstruction sub-network, and a refined reconstruction sub-network. Experimental results demonstrate that MRCSNet generates high-quality reconstructed MR images at various under-sampling ratios, and also meets the requirements of real-time CS-MRI applications. Compared to state-of-the-art CS-MRI approaches, MRCSNet offers a significant improvement in reconstruction accuracies, such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). Besides, it reduces the reconstruction error evaluated by the Normalized Root-Mean-Square Error (NRMSE). The source codes are available at https://github.com/TaihuLight/MRCSNet .
Collapse
Affiliation(s)
- Hong Lu
- College of Computer Science and Technology, Nanjing University, Nanjing University of Science and Technology, Zijin College, Nanjing 210023, P. R. China
| | - Xiaofei Zou
- Information Assurance Department of Airborne Army, Beijing, 100083, P. R. China
- College of Information and Communication, National University of Defense Technology, Wuhan 430019, P. R. China
| | - Longlong Liao
- College of Computer and Data Science, Fuzhou University, Fuzhou, Fujian 350116, P. R. China
| | - Kenli Li
- College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, P. R. China
| | - Jie Liu
- College of Computer, National University of Defense, Technology, Changsha 410073, P. R. China
| |
Collapse
|
231
|
Abstract
We present an overview of current clinical musculoskeletal imaging applications for artificial intelligence, as well as potential future applications and techniques.
Collapse
|
232
|
Wang F, Zhang H, Dai F, Chen W, Wang C, Wang H. MAGnitude-Image-to-Complex K-space (MAGIC-K) Net: A Data Augmentation Network for Image Reconstruction. Diagnostics (Basel) 2021; 11:1935. [PMID: 34679632 PMCID: PMC8534839 DOI: 10.3390/diagnostics11101935] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 09/19/2021] [Accepted: 10/15/2021] [Indexed: 11/16/2022] Open
Abstract
Deep learning has demonstrated superior performance in image reconstruction compared to most conventional iterative algorithms. However, its effectiveness and generalization capability are highly dependent on the sample size and diversity of the training data. Deep learning-based reconstruction requires multi-coil raw k-space data, which are not collected in routine scans. On the other hand, large amounts of magnitude images are readily available in hospitals. Hence, we proposed the MAGnitude Images to Complex K-space (MAGIC-K) Net to generate multi-coil k-space data from existing magnitude images and a limited amount of required raw k-space data to facilitate the reconstruction. Compared to basic data augmentation methods that apply global intensity and displacement transformations to the source images, the MAGIC-K Net can generate more realistic intensity variations and displacements from pairs of anatomical Digital Imaging and Communications in Medicine (DICOM) images. The reconstruction performance was validated in 30 healthy volunteers and 6 patients with different types of tumors. The experimental results demonstrated that high-resolution Diffusion Weighted Image (DWI) reconstruction benefited from the proposed augmentation method. The MAGIC-K Net enabled the deep learning network to reconstruct images with superior performance in both healthy subjects and tumor patients, qualitatively and quantitatively.
Collapse
Affiliation(s)
- Fanwen Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; (F.W.); (H.Z.); (F.D.)
| | - Hui Zhang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; (F.W.); (H.Z.); (F.D.)
| | - Fei Dai
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; (F.W.); (H.Z.); (F.D.)
| | - Weibo Chen
- Philips Healthcare, Shanghai 200072, China;
| | - Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai 201203, China
| | - He Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; (F.W.); (H.Z.); (F.D.)
- Human Phenome Institute, Fudan University, Shanghai 201203, China
| |
Collapse
|
233
|
Mabu S, Miyake M, Kuremoto T, Kido S. Semi-supervised CycleGAN for domain transformation of chest CT images and its application to opacity classification of diffuse lung diseases. Int J Comput Assist Radiol Surg 2021; 16:1925-1935. [PMID: 34661818 PMCID: PMC8522550 DOI: 10.1007/s11548-021-02490-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Accepted: 08/31/2021] [Indexed: 11/05/2022]
Abstract
Purpose The performance of deep learning may fluctuate depending on the imaging devices and settings. Although domain transformation such as CycleGAN is useful for normalizing images, CycleGAN does not use information on the disease classes. Therefore, we propose a semi-supervised CycleGAN with an additional classification loss to transform images so that they are suitable for diagnosis. The method is evaluated by opacity classification of chest CT. Methods (1) CT images taken at two hospitals (source and target domains) are used. (2) A classifier is trained on the target domain. (3) Class labels are given to a small number of source domain images for semi-supervised learning. (4) The source domain images are transformed to the target domain. (5) A classification loss of the transformed images with class labels is calculated. Results The proposed method showed an F-measure of 0.727 in the domain transformation from hospital A to B, and 0.745 in that from hospital B to A, with significant differences between the proposed method and the other three methods. Conclusions The proposed method not only transforms the appearance of the images but also retains the features important for classifying opacities, and shows the best precision, recall, and F-measure.
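Step (5), the extra classification loss, is what distinguishes this from a plain CycleGAN objective. A minimal numpy sketch of the combined loss follows; the function names and the weighting lam are illustrative, not the paper's notation, and the adversarial terms are omitted:

```python
import numpy as np

def cycle_loss(x, x_cycled):
    """L1 cycle-consistency: a source image should survive the round trip F(G(x))."""
    return np.mean(np.abs(x - x_cycled))

def classification_loss(logits, labels):
    """Cross-entropy on the small labeled subset of transformed source images."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable log-softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_p[np.arange(len(labels)), labels])

def semi_supervised_objective(x, x_cycled, logits, labels, lam=1.0):
    """Generator objective: cycle term plus the class-aware term (GAN terms omitted)."""
    return cycle_loss(x, x_cycled) + lam * classification_loss(logits, labels)

# Perfect cycle, uninformative (all-zero) 3-class logits: objective equals log(3).
total = semi_supervised_objective(np.ones((2, 4)), np.ones((2, 4)),
                                  np.zeros((2, 3)), np.array([0, 1]))
```

The classification term is computed only on the labeled source images, which is why a small labeled subset suffices for the semi-supervised training the authors describe.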
Collapse
Affiliation(s)
- Shingo Mabu
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, 2-16-1, Tokiwadai, Ube, Yamaguchi, 755-8611, Japan.
| | - Masashi Miyake
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, 2-16-1, Tokiwadai, Ube, Yamaguchi, 755-8611, Japan
| | - Takashi Kuremoto
- Department of Information Technology and Media Design, Nippon Institute of Technology, 4-1 Gakuendai, Miyashiro-machi, Minamisaitama-gun, Saitama, 345-8501, Japan
| | - Shoji Kido
- Graduate School of Medicine, Osaka University, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
| |
Collapse
|
234
|
|
235
|
GeneCGAN: A conditional generative adversarial network based on genetic tree for point cloud reconstruction. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.07.087] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
236
|
Liu R, Mu P, Zhang J. Investigating Customization Strategies and Convergence Behaviors of Task-Specific ADMM. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:8278-8292. [PMID: 34559653 DOI: 10.1109/tip.2021.3113796] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Alternating Direction Method of Multipliers (ADMM) has been a popular algorithmic framework for separable optimization problems with linear constraints. Because numerical ADMM exploits neither the particular structure of the problem at hand nor the input data information, leveraging task-specific modules (e.g., neural networks and other data-driven architectures) to extend ADMM is a significant but challenging task. This work focuses on designing a flexible algorithmic framework to incorporate various task-specific modules (with no additional constraints) to improve the performance of ADMM in real-world applications. Specifically, we propose Guidance from Optimality (GO), a new customization strategy, to embed task-specific modules into ADMM (GO-ADMM). By introducing an optimality-based criterion to guide the propagation, GO-ADMM establishes an updating scheme agnostic to the choice of additional modules. Existing task-specific methods simply plug their modules into the numerical iterations in a straightforward manner; even with restrictive constraints on the plug-in modules, they can obtain only relatively weak convergence properties for the resulting ADMM iterations. Fortunately, without any restrictions on the embedded modules, we prove the convergence of GO-ADMM regarding objective values and constraint violations, and derive the worst-case convergence rate measured by iteration complexity. Extensive experiments are conducted to verify the theoretical results and demonstrate the efficiency of GO-ADMM.
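The idea of gating a plug-in module with an optimality-based criterion can be sketched on a lasso-type problem. The "module" below is a hypothetical mild smoother, and the acceptance test compares penalized objective values; both are illustrative stand-ins for GO's actual guidance mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam, rho = 40, 60, 0.1, 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ np.where(rng.random(n) < 0.1, 1.0, 0.0)   # data from a sparse ground truth

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def module(z):
    """Hypothetical plug-in module: a mild shrinkage 'denoiser' (illustrative)."""
    return 0.9 * z

def objective(v):
    return 0.5 * np.sum((A @ v - b) ** 2) + lam * np.sum(np.abs(v))

# ADMM for  min_{x,z} 0.5||Ax - b||^2 + lam||z||_1  s.t.  x = z
x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
Q = np.linalg.inv(A.T @ A + rho * np.eye(n))      # cached x-update system
for _ in range(100):
    x = Q @ (A.T @ b + rho * (z - u))             # x-update (closed form)
    z_std = soft(x + u, lam / rho)                # standard z-update
    z_mod = module(z_std)                         # module-augmented proposal
    # Optimality-guided acceptance: keep the module's output only if it does
    # not worsen the penalized objective (a stand-in for GO's criterion).
    z = z_mod if objective(z_mod) <= objective(z_std) else z_std
    u = u + x - z                                 # dual update
```

The guard is what makes the scheme agnostic to the module: an arbitrary plug-in can only be accepted when it does not degrade the optimality criterion, so the iteration retains the behavior of plain ADMM in the worst case.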
Collapse
|
237
|
Mani A, Santini T, Puppala R, Dahl M, Venkatesh S, Walker E, DeHaven M, Isitan C, Ibrahim TS, Wang L, Zhang T, Gong E, Barrios-Martinez J, Yeh FC, Krafty R, Mettenburg JM, Xia Z. Applying Deep Learning to Accelerated Clinical Brain Magnetic Resonance Imaging for Multiple Sclerosis. Front Neurol 2021; 12:685276. [PMID: 34646227 PMCID: PMC8504490 DOI: 10.3389/fneur.2021.685276] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Accepted: 08/24/2021] [Indexed: 11/14/2022] Open
Abstract
Background: Magnetic resonance (MR) scans are routine clinical procedures for monitoring people with multiple sclerosis (PwMS). Patient discomfort, timely scheduling, and financial burden motivate the need to accelerate MR scan time. We examined the clinical application of a deep learning (DL) model in restoring the image quality of accelerated routine clinical brain MR scans for PwMS. Methods: We acquired fast 3D T1w BRAVO and fast 3D T2w FLAIR MRI sequences (half the phase encodes and half the number of slices) in parallel to conventional parameters. Using a subset of the scans, we trained a DL model to generate images from fast scans with quality similar to the conventional scans and then applied the model to the remaining scans. We calculated clinically relevant T1w volumetrics (normalized whole brain, thalamic, gray matter, and white matter volume) for all scans and T2 lesion volume in a sub-analysis. We performed paired t-tests comparing conventional, fast, and fast with DL for these volumetrics, and fit repeated measures mixed-effects models to test for differences in correlations between volumetrics and clinically relevant patient-reported outcomes (PRO). Results: We found statistically significant but small differences between conventional and fast scans with DL for all T1w volumetrics. There was no difference in the extent to which the key T1w volumetrics correlated with clinically relevant PROs of MS symptom burden and neurological disability. Conclusion: A deep learning model that improves the image quality of the accelerated routine clinical brain MR scans has the potential to inform clinically relevant outcomes in MS.
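The paired comparisons described above reduce to a paired t statistic on per-subject differences; a minimal, dependency-free version is below (illustrative only, not the study's analysis code, which would also obtain p-values from a t distribution and fit the mixed-effects models):

```python
import math

def paired_t(a, b):
    """Paired t statistic: mean per-subject difference over its standard error."""
    d = [ai - bi for ai, bi in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical volumetrics for four subjects: conventional vs. fast-with-DL scans.
t = paired_t([10.0, 11.0, 10.5, 11.5], [9.0, 9.0, 9.5, 9.5])
```

Pairing within subjects removes between-subject variability, which is why small but systematic volumetric differences, like those reported here, can reach statistical significance.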
Collapse
Affiliation(s)
- Ashika Mani
- Department of Biostatistics, University of Pittsburgh, Pittsburgh, PA, United States
| | - Tales Santini
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, United States
| | - Radhika Puppala
- Department of Neurology, University of Pittsburgh, Pittsburgh, PA, United States
| | - Megan Dahl
- Department of Neurology, University of Pittsburgh, Pittsburgh, PA, United States
| | - Shruthi Venkatesh
- Department of Neurology, University of Pittsburgh, Pittsburgh, PA, United States
| | - Elizabeth Walker
- Department of Neurology, University of Pittsburgh, Pittsburgh, PA, United States
| | - Megan DeHaven
- Department of Neurology, University of Pittsburgh, Pittsburgh, PA, United States
| | - Cigdem Isitan
- Department of Neurology, University of Pittsburgh, Pittsburgh, PA, United States
| | - Tamer S. Ibrahim
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, United States
| | - Long Wang
- Subtle Medical Inc., Menlo Park, CA, United States
| | - Tao Zhang
- Subtle Medical Inc., Menlo Park, CA, United States
| | - Enhao Gong
- Subtle Medical Inc., Menlo Park, CA, United States
| | | | - Fang-Cheng Yeh
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, United States
| | - Robert Krafty
- Department of Biostatistics and Bioinformatics, Emory University, Atlanta, GA, United States
| | - Joseph M. Mettenburg
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, United States
| | - Zongqi Xia
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, United States
- Department of Neurology, University of Pittsburgh, Pittsburgh, PA, United States
| |
Collapse
|
238
|
Wang C, Li J, Zhang F, Sun X, Dong H, Yu Y, Wang Y. Bilateral Asymmetry Guided Counterfactual Generating Network for Mammogram Classification. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:7980-7994. [PMID: 34534086 DOI: 10.1109/tip.2021.3112053] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Mammogram benign or malignant classification with only image-level labels is challenging due to the absence of lesion annotations. Motivated by the symmetric prior that lesions on one side of the breasts rarely appear in the corresponding areas on the other side, we explore answering a counterfactual question to identify the lesion areas. This counterfactual question asks: given an image with lesions, how would the features have behaved if there were no lesions in the image? To answer this question, we derive a new theoretical result based on the symmetric prior. Specifically, by building a causal model that entails such a prior for bilateral images, we propose to optimize the distances in distribution between i) the counterfactual features and the target side's features in lesion-free areas; and ii) the counterfactual features and the reference side's features in lesion areas. To realize these optimizations for better benign/malignant classification, we propose a counterfactual generative network, which is mainly composed of a Generative Adversarial Network and a prediction feedback mechanism; the two are optimized jointly and prompt each other. Specifically, the former can further improve the classification performance by generating counterfactual features with which to calculate lesion areas. On the other hand, the latter helps counterfactual generation through the supervision of the classification loss. The utility of our method and the effectiveness of each module in our model are verified by state-of-the-art performance on INBreast and an in-house dataset and by ablation studies.
Collapse
|
239
|
Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2306-2317. [PMID: 33929957 PMCID: PMC8428775 DOI: 10.1109/tmi.2021.3075856] [Citation(s) in RCA: 114] [Impact Index Per Article: 28.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Accelerating MRI scans is one of the principal outstanding problems in the MRI research community. Towards this goal, we hosted the second fastMRI competition targeted towards reconstructing MR images with subsampled k-space data. We provided participants with data from 7,299 clinical brain scans (de-identified via a HIPAA-compliant procedure by NYU Langone Health), holding back the fully-sampled data from 894 of these scans for challenge evaluation purposes. In contrast to the 2019 challenge, we focused our radiologist evaluations on pathological assessment in brain images. We also debuted a new Transfer track that required participants to submit models evaluated on MRI scanners from outside the training set. We received 19 submissions from eight different groups. Results showed one team scoring best in both SSIM scores and qualitative radiologist evaluations. We also performed analysis on alternative metrics to mitigate the effects of background noise and collected feedback from the participants to inform future challenges. Lastly, we identify common failure modes across the submissions, highlighting areas of need for future research in the MRI reconstruction community.
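SSIM, the quantitative metric used to rank submissions, combines luminance, contrast, and structure terms. Below is a single-window version of the core formula; challenge implementations typically apply it over a sliding window, and the constants 0.01 and 0.03 are the conventional defaults:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM: the core formula, without the usual sliding window."""
    c1 = (0.01 * data_range) ** 2                 # conventional K1 = 0.01
    c2 = (0.03 * data_range) ** 2                 # conventional K2 = 0.03
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)     # toy "reconstruction"
score = ssim_global(img, img)                     # identical images score 1.0
```

Because SSIM is mean- and variance-normalized, it is less sensitive to global intensity scaling than MSE, which is part of why the organizers also analyzed alternative metrics to mitigate background-noise effects.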
Collapse
|
240
|
Li GY, Wang CY, Lv J. Current status of deep learning in abdominal image reconstruction. Artif Intell Med Imaging 2021; 2:86-94. [DOI: 10.35711/aimi.v2.i4.86] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 06/24/2021] [Accepted: 08/17/2021] [Indexed: 02/06/2023] Open
Affiliation(s)
- Guang-Yuan Li
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
| | - Cheng-Yan Wang
- Human Phenome Institute, Fudan University, Shanghai 201203, China
| | - Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
| |
Collapse
|
241
|
Kumar PA, Gunasundari R, Aarthi R. Systematic Analysis and Review of Magnetic Resonance Imaging (MRI) Reconstruction Techniques. Curr Med Imaging 2021; 17:943-955. [PMID: 33402090 DOI: 10.2174/1573405616666210105125542] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 10/24/2020] [Accepted: 11/12/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND Magnetic Resonance Imaging (MRI) plays an important role in the field of medical diagnostic imaging, as it offers non-invasive acquisition and high soft-tissue contrast. However, the MRI scanning process requires a long time, which results in motion artifacts, degrades image quality, can lead to misinterpretation of the data, and may cause discomfort to the patient. Thus, the main goal of MRI research is to accelerate data acquisition and processing without affecting the quality of the image. INTRODUCTION This paper presents a survey of distinct conventional MRI reconstruction methodologies. In addition, a novel MRI reconstruction strategy is proposed based on weighted Compressive Sensing (CS), a penalty-aided minimization function, and a meta-heuristic optimization technique. METHODS An illustrative analysis is done concerning adapted methods, datasets used, execution tools, performance measures, and values of evaluation metrics. Moreover, the issues of existing methods and the research gaps in conventional MRI reconstruction schemes are elaborated to inform the design of improved MRI reconstruction techniques. RESULTS The proposed method is expected to reduce conventional aliasing artifact problems, attain lower Mean Square Error (MSE), and achieve higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) index. CONCLUSION The issues of existing methods and the research gaps in conventional MRI reconstruction schemes are elaborated to support the development of an improved MRI reconstruction technique.
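A weighted-CS formulation of the kind proposed here amounts to minimizing a data-fidelity term plus a weighted l1 penalty. One standard solver for that objective is proximal gradient descent with a per-coefficient soft-threshold; the sketch below is generic, and the paper's additional meta-heuristic optimization step is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)      # sensing-matrix stand-in
y = A @ np.where(rng.random(n) < 0.1, 1.0, 0.0)   # measurements of a sparse signal
w = np.full(n, 0.05)                              # per-coefficient l1 weights

# Proximal gradient (ISTA) for  min_x 0.5||Ax - y||^2 + sum_i w_i |x_i|
L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(300):
    v = x - (A.T @ (A @ x - y)) / L               # gradient step on the data term
    x = np.sign(v) * np.maximum(np.abs(v) - w / L, 0.0)   # weighted soft-threshold

obj = 0.5 * np.sum((A @ x - y) ** 2) + np.sum(w * np.abs(x))
```

The weights w are where a "weighted CS" scheme departs from plain l1 reconstruction: coefficients expected to be significant can be penalized less, which is the degree of freedom a meta-heuristic could then tune.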
Collapse
Affiliation(s)
- Penta Anil Kumar
- Department of Electronics and Communication Engineering, Pondicherry Engineering College, Puducherry, India
| | - Ramalingam Gunasundari
- Department of Electronics and Communication Engineering, Pondicherry Engineering College, Puducherry, India
| | | |
Collapse
|
242
|
Herrmann J, Koerzdoerfer G, Nickel D, Mostapha M, Nadar M, Gassenmaier S, Kuestner T, Othman AE. Feasibility and Implementation of a Deep Learning MR Reconstruction for TSE Sequences in Musculoskeletal Imaging. Diagnostics (Basel) 2021; 11:diagnostics11081484. [PMID: 34441418 PMCID: PMC8394583 DOI: 10.3390/diagnostics11081484] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Revised: 07/23/2021] [Accepted: 07/31/2021] [Indexed: 01/15/2023] Open
Abstract
Magnetic Resonance Imaging (MRI) of the musculoskeletal system is one of the most common examinations in clinical routine. The application of Deep Learning (DL) reconstruction for MRI is increasingly gaining attention due to its potential to improve the image quality and reduce the acquisition time simultaneously. However, the technology has not yet been implemented in clinical routine for turbo spin echo (TSE) sequences in musculoskeletal imaging. The aim of this study was therefore to assess the technical feasibility and evaluate the image quality. Sixty examinations of knee, hip, ankle, shoulder, hand, and lumbar spine in healthy volunteers at 3 T were included in this prospective, internal-review-board-approved study. Conventional (TSES) and DL-based TSE sequences (TSEDL) were compared regarding image quality, anatomical structures, and diagnostic confidence. Overall image quality was rated to be excellent, with a significant improvement in edge sharpness and reduced noise compared to TSES (p < 0.001). No difference was found concerning the extent of artifacts, the delineation of anatomical structures, and the diagnostic confidence comparing TSES and TSEDL (p > 0.05). Therefore, DL image reconstruction for TSE sequences in MSK imaging is feasible, enabling a remarkable time saving (up to 75%), whilst maintaining excellent image quality and diagnostic confidence.
Affiliation(s)
- Judith Herrmann
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany; (J.H.); (S.G.); (T.K.)
- Gregor Koerzdoerfer
- MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany; (G.K.); (D.N.)
- Dominik Nickel
- MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany; (G.K.); (D.N.)
- Mahmoud Mostapha
- Digital Technology & Innovation, Siemens Medical Solutions USA, Inc., Princeton, NJ 08540, USA; (M.M.); (M.N.)
- Mariappan Nadar
- Digital Technology & Innovation, Siemens Medical Solutions USA, Inc., Princeton, NJ 08540, USA; (M.M.); (M.N.)
- Sebastian Gassenmaier
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany; (J.H.); (S.G.); (T.K.)
- Thomas Kuestner
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany; (J.H.); (S.G.); (T.K.)
- Ahmed E. Othman
- Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany; (J.H.); (S.G.); (T.K.)
- Department of Neuroradiology, University Medical Center, 55131 Mainz, Germany
- Correspondence: ; Tel.: +49-7071-29-86676; Fax: +49-7071-29-5845
243
244
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307 PMCID: PMC8393354 DOI: 10.3390/diagnostics11081373] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 07/25/2021] [Accepted: 07/27/2021] [Indexed: 12/13/2022] Open
Abstract
The increased volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes, and the time and attention this demands from the doctor, have encouraged the development of deep learning models as constructive and effective support. Deep learning (DL) has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has driven the development, diversification and quality of scientific data, of knowledge-construction methods, and of the DL models used in medical applications. Existing research papers focus on describing, highlighting and classifying one of the constituent elements of the DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on the performance of DL models. The novelty of our paper lies primarily in its unitary treatment of the constituent elements of DL models, namely data, tools used by DL architectures, and specifically constructed DL architecture combinations, and in highlighting their "key" features for completing tasks in current applications in the interpretation of medical images. The use of "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
245
Sudarshan VP, Upadhyay U, Egan GF, Chen Z, Awate SP. Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data. Med Image Anal 2021; 73:102187. [PMID: 34348196 DOI: 10.1016/j.media.2021.102187] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Revised: 07/12/2021] [Accepted: 07/16/2021] [Indexed: 10/20/2022]
Abstract
Radiation exposure in positron emission tomography (PET) imaging limits its usage in the studies of radiation-sensitive populations, e.g., pregnant women, children, and adults that require longitudinal imaging. Reducing the PET radiotracer dose or acquisition time reduces photon counts, which can deteriorate image quality. Recent deep-neural-network (DNN) based methods for image-to-image translation enable the mapping of low-quality PET images (acquired using substantially reduced dose), coupled with the associated magnetic resonance imaging (MRI) images, to high-quality PET images. However, such DNN methods focus on applications involving test data that match the statistical characteristics of the training data very closely and give little attention to evaluating the performance of these DNNs on new out-of-distribution (OOD) acquisitions. We propose a novel DNN formulation that models the (i) underlying sinogram-based physics of the PET imaging system and (ii) the uncertainty in the DNN output through the per-voxel heteroscedasticity of the residuals between the predicted and the high-quality reference images. Our sinogram-based uncertainty-aware DNN framework, namely, suDNN, estimates a standard-dose PET image using multimodal input in the form of (i) a low-dose/low-count PET image and (ii) the corresponding multi-contrast MRI images, leading to improved robustness of suDNN to OOD acquisitions. Results on in vivo simultaneous PET-MRI, and various forms of OOD data in PET-MRI, show the benefits of suDNN over the current state of the art, quantitatively and qualitatively.
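The per-voxel heteroscedastic residual model that suDNN uses for uncertainty can be illustrated with one common formulation: a Gaussian negative log-likelihood in which the network predicts both a mean image and a log-variance map. This sketch shows the general technique under that assumption, not the paper's exact loss:

```python
import numpy as np

def hetero_nll(pred, logvar, target):
    """Heteroscedastic Gaussian negative log-likelihood (up to a constant).
    Voxels with a large predicted variance are penalised less for residual
    error, which is how per-voxel uncertainty can be learned."""
    return float(np.mean(0.5 * np.exp(-logvar) * (pred - target) ** 2
                         + 0.5 * logvar))

# Toy maps: a constant prediction error of 1 under two uncertainty settings.
pred = np.zeros((8, 8))
target = np.ones((8, 8))
uncertain = np.full((8, 8), 2.0)    # high predicted variance
confident = np.full((8, 8), -2.0)   # low predicted variance
```

The log-variance term prevents the degenerate solution of predicting infinite variance everywhere: inflating the variance buys a smaller residual penalty but pays a direct cost.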
Affiliation(s)
- Viswanath P Sudarshan
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India; IITB-Monash Research Academy, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Uddeshya Upadhyay
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Gary F Egan
- Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Zhaolin Chen
- Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Suyash P Awate
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India.
246
Bitarafan A, Nikdan M, Baghshah MS. 3D Image Segmentation With Sparse Annotation by Self-Training and Internal Registration. IEEE J Biomed Health Inform 2021; 25:2665-2672. [PMID: 33211667 DOI: 10.1109/jbhi.2020.3038847] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Anatomical image segmentation is one of the foundations of medical planning. Recently, convolutional neural networks (CNNs) have achieved much success in segmenting volumetric (3D) images when a large number of fully annotated 3D samples are available. However, volumetric medical image datasets containing a sufficient number of segmented 3D images are rarely accessible, since providing manual segmentation masks is monotonous and time-consuming. Thus, to alleviate the burden of manual annotation, we attempt to effectively train a 3D CNN using a sparse annotation in which ground truth is available on just one 2D axial slice of each training 3D image. To tackle this problem, we propose a self-training framework that alternates between two steps: assigning pseudo annotations to unlabeled voxels, and updating the 3D segmentation network using both the labeled and pseudo-labeled voxels. To produce pseudo-labels more accurately, we benefit both from the propagation of labels (or pseudo-labels) between adjacent slices and from 3D processing of voxels. More precisely, a 2D registration-based method is proposed to gradually propagate labels between consecutive 2D slices, and a 3D U-Net is employed to utilize volumetric information. Ablation studies on benchmarks show that cooperation between the 2D registration and the 3D segmentation provides accurate pseudo-labels, enabling the segmentation network to be trained effectively even when only one expert-segmented slice is available for each training sample. Our method is assessed on the CHAOS and Visceral datasets for segmenting abdominal organs. Results demonstrate that, despite utilizing just one segmented slice per 3D image (weaker supervision than that of the compared weakly supervised methods), our approach achieves higher performance and comes closer to the fully supervised setting.
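The alternation this abstract describes (assign pseudo-labels, then update the model) can be sketched with deliberately naive stand-ins: an identity "registration" that copies labels between slices, and a threshold "segmenter" in place of the 3D U-Net. Everything here is illustrative scaffolding, not the paper's method:

```python
import numpy as np

def propagate(labels2d, n_slices):
    """Stand-in for the paper's 2D registration step: labels are copied
    unchanged to every slice (identity transform), purely to seed the loop."""
    return np.repeat(labels2d[None], n_slices, axis=0)

def self_train(volume, annotated_slice, labels2d, n_rounds=3):
    """Alternate between (1) re-assigning pseudo-labels to unlabeled voxels
    and (2) 'updating' a segmenter, modeled here as a per-class mean
    threshold rather than a 3D network."""
    pseudo = propagate(labels2d, volume.shape[0])
    thresh = 0.5
    for _ in range(n_rounds):
        # Step 2 stand-in: refit the threshold from currently labeled voxels.
        fg = volume[pseudo == 1]
        bg = volume[pseudo == 0]
        thresh = (fg.mean() + bg.mean()) / 2
        # Step 1: re-assign pseudo-labels with the updated "model".
        pseudo = (volume > thresh).astype(int)
        pseudo[annotated_slice] = labels2d  # the expert slice stays fixed
    return pseudo, thresh

# Toy volume: a bright cuboid on a dark background, annotated on slice 2 only.
volume = np.zeros((4, 16, 16))
volume[:, 4:12, 4:12] = 1.0
labels2d = (volume[2] > 0.5).astype(int)
pseudo, thresh = self_train(volume, annotated_slice=2, labels2d=labels2d)
```

The point of the sketch is the control flow: the single annotated slice anchors every round, while the rest of the volume is labeled and relabeled by the evolving model.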
247
Gao Y, Cloos M, Liu F, Crozier S, Pike GB, Sun H. Accelerating quantitative susceptibility and R2* mapping using incoherent undersampling and deep neural network reconstruction. Neuroimage 2021; 240:118404. [PMID: 34280526 DOI: 10.1016/j.neuroimage.2021.118404] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 06/26/2021] [Accepted: 07/15/2021] [Indexed: 10/20/2022] Open
Abstract
Quantitative susceptibility mapping (QSM) and R2* mapping are MRI post-processing methods that quantify tissue magnetic susceptibility and transverse relaxation rate distributions. However, QSM and R2* acquisitions are relatively slow, even with parallel imaging. Incoherent undersampling and compressed sensing reconstruction techniques have been used to accelerate traditional magnitude-based MRI acquisitions; however, most do not recover the full phase signal, as required by QSM, due to its non-convex nature. In this study, a learning-based Deep Complex Residual Network (DCRNet) is proposed to recover both the magnitude and phase images from incoherently undersampled data, enabling high acceleration of QSM and R2* acquisition. Magnitude, phase, R2*, and QSM results from DCRNet were compared with two iterative and one deep learning methods on retrospectively undersampled acquisitions from six healthy volunteers, one patient with intracranial hemorrhage and one with multiple sclerosis, as well as one prospectively undersampled healthy subject using a 7T scanner. Peak signal to noise ratio (PSNR), structural similarity (SSIM), root-mean-squared error (RMSE), and region-of-interest susceptibility and R2* measurements are reported for numerical comparisons. The proposed DCRNet method substantially reduced artifacts and blurring compared to the other methods and resulted in the highest PSNR and SSIM and the lowest RMSE on the magnitude, R2*, local field, and susceptibility maps. Compared to two iterative and one deep learning methods, the DCRNet method demonstrated a 3.2% to 9.1% accuracy improvement in deep grey matter susceptibility when accelerated by a factor of four. The DCRNet also dramatically shortened the reconstruction time of single 2D brain images from 36-140 seconds using conventional approaches to only 15-70 milliseconds.
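The retrospective incoherent undersampling that DCRNet reconstructs from can be sketched in NumPy. The 25% sampling density and the random phase-encode-line pattern below are illustrative assumptions, and the network itself is not implemented; the sketch only shows why zero-filled reconstruction preserves complex (magnitude plus phase) data while introducing incoherent aliasing:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy complex image standing in for a gradient-echo acquisition
# (magnitude and phase, as QSM requires).
img = rng.random((128, 128)) * np.exp(1j * rng.uniform(-np.pi, np.pi, (128, 128)))

# Incoherent undersampling: keep a random ~25% of k-space lines.
kspace = np.fft.fft2(img)
mask = rng.random(128) < 0.25          # randomly selected phase-encode lines
kspace_us = kspace * mask[:, None]     # zero out the unsampled lines

# Zero-filled reconstruction keeps the complex signal but shows incoherent
# aliasing artifacts, which a network such as DCRNet would be trained to remove.
recon = np.fft.ifft2(kspace_us)
accel = mask.size / mask.sum()         # achieved acceleration factor
```

Because the artifacts are noise-like rather than structured, they are easier for a learned prior to separate from anatomy, which is the premise shared by compressed sensing and deep reconstruction.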
Affiliation(s)
- Yang Gao
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Australia
- Martijn Cloos
- Centre for Advanced Imaging, University of Queensland, Brisbane, Australia; ARC Training Centre for Innovation in Biomedical Imaging Technology, The University of Queensland, Brisbane, QLD, Australia
- Feng Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Australia
- Stuart Crozier
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Australia
- G Bruce Pike
- Departments of Radiology and Clinical Neurosciences, University of Calgary, Calgary, Canada
- Hongfu Sun
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Australia.
248
|
Ghodrati V, Bydder M, Bedayat A, Prosper A, Yoshida T, Nguyen KL, Finn JP, Hu P. Temporally aware volumetric generative adversarial network-based MR image reconstruction with simultaneous respiratory motion compensation: Initial feasibility in 3D dynamic cine cardiac MRI. Magn Reson Med 2021; 86:2666-2683. [PMID: 34254363 PMCID: PMC10172149 DOI: 10.1002/mrm.28912] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Revised: 06/02/2021] [Accepted: 06/12/2021] [Indexed: 12/26/2022]
Abstract
PURPOSE To develop a novel three-dimensional (3D) generative adversarial network (GAN)-based technique for simultaneous image reconstruction and respiratory motion compensation of 4D MRI. Our goal was to enable high acceleration factors (10.7×-15.8×) while maintaining robust and diagnostic image quality superior to state-of-the-art self-gating (SG) compressed sensing wavelet (CS-WV) reconstruction at lower acceleration factors (3.5×-7.9×). METHODS Our GAN was trained based on pixel-wise content loss functions, an adversarial loss function, and a novel data-driven temporally aware loss function to maintain anatomical accuracy and temporal coherence. Besides image reconstruction, our network also performs respiratory motion compensation for free-breathing scans. A novel progressive growing-based strategy was adapted to make the training process possible for the proposed GAN-based structure. The proposed method was developed and thoroughly evaluated qualitatively and quantitatively based on 3D cardiac cine data from 42 patients. RESULTS Our proposed method achieved significantly better scores in general image quality and image artifacts at 10.7×-15.8× acceleration than the SG CS-WV approach at 3.5×-7.9× acceleration (4.53 ± 0.540 vs. 3.13 ± 0.681 for general image quality, 4.12 ± 0.429 vs. 2.97 ± 0.434 for image artifacts, P < .05 for both). No spurious anatomical structures were observed in our images. The proposed method enabled similar cardiac-function quantification as conventional SG CS-WV. The proposed method achieved faster central processing unit-based image reconstruction (6 s/cardiac phase) than the SG CS-WV (312 s/cardiac phase). CONCLUSION The proposed method showed promising potential for high-resolution (1 mm³) free-breathing 4D MR data acquisition with simultaneous respiratory motion compensation and fast reconstruction time.
Affiliation(s)
- Vahid Ghodrati
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
- Mark Bydder
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Arash Bedayat
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Ashley Prosper
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Takegawa Yoshida
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Kim-Lien Nguyen
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA; Department of Medicine (Cardiology), David Geffen School of Medicine, University of California, Los Angeles, California, USA
- J Paul Finn
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Peng Hu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
249
Qin C, Duan J, Hammernik K, Schlemper J, Küstner T, Botnar R, Prieto C, Price AN, Hajnal JV, Rueckert D. Complementary time-frequency domain networks for dynamic parallel MR image reconstruction. Magn Reson Med 2021; 86:3274-3291. [PMID: 34254355 DOI: 10.1002/mrm.28917] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Revised: 06/10/2021] [Accepted: 06/14/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE To introduce a novel deep learning-based approach for fast and high-quality dynamic multicoil MR reconstruction by learning a complementary time-frequency domain network that exploits spatiotemporal correlations simultaneously from complementary domains. THEORY AND METHODS Dynamic parallel MR image reconstruction is formulated as a multivariable minimization problem, where the data are regularized in combined temporal Fourier and spatial (x-f) domain as well as in spatiotemporal image (x-t) domain. An iterative algorithm based on variable splitting technique is derived, which alternates among signal de-aliasing steps in x-f and x-t spaces, a closed-form point-wise data consistency step and a weighted coupling step. The iterative model is embedded into a deep recurrent neural network which learns to recover the image via exploiting spatiotemporal redundancies in complementary domains. RESULTS Experiments were performed on two datasets of highly undersampled multicoil short-axis cardiac cine MRI scans. Results demonstrate that our proposed method outperforms the current state-of-the-art approaches both quantitatively and qualitatively. The proposed model can also generalize well to data acquired from a different scanner and data with pathologies that were not seen in the training set. CONCLUSION The work shows the benefit of reconstructing dynamic parallel MRI in complementary time-frequency domains with deep neural networks. The method can effectively and robustly reconstruct high-quality images from highly undersampled dynamic multicoil data (16× and 24×, yielding 15 s and 10 s scan times, respectively) with fast reconstruction speed (2.8 seconds). This could potentially facilitate achieving fast single-breath-hold clinical 2D cardiac cine imaging.
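The complementary x-t and x-f domains the abstract alternates between are related by a Fourier transform along the temporal axis alone. A minimal sketch (array sizes are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamic series: nt temporal frames of an nx-by-ny complex image (x-t space).
nx, ny, nt = 32, 32, 16
xt = rng.standard_normal((nx, ny, nt)) + 1j * rng.standard_normal((nx, ny, nt))

# x-f space: Fourier transform along the temporal axis only. Periodic cardiac
# dynamics are sparse here, which is what an x-f regularizer exploits.
xf = np.fft.fftshift(np.fft.fft(xt, axis=-1), axes=-1)

# The mapping is invertible, so de-aliasing steps can alternate between the
# two domains without losing information.
xt_back = np.fft.ifft(np.fft.ifftshift(xf, axes=-1), axis=-1)
```

Because the transform is lossless and cheap, the variable-splitting iteration can regularize in whichever domain best exposes the structure of the error at each step.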
Affiliation(s)
- Chen Qin
- Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, UK; Department of Computing, Imperial College London, London, UK
- Jinming Duan
- School of Computer Science, University of Birmingham, Birmingham, UK
- Kerstin Hammernik
- Department of Computing, Imperial College London, London, UK; Institute for AI and Informatics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Jo Schlemper
- Department of Computing, Imperial College London, London, UK; Hyperfine Research Inc., Guilford, CT, USA
- Thomas Küstner
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK; Department of Diagnostic and Interventional Radiology, Medical Image and Data Analysis, University Hospital of Tuebingen, Tuebingen, Germany
- René Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Anthony N Price
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Joseph V Hajnal
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Daniel Rueckert
- Department of Computing, Imperial College London, London, UK; Institute for AI and Informatics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
250
Chandra SS, Bran Lorenzana M, Liu X, Liu S, Bollmann S, Crozier S. Deep learning in magnetic resonance image reconstruction. J Med Imaging Radiat Oncol 2021; 65:564-577. [PMID: 34254448 DOI: 10.1111/1754-9485.13276] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Accepted: 06/10/2021] [Indexed: 11/26/2022]
Abstract
Magnetic resonance (MR) imaging visualises soft tissue contrast in exquisite detail without harmful ionising radiation. In this work, we provide a state-of-the-art review on the use of deep learning in MR image reconstruction from different image acquisition types involving compressed sensing techniques, parallel image acquisition and multi-contrast imaging. Publications with deep learning-based image reconstruction for MR imaging were identified from the literature (PubMed and Google Scholar), and a comprehensive description of each of the works was provided. A detailed comparison highlighting the differences between these works, the data used and the performance of each was also made. A discussion of the potential use cases for each of these methods is provided. The sparse image reconstruction methods were found to be most popular in using deep learning for improved performance, accelerating acquisitions by around 4-8 times. Multi-contrast image reconstruction methods rely on at least one pre-acquired image, but can achieve 16-fold, and even up to 32- to 50-fold acceleration depending on the set-up. Parallel imaging provides frameworks to be integrated in many of these methods for additional speed-up potential. The successful use of compressed sensing techniques and multi-contrast imaging with deep learning and parallel acquisition methods could yield significant MR acquisition speed-ups within clinical routines in the near future.
Affiliation(s)
- Shekhar S Chandra
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Marlon Bran Lorenzana
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Xinwen Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Siyu Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Steffen Bollmann
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Stuart Crozier
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia