251
Chandra SS, Bran Lorenzana M, Liu X, Liu S, Bollmann S, Crozier S. Deep learning in magnetic resonance image reconstruction. J Med Imaging Radiat Oncol 2021; 65:564-577. [PMID: 34254448] [DOI: 10.1111/1754-9485.13276]
Abstract
Magnetic resonance (MR) imaging visualises soft tissue contrast in exquisite detail without harmful ionising radiation. In this work, we provide a state-of-the-art review on the use of deep learning in MR image reconstruction from different image acquisition types involving compressed sensing techniques, parallel image acquisition and multi-contrast imaging. Publications with deep learning-based image reconstruction for MR imaging were identified from the literature (PubMed and Google Scholar), and a comprehensive description of each of the works was provided. A detailed comparison highlighting the differences, the data used and the performance of each of these works was also made, along with a discussion of the potential use cases for each method. The sparse image reconstruction methods were found to be the most popular in using deep learning for improved performance, accelerating acquisitions by around 4-8 times. Multi-contrast image reconstruction methods rely on at least one pre-acquired image, but can achieve 16-fold, and even up to 32- to 50-fold, acceleration depending on the set-up. Parallel imaging provides frameworks that can be integrated into many of these methods for additional speed-up potential. The successful use of compressed sensing techniques and multi-contrast imaging with deep learning and parallel acquisition methods could yield significant MR acquisition speed-ups within clinical routines in the near future.
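To make the quoted acceleration factors concrete, here is a NumPy sketch (the phantom, mask pattern, and sizes are invented for illustration, not taken from the review) of a roughly 4x-undersampled Cartesian k-space and the aliased zero-filled reconstruction that a deep network would be trained to clean up:

```python
import numpy as np

# Toy 2-D "phantom": a bright square with a brighter insert.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img[28:36, 28:36] = 2.0

# Fully sampled k-space.
kspace = np.fft.fft2(img)

# Roughly 4x acceleration: every 4th phase-encode line plus a fully
# sampled block of low-frequency centre lines.
mask = np.zeros(64, dtype=bool)
mask[::4] = True
mask[30:34] = True
kspace_us = kspace * mask[:, None]

# Zero-filled reconstruction: the aliased input a de-aliasing network
# would map back toward `img`.
recon = np.abs(np.fft.ifft2(kspace_us))
sampling_fraction = mask.mean()
```

A reconstruction network replaces the plain inverse FFT in the last step; the fewer lines are sampled, the harder the de-aliasing problem becomes.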
Affiliation(s)
- Shekhar S Chandra
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Marlon Bran Lorenzana
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Xinwen Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Siyu Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Steffen Bollmann
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Stuart Crozier
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
252
Rahman A, Rahman MS, Mahdy MRC. 3C-GAN: class-consistent CycleGAN for malaria domain adaptation model. Biomed Phys Eng Express 2021; 7. [PMID: 34167104] [DOI: 10.1088/2057-1976/ac0e74]
Abstract
Unpaired domain translation models with a distribution matching loss, such as CycleGAN, are now widely used to shift domain in medical images. However, synthesizing medical images using CycleGAN can lead to misdiagnosis of a medical condition because it might hallucinate unwanted features, especially if there's a data bias. This can potentially change the original class of the input image, which is a very serious problem. In this paper, we introduce a modified distribution matching loss for CycleGAN to eliminate feature hallucination on the malaria dataset. In the context of the malaria dataset, unintentional feature hallucination may introduce a feature that resembles a parasite or remove the parasite after the translation. Our proposed approach has enabled us to shift the domain of the malaria dataset without the risk of changing the corresponding class of each image. We present experimental evidence that our modified loss significantly reduces feature hallucination by preserving original class labels, with results better than the baseline (classic CycleGAN) that targets the translating domain. We believe that our approach will expedite the development of unsupervised unpaired GANs that are safe for clinical use.
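The class-consistency idea can be sketched in a few lines: penalise translations that flip the prediction of a fixed classifier. Everything below (the toy classifier, the patches, and the weighting) is a hypothetical illustration, not the paper's implementation:

```python
import numpy as np

def classifier(x):
    """Toy stand-in for a fixed parasite classifier: 'infected' if the
    patch contains a bright blob."""
    return float(x.max() > 0.8)

def cycle_loss(x, x_cycled):
    return np.abs(x - x_cycled).mean()

def class_consistency_loss(x, x_translated):
    # Penalise translations that flip the predicted class, i.e. that
    # hallucinate or remove a parasite-like feature.
    return abs(classifier(x) - classifier(x_translated))

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 0.5, size=(32, 32))
x[10:14, 10:14] = 1.0                     # parasite-like bright blob

keeps_blob = 0.9 * x + 0.05               # translation preserving the blob
erases_blob = np.clip(x, 0.0, 0.6)        # translation removing the blob

lam = 10.0
loss_keep = cycle_loss(x, keeps_blob) + lam * class_consistency_loss(x, keeps_blob)
loss_erase = cycle_loss(x, erases_blob) + lam * class_consistency_loss(x, erases_blob)
```

The translation that erases the parasite-like blob is penalised far more heavily, even though its pixel-wise cycle loss is small.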
Affiliation(s)
- Aimon Rahman
- Department of Electrical and Computer Engineering, North South University, Dhaka-1229, Bangladesh
- M Sohel Rahman
- Department of Computer Science & Engineering, Bangladesh University of Engineering & Technology (BUET), ECE Building, West Palasi, Dhaka-1205, Bangladesh
- M R C Mahdy
- Department of Electrical and Computer Engineering, North South University, Dhaka-1229, Bangladesh
253
Gao M, Fessler JA, Chan HP. Deep Convolutional Neural Network With Adversarial Training for Denoising Digital Breast Tomosynthesis Images. IEEE Trans Med Imaging 2021; 40:1805-1816. [PMID: 33729933] [PMCID: PMC8274391] [DOI: 10.1109/tmi.2021.3066896]
Abstract
Digital breast tomosynthesis (DBT) is a quasi-three-dimensional imaging modality that can reduce false negatives and false positives in mass lesion detection caused by overlapping breast tissue in conventional two-dimensional (2D) mammography. The patient dose of a DBT scan is similar to that of a single 2D mammogram, while acquisition of each projection view adds detector readout noise. The noise is propagated to the reconstructed DBT volume, possibly obscuring subtle signs of breast cancer such as microcalcifications (MCs). This study developed a deep convolutional neural network (DCNN) framework for denoising DBT images with a focus on improving the conspicuity of MCs as well as preserving the ill-defined margins of spiculated masses and normal tissue textures. We trained the DCNN using a weighted combination of mean squared error (MSE) loss and adversarial loss. We configured a dedicated x-ray imaging simulator in combination with digital breast phantoms to generate realistic in silico DBT data for training, and compared DCNN training between digital phantoms and real physical phantoms. The proposed denoising method improved the contrast-to-noise ratio (CNR) and detectability index (d') of the simulated MCs in the validation phantom DBTs. These performance measures improved with increasing training target dose and training sample size. Promising denoising results were observed when the digital-phantom-trained denoiser was transferred to DBT volumes reconstructed with different techniques and to a small independent test set of human-subject DBT images.
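A minimal sketch of the two quantities named in this abstract, the weighted training loss and the contrast-to-noise ratio used for evaluation; the loss weight, noise levels, and phantom below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def weighted_loss(output, target, adv_score, w_adv=0.01):
    """Weighted combination of MSE and an adversarial term (toy stand-in
    for the training objective; the weight is an arbitrary choice here)."""
    return np.mean((output - target) ** 2) + w_adv * adv_score

def cnr(img, sig_mask, bg_mask):
    """Contrast-to-noise ratio of a signal region against background."""
    return (img[sig_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[30:34, 30:34] = 1.0                          # microcalcification-like spot
noisy = clean + rng.normal(0, 0.5, clean.shape)    # readout-noise stand-in
denoised = clean + rng.normal(0, 0.1, clean.shape) # idealised network output

sig_mask = clean > 0.5
bg_mask = ~sig_mask
```

Lowering background noise while preserving the bright spot raises the CNR, which is the effect the paper measures on simulated MCs.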
254
Lv J, Li G, Tong X, Chen W, Huang J, Wang C, Yang G. Transfer learning enhanced generative adversarial networks for multi-channel MRI reconstruction. Comput Biol Med 2021; 134:104504. [PMID: 34062366] [DOI: 10.1016/j.compbiomed.2021.104504]
Abstract
Deep learning-based generative adversarial networks (GAN) can effectively perform image reconstruction with under-sampled MR data. In general, a large number of training samples are required to improve the reconstruction performance of a given model. However, in real clinical applications, it is difficult to obtain tens of thousands of raw patient datasets to train the model, since saving k-space data is not part of the routine clinical flow. Therefore, enhancing the generalizability of a network trained on small samples is urgently needed. In this study, three novel applications were explored based on parallel imaging combined with the GAN model (PI-GAN) and transfer learning. The model was pre-trained with public Calgary brain images and then fine-tuned for use in (1) patients with tumors in our center; (2) different anatomies, including knee and liver; and (3) different k-space sampling masks with acceleration factors (AFs) of 2 and 6. On the brain tumor dataset, transfer learning removed the artifacts found in PI-GAN and yielded smoother brain edges. The transfer learning results for the knee and liver were superior to those of the PI-GAN model trained on its own dataset with a smaller number of training cases, although the learning procedure converged more slowly on the knee datasets than on the brain tumor datasets. Transfer learning improved reconstruction performance for both the AF = 2 and AF = 6 models, with the AF = 2 model showing better results. The results also showed that transfer learning with a pre-trained model can solve the problem of inconsistency between the training and test datasets and facilitate generalization to unseen data.
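The fine-tuning strategy can be illustrated with a toy linear model: copy pre-trained weights, freeze the early layer, and update only the last layer on a small target-domain dataset. All shapes, data, and the learning rate below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" two-layer linear model: a stand-in for weights learned on
# a large public dataset (e.g. brain images in the abstract's setting).
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(1, 8))
W1_frozen = W1.copy()

# Small target-domain dataset: too few samples to train from scratch,
# so freeze W1 and fine-tune only W2.
X = rng.normal(size=(8, 20))
y = rng.normal(size=(1, 20))

def mse():
    return float(np.mean((W2 @ (W1 @ X) - y) ** 2))

mse_before = mse()
lr = 0.01
for _ in range(200):
    h = W1 @ X                                    # frozen features
    W2 -= lr * ((W2 @ h - y) @ h.T) / X.shape[1]  # gradient step on W2 only
mse_after = mse()
```

Only the last layer moves; the frozen layer carries over whatever the pre-training learned, which is the essence of the transfer-learning set-up described above.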
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Guangyuan Li
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Jiahao Huang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
255
Dai X, Lei Y, Wang T, Axente M, Xu D, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Self-supervised learning for accelerated 3D high-resolution ultrasound imaging. Med Phys 2021; 48:3916-3926. [PMID: 33993508] [PMCID: PMC11699523] [DOI: 10.1002/mp.14946]
Abstract
PURPOSE Ultrasound (US) imaging has been widely used in diagnosis, image-guided intervention, and therapy, where high-quality three-dimensional (3D) images are highly desired from sparsely acquired two-dimensional (2D) images. This study aims to develop a deep learning-based algorithm to reconstruct high-resolution (HR) 3D US images relying only on the acquired sparsely distributed 2D images. METHODS We propose a self-supervised learning framework using cycle-consistent generative adversarial networks (cycleGAN), where two independent cycleGAN models are trained with paired original US images and two sets of low-resolution (LR) US images, respectively. The two sets of LR US images are obtained by down-sampling the original US images along the two axes, respectively. In US imaging, in-plane spatial resolution is generally much higher than through-plane resolution. By learning the mapping from down-sampled in-plane LR images to original HR US images, cycleGAN can generate through-plane HR images from the original sparsely distributed 2D images. Finally, HR 3D US images are reconstructed by combining the generated 2D images from the two cycleGAN models. RESULTS The proposed method was assessed on two different datasets: automatic breast ultrasound (ABUS) images from 70 breast cancer patients, and images collected from 45 prostate cancer patients. Applying a spatial resolution enhancement factor of 3 to the breast cases, our proposed method achieved a mean absolute error (MAE) of 0.90 ± 0.15, a peak signal-to-noise ratio (PSNR) of 37.88 ± 0.88 dB, and a visual information fidelity (VIF) of 0.69 ± 0.01, significantly outperforming bicubic interpolation. Similar performance was achieved using an enhancement factor of 5 in the breast cases and using enhancement factors of 5 and 10 in the prostate cases.
CONCLUSIONS We have proposed and investigated a new deep learning-based algorithm for reconstructing HR 3D US images from sparsely acquired 2D images. Significant improvement in through-plane resolution was achieved using only the acquired 2D images, without any external atlas images. Its self-supervision capability could accelerate HR US imaging.
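The MAE and PSNR figures above can be reproduced in form (not in value) with straightforward NumPy implementations of the metrics; the arrays below are random stand-ins for an HR slice and a network output:

```python
import numpy as np

def mae(ref, test):
    """Mean absolute error."""
    return np.abs(ref - test).mean()

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(0)
hr = rng.uniform(0, 255, size=(64, 64))         # stand-in HR slice
recon = hr + rng.normal(0, 4, size=hr.shape)    # stand-in network output

mae_val = mae(hr, recon)
psnr_val = psnr(hr, recon)
```

Note that PSNR depends on the assumed data range, so reported values are only comparable when that convention matches.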
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Marian Axente
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Dong Xu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
256
Wang S, Xiao T, Liu Q, Zheng H. Deep learning for fast MR imaging: A review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102579]
257
Lv J, Zhu J, Yang G. Which GAN? A comparative study of generative adversarial network-based fast MRI reconstruction. Philos Trans A Math Phys Eng Sci 2021; 379:20200203. [PMID: 33966462] [DOI: 10.1098/rsta.2020.0203]
Abstract
Fast magnetic resonance imaging (MRI) is crucial for clinical applications because it can alleviate motion artefacts and increase patient throughput. K-space undersampling is an obvious approach to accelerating MR acquisition; however, undersampling of k-space data can result in blurring and aliasing artefacts in the reconstructed images. Recently, several deep learning-based data-driven models have been proposed for MRI reconstruction and have obtained promising results. However, comparison of these methods remains limited because the models have not been trained on the same datasets and the validation strategies may differ. The purpose of this work is to conduct a comparative study of generative adversarial network (GAN)-based models for MRI reconstruction. We reimplemented and benchmarked four widely used GAN-based architectures: DAGAN, ReconGAN, RefineGAN and KIGAN. These four frameworks were trained and tested on brain, knee and liver MRI images using twofold, fourfold and sixfold accelerations, respectively, with a random undersampling mask. Both quantitative evaluation and qualitative visualization show that RefineGAN achieves superior reconstruction accuracy and perceptual quality compared to the other GAN-based methods. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
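A random Cartesian undersampling mask of the kind used in this comparison might be generated as follows (the centre fraction and the helper name are illustrative choices, not the paper's code):

```python
import numpy as np

def random_mask(n_lines, accel, center_fraction=0.08, seed=0):
    """1-D random Cartesian undersampling mask: a fully sampled block of
    low-frequency centre lines plus randomly chosen outer phase-encode
    lines, at a given acceleration factor (AF)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_lines, dtype=bool)
    n_center = int(round(n_lines * center_fraction))
    c0 = n_lines // 2 - n_center // 2
    mask[c0:c0 + n_center] = True                 # fully sampled centre
    n_total = n_lines // accel                    # total lines to keep
    outer = np.setdiff1d(np.arange(n_lines), np.arange(c0, c0 + n_center))
    n_rand = max(n_total - n_center, 0)
    mask[rng.choice(outer, size=n_rand, replace=False)] = True
    return mask

masks = {af: random_mask(256, af) for af in (2, 4, 6)}
```

Because the centre lines are always kept, higher AFs leave proportionally fewer random outer lines, which is what makes sixfold reconstruction much harder than twofold.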
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, People's Republic of China
- Jin Zhu
- Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, SW3 6NP London, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
258
Hammernik K, Schlemper J, Qin C, Duan J, Summers RM, Rueckert D. Systematic evaluation of iterative deep neural networks for fast parallel MRI reconstruction with sensitivity-weighted coil combination. Magn Reson Med 2021; 86:1859-1872. [PMID: 34110037] [DOI: 10.1002/mrm.28827]
Abstract
PURPOSE To systematically investigate the influence of various data consistency layers and regularization networks with respect to variations in the training and test data domain, for sensitivity-encoded accelerated parallel MR image reconstruction. THEORY AND METHODS Magnetic resonance (MR) image reconstruction is formulated as a learned unrolled optimization scheme with a down-up network as regularization and varying data consistency layers. The proposed networks are compared to other state-of-the-art approaches on the publicly available fastMRI knee and neuro dataset and tested for stability across different training configurations regarding anatomy and number of training samples. RESULTS Data consistency layers and expressive regularization networks, such as the proposed down-up networks, form the cornerstone for robust MR image reconstruction. Physics-based reconstruction networks outperform post-processing methods substantially for R = 4 in all cases and for R = 8 when the training and test data are aligned. At R = 8, aligning training and test data is more important than architectural choices. CONCLUSION In this work, we study how dataset sizes affect single-anatomy and cross-anatomy training of neural networks for MRI reconstruction. The study provides insights into the robustness, properties, and acceleration limits of state-of-the-art networks, and our proposed down-up networks. These key insights provide essential aspects to successfully translate learning-based MRI reconstruction to clinical practice, where we are confronted with limited datasets and various imaged anatomies.
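The two building blocks named in the title, sensitivity-weighted coil combination and gradient data-consistency steps, can be sketched in 1-D NumPy. The sensitivities, mask, and step count below are invented for illustration; the iteration is a plain Landweber-style gradient scheme, not the paper's learned unrolled network:

```python
import numpy as np

rng = np.random.default_rng(0)
nc, n = 4, 64                                # coils, image length (1-D toy)

x_true = np.zeros(n)
x_true[20:44] = 1.0
sens = rng.normal(size=(nc, n)) + 1j * rng.normal(size=(nc, n))
sens /= np.sqrt((np.abs(sens) ** 2).sum(axis=0, keepdims=True))  # normalise

mask = np.zeros(n, dtype=bool)
mask[::4] = True
mask[28:36] = True

def A(x):      # forward operator: coil sensitivities -> FFT -> undersampling
    return mask * np.fft.fft(sens * x, axis=1)

def AH(k):     # adjoint-like map: masked iFFT -> conj-sensitivity coil combine
    return (np.conj(sens) * np.fft.ifft(mask * k, axis=1)).sum(axis=0)

y = A(x_true)                                # measured multi-coil k-space
x = AH(y)                                    # sensitivity-weighted zero-fill
res0 = np.linalg.norm(A(x) - y)

for _ in range(50):                          # gradient data-consistency steps
    x = x - AH(A(x) - y)

res = np.linalg.norm(A(x) - y)
```

In the unrolled networks the abstract studies, these data-consistency steps are interleaved with a learned regularization network instead of being run alone.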
Affiliation(s)
- Kerstin Hammernik
- Department of Computing, Imperial College London, London, United Kingdom; Chair for AI in Healthcare and Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Chen Qin
- Department of Computing, Imperial College London, London, United Kingdom; Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, United Kingdom
- Jinming Duan
- Department of Computing, Imperial College London, London, United Kingdom; School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- Daniel Rueckert
- Department of Computing, Imperial College London, London, United Kingdom; Chair for AI in Healthcare and Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
259
Hirte AU, Platscher M, Joyce T, Heit JJ, Tranvinh E, Federau C. Realistic generation of diffusion-weighted magnetic resonance brain images with deep generative models. Magn Reson Imaging 2021; 81:60-66. [PMID: 34116133] [DOI: 10.1016/j.mri.2021.06.001]
Abstract
We study two state-of-the-art deep generative networks, the Introspective Variational Autoencoder and the Style-Based Generative Adversarial Network, for the generation of new diffusion-weighted magnetic resonance images. We show that high-quality, diverse and realistic-looking images, as evaluated by external neuroradiologists blinded to the whole study, can be synthesized using these deep generative models. We evaluate diverse metrics with respect to the quality and diversity of the generated synthetic brain images. These findings show that generative models could qualify as a method for data augmentation in the medical field, where access to large image databases is in many respects restricted.
Affiliation(s)
- Alejandro Ungría Hirte
- Institute for Biomedical Engineering, ETH Zürich and University of Zürich, Gloriastrasse 35, 8092 Zürich, Switzerland
- Moritz Platscher
- Institute for Biomedical Engineering, ETH Zürich and University of Zürich, Gloriastrasse 35, 8092 Zürich, Switzerland
- Thomas Joyce
- Institute for Biomedical Engineering, ETH Zürich and University of Zürich, Gloriastrasse 35, 8092 Zürich, Switzerland
- Jeremy J Heit
- Department of Radiology, Section of Neuroradiology, Stanford University, United States of America
- Eric Tranvinh
- Department of Radiology, Section of Neuroradiology, Stanford University, United States of America
- Christian Federau
- Institute for Biomedical Engineering, ETH Zürich and University of Zürich, Gloriastrasse 35, 8092 Zürich, Switzerland; AI Medical AG, Zollikon, Switzerland
260
A deep cascade of ensemble of dual domain networks with gradient-based T1 assistance and perceptual refinement for fast MRI reconstruction. Comput Med Imaging Graph 2021; 91:101942. [PMID: 34087612] [DOI: 10.1016/j.compmedimag.2021.101942]
Abstract
Deep learning networks have shown promising results in fast magnetic resonance imaging (MRI) reconstruction. In our work, we develop deep networks to further improve both the quantitative and the perceptual quality of reconstruction. To begin with, we propose ReconSynergyNet (RSN), a network that combines the complementary benefits of operating independently on both the image and the Fourier domain. For single-coil acquisition, we introduce deep cascade RSN (DC-RSN), a cascade of RSN blocks interleaved with data fidelity (DF) units. Secondly, we improve the structure recovery of DC-RSN for T2-weighted imaging (T2WI) through the assistance of T1-weighted imaging (T1WI), a sequence with short acquisition time; T1 assistance is provided to DC-RSN through a gradient of log feature (GOLF) fusion. Furthermore, we propose a perceptual refinement network (PRN) to refine the reconstructions for better visual information fidelity (VIF), a metric highly correlated with radiologists' opinion of image quality. Lastly, for multi-coil acquisition, we propose variable splitting RSN (VS-RSN), a deep cascade of blocks, each containing an RSN, a multi-coil DF unit, and a weighted average module. We extensively validate DC-RSN and VS-RSN for single-coil and multi-coil acquisitions and report state-of-the-art performance: SSIM of 0.768, 0.923, and 0.878 for knee single-coil-4x, multi-coil-4x, and multi-coil-8x in fastMRI, respectively. We also conduct experiments demonstrating the efficacy of GOLF-based T1 assistance and PRN.
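Since the abstract reports SSIM, here is a simplified single-window SSIM: the standard formula applied to the whole image at once, without the usual Gaussian sliding window, so values will differ from windowed implementations such as fastMRI's:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM (standard constants c1, c2;
    no sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.uniform(size=(64, 64))
degraded = np.clip(ref + rng.normal(0, 0.2, ref.shape), 0, 1)
```

SSIM is 1 for identical images and decreases as structure is corrupted, which is why it is preferred over plain MSE for judging perceptual reconstruction quality.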
261
Aggarwal HK, Pramanik A, Jacob M. ENSURE: Ensemble Stein's Unbiased Risk Estimator for unsupervised learning. Proc IEEE Int Conf Acoust Speech Signal Process (ICASSP) 2021. [PMID: 34335103] [PMCID: PMC8323317] [DOI: 10.1109/icassp39728.2021.9414513]
Abstract
Deep learning algorithms are emerging as powerful alternatives to compressed sensing methods, offering improved image quality and computational efficiency. Unfortunately, fully sampled training images may not be available or are difficult to acquire in several applications, including high-resolution and dynamic imaging. Previous studies in image reconstruction have utilized Stein's Unbiased Risk Estimator (SURE) as a mean square error (MSE) estimate for the image denoising step in an unrolled network. Unfortunately, the end-to-end training of a network using SURE remains challenging since the projected SURE loss is a poor approximation to the MSE, especially in the heavily undersampled setting. We propose an ENsemble SURE (ENSURE) approach to train a deep network only from undersampled measurements. In particular, we show that training a network using an ensemble of images, each acquired with a different sampling pattern, can closely approximate the MSE. Our preliminary experimental results show that the proposed ENSURE approach gives comparable reconstruction quality to supervised learning and a recent unsupervised learning method.
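A Monte-Carlo SURE estimate can be checked against the true MSE on a toy linear denoiser, where both are computable; the denoiser, signal, and noise level below are illustrative and unrelated to the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 10000, 0.5
x_true = np.sin(np.linspace(0, 8 * np.pi, n))      # ground-truth signal
y = x_true + sigma * rng.normal(size=n)            # noisy measurements

def denoiser(z, a=0.7):
    """Toy linear shrinkage denoiser standing in for a deep network."""
    return a * z

def mc_sure(f, y, sigma, eps=1e-3, seed=1):
    """Monte-Carlo SURE: unbiased estimate of the denoiser's MSE computed
    from the noisy data alone (no ground truth), using a random-probe
    estimate of the divergence of f."""
    rng = np.random.default_rng(seed)
    b = rng.choice([-1.0, 1.0], size=y.shape)
    div = b @ (f(y + eps * b) - f(y)) / eps        # divergence estimate
    m = y.size
    return ((f(y) - y) ** 2).sum() / m - sigma**2 + 2 * sigma**2 * div / m

true_mse = np.mean((denoiser(y) - x_true) ** 2)    # needs ground truth
sure_est = mc_sure(denoiser, y, sigma)             # does not
```

The close agreement between `sure_est` and `true_mse` is exactly what makes SURE usable as a training loss when fully sampled ground truth is unavailable; the ENSURE idea extends this to the undersampled-measurement setting by averaging over sampling patterns.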
262
Shi Z, Li H, Cao Q, Wang Z, Cheng M. A material decomposition method for dual-energy CT via dual interactive Wasserstein generative adversarial networks. Med Phys 2021; 48:2891-2905. [PMID: 33704786] [DOI: 10.1002/mp.14828]
Abstract
PURPOSE Dual-energy computed tomography (DECT) is highly promising for material characterization and identification, whereas reconstructed material-specific images are affected by magnified noise and beam-hardening artifacts. Although various DECT material decomposition methods have been proposed to solve this problem, the quality of the decomposed images is still unsatisfactory, particularly at image edges. In this study, a data-driven approach using dual interactive Wasserstein generative adversarial networks (DIWGAN) is developed to improve DECT decomposition accuracy and produce edge-preserving images. METHODS In the proposed DIWGAN, two interactive generators are used to synthesize decomposed images of two basis materials by modeling the spatial and spectral correlations from input DECT reconstructed images, and the corresponding discriminators are employed to distinguish the difference between the generated images and labels. The DECT images reconstructed from high- and low-energy bins are sent to the two generators separately, and each generator synthesizes one material-specific image, thereby ensuring the specificity of the network modeling. In addition, the information from different energy bins is exploited through feature sharing between the two generators. During decomposition model training, a hybrid loss function including L1 loss, edge loss, and adversarial loss is incorporated to preserve the texture and edges in the generated images. Additionally, a selector is employed to define which generator should be trained in each iteration, which ensures the modeling ability of the two different generators and improves material decomposition accuracy. The performance of the proposed method is evaluated using a digital phantom, the XCAT phantom, and real data from a mouse. RESULTS On the digital phantom, the regions of bone and soft tissue are strictly and accurately separated using the trained decomposition model. The material densities in different bone and soft-tissue regions are near the ground truth, and the error in material densities is lower than 3 mg/ml. The results from the XCAT phantom show that the material-specific images generated by direct matrix inversion and iterative decomposition methods have severe noise and artifacts. Among the learning-based methods, the decomposed images of the fully convolutional network (FCN) and butterfly network (Butterfly-Net) still contain varying degrees of artifacts, while the proposed DIWGAN yields high-quality images. Compared to Butterfly-Net, the root-mean-square error (RMSE) of soft-tissue images generated by DIWGAN decreased by 0.01 g/ml, whereas the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the soft-tissue images reached 31.43 dB and 0.9987, respectively. The mass densities of the decomposed materials are nearest to the ground truth when using the DIWGAN method. The noise standard deviation of the decomposition images was reduced by 69%, 60%, 33%, and 21% compared with direct matrix inversion, iterative decomposition, FCN, and Butterfly-Net, respectively. Furthermore, the performance on the mouse data indicates the potential of the proposed material decomposition method on real scanned data. CONCLUSIONS A DECT material decomposition method based on deep learning is proposed, and the relationship between reconstructed and material-specific images is mapped by training the DIWGAN model. Results from both the simulation phantoms and real data demonstrate the advantages of this method in suppressing noise and beam-hardening artifacts.
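The direct matrix inversion baseline mentioned in the results is essentially a 2x2 solve per pixel, and its noise magnification is easy to demonstrate. The attenuation coefficients below are illustrative numbers, not calibrated values:

```python
import numpy as np

# Hypothetical attenuation coefficients of the two basis materials at the
# low- and high-energy bins:
#              bone   soft tissue
M = np.array([[0.60, 0.25],        # low-energy bin
              [0.30, 0.20]])       # high-energy bin

rho_true = np.array([0.4, 1.0])    # basis-material densities in one pixel
mu = M @ rho_true                  # measured attenuation in the two bins

rho = np.linalg.solve(M, mu)       # direct matrix inversion baseline

# Noise magnification: a small perturbation of the measurement is
# amplified by the inversion, which motivates learned decomposition.
noise = np.array([0.01, -0.01])
rho_noisy = np.linalg.solve(M, mu + noise)
amplification = np.abs(rho_noisy - rho_true).max() / np.abs(noise).max()
```

Because the two materials attenuate similarly, the system matrix is poorly conditioned and the density error grows far faster than the measurement noise, the "magnified noise" the abstract refers to.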
Affiliation(s)
- Zaifeng Shi
- School of Microelectronics, Tianjin University, Tianjin, 300072, China; Tianjin Key Laboratory of Imaging and Sensing Microelectronic Technology, Tianjin, 300072, China
- Huilong Li
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Qingjie Cao
- School of Mathematical Sciences, Tianjin Normal University, Tianjin, 300072, China
- Zhongqi Wang
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Ming Cheng
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
263
Wang S, Lv J, He Z, Liang D, Chen Y, Zhang M, Liu Q. Denoising auto-encoding priors in undecimated wavelet domain for MR image reconstruction. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.09.086]
264
Shao W, Rowe SP, Du Y. SPECTnet: a deep learning neural network for SPECT image reconstruction. Ann Transl Med 2021; 9:819. [PMID: 34268432] [PMCID: PMC8246183] [DOI: 10.21037/atm-20-3345]
Abstract
Background Single photon emission computed tomography (SPECT) is an important functional tool for clinical diagnosis and scientific research of brain disorders, but suffers from limited spatial resolution and high noise due to hardware design and imaging physics. The present study is to develop a deep learning technique for SPECT image reconstruction that directly converts raw projection data to image with high resolution and low noise, while an efficient training method specifically applicable to medical image reconstruction is presented. Methods A custom software was developed to generate 20,000 2-D brain phantoms, of which 16,000 were used to train the neural network, 2,000 for validation, and the final 2,000 for testing. To reduce development difficulty, a two-step training strategy for network design was adopted. We first compressed full-size activity image (128×128 pixels) to a one-D vector consisting of 256×1 pixels, accomplished by an autoencoder (AE) consisting of an encoder and a decoder. The vector is a good representation of the full-size image in a lower-dimensional space and was used as a compact label to develop the second network that maps between the projection-data domain and the vector domain. Since the label had 256 pixels only, the second network was compact and easy to converge. The second network, when successfully developed, was connected to the decoder (a portion of AE) to decompress the vector to a regular 128×128 image. Therefore, a complex network was essentially divided into two compact neural networks trained separately in sequence but eventually connectable. Results A total of 2,000 test examples, a synthetic brain phantom, and de-identified patient data were used to validate SPECTnet. Results obtained from SPECTnet were compared with those obtained from our clinic OS-EM method. Images with lower noise and more accurate information in the uptake areas were obtained by SPECTnet. 
Conclusions The challenge of developing a complex deep neural network is reduced by training two separate compact connectable networks. The combination of the two networks forms the full version of SPECTnet. Results show that the developed neural network can produce more accurate SPECT images.
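The two-step strategy above can be sketched with toy linear stand-ins for the networks. This is a hedged illustration only: the image and latent sizes follow the abstract, while the projection-data size and the linear "layers" are placeholders for the actual deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two networks: plain linear maps instead of deep nets.
# Image/latent sizes follow the abstract (128x128 image, 256-element vector);
# the projection-data size is a placeholder chosen for illustration.
IMG, LATENT, PROJ = 128 * 128, 256, 128 * 120

# Step 1: autoencoder halves. E compresses an image to the latent vector,
# D decompresses it back (in the paper these are trained so D(E(x)) ~ x).
E = rng.normal(size=(LATENT, IMG)) * 0.01
D = rng.normal(size=(IMG, LATENT)) * 0.01

# Step 2: a compact net M mapping projection data to the 256-d latent label.
M = rng.normal(size=(LATENT, PROJ)) * 0.01

def spectnet(projection):
    """Full pipeline: projection data -> latent vector -> decoded image."""
    z = M @ projection   # second network (projection domain -> vector domain)
    return D @ z         # decoder half of the autoencoder

sinogram = rng.normal(size=PROJ)
image = spectnet(sinogram)
print(image.shape)       # (16384,) i.e. a flattened 128x128 image
```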
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Steven P Rowe
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
265
Shin Y, Yang J, Lee YH. Deep Generative Adversarial Networks: Applications in Musculoskeletal Imaging. Radiol Artif Intell 2021; 3:e200157. [PMID: 34136816 PMCID: PMC8204145 DOI: 10.1148/ryai.2021200157] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Revised: 02/10/2021] [Accepted: 02/16/2021] [Indexed: 12/12/2022]
Abstract
In recent years, deep learning techniques have been applied in musculoskeletal radiology to increase the diagnostic potential of acquired images. Generative adversarial networks (GANs), which are deep neural networks that can generate or transform images, have the potential to aid in faster imaging by generating images with a high level of realism across multiple contrasts and modalities from existing imaging protocols. This review introduces the key architectures of GANs as well as their technical background and challenges. Key research trends are highlighted, including: (a) reconstruction of high-resolution MRI; (b) image synthesis with different modalities and contrasts; (c) image enhancement that efficiently preserves high-frequency information suitable for human interpretation; (d) pixel-level segmentation with annotation sharing between domains; and (e) applications to different musculoskeletal anatomies. In addition, an overview is provided of the key issues wherein clinical applicability is challenging to capture with conventional performance metrics and expert evaluation. When clinically validated, GANs have the potential to improve musculoskeletal imaging. Keywords: Adults and Pediatrics, Computer Aided Diagnosis (CAD), Computer Applications-General (Informatics), Informatics, Skeletal-Appendicular, Skeletal-Axial, Soft Tissues/Skin © RSNA, 2021.
Affiliation(s)
- YiRang Shin
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
- Jaemoon Yang
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
- Young Han Lee
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
266
Zhou Z, Guo Y, Wang Y. Ultrasound deep beamforming using a multiconstrained hybrid generative adversarial network. Med Image Anal 2021; 71:102086. [PMID: 33979760 DOI: 10.1016/j.media.2021.102086] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 04/13/2021] [Accepted: 04/16/2021] [Indexed: 11/19/2022]
Abstract
Ultrasound beamforming is a principal factor in high-quality ultrasound imaging. The conventional delay-and-sum (DAS) beamformer generates images with high computational speed but low spatial resolution; thus, many adaptive beamforming methods have been introduced to improve image qualities. However, these adaptive beamforming methods suffer from high computational complexity, which limits their practical applications. Hence, an advanced beamformer that can overcome spatiotemporal resolution bottlenecks is eagerly awaited. In this paper, we propose a novel deep-learning-based algorithm, called the multiconstrained hybrid generative adversarial network (MC-HGAN) beamformer that rapidly achieves high-quality ultrasound imaging. The MC-HGAN beamformer directly establishes a one-shot mapping between the radio frequency signals and the reconstructed ultrasound images through a hybrid generative adversarial network (GAN) model. Through two specific branches, the hybrid GAN model extracts both radio frequency-based and image-based features and integrates them through a fusion module. We also introduce a multiconstrained training strategy to provide comprehensive guidance for the network by invoking intermediates to co-constrain the training process. Moreover, our beamformer is designed to adapt to various ultrasonic emission modes, which improves its generalizability for clinical applications. We conducted experiments on a variety of datasets scanned by line-scan and plane wave emission modes and evaluated the results with both similarity-based and ultrasound-specific metrics. The comparisons demonstrate that the MC-HGAN beamformer generates ultrasound images whose quality is higher than that of images generated by other deep learning-based methods and shows very high robustness in different clinical datasets. This technology also shows great potential in real-time imaging.
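For context, the conventional delay-and-sum (DAS) baseline that the abstract contrasts against can be sketched in a few lines. The array geometry, sampling rate, and plane-wave transmit are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative array geometry and sampling parameters (not from the paper).
c = 1540.0                                # speed of sound in tissue (m/s)
fs = 40e6                                 # sampling rate (Hz)
elem_x = np.linspace(-0.01, 0.01, 64)     # 64 elements spanning 2 cm

def das_point(rf, px, pz):
    """Delay-and-sum one image point (px, pz) from channel data `rf`,
    assuming a plane wave transmitted at normal incidence."""
    d_tx = pz                                            # transmit path
    d_rx = np.sqrt((elem_x - px) ** 2 + pz ** 2)         # per-element receive path
    idx = np.round((d_tx + d_rx) / c * fs).astype(int)   # round-trip delay -> sample
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()         # sum the aligned samples

rf = np.random.default_rng(1).normal(size=(64, 2048))    # channels x time samples
value = das_point(rf, px=0.0, pz=0.02)                   # one pixel at 2 cm depth
```

Adaptive beamformers replace the plain sum with data-dependent weights, which is where the computational cost the abstract mentions comes from.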
Affiliation(s)
- Zixia Zhou
- Fudan University, Department of Electronic Engineering, Shanghai 200433, China
- Yi Guo
- Fudan University, Department of Electronic Engineering, Shanghai 200433, China.
- Yuanyuan Wang
- Fudan University, Department of Electronic Engineering, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai 200032, China.
267
Mizusawa S, Sei Y, Orihara R, Ohsuga A. Computed tomography image reconstruction using stacked U-Net. Comput Med Imaging Graph 2021; 90:101920. [PMID: 33901918 DOI: 10.1016/j.compmedimag.2021.101920] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 02/10/2021] [Accepted: 04/05/2021] [Indexed: 10/21/2022]
Abstract
Since the development of deep learning methods, many researchers have focused on image quality improvement using convolutional neural networks, which have proved effective for noise reduction, single-image super-resolution, and segmentation. In this study, we apply stacked U-Net, a deep learning method, to X-ray computed tomography image reconstruction to generate high-quality images in a short time from a small number of projections. It is not easy to create highly accurate models because medical imaging offers few training images due to patient privacy concerns. Thus, we utilize various images from ImageNet, a widely known visual database. Results show that a cross-sectional image with a peak signal-to-noise ratio of 27.93 dB and a structural similarity of 0.886 is recovered for a 512 × 512 image using 360-degree rotation, 512 detectors, and 64 projections, with a processing time of 0.11 s on the GPU. Therefore, the proposed method has a shorter reconstruction time and better image quality than existing methods.
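The peak signal-to-noise ratio quoted above (27.93 dB) follows the standard definition, sketched here on synthetic data (the test image and noise level are arbitrary):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((512, 512))                          # "ground truth" image
noisy = ref + rng.normal(scale=0.02, size=ref.shape)  # reconstruction with noise
quality = psnr(ref, noisy)                            # roughly 34 dB here
```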
Affiliation(s)
- Satoru Mizusawa
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan.
- Yuichi Sei
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
- Ryohei Orihara
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
- Akihiko Ohsuga
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
268
Hardy E, Porée J, Belgharbi H, Bourquin C, Lesage F, Provost J. Sparse channel sampling for ultrasound localization microscopy (SPARSE-ULM). Phys Med Biol 2021; 66. [PMID: 33761492 DOI: 10.1088/1361-6560/abf1b6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Accepted: 03/24/2021] [Indexed: 01/23/2023]
Abstract
Ultrasound localization microscopy (ULM) has recently enabled the mapping of the cerebral vasculature in vivo with a resolution ten times smaller than the wavelength used, down to ten microns. However, with frame rates up to 20,000 frames per second, this method requires large amounts of data to be acquired, transmitted, stored, and processed. The transfer rate is, as of today, one of the main limiting factors of this technology. Herein, we introduce a novel reconstruction framework to decrease this quantity of data to be acquired and the complexity of the required hardware by randomly subsampling the channels of a linear probe. Method performance evaluation as well as parameter optimization were conducted in silico using the SIMUS simulation software in an anatomically realistic phantom and then compared to in vivo acquisitions in a rat brain after craniotomy. Results show that reducing the number of active elements deteriorates the signal-to-noise ratio and could lead to false microbubble detections but has limited effect on localization accuracy. In simulation, the false positive rate on microbubble detection deteriorates from 3.7% for 128 channels in receive and 7 steered angles to 11% for 16 channels and 7 angles. The average localization accuracy ranges from 10.6 μm for 16 channels/3 angles to 9.93 μm for 128 channels/13 angles. These results suggest that a compromise can be found between the number of channels and the quality of the reconstructed vascular network, and demonstrate the feasibility of performing ULM with a reduced number of channels in receive, paving the way for low-cost devices enabling high-resolution vascular mapping.
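The channel subsampling at the core of SPARSE-ULM amounts to keeping a random subset of receive channels; a minimal sketch with illustrative array sizes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sizes: 128 receive channels, 7 steered angles, 1024 time samples.
rf = rng.normal(size=(128, 7, 1024))       # fully sampled channel data

def subsample_channels(rf, n_keep, rng):
    """Randomly keep n_keep receive channels (the SPARSE-ULM idea)."""
    keep = np.sort(rng.choice(rf.shape[0], size=n_keep, replace=False))
    return rf[keep], keep

rf16, kept = subsample_channels(rf, 16, rng)
print(rf16.shape, rf16.size / rf.size)     # (16, 7, 1024) 0.125 -> 8x less data
```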
Affiliation(s)
- Erwan Hardy
- Engineering Physics Department, Polytechnique Montréal, Montréal, Canada
- Jonathan Porée
- Engineering Physics Department, Polytechnique Montréal, Montréal, Canada
- Hatim Belgharbi
- Engineering Physics Department, Polytechnique Montréal, Montréal, Canada
- Chloé Bourquin
- Engineering Physics Department, Polytechnique Montréal, Montréal, Canada
- Frédéric Lesage
- Electrical Engineering Department, Polytechnique Montréal, Montréal, Canada; Montréal Heart Institute, Montréal, Canada
- Jean Provost
- Engineering Physics Department, Polytechnique Montréal, Montréal, Canada; Montréal Heart Institute, Montréal, Canada
269
Ryu K, Lee JH, Nam Y, Gho SM, Kim HS, Kim DH. Accelerated multicontrast reconstruction for synthetic MRI using joint parallel imaging and variable splitting networks. Med Phys 2021; 48:2939-2950. [PMID: 33733464 DOI: 10.1002/mp.14848] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2020] [Revised: 03/12/2021] [Accepted: 03/12/2021] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Synthetic magnetic resonance imaging (MRI) requires the acquisition of multicontrast images to estimate quantitative parameter maps, such as T1, T2, and proton density (PD). The study aims to develop a multicontrast reconstruction method based on joint parallel imaging (JPI) and joint deep learning (JDL) to enable further acceleration of synthetic MRI. METHODS The JPI and JDL methods are extended and combined to improve reconstruction for better-quality synthesized images. JPI is performed as a first step to estimate the missing k-space lines, and JDL is then performed to correct and refine the previous estimate with a trained neural network. For the JDL architecture, the original variable splitting network (VS-Net) is modified and extended to form a joint variable splitting network (JVS-Net) for multicontrast reconstruction. The proposed method is designed and tested for multidynamic multiecho (MDME) images with Cartesian uniform under-sampling using acceleration factors between 4 and 8. RESULTS It is demonstrated that the normalized root-mean-square error (nRMSE) is lower and the structural similarity index measure (SSIM) values are higher with the proposed method compared to both the JPI and JDL methods individually. The method also demonstrates the potential to produce a set of synthesized contrast-weighted images that closely resemble those from the fully sampled acquisition without erroneous artifacts. CONCLUSION Combining JPI and JDL enables the reconstruction of highly accelerated synthetic MRIs.
Affiliation(s)
- Kanghyun Ryu
- Department of Radiology, Stanford University, Stanford, CA, USA; Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Jae-Hun Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Yoonho Nam
- Department of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Sung-Min Gho
- MR Collaboration and Development, GE Healthcare, Seoul, Republic of Korea
- Ho-Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
270
Radial Undersampling-Based Interpolation Scheme for Multislice CSMRI Reconstruction Techniques. Biomed Res Int 2021; 2021:6638588. [PMID: 33954189 PMCID: PMC8057880 DOI: 10.1155/2021/6638588] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 04/05/2021] [Indexed: 11/18/2022]
Abstract
Magnetic Resonance Imaging (MRI) is an important yet slow medical imaging modality. Compressed sensing (CS) theory has made it possible to accelerate the MRI acquisition process, using nonlinear reconstruction techniques, from as little as 10% of the Nyquist samples. In recent years, interpolated compressed sensing (iCS) has further reduced the scan time, compared to CS, by exploiting the strong interslice correlation of multislice MRI. In this paper, an improved efficient interpolated compressed sensing (EiCS) technique is proposed using radial undersampling schemes. The proposed efficient interpolation technique uses three consecutive slices to estimate the missing samples of the central target slice from its two neighboring slices. Seven different evaluation metrics are used to analyze the performance of the proposed technique, namely structural similarity index measure (SSIM), feature similarity index measure (FSIM), mean square error (MSE), peak signal-to-noise ratio (PSNR), correlation (CORR), sharpness index (SI), and perceptual image quality evaluator (PIQE), and it is compared with the latest interpolation techniques. The simulation results show that the proposed EiCS technique achieves improved image quality and performance with both golden-angle and uniform-angle radial sampling patterns, at an even lower sampling ratio, with maximum information content and a more practical sampling scheme.
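The interpolation idea, estimating the unacquired samples of the central slice from its two neighbors, can be sketched as follows; a plain neighbor average stands in for the paper's estimator, and the mask density and matrix size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

def rand_kspace():
    # Random complex array standing in for one slice's k-space.
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# k-space of three consecutive slices.
k_prev, k_mid, k_next = rand_kspace(), rand_kspace(), rand_kspace()
mask = rng.random((n, n)) < 0.3            # 30% of the target slice acquired

def interpolate_slice(k_mid, k_prev, k_next, mask):
    """Keep acquired samples of the central slice; fill the rest from the
    two neighbouring slices (a plain average stands in for the estimator)."""
    estimate = 0.5 * (k_prev + k_next)
    return np.where(mask, k_mid, estimate)

k_filled = interpolate_slice(k_mid, k_prev, k_next, mask)
```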
271
Chung H, Cha E, Sunwoo L, Ye JC. Two-stage deep learning for accelerated 3D time-of-flight MRA without matched training data. Med Image Anal 2021; 71:102047. [PMID: 33895617 DOI: 10.1016/j.media.2021.102047] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Revised: 03/18/2021] [Accepted: 03/19/2021] [Indexed: 10/21/2022]
Abstract
Time-of-flight magnetic resonance angiography (TOF-MRA) is one of the most widely used non-contrast MR imaging methods to visualize blood vessels, but due to the 3-D volume acquisition, highly accelerated acquisition is necessary. Accordingly, high-quality reconstruction from undersampled TOF-MRA is an important research topic for deep learning. However, most existing deep learning works require matched reference data for supervised training, which are often difficult to obtain. By extending the recent theoretical understanding of cycleGAN from optimal transport theory, here we propose a novel two-stage unsupervised deep learning approach, which is composed of a multi-coil reconstruction network along the coronal plane followed by a multi-planar refinement network along the axial plane. Specifically, the first network is trained in the square-root of sum of squares (SSoS) domain to achieve high-quality parallel image reconstruction, whereas the second refinement network is designed to efficiently learn the characteristics of highly activated blood flow using a double-headed projection discriminator. Extensive experiments demonstrate that the proposed learning process without matched reference exceeds the performance of a state-of-the-art compressed sensing (CS)-based method and provides comparable or even better results than supervised learning approaches.
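The SSoS (square-root of sum of squares) coil combination in which the first network is trained is a standard operation:

```python
import numpy as np

rng = np.random.default_rng(0)
# 8 coil images, 64x64, complex-valued (random stand-ins for real data).
coil_imgs = rng.normal(size=(8, 64, 64)) + 1j * rng.normal(size=(8, 64, 64))

def ssos(coil_imgs):
    """Square-root of sum of squares over the coil axis."""
    return np.sqrt(np.sum(np.abs(coil_imgs) ** 2, axis=0))

combined = ssos(coil_imgs)      # real-valued, non-negative 64x64 image
```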
Affiliation(s)
- Hyungjin Chung
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Eunju Cha
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea.
- Jong Chul Ye
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea.
272
Rizvi SKJ, Azad MA, Fraz MM. Spectrum of Advancements and Developments in Multidisciplinary Domains for Generative Adversarial Networks (GANs). Arch Comput Methods Eng 2021; 28:4503-4521. [PMID: 33824572 PMCID: PMC8017345 DOI: 10.1007/s11831-021-09543-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Accepted: 01/10/2021] [Indexed: 06/12/2023]
Abstract
This survey summarizes recent applications and developments in the domain of Generative Adversarial Networks (GANs), a backpropagation-based neural network architecture for generative modeling. GANs are one of the most active research avenues owing to their synthetic data generation capabilities and the benefits of the representations they learn, irrespective of the application. While several reviews of GANs in the arena of image processing have been conducted, none has focused on reviewing GANs across multidisciplinary domains. Therefore, this survey examines the use of GANs in multidisciplinary application areas and their implementation challenges, based on a rigorous search for journal and research articles related to GANs. Five renowned journal databases ("ACM Digital Library", "Elsevier", "IEEE Xplore", "Science Direct", "Springer") and the proceedings of the best domain-specific conferences were considered. By employing a hybrid research methodology and article inclusion and exclusion criteria, 100 research articles encompassing 23 application domains were selected for the survey. The paper discusses applications of GANs in various practical domains, their implementation challenges, and their associated advantages and disadvantages. This is the first survey of its kind in which GANs are reviewed across such a wide range of applications together with their associated advantages and disadvantages. Finally, the article presents several prominent developing trends in the respective research domains, providing a visionary perspective on ongoing GAN-related research and eventually helping to develop an intuition for problem solving using GANs.
Affiliation(s)
- Syed Khurram Jah Rizvi
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- University of Warwick, Coventry, CV47AL UK
- Muhammad Moazam Fraz
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- The Alan Turing Institute, London, NW1 2DB UK
273
Lin DJ, Johnson PM, Knoll F, Lui YW. Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians. J Magn Reson Imaging 2021; 53:1015-1028. [PMID: 32048372 PMCID: PMC7423636 DOI: 10.1002/jmri.27078] [Citation(s) in RCA: 114] [Impact Index Per Article: 28.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 01/15/2020] [Accepted: 01/17/2020] [Indexed: 12/22/2022] Open
Abstract
Artificial intelligence (AI) shows tremendous promise in the field of medical imaging, with recent breakthroughs applying deep-learning models for data acquisition, classification problems, segmentation, image synthesis, and image reconstruction. With an eye towards clinical applications, we summarize the active field of deep-learning-based MR image reconstruction. We review the basic concepts of how deep-learning algorithms aid in the transformation of raw k-space data to image data, and specifically examine accelerated imaging and artifact suppression. Recent efforts in these areas show that deep-learning-based algorithms can match and, in some cases, eclipse conventional reconstruction methods in terms of image quality and computational efficiency across a host of clinical imaging applications, including musculoskeletal, abdominal, cardiac, and brain imaging. This article is an introductory overview aimed at clinical radiologists with no experience in deep-learning-based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
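The basic transformation the review builds on, raw k-space data to image data, is for a fully sampled Cartesian acquisition just an inverse 2-D Fourier transform:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((128, 128))                        # stand-in for real anatomy

# Forward model: fully sampled Cartesian k-space is the 2-D Fourier transform.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Reconstruction: inverse transform recovers the image up to numerical error.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
print(np.allclose(recon, image))                      # True
```

Deep-learning reconstruction enters when k-space is undersampled and this inverse transform alone no longer suffices.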
Affiliation(s)
- Dana J. Lin
- Department of Radiology, NYU School of Medicine / NYU Langone Health
- Florian Knoll
- New York University School of Medicine, Center for Biomedical Imaging
- Yvonne W. Lui
- Department of Radiology, NYU School of Medicine / NYU Langone Health
274
Xiao Z, Du N, Liu J, Zhang W. SR-Net: A sequence offset fusion net and refine net for undersampled multislice MR image reconstruction. Comput Methods Programs Biomed 2021; 202:105997. [PMID: 33621943 DOI: 10.1016/j.cmpb.2021.105997] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Accepted: 02/06/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE The study of deep learning-based fast magnetic resonance imaging (MRI) reconstruction methods has become popular in recent years. However, reconstruction quality remains a challenge when MR images are undersampled at large acceleration factors. The objective of this study was to improve the reconstruction quality of undersampled MR images by exploring data redundancy among slices. METHODS There are two kinds of redundancy in multislice MR images: correlations inside a single slice and correlations among slices. Thus, we built one subnet for each. For correlations among slices, we built a bidirectional recurrent convolutional neural network, named Sequence Offset Fusion Net (S-Net). In S-Net, we used a deformable convolution module to construct a neighbor-slice feature extractor. For the correlations inside a single slice, we built a Refine Net (R-Net), which has 5 layers of 2D convolutions. In addition, we used a data consistency (DC) operation to maintain data fidelity in k-space. Finally, we treated the reconstruction task as a dealiasing problem in the image domain, and S-Net and R-Net are applied alternately and iteratively to generate the final reconstructions. RESULTS The proposed algorithm was evaluated using two public online MRI datasets. Compared with several state-of-the-art methods, the proposed method achieved better reconstruction results in terms of dealiasing and restoring tissue structure. Moreover, with a reconstruction speed of over 14 slices per second on 256×256-pixel images, the proposed method can meet the need for real-time processing. CONCLUSION With spatial correlation among slices as additional prior information, the proposed method dramatically improves the reconstruction quality of undersampled MR images.
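The data consistency (DC) operation mentioned above is standard in unrolled MRI reconstruction: the network estimate is transformed to k-space and the actually measured samples are re-imposed. A minimal sketch with illustrative mask density and matrix size:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
acquired = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # measured k-space
mask = rng.random((n, n)) < 0.25           # which k-space points were measured
cnn_output = rng.normal(size=(n, n))       # image-domain network estimate

def data_consistency(img, acquired, mask):
    """Re-impose the measured k-space samples on the network estimate."""
    k = np.fft.fft2(img)
    k = np.where(mask, acquired, k)        # keep measurements, keep net elsewhere
    return np.fft.ifft2(k)

img_dc = data_consistency(cnn_output, acquired, mask)
# At measured locations the result now matches the acquisition exactly.
print(np.allclose(np.fft.fft2(img_dc)[mask], acquired[mask]))   # True
```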
Affiliation(s)
- Zhiyong Xiao
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
- Nianmao Du
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Jianjun Liu
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Weidong Zhang
- Department of Automation, Shanghai JiaoTong University, Shanghai 200240, China.
275
Zhang Y, She H, Du YP. Dynamic MRI of the abdomen using parallel non-Cartesian convolutional recurrent neural networks. Magn Reson Med 2021; 86:964-973. [PMID: 33749023 DOI: 10.1002/mrm.28774] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 02/25/2021] [Accepted: 02/25/2021] [Indexed: 11/10/2022]
Abstract
PURPOSE To improve the image quality and reduce the computational time for the reconstruction of undersampled non-Cartesian abdominal dynamic parallel MR data using a deep learning approach. METHODS An algorithm of parallel non-Cartesian convolutional recurrent neural networks (PNCRNNs) was developed to exploit the redundant information in both the spatial and temporal domains and achieve data fidelity for the reconstruction of non-Cartesian parallel MR data. The performance of PNCRNNs was evaluated for various acceleration rates, motion patterns, and imaging applications in comparison with state-of-the-art algorithms for dynamic imaging, including extra-dimensional golden-angle radial sparse parallel MRI (XD-GRASP), low-rank plus sparse matrix decomposition (L+S), blind compressive sensing (BCS), and 3D convolutional neural networks (3D CNNs). RESULTS PNCRNNs increased the peak SNR by 9.07 dB compared with XD-GRASP, 9.26 dB compared with L+S, 3.48 dB compared with BCS, and 3.14 dB compared with 3D CNN at R = 16. The reconstruction time was 18 ms for each bin, which was two orders of magnitude faster than XD-GRASP, L+S, and BCS. PNCRNNs provided good reconstruction for various motion patterns, k-space trajectories, and imaging applications. CONCLUSION The proposed PNCRNN provides substantial improvement in image quality for dynamic golden-angle radial imaging of the abdomen in comparison with XD-GRASP, L+S, BCS, and 3D CNN. The reconstruction time of PNCRNN can be as fast as 50 bins per second, owing to the use of the highly computationally efficient Toeplitz approach.
Affiliation(s)
- Yufei Zhang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Huajun She
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiping P Du
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
276
Cole E, Cheng J, Pauly J, Vasanawala S. Analysis of deep complex-valued convolutional neural networks for MRI reconstruction and phase-focused applications. Magn Reson Med 2021; 86:1093-1109. [PMID: 33724507 DOI: 10.1002/mrm.28733] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Revised: 12/26/2020] [Accepted: 01/25/2021] [Indexed: 01/27/2023]
Abstract
PURPOSE Deep learning has had success with MRI reconstruction, but previously published works use real-valued networks. The few works that have tried complex-valued networks have not fully assessed their impact on phase. Therefore, the purpose of this work is to fully investigate end-to-end complex-valued convolutional neural networks (CNNs) for accelerated MRI reconstruction and in several phase-based applications in comparison to 2-channel real-valued networks. METHODS Several complex-valued activation functions for MRI reconstruction were implemented, and their performance was compared. Complex-valued convolution was implemented and tested on an unrolled network architecture and a U-Net-based architecture over a wide range of network widths and depths with knee, body, and phase-contrast datasets. RESULTS Quantitative and qualitative results demonstrated that complex-valued CNNs with complex-valued convolutions provided superior reconstructions compared to real-valued convolutions with the same number of trainable parameters, for both an unrolled network architecture and a U-Net-based architecture, and for 3 different datasets. Complex-valued CNNs consistently had superior normalized RMS error, structural similarity index, and peak SNR compared to real-valued CNNs. CONCLUSION Complex-valued CNNs can enable superior accelerated MRI reconstruction and phase-based applications such as fat-water separation and flow quantification compared to real-valued convolutional neural networks.
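A complex-valued convolution can be assembled from four real-valued convolutions via the product identity (a+ib)(c+id) = (ac-bd) + i(ad+bc); a minimal loop-based sketch, not the paper's implementation:

```python
import numpy as np

def conv2d(x, k):
    """Tiny valid-mode real 2-D convolution (deep-learning convention,
    i.e. no kernel flip), loop-based for clarity."""
    H = x.shape[0] - k.shape[0] + 1
    W = x.shape[1] - k.shape[1] + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def complex_conv2d(x, k):
    """Complex convolution from four real convolutions:
    (a+ib)*(c+id) = (ac - bd) + i(ad + bc)."""
    a, b = x.real, x.imag
    c, d = k.real, k.imag
    return (conv2d(a, c) - conv2d(b, d)) + 1j * (conv2d(a, d) + conv2d(b, c))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))   # complex feature map
k = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # complex kernel
out = complex_conv2d(x, k)                                   # shape (6, 6)
```

A 2-channel real-valued network, by contrast, treats real and imaginary parts as independent channels and does not enforce this identity.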
Affiliation(s)
- Elizabeth Cole
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Joseph Cheng
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- John Pauly
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
277
Zhou X, Qiu S, Joshi PS, Xue C, Killiany RJ, Mian AZ, Chin SP, Au R, Kolachalama VB. Enhancing magnetic resonance imaging-driven Alzheimer's disease classification performance using generative adversarial learning. Alzheimers Res Ther 2021; 13:60. [PMID: 33715635 PMCID: PMC7958452 DOI: 10.1186/s13195-021-00797-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Accepted: 02/22/2021] [Indexed: 12/30/2022]
Abstract
BACKGROUND Generative adversarial networks (GAN) can produce images of improved quality, but their ability to augment image-based classification has not been fully explored. We evaluated whether a modified GAN can learn from magnetic resonance imaging (MRI) scans of multiple magnetic field strengths to enhance Alzheimer's disease (AD) classification performance. METHODS T1-weighted brain MRI scans from 151 participants of the Alzheimer's Disease Neuroimaging Initiative (ADNI) who underwent both 1.5-Tesla (1.5-T) and 3-Tesla (3-T) imaging at the same time were selected to construct a GAN model. This model was trained along with a three-dimensional fully convolutional network (FCN) using the generated images (3T*) as inputs to predict AD status. Quality of the generated images was evaluated using the signal-to-noise ratio (SNR), the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) and the Natural Image Quality Evaluator (NIQE). Cases from the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL, n = 107) and the National Alzheimer's Coordinating Center (NACC, n = 565) were used for model validation. RESULTS The 3T*-based FCN classifier performed better than the FCN model trained using the 1.5-T scans. Specifically, the mean area under the curve increased from 0.907 to 0.932, from 0.934 to 0.940, and from 0.870 to 0.907 on the ADNI test, AIBL, and NACC datasets, respectively. Additionally, we found that the mean quality of the generated (3T*) images was consistently higher than that of the 1.5-T images, as measured using SNR, BRISQUE, and NIQE on the validation datasets. CONCLUSION This study demonstrates a proof of principle that GAN frameworks can be constructed to augment AD classification performance and improve image quality.
Affiliation(s)
- Xiao Zhou
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, 72 E. Concord Street, Evans 636, Boston, MA, 02118, USA
- Department of Computer Science, College of Arts & Sciences, Boston University, Boston, MA, USA
- Shangran Qiu
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, 72 E. Concord Street, Evans 636, Boston, MA, 02118, USA
- Department of Physics, College of Arts & Sciences, Boston University, Boston, MA, USA
- Prajakta S Joshi
- Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, USA
- Department of General Dentistry, Boston University School of Dental Medicine, Boston, MA, USA
- Chonghua Xue
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, 72 E. Concord Street, Evans 636, Boston, MA, 02118, USA
- Ronald J Killiany
- Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, USA
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA
- Boston University Alzheimer's Disease Center, Boston, MA, USA
- Asim Z Mian
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
- Sang P Chin
- Department of Computer Science, College of Arts & Sciences, Boston University, Boston, MA, USA
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Center of Mathematical Sciences & Applications, Harvard University, Cambridge, MA, USA
- Rhoda Au
- Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA
- Boston University Alzheimer's Disease Center, Boston, MA, USA
- The Framingham Heart Study, Boston University School of Medicine, Boston, MA, USA
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Vijaya B Kolachalama
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, 72 E. Concord Street, Evans 636, Boston, MA, 02118, USA
- Department of Computer Science, College of Arts & Sciences, Boston University, Boston, MA, USA
- Boston University Alzheimer's Disease Center, Boston, MA, USA
- Faculty of Computing & Data Sciences, Boston University, Boston, MA, USA
278
Jiang H, Tang S, Liu W, Zhang Y. Deep learning for COVID-19 chest CT (computed tomography) image analysis: A lesson from lung cancer. Comput Struct Biotechnol J 2021; 19:1391-1399. [PMID: 33680351 PMCID: PMC7923948 DOI: 10.1016/j.csbj.2021.02.016] [Received: 11/07/2020] [Revised: 02/17/2021] [Accepted: 02/20/2021] [Indexed: 12/31/2022]
Abstract
As a recent global health emergency, COVID-19 urgently needs quick and reliable diagnosis. Thus, many artificial intelligence (AI)-based methods have been proposed for COVID-19 chest CT (computed tomography) image analysis. However, very few COVID-19 chest CT images are publicly available for evaluating those deep neural networks. On the other hand, a huge number of CT images from lung cancer studies are publicly available. To build a reliable deep learning model trained and tested with a larger-scale dataset, this work builds a public COVID-19 CT dataset containing 1186 CT images synthesized from lung cancer CT images using CycleGAN. Additionally, various deep learning models are tested with synthesized or real chest CT images for COVID-19 and non-COVID-19 classification. All models achieve excellent results in accuracy, precision, recall and F1 score for both synthesized and real COVID-19 CT images, demonstrating the reliability of the synthesized dataset. The public dataset and deep learning models can facilitate the development of accurate and efficient diagnostic testing for COVID-19.
Affiliation(s)
- Hao Jiang
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 610054, China
- Shiming Tang
- School of Computing and Engineering, University of Missouri-Kansas City, MO, United States
- Weihuang Liu
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
- Department of Computer and Information Science, University of Macau, Macau, China
- Yang Zhang
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
279
Montalt-Tordera J, Muthurangu V, Hauptmann A, Steeden JA. Machine learning in Magnetic Resonance Imaging: Image reconstruction. Phys Med 2021; 83:79-87. [DOI: 10.1016/j.ejmp.2021.02.020] [Received: 11/30/2020] [Accepted: 02/23/2021] [Indexed: 12/27/2022]
280
La Rosa F, Yu T, Barquero G, Thiran JP, Granziera C, Bach Cuadra M. MPRAGE to MP2RAGE UNI translation via generative adversarial network improves the automatic tissue and lesion segmentation in multiple sclerosis patients. Comput Biol Med 2021; 132:104297. [PMID: 33711559 DOI: 10.1016/j.compbiomed.2021.104297] [Received: 12/20/2020] [Revised: 02/08/2021] [Accepted: 02/22/2021] [Indexed: 11/25/2022]
Abstract
BACKGROUND AND OBJECTIVE Compared to the conventional magnetization-prepared rapid gradient-echo (MPRAGE) MRI sequence, the specialized magnetization-prepared 2 rapid acquisition gradient echoes (MP2RAGE) sequence shows higher brain tissue and lesion contrast in multiple sclerosis (MS) patients. The goal of this work is to retrospectively generate realistic-looking MP2RAGE uniform images (UNI) from already acquired MPRAGE images in order to improve automatic lesion and tissue segmentation. METHODS For this task we propose a generative adversarial network (GAN). Multi-contrast MRI data of 12 healthy controls and 44 patients diagnosed with MS were retrospectively analyzed. Imaging was acquired at 3T using a Siemens scanner with MPRAGE, MP2RAGE, FLAIR, and DIR sequences. We trained the GAN with both healthy controls and MS patients to generate synthetic MP2RAGE UNI images. These images were then compared to the real MP2RAGE UNI (considered as ground truth) by analyzing the output of automatic brain tissue and lesion segmentation tools. Reference-based metrics as well as the lesion-wise true and false positives, Dice coefficient, and volume difference were considered for the evaluation. Statistical differences were assessed with the Wilcoxon signed-rank test. RESULTS The synthetic MP2RAGE UNI significantly improved the lesion and tissue segmentation masks in terms of Dice coefficient and volume difference (p-values < 0.001) compared to the MPRAGE. For the segmentation metrics analyzed, no statistically significant differences were found between the synthetic and acquired MP2RAGE UNI. CONCLUSION Synthesized MP2RAGE UNI images are visually realistic and improve the output of automatic segmentation tools.
Affiliation(s)
- Francesco La Rosa
- Signal Processing Laboratory (LTS5), Ecole Polytechnique Fédérale de Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Switzerland; Medical Image Analysis Laboratory (MIAL), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Thomas Yu
- Signal Processing Laboratory (LTS5), Ecole Polytechnique Fédérale de Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Switzerland; Medical Image Analysis Laboratory (MIAL), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Germán Barquero
- Signal Processing Laboratory (LTS5), Ecole Polytechnique Fédérale de Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Switzerland; Medical Image Analysis Laboratory (MIAL), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Jean-Philippe Thiran
- Signal Processing Laboratory (LTS5), Ecole Polytechnique Fédérale de Lausanne, Switzerland; Department of Radiology, Lausanne University Hospital and University of Lausanne, Switzerland
- Cristina Granziera
- Neurologic Clinic and Policlinic, Departments of Medicine, Clinical Research and Biomedical Engineering, University Hospital Basel and University of Basel, Basel, Switzerland; Translational Imaging in Neurology (ThINK) Basel, Department of Medicine and Biomedical Engineering, University Hospital Basel and University of Basel, Basel, Switzerland
- Meritxell Bach Cuadra
- CIBM Center for Biomedical Imaging, Switzerland; Medical Image Analysis Laboratory (MIAL), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Radiology, Lausanne University Hospital and University of Lausanne, Switzerland
281
Zunair H, Hamza AB. Synthesis of COVID-19 chest X-rays using unpaired image-to-image translation. Soc Netw Anal Min 2021; 11:23. [PMID: 33643491 PMCID: PMC7903408 DOI: 10.1007/s13278-021-00731-5] [Received: 10/13/2020] [Revised: 01/05/2021] [Accepted: 02/04/2021] [Indexed: 12/28/2022]
Abstract
Motivated by the lack of publicly available datasets of chest radiographs of patients positive for coronavirus disease 2019 (COVID-19), we build a first-of-its-kind open dataset of high-fidelity synthetic COVID-19 chest X-ray images using an unsupervised domain adaptation approach that leverages class conditioning and adversarial training. Our contributions are twofold. First, we show considerable performance improvements on COVID-19 detection using various deep learning architectures when employing synthetic images as an additional training set. Second, we show how our image synthesis method can serve as a data anonymization tool by achieving comparable detection performance when trained only on synthetic data. In addition, the proposed data generation framework offers a viable solution to COVID-19 detection in particular, and to medical image classification tasks in general. Our publicly available benchmark dataset (https://github.com/hasibzunair/synthetic-covid-cxr-dataset) consists of 21,295 synthetic COVID-19 chest X-ray images. The insights gleaned from this dataset can be used for preventive actions in the fight against the COVID-19 pandemic.
Affiliation(s)
- Hasib Zunair
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- A Ben Hamza
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
282
Machine Learning and Deep Neural Networks: Applications in Patient and Scan Preparation, Contrast Medium, and Radiation Dose Optimization. J Thorac Imaging 2021; 35 Suppl 1:S17-S20. [PMID: 32079904 DOI: 10.1097/rti.0000000000000482] [Indexed: 12/17/2022]
Abstract
Artificial intelligence (AI) algorithms depend on large amounts of robust data and the application of appropriate computational power and software. AI offers the potential for major changes in cardiothoracic imaging. Beyond image processing, machine learning and deep learning have the potential to support the image acquisition process. AI applications may improve patient care through superior image quality, may lower radiation dose with AI-driven reconstruction algorithms, and may help avoid overscanning. This review summarizes recent promising applications of AI in patient and scan preparation as well as contrast medium and radiation dose optimization.
283
Zhang X, Lu H, Guo D, Bao L, Huang F, Xu Q, Qu X. A guaranteed convergence analysis for the projected fast iterative soft-thresholding algorithm in parallel MRI. Med Image Anal 2021; 69:101987. [PMID: 33588120 DOI: 10.1016/j.media.2021.101987] [Received: 10/09/2020] [Revised: 01/06/2021] [Accepted: 01/26/2021] [Indexed: 01/16/2023]
Abstract
Sparse sampling and parallel imaging are two effective approaches to alleviating the lengthy data acquisition of magnetic resonance imaging (MRI). Promising recoveries can be obtained from a few MRI samples with the help of sparse reconstruction models, but solving these optimization models requires proper algorithms. pFISTA, a simple and efficient algorithm, has been successfully extended to parallel imaging; however, its convergence criterion is still an open question, and the existing convergence criterion of single-coil pFISTA does not apply to the parallel-imaging version, leaving users uncertain about how to set its only parameter, the step size. In this work, we provide a guaranteed convergence analysis of the parallel-imaging pFISTA for solving the two well-known parallel imaging reconstruction models, SENSE and SPIRiT. Along with the convergence analysis, we provide recommended step size values for SENSE and SPIRiT reconstructions to obtain fast and promising reconstructions. Experiments on in vivo brain images demonstrate the validity of the convergence criterion.
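The step-size question concerns the proximal-gradient iteration that (p)FISTA accelerates: a gradient step on the data-fidelity term followed by soft-thresholding. A toy scalar sketch of that building block, purely illustrative (the cited analysis concerns the SENSE and SPIRiT parallel-imaging models, not this toy problem):

```python
# Toy sketch of an ISTA-style iteration: gradient step on the data
# fidelity, then soft-thresholding. Illustrative only; the cited work
# derives the admissible step size for parallel-imaging pFISTA.

def soft_threshold(v, t):
    """Proximal operator of t*|x|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista_scalar(a, b, lam, step, iters=200):
    """Minimize 0.5*(a*x - b)**2 + lam*|x| by proximal gradient."""
    x = 0.0
    for _ in range(iters):
        grad = a * (a * x - b)          # gradient of the fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# With a = 1 and step = 1 the solution is soft_threshold(b, lam).
x_hat = ista_scalar(a=1.0, b=2.0, lam=0.5, step=1.0)
assert abs(x_hat - 1.5) < 1e-9
```

In this scalar setting the iteration converges whenever step <= 1/a**2, which is the kind of bound the paper's analysis generalizes to the SENSE and SPIRiT operators.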
Affiliation(s)
- Xinlin Zhang
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, National Model Microelectronics College, Xiamen University, Xiamen 361005, China
- Hengfa Lu
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, National Model Microelectronics College, Xiamen University, Xiamen 361005, China
- Di Guo
- School of Computer and Information Engineering, Fujian Provincial University Key Laboratory of Internet of Things Application Technology, Xiamen University of Technology, Xiamen 361024, China
- Lijun Bao
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, National Model Microelectronics College, Xiamen University, Xiamen 361005, China
- Feng Huang
- Neusoft Medical System, Shanghai 200241, China
- Qin Xu
- Neusoft Medical System, Shanghai 200241, China
- Xiaobo Qu
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, National Model Microelectronics College, Xiamen University, Xiamen 361005, China
284
Lu T, Chen T, Gao F, Sun B, Ntziachristos V, Li J. LV-GAN: A deep learning approach for limited-view optoacoustic imaging based on hybrid datasets. J Biophotonics 2021; 14:e202000325. [PMID: 33098215 DOI: 10.1002/jbio.202000325] [Received: 08/13/2020] [Revised: 09/28/2020] [Accepted: 10/13/2020] [Indexed: 06/11/2023]
Abstract
Optoacoustic imaging (OAI) methods are rapidly evolving for resolving optical contrast in medical imaging applications. In practice, measurement strategies are commonly implemented under limited-view conditions due to oversized imaging objectives or system design limitations. Data acquired by limited-view detection may impart artifacts and distortions in reconstructed optoacoustic (OA) images. We propose a hybrid data-driven deep learning approach based on a generative adversarial network (GAN), termed LV-GAN, to efficiently recover high-quality images from limited-view OA images. Trained on both simulation and experimental data, LV-GAN is capable of achieving high recovery accuracy even under limited detection angles of less than 60°. The feasibility of LV-GAN for artifact removal in biological applications was validated by ex vivo experiments based on two different OAI systems, suggesting strong potential for ubiquitous use of LV-GAN to optimize image quality or system design for different scanners and application scenarios.
Affiliation(s)
- Tong Lu
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Tingting Chen
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Feng Gao
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
- Biao Sun
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Munich, Germany
- Chair of Biological Imaging and TranslaTUM, Technical University of Munich, Munich, Germany
- Jiao Li
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
285
Ghodrati V, Bydder M, Ali F, Gao C, Prosper A, Nguyen KL, Hu P. Retrospective respiratory motion correction in cardiac cine MRI reconstruction using adversarial autoencoder and unsupervised learning. NMR Biomed 2021; 34:e4433. [PMID: 33258197 PMCID: PMC10193526 DOI: 10.1002/nbm.4433] [Received: 11/22/2019] [Revised: 09/18/2020] [Accepted: 10/02/2020] [Indexed: 05/20/2023]
Abstract
The aim of this study was to develop a deep neural network for respiratory motion compensation in free-breathing cine MRI and to evaluate its performance. An adversarial autoencoder network was trained using unpaired training data from healthy volunteers and patients who underwent clinically indicated cardiac MRI examinations. A U-net structure was used for the encoder and decoder parts of the network, and the code space was regularized by an adversarial objective. The autoencoder learns the identity map for the free-breathing motion-corrupted images and preserves the structural content of the images, while the discriminator, which interacts with the output of the encoder, forces the encoder to remove motion artifacts. The network was first evaluated on data artificially corrupted with simulated rigid motion with regard to motion-correction accuracy and the presence of any artificially created structures. Subsequently, to demonstrate the feasibility of the proposed approach in vivo, the network was trained on respiratory motion-corrupted images in an unpaired manner and was tested on volunteer and patient data. In the simulation study, mean structural similarity index scores for the synthesized motion-corrupted images and motion-corrected images were 0.76 and 0.93 (out of 1), respectively. The proposed method increased the Tenengrad focus measure of the motion-corrupted images by 12% in the simulation study and by 7% in the in vivo study. The average overall subjective image quality scores for the motion-corrupted, motion-corrected and breath-held images were 2.5, 3.5 and 4.1 (out of 5.0), respectively. Nonparametric paired comparisons showed a significant difference between the image quality scores of the motion-corrupted and breath-held images (P < .05); after correction, however, there was no significant difference between the image quality scores of the motion-corrected and breath-held images. This feasibility study demonstrates the potential of an adversarial autoencoder network for correcting respiratory motion-related image artifacts without requiring paired data.
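The training objective described above, reconstruction through an encoder-decoder plus an adversarial term acting on the code space, can be sketched generically. All functions below (encoder, decoder, critic) are hypothetical toy stand-ins for illustration, not the cited U-net-based network:

```python
# Generic sketch of an adversarial-autoencoder objective: the
# encoder/decoder minimize a reconstruction term while a discriminator
# on the code space supplies an adversarial regularizer. All functions
# here are hypothetical toy stand-ins, not the cited network.

def mse(xs, ys):
    """Mean squared error between two sequences."""
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def aae_objective(x, encode, decode, d_code, lam=1.0):
    """Reconstruction loss plus adversarial penalty on the latent code."""
    z = encode(x)
    recon = mse(decode(z), x)           # identity map on image content
    adv = (d_code(z) - 1.0) ** 2        # least-squares push toward "real"
    return recon + lam * adv

# Toy encoder/decoder that are exact inverses, with a satisfied critic:
# both terms vanish, so the objective is zero.
encode = lambda xs: [v / 3.0 for v in xs]
decode = lambda zs: [3.0 * v for v in zs]
critic = lambda z: 1.0                  # discriminator already fooled
assert aae_objective([3.0, -6.0, 0.0], encode, decode, critic) == 0.0
```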
Affiliation(s)
- Vahid Ghodrati
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Mark Bydder
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Fadil Ali
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Chang Gao
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Ashley Prosper
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Kim-Lien Nguyen
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Department of Medicine, Division of Cardiology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Peng Hu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Correspondence to: Peng Hu, PhD, Department of Radiological Sciences, 300 UCLA Medical Plaza Suite B119, Los Angeles, CA 90095,
286
Xue H, Zhang Q, Zou S, Zhang W, Zhou C, Tie C, Wan Q, Teng Y, Li Y, Liang D, Liu X, Yang Y, Zheng H, Zhu X, Hu Z. LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks. Quant Imaging Med Surg 2021; 11:749-762. [PMID: 33532274 PMCID: PMC7779905 DOI: 10.21037/qims-20-66] [Received: 01/11/2020] [Accepted: 09/25/2020] [Indexed: 11/06/2022]
Abstract
BACKGROUND Reducing the radiation tracer dose and scanning time during positron emission tomography (PET) imaging can reduce the cost of the tracer, reduce motion artifacts, and increase the efficiency of the scanner. However, the reconstructed images tend to be noisy, so it is very important to reconstruct high-quality images from low-count (LC) data. We therefore propose a deep learning method called LCPR-Net to directly reconstruct full-count (FC) PET images from the corresponding LC sinogram data. METHODS Based on the framework of a generative adversarial network (GAN), we enforce a cyclic consistency constraint on the least-squares loss to establish a nonlinear end-to-end mapping from LC sinograms to FC images. In this process, we merge a convolutional neural network (CNN) and a residual network for feature extraction and image reconstruction. In addition, a domain transform (DT) operation supplies a priori information to the cycle-consistent GAN (CycleGAN) network, avoiding the need for a large amount of computational resources to learn this transformation. RESULTS The main advantages of this method are as follows. First, the network can use LC sinogram data as input to directly reconstruct an FC PET image, with a reconstruction speed faster than that of model-based iterative reconstruction. Second, reconstruction based on the CycleGAN framework improves the quality of the reconstructed image. CONCLUSIONS Compared with other state-of-the-art methods, the quantitative and qualitative evaluation results show that the proposed method is accurate and effective for FC PET image reconstruction.
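The cyclic consistency constraint on a least-squares adversarial loss can be sketched generically as follows; the mappings G and F here are hypothetical linear stand-ins for illustration, not the LCPR-Net generators:

```python
# Generic sketch of a CycleGAN-style generator objective: a
# least-squares adversarial term plus a cycle-consistency (L1) term.
# G (e.g. sinogram -> image) and F (image -> sinogram) are hypothetical
# stand-ins for illustration, not the networks from the cited paper.

def l1(xs, ys):
    """Mean absolute difference between two sequences."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

def lsgan_loss(scores, target):
    """Least-squares GAN loss: push discriminator scores to target."""
    return sum((s - target) ** 2 for s in scores) / len(scores)

def cycle_objective(x, G, F, d_scores_fake, lam=10.0):
    """Generator objective: fool D (scores -> 1) + cycle consistency."""
    adv = lsgan_loss(d_scores_fake, target=1.0)
    cyc = l1(F(G(x)), x)                # x -> G(x) -> F(G(x)) should ~ x
    return adv + lam * cyc

# Toy linear mappings that are exact inverses: the cycle term vanishes,
# and a fully fooled discriminator zeroes the adversarial term.
G = lambda xs: [2.0 * v + 1.0 for v in xs]
F = lambda ys: [(v - 1.0) / 2.0 for v in ys]
loss = cycle_objective([0.5, -1.0, 2.0], G, F, d_scores_fake=[1.0, 1.0])
assert loss == 0.0
```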
Affiliation(s)
- Hengzhi Xue
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Sijuan Zou
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Weiguang Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Changjun Tie
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yongchang Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiaohua Zhu
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
287
Xie X, Niu J, Liu X, Chen Z, Tang S, Yu S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med Image Anal 2021; 69:101985. [PMID: 33588117 DOI: 10.1016/j.media.2021.101985] [Received: 09/02/2020] [Revised: 12/04/2020] [Accepted: 01/26/2021] [Indexed: 12/27/2022]
Abstract
Although deep learning models like CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck in this area. To address this problem, researchers have started looking for external information beyond currently available medical datasets. Traditional approaches generally leverage the information from natural images via transfer learning. More recent works utilize the domain knowledge of medical doctors to create networks that resemble how medical doctors are trained, mimic their diagnostic patterns, or focus on the features or areas they pay particular attention to. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. For each task, we systematically categorize the different kinds of medical domain knowledge that have been utilized and the corresponding integration methods. We also discuss current challenges and directions for future research.
Affiliation(s)
- Xiaozheng Xie
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jianwei Niu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China; Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC) and Hangzhou Innovation Institute of Beihang University, 18 Chuanghui Street, Binjiang District, Hangzhou 310000, China
- Xuefeng Liu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Zhengsu Chen
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Shaojie Tang
- Jindal School of Management, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080-3021, USA
- Shui Yu
- School of Computer Science, University of Technology Sydney, 15 Broadway, Ultimo NSW 2007, Australia
Collapse
288
Zhao D, Huang Y, Zhao F, Qin B, Zheng J. Reference-Driven Undersampled MR Image Reconstruction Using Wavelet Sparsity-Constrained Deep Image Prior. Comput Math Methods Med 2021; 2021:8865582. [PMID: 33552232 PMCID: PMC7846397 DOI: 10.1155/2021/8865582] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 09/28/2020] [Revised: 12/17/2020] [Accepted: 12/31/2020] [Indexed: 11/29/2022]
Abstract
Deep learning has shown potential to significantly improve performance in undersampled magnetic resonance (MR) image reconstruction. However, one challenge for applying deep learning to clinical scenarios is the requirement of large, high-quality patient-based datasets for network training. In this paper, we propose a novel deep learning-based method for undersampled MR image reconstruction that requires neither a pre-training procedure nor pre-training datasets. The proposed reference-driven method using a wavelet sparsity-constrained deep image prior (RWS-DIP) is based on the DIP framework and thereby reduces the dependence on datasets. Moreover, RWS-DIP introduces structure and sparsity priors into network learning to improve learning efficiency. By employing a high-resolution reference image as the network input, RWS-DIP incorporates structural information into the network. RWS-DIP also uses wavelet sparsity to further enrich the implicit regularization of traditional DIP by formulating the training of network parameters as a constrained optimization problem, which is solved using the alternating direction method of multipliers (ADMM) algorithm. Experiments on in vivo MR scans demonstrate that RWS-DIP reconstructs MR images more accurately and preserves features and textures from undersampled k-space measurements.
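In ADMM schemes of the kind described above, the wavelet-sparsity subproblem has a closed-form solution: soft-thresholding of the coefficients. A minimal sketch of that proximal update; the coefficient values and threshold are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1: the sparsity-enforcing
    z-update inside an ADMM iteration over wavelet coefficients."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# One ADMM z-update on hypothetical wavelet coefficients w plus scaled dual u.
w = np.array([0.05, -0.8, 1.3, -0.02, 0.4])
u = np.zeros_like(w)
z = soft_threshold(w + u, tau=0.1)
print(z)  # small coefficients are zeroed, large ones shrunk toward 0
```

The other ADMM subproblem (updating the network parameters against the data term) has no closed form and is handled by gradient steps in DIP-style methods; only the shrinkage step is shown here.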
Affiliation(s)
- Di Zhao
  - Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
  - School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
- Yanhu Huang
  - School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
- Feng Zhao
  - Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
- Binyi Qin
  - Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
  - School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
- Jincun Zheng
  - Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
  - School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
289
Cocola J, Hand P, Voroninski V. No Statistical-Computational Gap in Spiked Matrix Models with Generative Network Priors. Entropy (Basel) 2021; 23:E115. [PMID: 33467175 PMCID: PMC7830301 DOI: 10.3390/e23010115] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 12/01/2020] [Revised: 12/30/2020] [Accepted: 01/08/2021] [Indexed: 11/16/2022]
Abstract
We provide a non-asymptotic analysis of the spiked Wishart and Wigner matrix models with a generative neural network prior. Spiked random matrices have the form of a rank-one signal plus noise and have been used as models for high-dimensional principal component analysis (PCA), community detection, and synchronization over groups. Depending on the prior imposed on the spike, these models can display a statistical-computational gap between the information-theoretically optimal reconstruction error achievable with unbounded computational resources and the suboptimal performance of currently known polynomial-time algorithms. These gaps are believed to be fundamental, as in the emblematic case of sparse PCA. In stark contrast to such cases, we show that there is no statistical-computational gap under a generative network prior, in which the spike lies in the range of a generative neural network. Specifically, we analyze a gradient descent method for minimizing a nonlinear least squares objective over the range of an expansive-Gaussian neural network and show that it can recover, in polynomial time, an estimate of the underlying spike with rate-optimal sample complexity and dependence on the noise level.
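The spiked models analyzed above are rank-one signal plus noise. A small numpy simulation of a spiked Wigner matrix, using plain power iteration (ordinary PCA, not the paper's gradient descent over a generative prior) as the polynomial-time estimator; the dimension and spike strength are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 400, 5.0

# Rank-one spike: Y = lambda * x x^T + symmetric Gaussian (Wigner) noise.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2.0)
Y = lam * np.outer(x, x) + W

# Power iteration: a simple polynomial-time estimator of the spike.
v = rng.standard_normal(n)
for _ in range(200):
    v = Y @ v
    v /= np.linalg.norm(v)

overlap = abs(v @ x)  # approaches 1 when lambda is well above the PCA threshold
print(f"overlap |<v, x>| = {overlap:.3f}")
```

With an unstructured spike as here, PCA is essentially the benchmark; the paper's point is that when the spike is constrained to a generative network's range, a gradient method attains the statistically optimal rate with no extra computational price.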
Affiliation(s)
- Jorio Cocola
  - Department of Mathematics, Northeastern University, Boston, MA 02115, USA
- Paul Hand
  - Department of Mathematics, Northeastern University, Boston, MA 02115, USA
  - Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA
290
Huang MX, Huang CW, Harrington DL, Robb-Swan A, Angeles-Quinto A, Nichols S, Huang JW, Le L, Rimmele C, Matthews S, Drake A, Song T, Ji Z, Cheng CK, Shen Q, Foote E, Lerman I, Yurgil KA, Hansen HB, Naviaux RK, Dynes R, Baker DG, Lee RR. Resting-state magnetoencephalography source magnitude imaging with deep-learning neural network for classification of symptomatic combat-related mild traumatic brain injury. Hum Brain Mapp 2021; 42:1987-2004. [PMID: 33449442 PMCID: PMC8046098 DOI: 10.1002/hbm.25340] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 02/29/2020] [Revised: 11/16/2020] [Accepted: 12/23/2020] [Indexed: 12/20/2022]
Abstract
Combat-related mild traumatic brain injury (cmTBI) is a leading cause of sustained physical, cognitive, emotional, and behavioral disabilities in Veterans and active-duty military personnel. Accurate diagnosis of cmTBI is challenging since the symptom spectrum is broad and conventional neuroimaging techniques are insensitive to the underlying neuropathology. The present study developed a novel deep-learning neural network method, 3D-MEGNET, and applied it to resting-state magnetoencephalography (rs-MEG) source-magnitude imaging data from 59 symptomatic cmTBI individuals and 42 combat-deployed healthy controls (HCs). Analytic models of individual frequency bands and all bands together were tested. The all-frequency model, which combined delta-theta (1-7 Hz), alpha (8-12 Hz), beta (15-30 Hz), and gamma (30-80 Hz) frequency bands, outperformed models based on individual bands. The optimized 3D-MEGNET method distinguished cmTBI individuals from HCs with excellent sensitivity (99.9 ± 0.38%) and specificity (98.9 ± 1.54%). Receiver-operating-characteristic curve analysis showed a diagnostic accuracy of 0.99. The gamma and delta-theta band models outperformed the alpha and beta band models. Among cmTBI individuals, but not controls, hyper delta-theta and gamma-band activity correlated with lower performance on neuropsychological tests, whereas hypo alpha and beta-band activity also correlated with lower neuropsychological test performance. This study provides an integrated framework for condensing large source-imaging variable sets into optimal combinations of regions and frequencies with high diagnostic accuracy and cognitive relevance in cmTBI. The all-frequency model offered more discriminative power than any single frequency-band model. This approach offers an effective path for optimal characterization of behaviorally relevant neuroimaging features in neurological and psychiatric disorders.
Affiliation(s)
- Ming-Xiong Huang
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
  - Department of Radiology, University of California, San Diego, California, USA
- Charles W Huang
  - Department of Bioengineering, Stanford University, Stanford, California, USA
- Deborah L Harrington
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
  - Department of Radiology, University of California, San Diego, California, USA
- Ashley Robb-Swan
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
  - Department of Radiology, University of California, San Diego, California, USA
- Annemarie Angeles-Quinto
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
  - Department of Radiology, University of California, San Diego, California, USA
- Sharon Nichols
  - Department of Neurosciences, University of California, San Diego, California, USA
- Jeffrey W Huang
  - Department of Computer Science, Columbia University, New York, New York, USA
- Lu Le
  - ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, California, USA
- Carl Rimmele
  - ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, California, USA
- Scott Matthews
  - ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, California, USA
- Angela Drake
  - Cedar Sinai Medical Group Chronic Pain Program, Beverly Hills, California, USA
- Tao Song
  - Department of Radiology, University of California, San Diego, California, USA
- Zhengwei Ji
  - Department of Radiology, University of California, San Diego, California, USA
- Chung-Kuan Cheng
  - Department of Computer Science and Engineering, University of California, San Diego, California, USA
- Qian Shen
  - Department of Radiology, University of California, San Diego, California, USA
- Ericka Foote
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
- Imanuel Lerman
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
- Kate A Yurgil
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
  - Department of Psychological Sciences, Loyola University New Orleans, Louisiana, USA
- Hayden B Hansen
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
- Robert K Naviaux
  - Department of Medicine, University of California, San Diego, California, USA
  - Department of Pediatrics, University of California, San Diego, California, USA
  - Department of Pathology, University of California, San Diego, California, USA
- Robert Dynes
  - Department of Physics, University of California, San Diego, California, USA
- Dewleen G Baker
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
  - VA Center of Excellence for Stress and Mental Health, San Diego, California, USA
  - Department of Psychiatry, University of California, San Diego, California, USA
- Roland R Lee
  - Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
  - Department of Radiology, University of California, San Diego, California, USA
291
High quality and fast compressed sensing MRI reconstruction via edge-enhanced dual discriminator generative adversarial network. Magn Reson Imaging 2021; 77:124-136. [PMID: 33359427 DOI: 10.1016/j.mri.2020.12.011] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 08/15/2020] [Revised: 11/24/2020] [Accepted: 12/20/2020] [Indexed: 11/21/2022]
Abstract
Generative adversarial networks (GANs) are widely used for fast compressed sensing magnetic resonance imaging (CSMRI) reconstruction. However, most existing methods struggle to make an effective trade-off between abstract global high-level features and edge features, which easily causes problems such as residual aliasing artifacts and over-smoothed reconstruction details. To tackle these issues, we propose a novel edge-enhanced dual-discriminator generative adversarial network architecture, EDDGAN, for high-quality CSMRI reconstruction. In this model, we extract effective edge features by fusing edge information from different depths. Then, leveraging the relationship between abstract global high-level features and edge features, a three-player game is introduced to control the hallucination of details and stabilize the training process. The resulting EDDGAN places greater focus on edge restoration and de-aliasing. Extensive experimental results demonstrate that our method consistently outperforms state-of-the-art methods and obtains reconstructed images with rich edge details. In addition, our method shows remarkable generalization, and its time consumption for each 256 × 256 image reconstruction is approximately 8.39 ms.
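The edge features driving the second discriminator above are learned in EDDGAN itself, but the underlying idea can be illustrated with a classical gradient-magnitude edge map. A minimal numpy Sobel sketch (a simplification standing in for the learned edge module, not the paper's extractor):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel kernels; a classical
    stand-in for the edge features an edge-focused discriminator sees."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)  # horizontal gradient
            gy[i, j] = np.sum(ky * patch)  # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge produces a strong response along the boundary only.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
print(edges.max())
```

Feeding such a map to a second discriminator is what turns the usual two-player GAN game into the three-player game the abstract describes.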
292
Lv J, Wang C, Yang G. PIC-GAN: A Parallel Imaging Coupled Generative Adversarial Network for Accelerated Multi-Channel MRI Reconstruction. Diagnostics (Basel) 2021; 11:61. [PMID: 33401777 PMCID: PMC7824530 DOI: 10.3390/diagnostics11010061] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Received: 11/05/2020] [Revised: 12/28/2020] [Accepted: 12/29/2020] [Indexed: 12/16/2022]
Abstract
In this study, we proposed a model combining parallel imaging (PI) with a generative adversarial network (GAN) architecture (PIC-GAN) for accelerated multi-channel magnetic resonance imaging (MRI) reconstruction. This model integrated data fidelity and regularization terms into the generator to benefit from multi-coil information and provide an "end-to-end" reconstruction. In addition, to better preserve image details during reconstruction, we combined the adversarial loss with pixel-wise loss in both the image and frequency domains. The proposed PIC-GAN framework was evaluated on abdominal and knee MRI images using 2-, 4- and 6-fold accelerations with different undersampling patterns. The performance of PIC-GAN was compared to sparsity-based parallel imaging (L1-ESPIRiT), the variational network (VN), and a conventional GAN with single-channel images as input (zero-filled (ZF)-GAN). Experimental results show that PIC-GAN can effectively reconstruct multi-channel MR images at a low noise level and with improved structural similarity. PIC-GAN yielded the lowest normalized mean square error (in ×10^-5) (PIC-GAN: 0.58 ± 0.37, ZF-GAN: 1.93 ± 1.41, VN: 1.87 ± 1.28, L1-ESPIRiT: 2.49 ± 1.04 for abdominal MRI data; PIC-GAN: 0.80 ± 0.26, ZF-GAN: 0.93 ± 0.29, VN: 1.18 ± 0.31, L1-ESPIRiT: 1.28 ± 0.24 for knee MRI data) and the highest peak signal-to-noise ratio (PIC-GAN: 34.43 ± 1.92, ZF-GAN: 31.45 ± 4.0, VN: 29.26 ± 2.98, L1-ESPIRiT: 25.40 ± 1.88 for abdominal MRI data; PIC-GAN: 34.10 ± 1.09, ZF-GAN: 31.47 ± 1.05, VN: 30.01 ± 1.01, L1-ESPIRiT: 28.01 ± 0.98 for knee MRI data) compared to ZF-GAN, VN and L1-ESPIRiT with an undersampling factor of 6. The proposed PIC-GAN framework has shown superior reconstruction performance in terms of reducing aliasing artifacts and restoring tissue structures as compared to other conventional and state-of-the-art reconstruction methods.
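The combination of pixel-wise losses in the image and frequency domains described above can be sketched with numpy's FFT. A minimal single-channel illustration; the L1 form and the equal weighting `alpha=0.5` are simplifying assumptions, not the paper's exact loss:

```python
import numpy as np

def dual_domain_l1(x_rec, x_ref, alpha=0.5):
    """Pixel-wise L1 in the image domain plus L1 between 2D FFTs,
    mirroring an image- plus frequency-domain loss combination
    (weighting is an illustrative assumption)."""
    img_l1 = np.mean(np.abs(x_rec - x_ref))
    freq_l1 = np.mean(np.abs(np.fft.fft2(x_rec) - np.fft.fft2(x_ref)))
    return alpha * img_l1 + (1.0 - alpha) * freq_l1

rng = np.random.default_rng(2)
ref = rng.standard_normal((32, 32))
noisy = ref + 0.1 * rng.standard_normal((32, 32))

print(dual_domain_l1(ref, ref))    # zero for a perfect reconstruction
print(dual_domain_l1(noisy, ref))  # positive for an imperfect one
```

The frequency-domain term penalizes k-space discrepancies directly, which is why such losses help suppress undersampling artifacts that are localized in k-space but spread across the whole image domain.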
Affiliation(s)
- Jun Lv
  - School of Computer and Control Engineering, Yantai University, Yantai 264005, China
- Chengyan Wang
  - Human Phenome Institute, Fudan University, Shanghai 201203, China
- Guang Yang
  - Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
  - National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
293
Reader AJ, Corda G, Mehranian A, Costa-Luis CD, Ellis S, Schnabel JA. Deep Learning for PET Image Reconstruction. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3014786] [Citation(s) in RCA: 65] [Impact Index Per Article: 16.3] [Indexed: 11/10/2022]
294
Zhou W, Du H, Mei W, Fang L. Efficient structurally-strengthened generative adversarial network for MRI reconstruction. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.09.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Indexed: 11/25/2022]
295
Warner E, Wang N, Lee J, Rao A. Meaningful incorporation of artificial intelligence for personalized patient management during cancer: Quantitative imaging, risk assessment, and therapeutic outcomes. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00017-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/28/2022]
296
Lei K, Mardani M, Pauly JM, Vasanawala SS. Wasserstein GANs for MR Imaging: From Paired to Unpaired Training. IEEE Trans Med Imaging 2021; 40:105-115. [PMID: 32915728 PMCID: PMC7797774 DOI: 10.1109/tmi.2020.3022968] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Indexed: 05/06/2023]
Abstract
Lack of ground-truth MR images impedes the common supervised training of neural networks for image reconstruction. To cope with this challenge, this article leverages unpaired adversarial training for reconstruction networks, where the inputs are undersampled k-space and naively reconstructed images from one dataset, and the labels are high-quality images from another dataset. The reconstruction networks consist of a generator which suppresses the input image artifacts, and a discriminator using a pool of (unpaired) labels to adjust the reconstruction quality. The generator is an unrolled neural network - a cascade of convolutional and data consistency layers. The discriminator is also a multilayer CNN that plays the role of a critic scoring the quality of reconstructed images based on the Wasserstein distance. Our experiments with knee MRI datasets demonstrate that the proposed unpaired training enables diagnostic-quality reconstruction when high-quality image labels are not available for the input types of interest, or when the amount of labels is small. In addition, our adversarial training scheme can achieve better image quality (as rated by expert radiologists) compared with the paired training schemes with pixel-wise loss.
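The data-consistency layers in the unrolled generator described above overwrite the network's k-space estimate with the acquired samples wherever the sampling mask is set. A single-coil numpy sketch of that operation; the mask density, image size, and noiseless Cartesian FFT model are simplifying assumptions:

```python
import numpy as np

def data_consistency(x_net, k_measured, mask):
    """Replace the network output's k-space values with the measured ones
    at sampled locations, then return to the image domain; this is the
    role of a data-consistency layer in an unrolled reconstruction network."""
    k_net = np.fft.fft2(x_net)
    k_out = np.where(mask, k_measured, k_net)
    return np.fft.ifft2(k_out)

rng = np.random.default_rng(3)
x_true = rng.standard_normal((16, 16))
mask = rng.random((16, 16)) < 0.3           # ~30% of k-space sampled
k_measured = np.fft.fft2(x_true) * mask

x_net = rng.standard_normal((16, 16))       # stand-in for a generator output
x_dc = data_consistency(x_net, k_measured, mask)

# At sampled locations the output k-space now matches the measurements.
err = np.abs(np.fft.fft2(x_dc)[mask] - k_measured[mask]).max()
print(err)
```

Because this layer pins the output to the acquired data regardless of what the adversarial critic rewards, it is one reason unpaired adversarial training can still yield measurement-faithful reconstructions.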
297
Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/20/2022]
298
Li G, Lv J, Tong X, Wang C, Yang G. High-Resolution Pelvic MRI Reconstruction Using a Generative Adversarial Network With Attention and Cyclic Loss. IEEE Access 2021; 9:105951-105964. [DOI: 10.1109/access.2021.3099695] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Indexed: 08/29/2023]
299
Tanno R, Worrall DE, Kaden E, Ghosh A, Grussu F, Bizzi A, Sotiropoulos SN, Criminisi A, Alexander DC. Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI. Neuroimage 2021; 225:117366. [DOI: 10.1016/j.neuroimage.2020.117366] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Received: 07/12/2019] [Revised: 08/28/2020] [Accepted: 09/05/2020] [Indexed: 12/14/2022]
300
Ran M, Xia W, Huang Y, Lu Z, Bao P, Liu Y, Sun H, Zhou J, Zhang Y. MD-Recon-Net: A Parallel Dual-Domain Convolutional Neural Network for Compressed Sensing MRI. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.2991877] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Indexed: 11/07/2022]