151
Hammernik K, Schlemper J, Qin C, Duan J, Summers RM, Rueckert D. Systematic evaluation of iterative deep neural networks for fast parallel MRI reconstruction with sensitivity-weighted coil combination. Magn Reson Med 2021; 86:1859-1872. [PMID: 34110037] [DOI: 10.1002/mrm.28827]
Abstract
PURPOSE: To systematically investigate the influence of various data consistency layers and regularization networks with respect to variations in the training and test data domain, for sensitivity-encoded accelerated parallel MR image reconstruction.
THEORY AND METHODS: Magnetic resonance (MR) image reconstruction is formulated as a learned unrolled optimization scheme with a down-up network as regularization and varying data consistency layers. The proposed networks are compared to other state-of-the-art approaches on the publicly available fastMRI knee and neuro datasets and tested for stability across different training configurations regarding anatomy and number of training samples.
RESULTS: Data consistency layers and expressive regularization networks, such as the proposed down-up networks, form the cornerstone for robust MR image reconstruction. Physics-based reconstruction networks outperform post-processing methods substantially for R = 4 in all cases and for R = 8 when the training and test data are aligned. At R = 8, aligning training and test data is more important than architectural choices.
CONCLUSION: This work studies how dataset sizes affect single-anatomy and cross-anatomy training of neural networks for MRI reconstruction. The study provides insights into the robustness, properties, and acceleration limits of state-of-the-art networks and the proposed down-up networks. These insights are essential for translating learning-based MRI reconstruction to clinical practice, where datasets are limited and a variety of anatomies are imaged.
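As a toy illustration of the unrolled scheme described in this abstract (a hypothetical numpy sketch, not the authors' code), each unrolled iteration alternates a regularization step with a gradient-based data consistency step on the sensitivity-encoded forward model; here the learned down-up CNN is replaced by an identity placeholder:

```python
import numpy as np

# Minimal sketch of an unrolled reconstruction x_{k+1} = R(x_k) - lam * A^H (A x_k - y),
# where A applies coil sensitivities, a 2-D FFT, and an undersampling mask.
rng = np.random.default_rng(0)
n, n_coils = 64, 4

x_true = rng.standard_normal((n, n))                 # toy ground-truth image
sens = rng.standard_normal((n_coils, n, n)) + 1j * rng.standard_normal((n_coils, n, n))
sens /= np.sqrt(np.sum(np.abs(sens) ** 2, axis=0, keepdims=True))  # normalized coil maps
mask = rng.random((n, n)) < 0.4                      # ~2.5x undersampling pattern

def A(x):
    """Forward model: coil weighting, FFT, undersampling."""
    return mask * np.fft.fft2(sens * x, axes=(-2, -1))

def AH(k):
    """Adjoint: undersampling, inverse FFT, coil combination."""
    return np.sum(np.conj(sens) * np.fft.ifft2(mask * k, axes=(-2, -1)), axis=0)

y = A(x_true)                                        # measured (undersampled) k-space

def regularizer(x):
    return x                                         # placeholder for the learned down-up CNN

x = AH(y)                                            # zero-filled initialization
for _ in range(50):                                  # unrolled data-consistency iterations
    x = regularizer(x) - 1.0 * AH(A(x) - y)
```

With a trivial regularizer this reduces to a Landweber iteration; the papers discussed here replace `regularizer` with a trained network and learn the step size end to end.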
Affiliation(s)
- Kerstin Hammernik
- Department of Computing, Imperial College London, London, United Kingdom; Chair for AI in Healthcare and Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Chen Qin
- Department of Computing, Imperial College London, London, United Kingdom; Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, United Kingdom
- Jinming Duan
- Department of Computing, Imperial College London, London, United Kingdom; School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- Daniel Rueckert
- Department of Computing, Imperial College London, London, United Kingdom; Chair for AI in Healthcare and Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
152
Kim MG, Oh S, Kim Y, Kwon H, Bae HM. Robust Single-Probe Quantitative Ultrasonic Imaging System with a Target-Aware Deep Neural Network. IEEE Trans Biomed Eng 2021; 68:3737-3747. [PMID: 34097600] [DOI: 10.1109/tbme.2021.3086856]
Abstract
OBJECTIVE: The speed of sound (SoS) has great potential as a quantitative imaging biomarker since it is sensitive to pathological changes in tissues. In this paper, a target-aware deep neural (TAD) network that quantitatively reconstructs an SoS image from pulse-echo phase-shift maps gathered with a single conventional ultrasound probe is presented.
METHODS: In the proposed TAD network, the reconstruction process is guided by feature maps created from segmented target images for accuracy and contrast. In addition, the feature extraction process uses phase-difference information instead of raw pulse-echo radio frequency (RF) data, making image reconstruction robust against noise in the pulse-echo data.
RESULTS: The TAD network outperforms a fully convolutional network in root mean square error (RMSE), contrast-to-noise ratio (CNR), and structural similarity index (SSIM) in the presence of nearby reflectors. The measured RMSE and CNR are 5.4 m/s and 22 dB, respectively, at a tissue attenuation coefficient of 2 dB/cm/MHz, improvements of 72% in RMSE and 13 dB in CNR over the state-of-the-art design. In the in vivo test, the proposed method classifies tissues in the neck area using SoS with a p-value below 0.025.
CONCLUSION: The proposed TAD network is the most accurate and robust single-probe SoS image reconstruction method reported to date.
SIGNIFICANCE: The accuracy and robustness demonstrated by the proposed SoS imaging method open up the possibility of widespread clinical application of single-probe SoS imaging systems.
153
A deep cascade of ensemble of dual domain networks with gradient-based T1 assistance and perceptual refinement for fast MRI reconstruction. Comput Med Imaging Graph 2021; 91:101942. [PMID: 34087612] [DOI: 10.1016/j.compmedimag.2021.101942]
Abstract
Deep learning networks have shown promising results in fast magnetic resonance imaging (MRI) reconstruction. In this work, we develop deep networks to further improve the quantitative and perceptual quality of reconstruction. First, we propose ReconSynergyNet (RSN), a network that combines the complementary benefits of operating independently on both the image and the Fourier domain. For single-coil acquisition, we introduce deep cascade RSN (DC-RSN), a cascade of RSN blocks interleaved with data fidelity (DF) units. Second, we improve the structure recovery of DC-RSN for T2-weighted imaging (T2WI) through the assistance of T1-weighted imaging (T1WI), a sequence with short acquisition time. T1 assistance is provided to DC-RSN through a gradient of log feature (GOLF) fusion. Furthermore, we propose a perceptual refinement network (PRN) to refine the reconstructions for better visual information fidelity (VIF), a metric highly correlated with radiologists' opinion of image quality. Lastly, for multi-coil acquisition, we propose variable splitting RSN (VS-RSN), a deep cascade of blocks, each containing an RSN, a multi-coil DF unit, and a weighted average module. We extensively validate DC-RSN and VS-RSN for single-coil and multi-coil acquisitions and report state-of-the-art performance. We obtain SSIMs of 0.768, 0.923, and 0.878 for knee single-coil 4x, multi-coil 4x, and multi-coil 8x in fastMRI, respectively. We also conduct experiments demonstrating the efficacy of GOLF-based T1 assistance and PRN.
154
Du T, Zhang H, Li Y, Pickup S, Rosen M, Zhou R, Song HK, Fan Y. Adaptive convolutional neural networks for accelerating magnetic resonance imaging via k-space data interpolation. Med Image Anal 2021; 72:102098. [PMID: 34091426] [DOI: 10.1016/j.media.2021.102098]
Abstract
Deep learning in k-space has demonstrated great potential for image reconstruction from undersampled k-space data in fast magnetic resonance imaging (MRI). However, existing deep learning-based reconstruction methods typically apply weight-sharing convolutional neural networks (CNNs) to k-space data without accounting for its spatial frequency properties, leading to ineffective learning of the reconstruction models. Moreover, complementary information from spatially adjacent slices is often ignored in existing deep learning methods. To overcome these limitations, we have developed a deep learning algorithm, referred to as adaptive convolutional neural networks for k-space data interpolation (ACNN-k-Space), which adopts a residual encoder-decoder network architecture to interpolate the undersampled k-space data by integrating spatially contiguous slices as multi-channel input, along with k-space data from multiple coils if available. The network is enhanced by self-attention layers that adaptively focus on k-space data at different spatial frequencies and channels. We have evaluated our method on two public datasets and compared it with existing state-of-the-art methods. Ablation studies and experimental results demonstrate that our method effectively reconstructs images from undersampled k-space data and achieves significantly better reconstruction performance than current state-of-the-art techniques. Source code is available at https://gitlab.com/qgpmztmf/acnn-k-space.
Affiliation(s)
- Tianming Du
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Honggang Zhang
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Yuemeng Li
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Stephen Pickup
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Mark Rosen
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Rong Zhou
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Hee Kwon Song
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Yong Fan
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
155
Buchlak QD, Esmaili N, Leveque JC, Bennett C, Farrokhi F, Piccardi M. Machine learning applications to neuroimaging for glioma detection and classification: An artificial intelligence augmented systematic review. J Clin Neurosci 2021; 89:177-198. [PMID: 34119265] [DOI: 10.1016/j.jocn.2021.04.043]
Abstract
Glioma is the most common primary intraparenchymal tumor of the brain, and the 5-year survival rate of high-grade glioma is poor. Magnetic resonance imaging (MRI) is essential for detecting, characterizing and monitoring brain tumors, but definitive diagnosis still relies on surgical pathology. Machine learning has been applied to the analysis of MRI data in glioma research and has the potential to change clinical practice and improve patient outcomes. This systematic review synthesizes and analyzes the current state of machine learning applications to glioma MRI data and explores the use of machine learning for systematic review automation. Various datapoints were extracted and analyzed from the 153 studies that met the inclusion criteria. Natural language processing (NLP) analysis involved keyword extraction, topic modeling and document classification. Machine learning has been applied to tumor grading and diagnosis, tumor segmentation, non-invasive genomic biomarker identification, detection of progression and patient survival prediction. Model performance was generally strong (AUC = 0.87 ± 0.09; sensitivity = 0.87 ± 0.10; specificity = 0.86 ± 0.10; precision = 0.88 ± 0.11). Convolutional neural network, support vector machine and random forest algorithms were top performers. Deep learning document classifiers yielded acceptable performance (mean 5-fold cross-validation AUC = 0.71). Machine learning tools and data resources were synthesized and summarized to facilitate future research. Machine learning has been widely applied to the processing of MRI data in glioma research and has demonstrated substantial utility. NLP and transfer learning resources enabled the successful development of a replicable method for automating the systematic review article screening process, which has potential for shortening the time from discovery to clinical application in medicine.
Affiliation(s)
- Quinlan D Buchlak
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW, Australia.
- Nazanin Esmaili
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW, Australia; Faculty of Engineering and IT, University of Technology Sydney, Ultimo, NSW, Australia
- Christine Bennett
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW, Australia
- Farrokh Farrokhi
- Neuroscience Institute, Virginia Mason Medical Center, Seattle, WA, USA
- Massimo Piccardi
- Faculty of Engineering and IT, University of Technology Sydney, Ultimo, NSW, Australia
156
Quan TM, Hildebrand DGC, Jeong WK. FusionNet: A Deep Fully Residual Convolutional Neural Network for Image Segmentation in Connectomics. Front Comput Sci 2021. [DOI: 10.3389/fcomp.2021.613981]
Abstract
Cellular-resolution connectomics is an ambitious research direction with the goal of generating comprehensive brain connectivity maps using high-throughput, nano-scale electron microscopy. One of the main challenges in connectomics research is developing scalable image analysis algorithms that require minimal user intervention. Deep learning has provided exceptional performance in image classification tasks in computer vision, leading to a recent explosion in popularity. Similarly, its application to connectomic analyses holds great promise. Here, we introduce a deep neural network architecture, FusionNet, with a focus on its application to accomplish automatic segmentation of neuronal structures in connectomics data. FusionNet combines recent advances in machine learning, such as semantic segmentation and residual neural networks, with summation-based skip connections. This results in a much deeper network architecture and improves segmentation accuracy. We demonstrate the performance of the proposed method by comparing it with several other popular electron microscopy segmentation methods. We further illustrate its flexibility through segmentation results for two different tasks: cell membrane segmentation and cell nucleus segmentation.
157
Koshino K, Werner RA, Pomper MG, Bundschuh RA, Toriumi F, Higuchi T, Rowe SP. Narrative review of generative adversarial networks in medical and molecular imaging. Ann Transl Med 2021; 9:821. [PMID: 34268434] [PMCID: PMC8246192] [DOI: 10.21037/atm-20-6325]
Abstract
Recent years have witnessed a rapidly expanding use of artificial intelligence and machine learning in medical imaging. Generative adversarial networks (GANs) are techniques to synthesize images based on artificial neural networks and deep learning. In addition to the flexibility and versatility inherent in the deep learning on which GANs are based, their potential problem-solving ability has attracted attention and is being vigorously studied in the medical and molecular imaging fields. This narrative review provides a comprehensive overview of GANs and discusses their usefulness in medical and molecular imaging on the following topics: (I) data augmentation to increase training data for AI-based computer-aided diagnosis as a solution for the data-hungry nature of such training sets; (II) modality conversion to complement the shortcomings of a single modality that reflects certain physical measurement principles, such as from magnetic resonance (MR) to computed tomography (CT) images or vice versa; (III) de-noising to enable lower injected and/or radiation doses in nuclear medicine and CT; (IV) image reconstruction to shorten MR acquisition time while maintaining high image quality; (V) super-resolution to produce a high-resolution image from a low-resolution one; (VI) domain adaptation, which transfers knowledge such as supervised labels and annotations from a source domain to a target domain with no or insufficient knowledge; and (VII) image generation with disease severity and radiogenomics. GANs are promising tools for medical and molecular imaging, and the progress of model architectures and their applications will remain noteworthy.
Affiliation(s)
- Kazuhiro Koshino
- Department of Systems and Informatics, Hokkaido Information University, Ebetsu, Japan
- Rudolf A. Werner
- The Russell H. Morgan Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Martin G. Pomper
- The Russell H. Morgan Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Fujio Toriumi
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Takahiro Higuchi
- Department of Nuclear Medicine, University Hospital, University of Würzburg, Würzburg, Germany; Comprehensive Heart Failure Center, University Hospital, University of Würzburg, Würzburg, Germany; Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Steven P. Rowe
- The Russell H. Morgan Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins School of Medicine, Baltimore, MD, USA
158
Rahim T, Novamizanti L, Apraz Ramatryana IN, Shin SY. Compressed medical imaging based on average sparsity model and reweighted analysis of multiple basis pursuit. Comput Med Imaging Graph 2021; 90:101927. [PMID: 33930735] [DOI: 10.1016/j.compmedimag.2021.101927]
Abstract
In medical imaging and its applications, efficient image sampling and transfer are key fields of research. Compressed sensing (CS) theory has shown that such compression can be performed during the data retrieval process and that the uncompressed image can be recovered using a computationally flexible optimization method. The objective of this study is to propose compressed medical imaging for different types of medical images, based on the combination of an average sparsity model and reweighted analysis of multiple basis pursuit (M-BP) reconstruction, referred to as multiple basis reweighted analysis (M-BRA). The proposed algorithm includes joint multiple sparsity averaging to improve signal sparsity in M-BP. In this study, four types of medical images are selected to fill the gap of a missing detailed analysis of M-BRA on medical images: magnetic resonance imaging (MRI), computed tomography (CT), colonoscopy, and endoscopy data. Employing the proposed approach, a signal-to-noise ratio (SNR) of 30 dB is achieved for MRI data at a sampling ratio of M/N = 0.3, and SNRs of 34, 30, and 34 dB for CT, colonoscopy, and endoscopy data, respectively, at a sampling ratio of M/N = 0.15. The M-BRA performance indicates the potential for compressed medical imaging analysis with high reconstruction image quality.
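As background for the basis-pursuit family of methods this entry builds on, the sketch below recovers a sparse signal from random measurements with iterative soft-thresholding (ISTA) in numpy; it is illustrative only and is not the authors' M-BRA algorithm:

```python
import numpy as np

# Generic CS recovery sketch: solve min_x ||y - Phi x||^2 / 2 + lam * ||x||_1 with ISTA.
rng = np.random.default_rng(1)
n, m, k = 256, 100, 8                             # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix, M/N ~ 0.39
y = phi @ x_true                                  # compressed measurements

lam = 0.1
step = 1.0 / np.linalg.norm(phi, 2) ** 2          # step size from the spectral norm
x = np.zeros(n)
for _ in range(500):
    x = x - step * (phi.T @ (phi @ x - y))        # gradient step on the data term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold (l1 prox)
```

Reweighted schemes such as the one in this entry iterate a loop like this while adapting per-coefficient weights on the l1 term between passes.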
Affiliation(s)
- Tariq Rahim
- Department of IT Convergence Engineering, Kumoh National Institute of Technology (KIT), Gumi 39177, South Korea
- Ledya Novamizanti
- School of Electrical Engineering, Telkom University, Bandung 40257, Indonesia
- I Nyoman Apraz Ramatryana
- Department of IT Convergence Engineering, Kumoh National Institute of Technology (KIT), Gumi 39177, South Korea
- Soo Young Shin
- Department of IT Convergence Engineering, Kumoh National Institute of Technology (KIT), Gumi 39177, South Korea
159
Wang S, Lv J, He Z, Liang D, Chen Y, Zhang M, Liu Q. Denoising auto-encoding priors in undecimated wavelet domain for MR image reconstruction. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.09.086]
160
Shao W, Rowe SP, Du Y. SPECTnet: a deep learning neural network for SPECT image reconstruction. Ann Transl Med 2021; 9:819. [PMID: 34268432] [PMCID: PMC8246183] [DOI: 10.21037/atm-20-3345]
Abstract
Background: Single photon emission computed tomography (SPECT) is an important functional tool for clinical diagnosis and scientific research of brain disorders, but it suffers from limited spatial resolution and high noise due to hardware design and imaging physics. The aim of the present study was to develop a deep learning technique for SPECT image reconstruction that directly converts raw projection data to images with high resolution and low noise, together with an efficient training method specifically applicable to medical image reconstruction.
Methods: Custom software was developed to generate 20,000 2-D brain phantoms, of which 16,000 were used to train the neural network, 2,000 for validation, and the final 2,000 for testing. To reduce development difficulty, a two-step training strategy was adopted. We first compressed each full-size activity image (128×128 pixels) to a 1-D vector of 256×1 pixels using an autoencoder (AE) consisting of an encoder and a decoder. The vector is a good representation of the full-size image in a lower-dimensional space and was used as a compact label to develop a second network that maps between the projection-data domain and the vector domain. Since the label had only 256 pixels, the second network was compact and easy to converge. The second network, once successfully developed, was connected to the decoder (a portion of the AE) to decompress the vector to a regular 128×128 image. A complex network was thus essentially divided into two compact neural networks trained separately in sequence but eventually connectable.
Results: A total of 2,000 test examples, a synthetic brain phantom, and de-identified patient data were used to validate SPECTnet. Results obtained with SPECTnet were compared with those obtained with our clinical OS-EM method. SPECTnet produced images with lower noise and more accurate information in the uptake areas.
Conclusions: The challenge of developing a complex deep neural network is reduced by training two separate compact connectable networks. The combination of the two networks forms the full version of SPECTnet. Results show that the developed neural network can produce more accurate SPECT images.
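The two-step strategy described above has a simple linear analogue (a hypothetical numpy sketch, not SPECTnet itself): step 1 learns a compact code for full-size images (here via SVD, the linear counterpart of an autoencoder); step 2 learns a map from projection data to that code; the mapper composed with the decoder then reconstructs images directly from projections:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, img_dim, code_dim, proj_dim = 500, 128, 16, 64

# Toy training images lying on a low-dimensional manifold (stand-in for phantoms).
basis = rng.standard_normal((img_dim, code_dim))
images = rng.standard_normal((n_train, code_dim)) @ basis.T

# Step 1: linear "autoencoder" via SVD; decoder maps code -> image, encoder image -> code.
u, _, _ = np.linalg.svd(images.T, full_matrices=False)
decoder = u[:, :code_dim]
codes = images @ decoder                       # compact labels for step 2

# Step 2: learn projections -> code with least squares (stand-in for the second network).
system = rng.standard_normal((proj_dim, img_dim)) / np.sqrt(img_dim)  # toy forward model
projections = images @ system.T
mapper, *_ = np.linalg.lstsq(projections, codes, rcond=None)

# Composed "network": projections -> code -> full-size image.
recon = (projections @ mapper) @ decoder.T
```

In the paper both steps are nonlinear neural networks, but the division of labor is the same: the second model only has to hit a 256-element code, not a full 128×128 image.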
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Steven P Rowe
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
161
Wolterink JM, Mukhopadhyay A, Leiner T, Vogl TJ, Bucher AM, Išgum I. Generative Adversarial Networks: A Primer for Radiologists. Radiographics 2021; 41:840-857. [PMID: 33891522] [DOI: 10.1148/rg.2021200151]
Abstract
Artificial intelligence techniques involving the use of artificial neural networks (that is, deep learning techniques) are expected to have a major effect on radiology. Some of the most exciting applications of deep learning in radiology make use of generative adversarial networks (GANs). GANs consist of two artificial neural networks that are jointly optimized but with opposing goals. One neural network, the generator, aims to synthesize images that cannot be distinguished from real images. The second neural network, the discriminator, aims to distinguish these synthetic images from real images. These deep learning models allow, among other applications, the synthesis of new images, acceleration of image acquisitions, reduction of imaging artifacts, efficient and accurate conversion between medical images acquired with different modalities, and identification of abnormalities depicted on images. The authors provide an introduction to GANs and adversarial deep learning methods. In addition, the different ways in which GANs can be used for image synthesis and image-to-image translation tasks, as well as the principles underlying conditional GANs and cycle-consistent GANs, are described. Illustrated examples of GAN applications in radiologic image analysis for different imaging modalities and different tasks are provided. The clinical potential of GANs, future clinical GAN applications, and potential pitfalls and caveats that radiologists should be aware of are also discussed in this review. The online slide presentation from the RSNA Annual Meeting is available for this article. ©RSNA, 2021.
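The opposing objectives of the generator and discriminator described in this primer can be made concrete with a toy numpy example (illustrative only, not a radiology model): the discriminator loss rewards scoring real samples near 1 and synthetic samples near 0, while the generator loss rewards pushing the discriminator's score on synthetic samples toward 1:

```python
import numpy as np

rng = np.random.default_rng(3)

def discriminator(x, w):
    """Toy logistic discriminator: score in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

real = rng.normal(2.0, 0.5, size=(64, 1))   # samples from the "real" distribution
fake = rng.normal(0.0, 0.5, size=(64, 1))   # untrained generator output
w = np.array([1.0])                         # discriminator weight

d_real = discriminator(real, w)
d_fake = discriminator(fake, w)

# Discriminator objective: binary cross-entropy with labels 1 (real), 0 (fake).
loss_d = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
# Generator (non-saturating) objective: make D score fakes as real.
loss_g = -np.mean(np.log(d_fake))
```

In a full GAN these two losses are minimized in alternation over the parameters of two deep networks; the conditional and cycle-consistent variants covered in the primer add extra terms to the same adversarial core.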
Affiliation(s)
- Jelmer M Wolterink
- From the Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, Technical Medical Centre, University of Twente, Zilverling, PO Box 217, 7500 AE Enschede, the Netherlands (J.M.W.); Department of Biomedical Engineering and Physics (J.M.W., I.I.) and Department of Radiology and Nuclear Medicine (I.I.), Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Informatics, Technische Universität Darmstadt, Darmstadt, Germany (A.M.); Department of Radiology, Utrecht University Medical Center, Utrecht, the Netherlands (T.L.); and Institute of Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt, Germany (T.J.V., A.M.B.)
- Anirban Mukhopadhyay
- Tim Leiner
- Thomas J Vogl
- Andreas M Bucher
- Ivana Išgum
162
Shin Y, Yang J, Lee YH. Deep Generative Adversarial Networks: Applications in Musculoskeletal Imaging. Radiol Artif Intell 2021; 3:e200157. [PMID: 34136816] [PMCID: PMC8204145] [DOI: 10.1148/ryai.2021200157]
Abstract
In recent years, deep learning techniques have been applied in musculoskeletal radiology to increase the diagnostic potential of acquired images. Generative adversarial networks (GANs), which are deep neural networks that can generate or transform images, have the potential to aid in faster imaging by generating images with a high level of realism across multiple contrasts and modalities from existing imaging protocols. This review introduces the key architectures of GANs as well as their technical background and challenges. Key research trends are highlighted, including: (a) reconstruction of high-resolution MRI; (b) image synthesis with different modalities and contrasts; (c) image enhancement that efficiently preserves high-frequency information suitable for human interpretation; (d) pixel-level segmentation with annotation sharing between domains; and (e) applications to different musculoskeletal anatomies. In addition, an overview is provided of the key issues wherein clinical applicability is challenging to capture with conventional performance metrics and expert evaluation. When clinically validated, GANs have the potential to improve musculoskeletal imaging. Keywords: Adults and Pediatrics, Computer Aided Diagnosis (CAD), Computer Applications-General (Informatics), Informatics, Skeletal-Appendicular, Skeletal-Axial, Soft Tissues/Skin © RSNA, 2021.
Affiliation(s)
- YiRang Shin, Jaemoon Yang, Young Han Lee
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
163
Sun W, Wang W, Zhu K, Chen CZ, Wen XX, Zeng MS, Rao SX. Feasibility of compressed sensing technique for isotropic dynamic contrast-enhanced liver magnetic resonance imaging. Eur J Radiol 2021; 139:109729. [PMID: 33905976] [DOI: 10.1016/j.ejrad.2021.109729]
Abstract
PURPOSE To investigate whether an isotropic T1-weighted gradient echo (T1-GRE) sequence using a compressed sensing (CS) technique during liver magnetic resonance imaging (MRI) can improve image quality compared with a standard parallel imaging (PI) technique in patients with hepatocellular carcinoma (HCC). METHODS Forty-nine patients with a single, pathologically confirmed HCC were included in this prospective study; each underwent 3.0-T MRI including the two T1-GRE sequences (CS and PI). The relative contrast (RC) of liver-to-lesion, liver-to-portal vein, and liver-to-hepatic vein was calculated on pre-contrast and postcontrast (delayed phase) images. Respiratory motion artifact, gastrointestinal motion artifact, and overall image quality were scored on a 4-point scale. RESULTS The RC of liver-to-lesion, liver-to-portal vein, and liver-to-hepatic vein on both pre-contrast and postcontrast images was significantly higher for CS than for PI. Overall image quality scores were comparable between PI and CS (3.98 ± 0.10 vs 3.96 ± 0.13, P = 0.083 for pre-contrast; 3.96 ± 0.16 vs 3.93 ± 0.17, P = 0.132 for postcontrast). Gastrointestinal motion artifact scores were significantly higher for PI than for CS (3.92 ± 0.21 vs 3.69 ± 0.33 for pre-contrast; 3.86 ± 0.21 vs 3.59 ± 0.30 for postcontrast; P < 0.001 for both). Respiratory motion artifact scores were significantly higher for PI only in the pre-contrast sequence (3.97 ± 0.11 vs 3.89 ± 0.22, P = 0.002 for pre-contrast; 3.95 ± 0.18 vs 3.90 ± 0.22, P = 0.083 for postcontrast). CONCLUSIONS Compared with the standard PI sequence, the CS technique provides greater contrast for displaying HCCs and hepatic vessels in MRI without compromising overall image quality.
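As an aside for readers implementing the comparison, the relative contrast (RC) metric referenced above is commonly computed from the mean signal intensities of the two tissues; the abstract does not restate the exact formula, so the definition below is an assumption shown for illustration only:

```python
def relative_contrast(signal_a: float, signal_b: float) -> float:
    """Relative contrast between two tissue signals.

    Uses a common definition, |S_a - S_b| / (S_a + S_b); the study's
    exact formula may differ (this is an assumed, illustrative form).
    """
    return abs(signal_a - signal_b) / (signal_a + signal_b)

# Hypothetical mean intensities for liver and lesion ROIs.
rc_liver_lesion = relative_contrast(300.0, 100.0)  # -> 0.5
```

A higher RC means the two structures are easier to tell apart at a glance, which is what the CS sequence improved here.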
Affiliation(s)
- Wei Sun
- Department of Radiology, Zhongshan Hospital, Fudan University, China
- Wentao Wang
- Department of Radiology, Zhongshan Hospital, Fudan University, China; Shanghai Medical Imaging Institute, Shanghai, China
- Kai Zhu
- Liver Cancer Institute, Zhongshan Hospital, Fudan University, Shanghai, China
- Cai-Zhong Chen
- Department of Radiology, Zhongshan Hospital, Fudan University, China
- Xi-Xi Wen
- United Imaging Healthcare, Shanghai, China
- Meng-Su Zeng
- Department of Radiology, Zhongshan Hospital, Fudan University, China; Shanghai Medical Imaging Institute, Shanghai, China
- Sheng-Xiang Rao
- Department of Radiology, Zhongshan Hospital, Fudan University, China; Shanghai Medical Imaging Institute, Shanghai, China
164
Chung H, Cha E, Sunwoo L, Ye JC. Two-stage deep learning for accelerated 3D time-of-flight MRA without matched training data. Med Image Anal 2021; 71:102047. [PMID: 33895617] [DOI: 10.1016/j.media.2021.102047]
Abstract
Time-of-flight magnetic resonance angiography (TOF-MRA) is one of the most widely used non-contrast MR imaging methods for visualizing blood vessels, but because of its 3D volume acquisition, highly accelerated acquisition is necessary. Accordingly, high-quality reconstruction from undersampled TOF-MRA is an important research topic for deep learning. However, most existing deep learning works require matched reference data for supervised training, which are often difficult to obtain. By extending the recent theoretical understanding of cycleGAN from optimal transport theory, here we propose a novel two-stage unsupervised deep learning approach, composed of a multi-coil reconstruction network along the coronal plane followed by a multi-planar refinement network along the axial plane. Specifically, the first network is trained in the square-root of sum of squares (SSoS) domain to achieve high-quality parallel image reconstruction, whereas the second refinement network is designed to efficiently learn the characteristics of highly activated blood flow using a double-headed projection discriminator. Extensive experiments demonstrate that the proposed learning process without matched references exceeds the performance of a state-of-the-art compressed sensing (CS)-based method and provides comparable or even better results than supervised learning approaches.
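The square-root of sum of squares (SSoS) domain in which the first network is trained refers to a standard multi-coil combination; a minimal NumPy sketch (the array shapes are assumptions):

```python
import numpy as np

def ssos_combine(coil_images: np.ndarray) -> np.ndarray:
    """Square-root of sum of squares (SSoS) combination.

    coil_images: complex array of shape (num_coils, height, width).
    Returns the real-valued combined magnitude image of shape (height, width).
    """
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# Example: 8 coils with unit-magnitude signal everywhere.
coils = np.ones((8, 4, 4), dtype=np.complex64)
combined = ssos_combine(coils)  # every pixel equals sqrt(8)
```

SSoS discards coil phase, which is one reason it is a convenient training domain when coil sensitivity maps are unavailable.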
Affiliation(s)
- Hyungjin Chung
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Eunju Cha
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Jong Chul Ye
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
165
A dual-task dual-domain model for blind MRI reconstruction. Comput Med Imaging Graph 2021; 89:101862. [PMID: 33798914] [DOI: 10.1016/j.compmedimag.2021.101862]
Abstract
MRI reconstruction is the key technology for accelerating MR acquisition. Recent cascade models have achieved satisfactory results; however, they rely heavily on the known sampling mask, which we call the mask prior. To restore the MR image without the mask prior, we designed an auxiliary network to estimate the mask from the sampled k-space data. Experimentally, the sampling mask can be fully estimated by the proposed network and used as input to the cascade models. Moreover, we rethink MRI reconstruction as a k-space inpainting task. A dual-domain cascade network, which uses partial convolutional layers to inpaint features in k-space, is presented to restore the MR image. Without the mask prior, our blind reconstruction model demonstrates the best reconstruction ability at both 4× and 8× acceleration.
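For Cartesian undersampling, the intuition behind estimating the mask from sampled k-space can be shown with a trivial baseline: acquired phase-encoding lines are exactly the rows containing nonzero entries (a simplified NumPy sketch, not the paper's auxiliary network):

```python
import numpy as np

def estimate_mask(kspace: np.ndarray) -> np.ndarray:
    """Read a Cartesian row-sampling mask off zero-filled k-space.

    A phase-encoding line counts as sampled if any entry along the
    readout direction is nonzero.
    """
    return np.any(kspace != 0, axis=1).astype(np.float32)

# Simulate 4x acceleration: keep every 4th row of a 16x16 k-space.
rng = np.random.default_rng(0)
full = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
mask_true = np.zeros(16, dtype=np.float32)
mask_true[::4] = 1
undersampled = full * mask_true[:, None]
mask_est = estimate_mask(undersampled)  # recovers mask_true exactly
```

A learned estimator is needed for the harder cases this baseline cannot handle, such as noise floors that make "exactly zero" unreliable.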
166
Lin DJ, Johnson PM, Knoll F, Lui YW. Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians. J Magn Reson Imaging 2021; 53:1015-1028. [PMID: 32048372] [PMCID: PMC7423636] [DOI: 10.1002/jmri.27078]
Abstract
Artificial intelligence (AI) shows tremendous promise in the field of medical imaging, with recent breakthroughs applying deep-learning models for data acquisition, classification problems, segmentation, image synthesis, and image reconstruction. With an eye towards clinical applications, we summarize the active field of deep-learning-based MR image reconstruction. We review the basic concepts of how deep-learning algorithms aid in the transformation of raw k-space data to image data, and specifically examine accelerated imaging and artifact suppression. Recent efforts in these areas show that deep-learning-based algorithms can match and, in some cases, eclipse conventional reconstruction methods in terms of image quality and computational efficiency across a host of clinical imaging applications, including musculoskeletal, abdominal, cardiac, and brain imaging. This article is an introductory overview aimed at clinical radiologists with no experience in deep-learning-based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
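The raw-k-space-to-image transformation this overview describes is, at its core, an inverse Fourier transform; undersampling k-space is what produces the aliasing that reconstruction networks learn to remove. A toy NumPy sketch of that pipeline:

```python
import numpy as np

def image_to_kspace(image):
    """Forward model: image -> centered 2D k-space."""
    return np.fft.fftshift(np.fft.fft2(image))

def kspace_to_image(kspace):
    """Reconstruct a complex image from centered k-space via inverse 2D FFT."""
    return np.fft.ifft2(np.fft.ifftshift(kspace))

phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0                 # toy "anatomy"
k = image_to_kspace(phantom)
recon_full = kspace_to_image(k)             # exact up to float error
mask = np.zeros((32, 32))
mask[::4, :] = 1                            # keep every 4th line: 4x acceleration
recon_zf = kspace_to_image(k * mask)        # zero-filled recon shows aliasing
```

The zero-filled reconstruction `recon_zf` exhibits the fold-over artifacts that both conventional and deep-learning methods aim to suppress.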
Affiliation(s)
- Dana J. Lin
- Department of Radiology, NYU School of Medicine / NYU Langone Health
- Florian Knoll
- New York University School of Medicine, Center for Biomedical Imaging
- Yvonne W. Lui
- Department of Radiology, NYU School of Medicine / NYU Langone Health
167
Xiao Z, Du N, Liu J, Zhang W. SR-Net: A sequence offset fusion net and refine net for undersampled multislice MR image reconstruction. Comput Methods Programs Biomed 2021; 202:105997. [PMID: 33621943] [DOI: 10.1016/j.cmpb.2021.105997]
Abstract
BACKGROUND AND OBJECTIVE Deep learning-based fast magnetic resonance imaging (MRI) reconstruction methods have become popular in recent years. However, reconstruction remains challenging at large acceleration factors. The objective of this study was to improve the reconstruction quality of undersampled MR images by exploiting data redundancy among slices. METHODS There are two kinds of redundancy in multislice MR images: correlations inside a single slice and correlations among slices. Thus, we built one subnet for each. For correlations among slices, we built a bidirectional recurrent convolutional neural network, named Sequence Offset Fusion Net (S-Net), in which a deformable convolution module serves as a neighbor-slice feature extractor. For correlations inside a single slice, we built a Refine Net (R-Net) with 5 layers of 2D convolutions. In addition, we used a data consistency (DC) operation to maintain data fidelity in k-space. Finally, we treated the reconstruction task as a dealiasing problem in the image domain; S-Net and R-Net are applied alternately and iteratively to generate the final reconstructions. RESULTS The proposed algorithm was evaluated on two public online MRI datasets. Compared with several state-of-the-art methods, the proposed method achieved better reconstruction results in terms of dealiasing and restoring tissue structure. Moreover, at over 14 slices per second on 256×256-pixel images, the proposed method can meet the need for real-time processing. CONCLUSION With spatial correlation among slices as additional prior information, the proposed method dramatically improves the reconstruction quality of undersampled MR images.
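The data consistency (DC) operation mentioned in the methods is, in its common "hard" form, a replacement of the network's k-space values with the measured ones wherever data were actually acquired (a generic sketch, not necessarily the authors' exact layer):

```python
import numpy as np

def data_consistency(x_net, k_measured, mask):
    """Hard data-consistency layer.

    x_net: complex image predicted by the network.
    k_measured: zero-filled measured k-space.
    mask: binary sampling mask (1 = measured).
    Sampled entries are restored to the measured values; unsampled
    entries keep the network's prediction.
    """
    k_net = np.fft.fft2(x_net)
    k_dc = (1 - mask) * k_net + mask * k_measured
    return np.fft.ifft2(k_dc)

rng = np.random.default_rng(0)
x_true = rng.standard_normal((8, 8))
mask = (rng.random((8, 8)) < 0.5).astype(float)
k_meas = mask * np.fft.fft2(x_true)
x_net = rng.standard_normal((8, 8))   # stand-in for a network output
x_dc = data_consistency(x_net, k_meas, mask)
```

Because the measured entries are restored exactly, the layer guarantees the output never contradicts the acquired data, which is why cascades interleave it with learned dealiasing blocks.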
Affiliation(s)
- Zhiyong Xiao
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Nianmao Du
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Jianjun Liu
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Weidong Zhang
- Department of Automation, Shanghai JiaoTong University, Shanghai 200240, China
168
Yang Y, Wang H, Li W, Wang X, Wei S, Liu Y, Xu Y. Prediction and analysis of multiple protein lysine modified sites based on conditional wasserstein generative adversarial networks. BMC Bioinformatics 2021; 22:171. [PMID: 33789579] [PMCID: PMC8010967] [DOI: 10.1186/s12859-021-04101-y]
Abstract
BACKGROUND Protein post-translational modification (PTM) is key to investigating the mechanisms of protein function. With the rapid development of proteomics technology, a large amount of protein sequence data has been generated, which highlights the importance of in-depth study and analysis of PTMs in proteins. METHOD We proposed a new multi-classification machine learning pipeline, MultiLyGAN, to identify seven types of lysine-modified sites. Using eight sequential and five structural construction methods, 1497 valid features remained after filtering by Pearson correlation coefficient. To solve the data imbalance problem, two influential deep generative methods, the Conditional Generative Adversarial Network (CGAN) and the Conditional Wasserstein Generative Adversarial Network (CWGAN), were leveraged and compared to generate new samples for the types with fewer samples. Finally, a random forest algorithm was used to predict the seven categories. RESULTS In tenfold cross-validation, accuracy (Acc) and Matthews correlation coefficient (MCC) were 0.8589 and 0.8376, respectively. In the independent test, Acc and MCC were 0.8549 and 0.8330, respectively. The results indicated that CWGAN better solved the existing data imbalance and stabilized the training error. An accumulated feature-importance analysis showed that CKSAAP, PWM, and structural features were the three most important feature-encoding schemes. MultiLyGAN can be found at https://github.com/Lab-Xu/MultiLyGAN. CONCLUSIONS The CWGAN greatly improved predictive performance in all experiments. Features derived from the CKSAAP, PWM, and structure schemes are the most informative and contributed most to the prediction of PTMs.
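CKSAAP (composition of k-spaced amino acid pairs), reported here as one of the most informative encodings, counts how often each ordered residue pair occurs separated by exactly k positions. A simplified sketch of the idea (window handling and normalization vary between implementations):

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]  # 400 ordered pairs

def cksaap(sequence: str, k: int) -> list:
    """Normalized frequency of each ordered pair separated by k residues."""
    counts = {p: 0 for p in PAIRS}
    n_windows = len(sequence) - k - 1
    for i in range(n_windows):
        pair = sequence[i] + sequence[i + k + 1]
        if pair in counts:          # skip non-standard residues
            counts[pair] += 1
    return [counts[p] / n_windows for p in PAIRS]

features = cksaap("ACDKACDK", k=0)  # adjacent-pair (k = 0) composition
```

Stacking the 400-dimensional vectors for several values of k gives the sequence-derived part of a feature matrix like the one filtered in this pipeline.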
Affiliation(s)
- Yingxi Yang
- Department of Information and Computer Science, University of Science and Technology Beijing, Beijing, 100083, China
- Hui Wang
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100080, China
- Wen Li
- Department of Information and Computer Science, University of Science and Technology Beijing, Beijing, 100083, China
- Xiaobo Wang
- Department of Information and Computer Science, University of Science and Technology Beijing, Beijing, 100083, China
- Shizhao Wei
- No. 15 Research Institute, China Electronics Technology Group Corporation, Beijing, 100083, China
- Yulong Liu
- No. 15 Research Institute, China Electronics Technology Group Corporation, Beijing, 100083, China
- Yan Xu
- Department of Information and Computer Science, University of Science and Technology Beijing, Beijing, 100083, China
169
Zhou X, Qiu S, Joshi PS, Xue C, Killiany RJ, Mian AZ, Chin SP, Au R, Kolachalama VB. Enhancing magnetic resonance imaging-driven Alzheimer's disease classification performance using generative adversarial learning. Alzheimers Res Ther 2021; 13:60. [PMID: 33715635] [PMCID: PMC7958452] [DOI: 10.1186/s13195-021-00797-5]
Abstract
BACKGROUND Generative adversarial networks (GANs) can produce images of improved quality, but their ability to augment image-based classification is not fully explored. We evaluated whether a modified GAN can learn from magnetic resonance imaging (MRI) scans of multiple magnetic field strengths to enhance Alzheimer's disease (AD) classification performance. METHODS T1-weighted brain MRI scans from 151 participants of the Alzheimer's Disease Neuroimaging Initiative (ADNI), who underwent both 1.5-Tesla (1.5-T) and 3-Tesla (3-T) imaging at the same time, were selected to construct a GAN model. This model was trained along with a three-dimensional fully convolutional network (FCN) using the generated images (3T*) as inputs to predict AD status. Quality of the generated images was evaluated using the signal-to-noise ratio (SNR), the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), and the Natural Image Quality Evaluator (NIQE). Cases from the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL, n = 107) and the National Alzheimer's Coordinating Center (NACC, n = 565) were used for model validation. RESULTS The 3T*-based FCN classifier performed better than the FCN model trained using the 1.5-T scans. Specifically, the mean area under the curve increased from 0.907 to 0.932, from 0.934 to 0.940, and from 0.870 to 0.907 on the ADNI test, AIBL, and NACC datasets, respectively. Additionally, we found that the mean quality of the generated (3T*) images was consistently higher than that of the 1.5-T images, as measured using SNR, BRISQUE, and NIQE on the validation datasets. CONCLUSION This study demonstrates a proof of principle that GAN frameworks can be constructed to augment AD classification performance and improve image quality.
Affiliation(s)
- Xiao Zhou
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, 72 E. Concord Street, Evans 636, Boston, MA, 02118, USA
- Department of Computer Science, College of Arts & Sciences, Boston University, Boston, MA, USA
- Shangran Qiu
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, 72 E. Concord Street, Evans 636, Boston, MA, 02118, USA
- Department of Physics, College of Arts & Sciences, Boston University, Boston, MA, USA
- Prajakta S Joshi
- Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, USA
- Department of General Dentistry, Boston University School of Dental Medicine, Boston, MA, USA
- Chonghua Xue
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, 72 E. Concord Street, Evans 636, Boston, MA, 02118, USA
- Ronald J Killiany
- Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, USA
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA
- Boston University Alzheimer's Disease Center, Boston, MA, USA
- Asim Z Mian
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
- Sang P Chin
- Department of Computer Science, College of Arts & Sciences, Boston University, Boston, MA, USA
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Center of Mathematical Sciences & Applications, Harvard University, Cambridge, MA, USA
- Rhoda Au
- Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, USA
- Boston University Alzheimer's Disease Center, Boston, MA, USA
- The Framingham Heart Study, Boston University School of Medicine, Boston, MA, USA
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Vijaya B Kolachalama
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, 72 E. Concord Street, Evans 636, Boston, MA, 02118, USA
- Department of Computer Science, College of Arts & Sciences, Boston University, Boston, MA, USA
- Boston University Alzheimer's Disease Center, Boston, MA, USA
- Faculty of Computing & Data Sciences, Boston University, Boston, MA, USA
170
Montalt-Tordera J, Muthurangu V, Hauptmann A, Steeden JA. Machine learning in Magnetic Resonance Imaging: Image reconstruction. Phys Med 2021; 83:79-87. [DOI: 10.1016/j.ejmp.2021.02.020]
171
Zhao D, Huang Y, Zhao F, Qin B, Zheng J. Reference-Driven Undersampled MR Image Reconstruction Using Wavelet Sparsity-Constrained Deep Image Prior. Comput Math Methods Med 2021; 2021:8865582. [PMID: 33552232] [PMCID: PMC7846397] [DOI: 10.1155/2021/8865582]
Abstract
Deep learning has shown potential for significantly improving the performance of undersampled magnetic resonance (MR) image reconstruction. However, one challenge for applying deep learning in clinical scenarios is the requirement of large, high-quality patient-based datasets for network training. In this paper, we propose a novel deep learning-based method for undersampled MR image reconstruction that requires neither a pre-training procedure nor pre-training datasets. The proposed reference-driven method using a wavelet sparsity-constrained deep image prior (RWS-DIP) is based on the DIP framework and thereby reduces the dependence on datasets. Moreover, RWS-DIP introduces structure and sparsity priors into network learning to improve learning efficiency. By employing a high-resolution reference image as the network input, RWS-DIP incorporates structural information into the network. RWS-DIP also uses wavelet sparsity to further enrich the implicit regularization of traditional DIP by formulating the training of network parameters as a constrained optimization problem, which is solved using the alternating direction method of multipliers (ADMM) algorithm. Experiments on in vivo MR scans have demonstrated that RWS-DIP can reconstruct MR images more accurately and preserve features and textures from undersampled k-space measurements.
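In ADMM, a wavelet sparsity constraint typically enters through a soft-thresholding (proximal) step on the wavelet coefficients. A minimal one-level orthonormal Haar version of both pieces (a generic sketch; the paper's transform and threshold schedule may differ):

```python
import numpy as np

def haar_2d(x):
    """One-level 2D orthonormal Haar transform of an even-sized array."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)      # row lowpass
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)      # row highpass
    rows = np.vstack([lo, hi])
    left = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)   # column lowpass
    right = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)  # column highpass
    return np.hstack([left, right])

def soft_threshold(coeffs, lam):
    """Proximal operator of the l1 norm: shrink coefficients toward zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

img = np.arange(16.0).reshape(4, 4)
w = haar_2d(img)                 # orthonormal: energy is preserved
shrunk = soft_threshold(w, 0.5)  # the ADMM sparsity step
```

Alternating such a shrinkage step with a data-fidelity update is the basic pattern that ADMM formalizes.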
Affiliation(s)
- Di Zhao
- Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
- School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
- Yanhu Huang
- School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
- Feng Zhao
- Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
- Binyi Qin
- Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
- School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
- Jincun Zheng
- Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
- School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
172
High quality and fast compressed sensing MRI reconstruction via edge-enhanced dual discriminator generative adversarial network. Magn Reson Imaging 2021; 77:124-136. [PMID: 33359427] [DOI: 10.1016/j.mri.2020.12.011]
Abstract
Generative adversarial networks (GANs) are widely used for fast compressed sensing magnetic resonance imaging (CS-MRI) reconstruction. However, most existing methods struggle to make an effective trade-off between abstract global high-level features and edge features, which easily causes problems such as residual aliasing artifacts and over-smoothed reconstruction details. To tackle these issues, we propose a novel edge-enhanced dual-discriminator generative adversarial network architecture, EDDGAN, for high-quality CS-MRI reconstruction. In this model, we extract effective edge features by fusing edge information from different depths. Then, leveraging the relationship between abstract global high-level features and edge features, a three-player game is introduced to control the hallucination of details and stabilize the training process. The resulting EDDGAN focuses more on edge restoration and de-aliasing. Extensive experimental results demonstrate that our method consistently outperforms state-of-the-art methods and obtains reconstructed images with rich edge details. In addition, our method shows remarkable generalization, and its time consumption for each 256 × 256 image reconstruction is approximately 8.39 ms.
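Edge features of the kind EDDGAN fuses can be illustrated with a plain gradient operator; an edge-aware loss then compares such maps between the reconstruction and the reference (a generic Sobel sketch, not the paper's learned extractor):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def conv2_valid(img, kernel):
    """Plain 'valid' 2D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def edge_map(img):
    """Gradient-magnitude edge map from horizontal and vertical Sobel filters."""
    gx = conv2_valid(img, SOBEL_X)
    gy = conv2_valid(img, SOBEL_X.T)
    return np.sqrt(gx ** 2 + gy ** 2)

img = np.zeros((8, 8))
img[:, 4:] = 1.0          # vertical step edge
edges = edge_map(img)     # large only near the step, zero in flat regions
```

Penalizing differences in such maps pushes a generator to keep edges sharp where a pure pixel-wise loss would tolerate smoothing.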
173
Lv J, Wang C, Yang G. PIC-GAN: A Parallel Imaging Coupled Generative Adversarial Network for Accelerated Multi-Channel MRI Reconstruction. Diagnostics (Basel) 2021; 11:61. [PMID: 33401777] [PMCID: PMC7824530] [DOI: 10.3390/diagnostics11010061]
Abstract
In this study, we proposed a model combining parallel imaging (PI) with a generative adversarial network (GAN) architecture (PIC-GAN) for accelerated multi-channel magnetic resonance imaging (MRI) reconstruction. This model integrates data fidelity and regularization terms into the generator to benefit from multi-coil information and provide an "end-to-end" reconstruction. In addition, to better preserve image details during reconstruction, we combined the adversarial loss with a pixel-wise loss in both the image and frequency domains. The proposed PIC-GAN framework was evaluated on abdominal and knee MRI images using 2-, 4- and 6-fold accelerations with different undersampling patterns. The performance of PIC-GAN was compared with sparsity-based parallel imaging (L1-ESPIRiT), the variational network (VN), and a conventional GAN with single-channel images as input (zero-filled (ZF)-GAN). Experimental results show that PIC-GAN can effectively reconstruct multi-channel MR images at a low noise level and with improved structural similarity. At an undersampling factor of 6, PIC-GAN yielded the lowest normalized mean square error (NMSE, in ×10-5) (PIC-GAN: 0.58 ± 0.37, ZF-GAN: 1.93 ± 1.41, VN: 1.87 ± 1.28, L1-ESPIRiT: 2.49 ± 1.04 for abdominal MRI data; PIC-GAN: 0.80 ± 0.26, ZF-GAN: 0.93 ± 0.29, VN: 1.18 ± 0.31, L1-ESPIRiT: 1.28 ± 0.24 for knee MRI data) and the highest peak signal-to-noise ratio (PSNR) (PIC-GAN: 34.43 ± 1.92, ZF-GAN: 31.45 ± 4.0, VN: 29.26 ± 2.98, L1-ESPIRiT: 25.40 ± 1.88 for abdominal MRI data; PIC-GAN: 34.10 ± 1.09, ZF-GAN: 31.47 ± 1.05, VN: 30.01 ± 1.01, L1-ESPIRiT: 28.01 ± 0.98 for knee MRI data). The proposed PIC-GAN framework has shown superior reconstruction performance in terms of reducing aliasing artifacts and restoring tissue structures compared with other conventional and state-of-the-art reconstruction methods.
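For reference, the two figures of merit reported above can be computed as follows (a common formulation; the paper's exact normalization may differ slightly):

```python
import numpy as np

def nmse(reference, estimate):
    """Normalized mean squared error: ||x - x_hat||^2 / ||x||^2."""
    return np.sum((reference - estimate) ** 2) / np.sum(reference ** 2)

def psnr(reference, estimate, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

ref = np.array([[0.0, 1.0], [1.0, 0.0]])
est = np.array([[0.0, 0.9], [0.9, 0.0]])
```

Lower NMSE and higher PSNR both indicate a reconstruction closer to the fully sampled reference, which is how the PIC-GAN comparison above should be read.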
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai 264005, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai 201203, China
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
174
Hu D, Liu J, Lv T, Zhao Q, Zhang Y, Quan G, Feng J, Chen Y, Luo L. Hybrid-Domain Neural Network Processing for Sparse-View CT Reconstruction. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3011413]
175
Zhou W, Du H, Mei W, Fang L. Efficient structurally-strengthened generative adversarial network for MRI reconstruction. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.09.008]
176
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538] [PMCID: PMC7856512] [DOI: 10.1002/acm2.13121]
Abstract
This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based methods for inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performance, together with related clinical applications, for representative studies. We then summarize and discuss the challenges raised by the reviewed studies.
Affiliation(s)
- Tonghe Wang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
177
Lei K, Mardani M, Pauly JM, Vasanawala SS. Wasserstein GANs for MR Imaging: From Paired to Unpaired Training. IEEE Transactions on Medical Imaging 2021; 40:105-115. [PMID: 32915728] [PMCID: PMC7797774] [DOI: 10.1109/tmi.2020.3022968] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0]
Abstract
Lack of ground-truth MR images impedes the common supervised training of neural networks for image reconstruction. To cope with this challenge, this article leverages unpaired adversarial training for reconstruction networks, where the inputs are undersampled k-space and naively reconstructed images from one dataset, and the labels are high-quality images from another dataset. The reconstruction networks consist of a generator which suppresses the input image artifacts, and a discriminator using a pool of (unpaired) labels to adjust the reconstruction quality. The generator is an unrolled neural network - a cascade of convolutional and data consistency layers. The discriminator is also a multilayer CNN that plays the role of a critic scoring the quality of reconstructed images based on the Wasserstein distance. Our experiments with knee MRI datasets demonstrate that the proposed unpaired training enables diagnostic-quality reconstruction when high-quality image labels are not available for the input types of interest, or when the amount of labels is small. In addition, our adversarial training scheme can achieve better image quality (as rated by expert radiologists) compared with the paired training schemes with pixel-wise loss.
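The data consistency layers interleaved with convolutional stages in such unrolled cascades can be sketched as a hard k-space replacement. This single-coil, Cartesian version is a simplified illustration of the general idea, not this paper's exact layer:

```python
import numpy as np

def data_consistency(x, y, mask):
    """Hard data-consistency step of the kind used in unrolled cascades:
    take the current image estimate x to k-space, overwrite the measured
    locations (mask) with the acquired samples y, and return to image space.
    Single-coil sketch for illustration only."""
    k = np.fft.fft2(x)
    k = np.where(mask, y, k)   # keep acquired k-space data where it exists
    return np.fft.ifft2(k)
```

In a cascade, this step alternates with a learned regularizer (e.g., a CNN artifact suppressor), so the network can never contradict the acquired measurements.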
178
Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
179
Li G, Lv J, Tong X, Wang C, Yang G. High-Resolution Pelvic MRI Reconstruction Using a Generative Adversarial Network With Attention and Cyclic Loss. IEEE Access 2021; 9:105951-105964. [DOI: 10.1109/access.2021.3099695] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3]
180
Ran M, Xia W, Huang Y, Lu Z, Bao P, Liu Y, Sun H, Zhou J, Zhang Y. MD-Recon-Net: A Parallel Dual-Domain Convolutional Neural Network for Compressed Sensing MRI. IEEE Transactions on Radiation and Plasma Medical Sciences 2021. [DOI: 10.1109/trpms.2020.2991877] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8]
181
Edupuganti V, Mardani M, Vasanawala S, Pauly J. Uncertainty Quantification in Deep MRI Reconstruction. IEEE Transactions on Medical Imaging 2021; 40:239-250. [PMID: 32956045] [PMCID: PMC7837266] [DOI: 10.1109/tmi.2020.3025065] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0]
Abstract
Reliable MRI is crucial for accurate interpretation in therapeutic and diagnostic tasks. However, undersampling during MRI acquisition, as well as the overparameterized and non-transparent nature of deep learning (DL), leaves substantial uncertainty about the accuracy of DL reconstruction. With this in mind, this study aims to quantify the uncertainty in image recovery with DL models. To this end, we first leverage variational autoencoders (VAEs) to develop a probabilistic reconstruction scheme that maps (low-quality) short scans with aliasing artifacts to diagnostic-quality ones. The VAE encodes the acquisition uncertainty in a latent code and naturally offers a posterior of the image, from which one can generate pixel variance maps using Monte-Carlo sampling. Accurately predicting risk also requires knowledge of the bias, for which we leverage Stein's Unbiased Risk Estimator (SURE) as a proxy for the mean squared error (MSE). A range of empirical experiments is performed for knee MRI reconstruction under different training losses (adversarial and pixel-wise) and unrolled recurrent network architectures. Our key observations are that: 1) adversarial losses introduce more uncertainty; and 2) recurrent unrolled nets reduce the prediction uncertainty and risk.
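The Monte-Carlo pixel variance map described above can be sketched generically: sample the VAE latent posterior, decode each sample, and take the per-pixel variance. The `decode` callable and sample count are placeholders standing in for a trained decoder:

```python
import numpy as np

def mc_pixel_variance(decode, z_mean, z_logvar, n_samples=64, seed=0):
    """Monte-Carlo pixel-variance map from a VAE posterior: draw latent
    samples z ~ N(mu, sigma^2), decode each, and compute the per-pixel
    variance across decoded images. `decode` (any callable z -> image)
    is a hypothetical stand-in for a trained decoder network."""
    rng = np.random.default_rng(seed)
    std = np.exp(0.5 * z_logvar)
    imgs = [decode(z_mean + std * rng.standard_normal(z_mean.shape))
            for _ in range(n_samples)]
    return np.stack(imgs).var(axis=0)   # large values flag uncertain pixels
```

A tight posterior (small `z_logvar`) yields a near-zero variance map; a broad one highlights pixels whose reconstruction is unstable.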
182
Zhou W, Du H, Mei W, Fang L. Spatial orthogonal attention generative adversarial network for MRI reconstruction. Med Phys 2020; 48:627-639. [PMID: 33111361] [DOI: 10.1002/mp.14509] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8]
Abstract
PURPOSE Recent studies have shown that self-attention modules can better solve vision understanding problems by capturing long-range dependencies. However, very few works have designed a lightweight self-attention module to improve the quality of MRI reconstruction. Furthermore, several important self-attention modules (e.g., the non-local block) incur high computational complexity and require a large amount of GPU memory when the input feature is large. The purpose of this study is to design a lightweight yet effective spatial orthogonal attention module (SOAM) to capture long-range dependencies, and to develop a novel spatial orthogonal attention generative adversarial network, termed SOGAN, to achieve more accurate MRI reconstruction. METHODS We first develop a lightweight SOAM, which generates two small attention maps to effectively aggregate long-range contextual information in the vertical and horizontal directions, respectively. Then, we embed the proposed SOAMs into concatenated convolutional autoencoders to form the generator of the proposed SOGAN. RESULTS The experimental results demonstrate that the proposed SOAMs effectively improve the quality of the reconstructed MR images by capturing long-range dependencies. In addition, compared with state-of-the-art deep learning-based CS-MRI methods, the proposed SOGAN reconstructs MR images more accurately with fewer model parameters. CONCLUSIONS The proposed SOAM is a lightweight yet effective self-attention module for capturing long-range dependencies and can thus improve the quality of MRI reconstruction to a large extent. Moreover, with the help of SOAMs, the proposed SOGAN outperforms state-of-the-art deep learning-based CS-MRI methods.
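One illustrative reading of the "two small attention maps" idea is axis-wise attention: an H×H map for the vertical direction and a W×W map for the horizontal direction, instead of the HW×HW map a non-local block would need. This toy NumPy sketch (mean-pooled queries/keys, all design choices hypothetical) conveys only the memory argument, not the paper's exact SOAM:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def orthogonal_attention(feat):
    """Toy axis-wise attention over a (C, H, W) feature map: build small
    H x H and W x W attention maps from pooled features, then aggregate
    context along height and width separately."""
    col = feat.mean(axis=2)                       # (C, H) column-pooled features
    att_v = softmax(col.T @ col)                  # (H, H) vertical attention map
    row = feat.mean(axis=1)                       # (C, W) row-pooled features
    att_h = softmax(row.T @ row)                  # (W, W) horizontal attention map
    out = np.einsum('hj,cjw->chw', att_v, feat)   # aggregate along height
    return np.einsum('wk,chk->chw', att_h, out)   # then along width
```

The two maps together cost O(H² + W²) memory versus O(H²W²) for a full non-local block, which is the efficiency claim the abstract makes.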
Affiliation(s)
- Wenzhong Zhou: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Huiqian Du: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Wenbo Mei: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Liping Fang: School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
183
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030] [DOI: 10.1093/bib/bbaa310] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8]
Abstract
In order to reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities such as demographic, clinical, imaging, genetic and environmental data have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become the state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
184
Ke Z, Cheng J, Ying L, Zheng H, Zhu Y, Liang D. An unsupervised deep learning method for multi-coil cine MRI. Phys Med Biol 2020; 65:235041. [DOI: 10.1088/1361-6560/abaffa] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2]
185
Liu R, Zhang Y, Cheng S, Luo Z, Fan X. A Deep Framework Assembling Principled Modules for CS-MRI: Unrolling Perspective, Convergence Behaviors, and Practical Modeling. IEEE Transactions on Medical Imaging 2020; 39:4150-4163. [PMID: 32746155] [DOI: 10.1109/tmi.2020.3014193] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6]
Abstract
Compressed Sensing Magnetic Resonance Imaging (CS-MRI) significantly accelerates MR acquisition at a sampling rate much lower than the Nyquist criterion. A major challenge for CS-MRI lies in solving the severely ill-posed inverse problem of reconstructing aliasing-free MR images from sparse k-space data. Conventional methods typically optimize an energy function, producing restorations of high quality, but their iterative numerical solvers are unavoidably very time-consuming. Recent deep techniques provide fast restoration by either learning a direct mapping to the final reconstruction or plugging learned modules into the energy optimizer. Nevertheless, these data-driven predictors cannot guarantee that the reconstruction follows the principled constraints underlying the domain knowledge, so the reliability of their reconstruction process is questionable. In this paper, we propose a deep framework assembling principled modules for CS-MRI that fuses a learning strategy with the iterative solver of a conventional reconstruction energy. This framework embeds an optimality condition checking mechanism, fostering efficient and reliable reconstruction. We also apply the framework to three practical tasks, i.e., complex-valued data reconstruction, parallel imaging, and reconstruction with Rician noise. Extensive experiments on both benchmark and manufacturer-testing images demonstrate that the proposed method reliably converges to the optimal solution more efficiently and accurately than the state of the art in various scenarios.
186
Liu F, Kijowski R, Feng L, El Fakhri G. High-performance rapid MR parameter mapping using model-based deep adversarial learning. Magn Reson Imaging 2020; 74:152-160. [PMID: 32980503] [PMCID: PMC7669737] [DOI: 10.1016/j.mri.2020.09.021] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6]
Abstract
PURPOSE To develop and evaluate a deep adversarial learning-based image reconstruction approach for rapid and efficient MR parameter mapping. METHODS The proposed method provides an image reconstruction framework combining end-to-end convolutional neural network (CNN) mapping, adversarial learning, and MR physical models. The CNN performs direct image-to-parameter mapping by transforming a series of undersampled images directly into MR parameter maps. Adversarial learning is used to improve image sharpness and enable better texture restoration during the image-to-parameter conversion. An additional pathway based on the MR signal model is added between the estimated parameter maps and the undersampled k-space data to ensure data consistency during network training. The proposed framework was evaluated on T2 mapping of the brain and the knee at an acceleration rate R = 8 and compared with other state-of-the-art reconstruction methods. Global and regional quantitative assessments were performed to demonstrate the reconstruction performance of the proposed method. RESULTS The proposed adversarial learning approach achieved accurate T2 mapping up to R = 8 in brain and knee joint image datasets. Compared to conventional reconstruction approaches that exploit image sparsity and low-rankness, the proposed method yielded lower errors, higher similarity to the reference, and better image sharpness in the T2 estimation. The quantitative metrics were a normalized root mean square error of 3.6% for the brain and 7.3% for the knee, a structural similarity index of 85.1% for the brain and 83.2% for the knee, and Tenengrad measures of 9.2% for the brain and 10.1% for the knee. The adversarial approach also maintained greater image texture and sharpness than the CNN approach without adversarial learning. CONCLUSION The proposed framework, incorporating efficient end-to-end CNN mapping, adversarial learning, and physical-model-enforced data consistency, is a promising approach for rapid and efficient reconstruction of quantitative MR parameters.
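The signal-model pathway mentioned above relies on the standard mono-exponential T2 relaxation model; a minimal per-pixel sketch (echo times in ms are an illustrative assumption, and a real pipeline would fit every pixel of the echo series):

```python
import numpy as np

def t2_signal(s0, t2, te):
    """Mono-exponential T2 relaxation model: S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

def fit_t2(te, signal):
    """Recover (S0, T2) for one pixel from noiseless echo signals by
    log-linear least squares on log S(TE) = log S0 - TE / T2."""
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return np.exp(intercept), -1.0 / slope
```

In the model-based network, estimated (S0, T2) maps are pushed through `t2_signal` and the sampling operator so the training loss can compare them against the acquired k-space data.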
Affiliation(s)
- Fang Liu: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Richard Kijowski: Department of Radiology, University of Wisconsin-Madison, Madison, WI, USA
- Li Feng: Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, USA
- Georges El Fakhri: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
187
Yaman B, Hosseini SAH, Moeller S, Ellermann J, Uğurbil K, Akçakaya M. Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med 2020; 84:3172-3191. [PMID: 32614100] [PMCID: PMC7811359] [DOI: 10.1002/mrm.28378] [Citation(s) in RCA: 127] [Impact Index Per Article: 25.4]
Abstract
PURPOSE To develop a strategy for training a physics-guided MRI reconstruction neural network without a database of fully sampled datasets. METHODS Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions the available measurements into two disjoint sets, one of which is used in the data consistency (DC) units in the unrolled network while the other is used to define the loss for training. The proposed training without fully sampled data is compared with fully supervised training with ground-truth data, as well as with conventional compressed-sensing and parallel imaging methods, using the publicly available fastMRI knee database. The same physics-guided neural network is used for both the proposed SSDU and supervised training. SSDU training is also applied to prospectively two-fold accelerated high-resolution brain datasets at different acceleration rates and compared with parallel imaging. RESULTS Results on five different knee sequences at an acceleration rate of 4 show that the proposed self-supervised approach performs comparably to supervised learning, while significantly outperforming conventional compressed-sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. The results on prospectively subsampled brain datasets, for which supervised learning cannot be used due to the lack of ground-truth references, show that the proposed self-supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at acquisition acceleration. CONCLUSION The proposed SSDU approach allows training of physics-guided deep learning MRI reconstruction without fully sampled data, while achieving results comparable to supervised deep learning MRI trained on fully sampled data.
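The core SSDU idea, splitting the acquired k-space locations into two disjoint sets, can be sketched as below. The held-out fraction `rho` and the uniform-random selection are illustrative assumptions; the paper also studies other selection densities:

```python
import numpy as np

def ssdu_partition(measured_mask, rho=0.4, seed=0):
    """Partition acquired k-space locations into two disjoint sets:
    one kept in the data-consistency (DC) units of the unrolled network,
    one held out to define the self-supervised training loss."""
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(measured_mask)
    held_out = rng.choice(acquired, size=int(rho * acquired.size), replace=False)
    loss_mask = np.zeros(measured_mask.shape, dtype=bool)
    loss_mask.flat[held_out] = True
    dc_mask = measured_mask.astype(bool) & ~loss_mask   # disjoint by construction
    return dc_mask, loss_mask
```

Because the loss is evaluated only on samples the network never saw in its DC units, no fully sampled reference is needed.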
Affiliation(s)
- Burhaneddin Yaman: Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Seyed Amir Hossein Hosseini: Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Steen Moeller: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Jutta Ellermann: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Kâmil Uğurbil: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Mehmet Akçakaya: Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
188
Lv J, Wang P, Tong X, Wang C. Parallel imaging with a combination of sensitivity encoding and generative adversarial networks. Quant Imaging Med Surg 2020; 10:2260-2273. [PMID: 33269225] [PMCID: PMC7596399] [DOI: 10.21037/qims-20-518] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) is limited by its low imaging speed. Acceleration methods using under-sampled k-space data have been widely exploited to speed up data acquisition without reducing image quality. Sensitivity encoding (SENSE) is the most commonly used method for multi-channel imaging. However, SENSE suffers from severe g-factor artifacts when the under-sampling factor is high. This paper applies generative adversarial networks (GAN) to remove g-factor artifacts from SENSE reconstructions. METHODS Our method was evaluated on a public knee database containing 20 healthy participants. We compared our method with a conventional GAN using zero-filled (ZF) images as input. Structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and normalized mean square error (NMSE) were calculated to assess image quality. A paired Student's t-test was conducted to compare the image quality metrics between the different methods. Statistical significance was considered at P<0.01. RESULTS The proposed method outperformed the SENSE, variational network (VN), and ZF + GAN methods in terms of SSIM (SENSE + GAN: 0.81±0.06, SENSE: 0.40±0.07, VN: 0.79±0.06, ZF + GAN: 0.77±0.06), PSNR (SENSE + GAN: 31.90±1.66, SENSE: 22.70±1.99, VN: 31.35±2.01, ZF + GAN: 29.95±1.59), and NMSE (×10⁻⁷) (SENSE + GAN: 0.95±0.34, SENSE: 4.81±1.33, VN: 0.97±0.30, ZF + GAN: 1.60±0.84) with an under-sampling factor of up to 6-fold. CONCLUSIONS This study demonstrated the feasibility of using GAN to improve the performance of SENSE reconstruction. The improvement in reconstruction is more obvious at higher under-sampling rates, which shows great potential for many clinical applications.
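Classical SENSE unfolding, whose residual g-factor artifacts the GAN above is trained to remove, solves a small per-pixel least-squares system. A minimal unregularized sketch for uniform undersampling by a factor R along one axis (array shapes and coil model are illustrative assumptions):

```python
import numpy as np

def sense_reconstruct(aliased_coils, sens_maps, R):
    """Basic SENSE unfolding: for uniform undersampling by R along the
    first image axis, each aliased pixel is a coil-weighted sum of R true
    pixels, so we solve an (n_coils x R) least-squares system per pixel.
    aliased_coils: (n_coils, ny/R, nx), sens_maps: (n_coils, ny, nx)."""
    n_coils, ny_red, nx = aliased_coils.shape
    recon = np.zeros((ny_red * R, nx), dtype=complex)
    for y in range(ny_red):
        for x in range(nx):
            A = sens_maps[:, y::ny_red, x]          # coils x aliasing replicates
            b = aliased_coils[:, y, x]
            v, *_ = np.linalg.lstsq(A, b, rcond=None)
            recon[y::ny_red, x] = v                  # unfold the R replicates
    return recon
```

At high R the per-pixel system becomes ill-conditioned, which is exactly where the g-factor noise amplification (and the motivation for a learned artifact-removal stage) comes from.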
Affiliation(s)
- Jun Lv: School of Computer and Control Engineering, Yantai University, Yantai, China
- Peng Wang: School of Computer and Control Engineering, Yantai University, Yantai, China
- Xiangrong Tong: School of Computer and Control Engineering, Yantai University, Yantai, China
- Chengyan Wang: Human Phenome Institute, Fudan University, Shanghai, China
189
Dai X, Lei Y, Fu Y, Curran WJ, Liu T, Mao H, Yang X. Multimodal MRI synthesis using unified generative adversarial networks. Med Phys 2020; 47:6343-6354. [PMID: 33053202] [PMCID: PMC7796974] [DOI: 10.1002/mp.14539] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4]
Abstract
PURPOSE Complementary information obtained from multiple tissue contrasts facilitates physicians in assessing, diagnosing, and planning treatment for a variety of diseases. However, acquiring multiple-contrast magnetic resonance images (MRI) for every patient using multiple pulse sequences is time-consuming and expensive, and medical image synthesis has been demonstrated as an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis. METHODS A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator took an image and its modality label as inputs and learned to synthesize the image in the target modality, while the discriminator was trained to distinguish between real and synthesized images and classify them into their corresponding modalities. The network was trained and tested using multimodal brain MRI consisting of four different contrasts: T1-weighted (T1), T1-weighted and contrast-enhanced (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (Flair). Quantitative assessments of the proposed method were made by computing the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE). RESULTS The proposed model was trained and tested on a cohort of 274 glioma patients with well-aligned multi-type MRI scans. After the model was trained, tests were conducted using each of T1, T1c, T2, and Flair as a single input modality to generate the respective remaining modalities. The proposed method shows high accuracy and robustness for image synthesis with any MRI modality available in the database as input. For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and Flair are 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006, respectively; the PSNRs are 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB; the SSIMs are 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059; the VIFs are 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062; and the NIQEs are 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358. CONCLUSIONS We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method can accurately synthesize multimodal MR images from a single MR image.
Affiliation(s)
- Xianjin Dai: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Yabo Fu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Walter J. Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
190
Deep Convolutional Encoder-Decoder algorithm for MRI brain reconstruction. Med Biol Eng Comput 2020; 59:85-106. [PMID: 33231848] [DOI: 10.1007/s11517-020-02285-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Compressed Sensing Magnetic Resonance Imaging (CS-MRI) is a challenging task, and an efficient technique for fast MRI acquisition would be highly beneficial for several clinical routines. It can grant better scan quality by reducing the amount of motion artifacts as well as the contrast washout effect, and it also offers the possibility of reducing exploration cost and patient anxiety. Recently, deep learning neural networks (DL) have been suggested to reconstruct MRI scans while preserving structural details and improving parallel-imaging-based fast MRI. In this paper, we propose a Deep Convolutional Encoder-Decoder architecture for CS-MRI reconstruction. Such an architecture bridges the gap between non-learning techniques, which use data from only one image, and approaches using large training data. The proposed approach is based on an autoencoder architecture divided into two parts, an encoder and a decoder, each consisting essentially of three convolutional blocks. The proposed architecture has been evaluated on two databases: the Hammersmith dataset (for normal scans) and MICCAI 2018 (for pathological MRI). Moreover, we extend our model to cope with noisy pathological MRI scans. The normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were adopted as evaluation metrics to assess the proposed architecture's performance and to make a comparative study with state-of-the-art reconstruction algorithms. The higher PSNR and SSIM values as well as the lowest NMSE values attest that the proposed architecture offers better reconstruction and preserves textural image details. Furthermore, the running time is about 0.8 s, which is suitable for real-time processing. Such results could encourage neurologists to adopt it in their clinical routines.
191
Huang Q, Xian Y, Yang D, Qu H, Yi J, Wu P, Metaxas DN. Dynamic MRI reconstruction with end-to-end motion-guided network. Med Image Anal 2020; 68:101901. [PMID: 33285480] [DOI: 10.1016/j.media.2020.101901] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6]
Abstract
Temporal correlation in dynamic magnetic resonance imaging (MRI), such as cardiac MRI, is informative and important for understanding the motion mechanisms of body regions. Modeling such information in the MRI reconstruction process produces temporally coherent image sequences and reduces imaging artifacts and blurring. However, existing deep learning-based approaches neglect motion information during the reconstruction procedure, while traditional motion-guided methods are hindered by heuristic parameter tuning and long inference times. We propose a novel dynamic MRI reconstruction approach called MODRN, and an end-to-end improved version called MODRN(e2e), both of which enhance reconstruction quality by infusing motion information into the modeling process with deep neural networks. The central idea is to decompose the motion-guided optimization problem of dynamic MRI reconstruction into three components: a Dynamic Reconstruction Network, Motion Estimation, and Motion Compensation. Extensive experiments have demonstrated the effectiveness of our proposed approach compared to other state-of-the-art approaches.
Affiliation(s)
- Qiaoying Huang: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Yikun Xian: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Hui Qu: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Jingru Yi: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Pengxiang Wu: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Dimitris N Metaxas: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
192

193
IKWI-net: A cross-domain convolutional neural network for undersampled magnetic resonance image reconstruction. Magn Reson Imaging 2020; 73:1-10. [DOI: 10.1016/j.mri.2020.06.015] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4]
194
Improving Amide Proton Transfer-Weighted MRI Reconstruction Using T2-Weighted Images. Med Image Comput Comput Assist Interv 2020; 12262:3-12. [PMID: 33103161] [DOI: 10.1007/978-3-030-59713-9_1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8]
Abstract
The current protocol for amide proton transfer-weighted (APTw) imaging commonly starts with the acquisition of high-resolution T2-weighted (T2w) images, followed by APTw imaging at a particular geometry and locations (i.e., slices) determined by the acquired T2w images. Although many advanced MRI reconstruction methods have been proposed to accelerate MRI, existing methods for APTw MRI lack the capability of taking advantage of the structural information in the acquired T2w images for reconstruction. In this paper, we present a novel APTw image reconstruction framework that can accelerate APTw imaging by reconstructing APTw images directly from highly undersampled k-space data and the corresponding T2w image at the same location. The proposed framework starts with a novel sparse representation-based slice matching algorithm that aims to find the matched T2w slice given only the undersampled APTw image. A Recurrent Feature Sharing Reconstruction network (RFS-Rec) is designed to utilize intermediate features extracted from the matched T2w image by a Convolutional Recurrent Neural Network (CRNN), so that the missing structural information can be incorporated into the undersampled APT raw image, effectively improving the quality of the reconstructed APTw image. We evaluate the proposed method on two real datasets consisting of brain data from rats and humans. Extensive experiments demonstrate that the proposed RFS-Rec approach outperforms state-of-the-art methods.
|
195
|
Wang CJ, Rost NS, Golland P. Spatial-Intensity Transform GANs for High Fidelity Medical Image-to-Image Translation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12262:749-759. [PMID: 33615318 PMCID: PMC7888153 DOI: 10.1007/978-3-030-59713-9_72] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/11/2023]
Abstract
Despite recent progress in image-to-image translation, it remains challenging to apply such techniques to clinical-quality medical images. We develop a novel parameterization of conditional generative adversarial networks that achieves high image fidelity when trained to transform MRIs conditioned on a patient's age and disease severity. The spatial-intensity transform generative adversarial network (SIT-GAN) constrains the generator to a smooth spatial transform composed with sparse intensity changes. This technique improves image quality and robustness to artifacts, and generalizes to different scanners. We demonstrate SIT-GAN on a large clinical image dataset of stroke patients, where it captures associations between ventricle expansion and aging, as well as between white matter hyperintensities and stroke severity. Additionally, SIT-GAN provides a disentangled view of the variation in shape and appearance across subjects.
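The generator parameterization described here, a smooth spatial warp composed with a sparse additive intensity change, can be sketched in a toy, non-learned form. Nearest-neighbour warping and soft-thresholding below are simplifications (a real SIT-GAN generator would use a smooth, differentiable warp), and all names are hypothetical.

```python
import numpy as np

def sit_decode(image, flow, intensity, thresh=0.5):
    """Toy spatial-intensity transform: warp `image` by integer-rounded
    displacements `flow` (shape (2, H, W)), then add a soft-thresholded,
    hence sparse, intensity residual."""
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yy = np.clip(np.rint(ys + flow[0]).astype(int), 0, H - 1)
    xx = np.clip(np.rint(xs + flow[1]).astype(int), 0, W - 1)
    warped = image[yy, xx]
    # Soft-threshold keeps only intensity changes exceeding `thresh`.
    sparse = np.sign(intensity) * np.maximum(np.abs(intensity) - thresh, 0.0)
    return warped + sparse

# Identity transform: zero flow and a sub-threshold residual leave the image unchanged.
img = np.arange(16.0).reshape(4, 4)
out = sit_decode(img, np.zeros((2, 4, 4)), np.full((4, 4), 0.3))
```

Constraining the output to this warp-plus-sparse-residual form is what gives the disentangled shape/appearance view mentioned in the abstract: shape changes live in `flow`, appearance changes in the sparse residual.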
Affiliation(s)
- Clinton J Wang
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA, USA
- Natalia S Rost
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Polina Golland
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA, USA
|
196
|
Aggarwal HK, Jacob M. J-MoDL: Joint Model-Based Deep Learning for Optimized Sampling and Reconstruction. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING 2020; 14:1151-1162. [PMID: 33613806 PMCID: PMC7893809 DOI: 10.1109/jstsp.2020.3004094] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Modern MRI schemes, which rely on compressed sensing or deep learning algorithms to recover MRI data from undersampled multichannel Fourier measurements, are widely used to reduce the scan time. The image quality of these approaches is heavily dependent on the sampling pattern. We introduce a continuous strategy to optimize the sampling pattern and the network parameters jointly. We use a multichannel forward model, consisting of a non-uniform Fourier transform with continuously defined sampling locations, to realize the data consistency block within a model-based deep learning image reconstruction scheme. This approach facilitates the joint and continuous optimization of the sampling pattern and the CNN parameters to improve image quality. We observe that the joint optimization of the sampling patterns and the reconstruction module significantly improves the performance of most deep learning reconstruction algorithms. The source code is available at https://github.com/hkaggarwal/J-MoDL.
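A minimal analogue of the joint-optimization idea (not the paper's NUFFT-based forward model) is to relax the sampling mask to continuous per-measurement weights and run gradient descent on them together with a linear reconstruction operator. Everything below is an illustrative toy with hypothetical names and a trivial identity encoding.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 64
A = np.eye(d)                        # toy forward operator (identity "encoding")
X = rng.standard_normal((n, d))      # training signals, one per row

m = np.full(d, 0.5)                  # continuous sampling weights (relaxed mask)
W = 0.1 * np.eye(d)                  # linear "reconstruction network"

def mse(m, W):
    E = (X @ A.T) * m @ W.T - X
    return float((E ** 2).mean())

mse_before = mse(m, W)
lr = 0.05
for _ in range(500):
    Y = (X @ A.T) * m                # measurements y = diag(m) A x
    E = Y @ W.T - X                  # reconstruction error
    gW = 2.0 * E.T @ Y / n           # gradient w.r.t. W
    gm = 2.0 * ((E @ W) * (X @ A.T)).mean(axis=0)  # gradient w.r.t. m
    W -= lr * gW
    m -= lr * gm
mse_after = mse(m, W)
```

The point of the sketch is only that the sampling weights receive gradients just like the network weights, so both are optimized for the same reconstruction loss; J-MoDL realizes this with continuously defined k-space sampling locations inside a model-based deep network.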
Affiliation(s)
- Hemant Kumar Aggarwal
- Department of Electrical and Computer Engineering, University of Iowa, IA, USA, 52242
- Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, IA, USA, 52242
|
197
|
Liu X, Zhou T, Lu M, Yang Y, He Q, Luo J. Deep Learning for Ultrasound Localization Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3064-3078. [PMID: 32286964 DOI: 10.1109/tmi.2020.2986781] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Ultrasound localization microscopy (ULM), which localizes microbubbles (MBs) in the vasculature, has recently been proposed; it greatly improves the spatial resolution of ultrasound (US) imaging and is expected to aid clinical diagnosis. Nevertheless, several challenges remain in fast ULM imaging: current localization methods, e.g., a previously reported method based on sparse recovery (CS-ULM), suffer from long data-processing times and exhaustive parameter tuning (optimization). To address these problems, in this paper we propose a deep learning-based ULM method built on a modified sub-pixel convolutional neural network (CNN), termed mSPCN-ULM. Simulations and in vivo experiments are performed to evaluate the performance of mSPCN-ULM. Simulation results show that even under high-density conditions (6.4 MBs/mm2), mSPCN-ULM achieves high localization precision ([Formula: see text] in the lateral direction and [Formula: see text] in the axial direction) and high localization reliability (Jaccard index of 0.66) compared to CS-ULM. The in vivo experimental results indicate that, with plane-wave scanning at a transmit center frequency of 15.625 MHz, microvessels with diameters of [Formula: see text] can be detected and adjacent microvessels separated by [Formula: see text] can be resolved. Furthermore, with GPU acceleration, the data-processing time of mSPCN-ULM is shortened to ~6 sec/frame in the simulations and ~23 sec/frame in the in vivo experiments, 3-4 orders of magnitude faster than CS-ULM. Finally, once the network is trained, mSPCN-ULM requires no parameter tuning to implement ULM. As a result, mSPCN-ULM opens the door to ULM with fast data-processing speed, high imaging accuracy, short data-acquisition time, and high flexibility (robustness to parameters).
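The "sub-pixel" part of an SPCN is the pixel-shuffle rearrangement that trades channels for spatial resolution, which is how such a network localizes microbubbles on a grid finer than the input. A NumPy version of that core operation (matching the usual PyTorch channel ordering; the surrounding network is not shown) looks like:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) feature maps into (C, H*r, W*r), the core
    upsampling step of a sub-pixel CNN. Ordering matches
    torch.nn.PixelShuffle: out[c, h*r+i, w*r+j] = x[c*r*r + i*r + j, h, w]."""
    Crr, H, W = x.shape
    C = Crr // (r * r)
    y = x.reshape(C, r, r, H, W)       # split channels into (C, i, j)
    y = y.transpose(0, 3, 1, 4, 2)     # -> (C, H, i, W, j)
    return y.reshape(C, H * r, W * r)

x = np.arange(16).reshape(4, 2, 2)     # 4 channels, 2x2 -> 1 channel, 4x4
up = pixel_shuffle(x, 2)
```

Because the upsampling is a fixed rearrangement of learned feature maps, the network computes at low resolution and only the final layer pays for the high-resolution output grid, which is one reason sub-pixel CNNs are fast at inference.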
|
198
|
Wang G, Gong E, Banerjee S, Martin D, Tong E, Choi J, Chen H, Wintermark M, Pauly JM, Zaharchuk G. Synthesize High-Quality Multi-Contrast Magnetic Resonance Imaging From Multi-Echo Acquisition Using Multi-Task Deep Generative Model. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3089-3099. [PMID: 32286966 DOI: 10.1109/tmi.2020.2987026] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
A multi-echo saturation recovery sequence can provide redundant information from which multi-contrast magnetic resonance images can be synthesized. Traditional synthesis methods, such as GE's MAGiC platform, employ a model-fitting approach to generate parameter-weighted contrasts. However, over-simplification of the models, as well as imperfections in the acquisition, can lead to undesirable reconstruction artifacts, especially in the T2-FLAIR contrast. To improve image quality, in this study a multi-task deep learning model is developed to synthesize multiple neuroimaging contrasts jointly, using both signal relaxation relationships and spatial information. Compared with previous deep learning-based synthesis, the correlation between different destination contrasts is exploited to enhance reconstruction quality. To improve model generalizability and evaluate clinical significance, the proposed model was trained and tested on a large multi-center dataset including healthy subjects and patients with pathology. Results from both quantitative comparison and a clinical reader study demonstrate that the multi-task formulation leads to more efficient and accurate contrast synthesis than previous methods.
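The model-fitting baseline mentioned in this abstract generates parameter-weighted contrasts from fitted PD/T1/T2 maps via the spin-echo signal equation; a minimal per-voxel sketch of that baseline follows (the tissue values are illustrative, not from the paper, and the paper's deep model replaces this computation rather than using it directly).

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Parameter-weighted spin-echo signal S = PD*(1-exp(-TR/T1))*exp(-TE/T2).
    A fitting-based synthesizer evaluates this per voxel to emit, e.g.,
    T1w (short TR/TE) or T2w (long TR/TE) contrasts."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Gray-matter-like voxel (illustrative values; TR/TE and T1/T2 in ms):
pd, t1, t2 = 0.8, 1300.0, 90.0
t1w = spin_echo_signal(pd, t1, t2, tr=500.0, te=15.0)
t2w = spin_echo_signal(pd, t1, t2, tr=4000.0, te=100.0)
```

Imperfections in the fitted maps propagate through this closed-form model, which is exactly the artifact mechanism (notably in T2-FLAIR) that motivates the learned multi-task synthesis in the paper.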
|
199
|
Dual-domain cascade of U-nets for multi-channel magnetic resonance image reconstruction. Magn Reson Imaging 2020; 71:140-153. [DOI: 10.1016/j.mri.2020.06.002] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2020] [Revised: 05/20/2020] [Accepted: 06/09/2020] [Indexed: 11/17/2022]
|
200
|
Singhal V, Majumdar A. Reconstructing multi-echo magnetic resonance images via structured deep dictionary learning. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.107] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|