151
Various Generative Adversarial Networks Model for Synthetic Prohibitory Sign Image Generation. Appl Sci (Basel) 2021. DOI: 10.3390/app11072913.
Abstract
Synthetic image generation is an important problem for computer vision. Traffic sign images synthesized from standard models are commonly used to build recognition algorithms and to study a variety of research questions at low cost. Convolutional neural networks (CNNs) achieve excellent traffic sign detection and recognition when sufficient annotated training data are available, and the consistency of the entire vision system depends on these networks. However, obtaining traffic sign datasets from most countries in the world is difficult. This work uses several generative adversarial network (GAN) models, namely Least Squares Generative Adversarial Networks (LSGAN), Deep Convolutional Generative Adversarial Networks (DCGAN), and Wasserstein Generative Adversarial Networks (WGAN), to construct synthetic sign images. This paper also discusses, in particular, the quality of the images produced by the various GANs with different parameters. For processing, we use a specific number of pictures at a specific scale. The Structural Similarity Index (SSIM) and Mean Squared Error (MSE) are used to measure image consistency; SSIM values are compared between each generated image and the corresponding real image. As a result, the generated images display a strong similarity to the real images when more training images are used. LSGAN outperformed the other GAN models in the experiment, with maximum SSIM values achieved using 200 images as inputs, 2000 epochs, and a size of 32 × 32.
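As an illustration of the SSIM/MSE comparison described in this abstract, a minimal sketch using scikit-image; the array contents and sizes are placeholders, not data from the paper:

```python
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

# Stand-ins for a GAN-generated sign and the corresponding real 32 x 32 image.
rng = np.random.default_rng(0)
real = rng.random((32, 32))
generated = np.clip(real + 0.05 * rng.standard_normal((32, 32)), 0, 1)

mse = mean_squared_error(real, generated)
ssim = structural_similarity(real, generated, data_range=1.0)
print(f"MSE={mse:.4f}  SSIM={ssim:.3f}")  # SSIM close to 1 means high similarity
```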
152
El Gueddari L, Giliyar Radhakrishna C, Chouzenoux E, Ciuciu P. Calibration-Less Multi-Coil Compressed Sensing Magnetic Resonance Image Reconstruction Based on OSCAR Regularization. J Imaging 2021; 7:58. PMID: 34460714; PMCID: PMC8321316; DOI: 10.3390/jimaging7030058.
Abstract
Over the last decade, the combination of compressed sensing (CS) with acquisition over multiple receiver coils in magnetic resonance imaging (MRI) has allowed the emergence of faster scans while maintaining a good signal-to-noise ratio (SNR). Self-calibrating techniques, such as ESPIRiT, have become the standard approach to estimating the coil sensitivity maps prior to the reconstruction stage. In this work, we proceed differently and introduce a new calibration-less multi-coil CS reconstruction method. Calibration-less techniques no longer require the prior extraction of sensitivity maps to perform multi-coil image reconstruction but usually alternate between sensitivity map estimation and image reconstruction. Here, to avoid the nonconvexity of the latter approach, we reconstruct as many MR images as the number of coils. To compensate for the ill-posedness of this inverse problem, we leverage structured sparsity of the multi-coil images in a wavelet transform domain while adapting to variations in SNR across coils through the OSCAR (octagonal shrinkage and clustering algorithm for regression) regularization. Coil-specific complex-valued MR images are thus obtained by minimizing a convex but nonsmooth objective function using the proximal primal-dual Condat-Vù algorithm. Comparison and validation on retrospective Cartesian and non-Cartesian studies based on the brain fastMRI dataset demonstrate that the proposed reconstruction method significantly outperforms the state-of-the-art methods (ℓ1-ESPIRiT, calibration-less AC-LORAKS and CaLM) on magnitude images for the T1 and FLAIR contrasts. Additionally, further validation on 8- to 20-fold prospectively accelerated high-resolution ex vivo human brain MRI data collected at 7 Tesla confirms the retrospective results. Overall, OSCAR-based regularization preserves phase information more accurately (both visually and quantitatively) compared to other approaches, an asset that can only be assessed on real prospective experiments.
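For reference, the OSCAR penalty named above can be written in its original pairwise form (following Bondell and Reich; the paper applies a coil- and wavelet-domain variant, so the exact weighting may differ):

```latex
\[
\Omega_{\mathrm{OSCAR}}(x) \;=\; \lambda_1 \lVert x \rVert_1
\;+\; \lambda_2 \sum_{i<j} \max\{\lvert x_i\rvert,\ \lvert x_j\rvert\},
\qquad \lambda_1,\lambda_2 \ge 0,
\]
```

which promotes both sparsity and clustering of coefficients with similar magnitudes.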
Affiliation(s)
- Loubna El Gueddari
- NeuroSpin, CEA, Université Paris-Saclay, 91191 Gif-sur-Yvette, France; (L.E.G.); (C.G.R.); (P.C.)
- Parietal, Inria, 91120 Palaiseau, France
- Chaithya Giliyar Radhakrishna
- NeuroSpin, CEA, Université Paris-Saclay, 91191 Gif-sur-Yvette, France; (L.E.G.); (C.G.R.); (P.C.)
- Parietal, Inria, 91120 Palaiseau, France
- Philippe Ciuciu
- NeuroSpin, CEA, Université Paris-Saclay, 91191 Gif-sur-Yvette, France; (L.E.G.); (C.G.R.); (P.C.)
- Parietal, Inria, 91120 Palaiseau, France
153
Cole E, Cheng J, Pauly J, Vasanawala S. Analysis of deep complex-valued convolutional neural networks for MRI reconstruction and phase-focused applications. Magn Reson Med 2021; 86:1093-1109. PMID: 33724507; DOI: 10.1002/mrm.28733.
Abstract
PURPOSE Deep learning has had success with MRI reconstruction, but previously published works use real-valued networks. The few works which have tried complex-valued networks have not fully assessed their impact on phase. Therefore, the purpose of this work is to fully investigate end-to-end complex-valued convolutional neural networks (CNNs) for accelerated MRI reconstruction and in several phase-based applications in comparison to 2-channel real-valued networks. METHODS Several complex-valued activation functions for MRI reconstruction were implemented, and their performance was compared. Complex-valued convolution was implemented and tested on an unrolled network architecture and a U-Net-based architecture over a wide range of network widths and depths with knee, body, and phase-contrast datasets. RESULTS Quantitative and qualitative results demonstrated that complex-valued CNNs with complex-valued convolutions provided superior reconstructions compared to real-valued convolutions with the same number of trainable parameters for both an unrolled network architecture and a U-Net-based architecture, and for 3 different datasets. Complex-valued CNNs consistently had superior normalized RMS error, structural similarity index, and peak SNR compared to real-valued CNNs. CONCLUSION Complex-valued CNNs can enable superior accelerated MRI reconstruction and phase-based applications such as fat-water separation, and flow quantification compared to real-valued convolutional neural networks.
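A complex-valued convolution of the kind compared in this work can be expressed with four real-valued convolutions; a small self-contained sketch (illustrative only, not the authors' network code):

```python
import numpy as np
from scipy.signal import convolve2d

def complex_conv2d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """(a + ib) * (c + id) = (a*c - b*d) + i(a*d + b*c), applied as 2D convolutions."""
    a, b = x.real, x.imag          # feature map, real/imaginary parts
    c, d = w.real, w.imag          # kernel, real/imaginary parts
    real = convolve2d(a, c, mode="same") - convolve2d(b, d, mode="same")
    imag = convolve2d(a, d, mode="same") + convolve2d(b, c, mode="same")
    return real + 1j * imag

x = np.random.randn(8, 8) + 1j * np.random.randn(8, 8)
w = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
y = complex_conv2d(x, w)
# Sanity check against a direct complex convolution.
assert np.allclose(y, convolve2d(x, w, mode="same"))
```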
Affiliation(s)
- Elizabeth Cole
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Joseph Cheng
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- John Pauly
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
154
Montalt-Tordera J, Muthurangu V, Hauptmann A, Steeden JA. Machine learning in Magnetic Resonance Imaging: Image reconstruction. Phys Med 2021; 83:79-87. DOI: 10.1016/j.ejmp.2021.02.020.
155
Mauer MAD, Well EJV, Herrmann J, Groth M, Morlock MM, Maas R, Säring D. Automated age estimation of young individuals based on 3D knee MRI using deep learning. Int J Legal Med 2021; 135:649-663. PMID: 33331995; PMCID: PMC7870623; DOI: 10.1007/s00414-020-02465-z.
Abstract
Age estimation is a crucial element of forensic medicine for assessing the chronological age of living individuals who lack valid legal documentation. Methods used in practice are labor-intensive, subjective, and frequently involve radiation exposure. Recently, non-invasive methods using magnetic resonance imaging (MRI) have also been evaluated and have confirmed a correlation between growth plate ossification in long bones and the chronological age of young subjects. However, automated and user-independent approaches are required to perform reliable assessments on large datasets. The aim of this study was to develop a fully automated, computer-based method for age estimation based on 3D knee MRIs using machine learning. The proposed solution consists of three parts: image preprocessing, bone segmentation, and age estimation. A total of 185 coronal and 404 sagittal MR volumes from Caucasian male subjects in the age range of 13 to 21 years were available. The best result of the fivefold cross-validation was a mean absolute error of 0.67 ± 0.49 years in age regression, and an accuracy of 90.9%, a sensitivity of 88.6%, and a specificity of 94.2% in classification against an 18-year age limit, using a combination of convolutional neural networks and tree-based machine learning algorithms. The potential of deep learning for age estimation is reflected in these results and can be further improved by training on even larger and more diverse datasets.
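A minimal sketch of how the reported regression and classification metrics can be computed from model outputs; the predictions and the handling of the 18-year threshold are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, confusion_matrix

# Hypothetical chronological ages and model estimates (years).
y_true = np.array([14.2, 17.8, 19.5, 20.9, 16.1, 18.4])
y_pred = np.array([14.9, 17.1, 19.9, 20.2, 16.8, 18.9])

mae = mean_absolute_error(y_true, y_pred)  # regression error in years

# Classification against the 18-year age limit used in the study.
true_adult = y_true >= 18
pred_adult = y_pred >= 18
tn, fp, fn, tp = confusion_matrix(true_adult, pred_adult).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # adults correctly classified as adults
specificity = tn / (tn + fp)   # minors correctly classified as minors
print(mae, accuracy, sensitivity, specificity)
```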
Affiliation(s)
- Markus Auf der Mauer
- Medical and Industrial Image Processing, University of Applied Sciences of Wedel, Feldstraße 143, 22880 Wedel, Germany
- Eilin Jopp-van Well
- Department of Legal Medicine, University Medical Center Hamburg-Eppendorf (UKE), Butenfeld 34, 22529 Hamburg, Germany
- Jochen Herrmann
- Section of Pediatric Radiology, Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf (UKE), Martinistr. 52, 20246 Hamburg, Germany
- Michael Groth
- Section of Pediatric Radiology, Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf (UKE), Martinistr. 52, 20246 Hamburg, Germany
- Michael M. Morlock
- Institute of Biomechanics M3, Hamburg University of Technology (TUHH), Denickestraße 15, 21073 Hamburg, Germany
- Rainer Maas
- Radiologie Raboisen 38, Raboisen 38, 20095 Hamburg, Germany
- Dennis Säring
- Medical and Industrial Image Processing, University of Applied Sciences of Wedel, Feldstraße 143, 22880 Wedel, Germany
156
Demirel OB, Weingärtner S, Moeller S, Akçakaya M. Improved simultaneous multislice cardiac MRI using readout concatenated k-space SPIRiT (ROCK-SPIRiT). Magn Reson Med 2021; 85:3036-3048. PMID: 33566378; DOI: 10.1002/mrm.28680.
Abstract
PURPOSE To develop and evaluate a simultaneous multislice (SMS) reconstruction technique that provides noise reduction and leakage blocking for highly accelerated cardiac MRI. METHODS ReadOut Concatenated k-space SPIRiT (ROCK-SPIRiT) uses the concept of readout concatenation in image domain to represent SMS encoding, and performs coil self-consistency as in SPIRiT-type reconstruction in an extended k-space, while allowing regularization for further denoising. The proposed method is implemented with and without regularization, and validated on retrospectively SMS-accelerated cine imaging with three-fold SMS and two-fold in-plane acceleration. ROCK-SPIRiT is compared with two leakage-blocking SMS reconstruction methods: readout-SENSE-GRAPPA and split slice-GRAPPA. Further evaluation and comparisons are performed using prospectively SMS-accelerated cine imaging. RESULTS Results on retrospectively three-fold SMS and two-fold in-plane accelerated cine imaging show that ROCK-SPIRiT without regularization significantly improves on existing methods in terms of PSNR (readout-SENSE-GRAPPA: 33.5 ± 3.2, split slice-GRAPPA: 34.1 ± 3.8, ROCK-SPIRiT: 35.0 ± 3.3) and SSIM (readout-SENSE-GRAPPA: 84.4 ± 8.9, split slice-GRAPPA: 85.0 ± 8.9, ROCK-SPIRiT: 88.2 ± 6.6 [in percentage]). Regularized ROCK-SPIRiT significantly outperforms all methods, as characterized by these quantitative metrics (PSNR: 37.6 ± 3.8, SSIM: 94.2 ± 4.1 [in percentage]). The prospectively five-fold SMS and two-fold in-plane accelerated data show that ROCK-SPIRiT and regularized ROCK-SPIRiT have visually improved image quality compared with existing methods. CONCLUSION The proposed ROCK-SPIRiT technique reduces noise and interslice leakage in accelerated SMS cardiac cine MRI, improving on existing methods both quantitatively and qualitatively.
Affiliation(s)
- Omer Burak Demirel
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Sebastian Weingärtner
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Department of Imaging Physics, Delft University of Technology, Delft, the Netherlands
- Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Mehmet Akçakaya
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
157
Ghodrati V, Bydder M, Ali F, Gao C, Prosper A, Nguyen KL, Hu P. Retrospective respiratory motion correction in cardiac cine MRI reconstruction using adversarial autoencoder and unsupervised learning. NMR Biomed 2021; 34:e4433. PMID: 33258197; PMCID: PMC10193526; DOI: 10.1002/nbm.4433.
Abstract
The aim of this study was to develop a deep neural network for respiratory motion compensation in free-breathing cine MRI and evaluate its performance. An adversarial autoencoder network was trained using unpaired training data from healthy volunteers and patients who underwent clinically indicated cardiac MRI examinations. A U-net structure was used for the encoder and decoder parts of the network and the code space was regularized by an adversarial objective. The autoencoder learns the identity map for the free-breathing motion-corrupted images and preserves the structural content of the images, while the discriminator, which interacts with the output of the encoder, forces the encoder to remove motion artifacts. The network was first evaluated based on data that were artificially corrupted with simulated rigid motion with regard to motion-correction accuracy and the presence of any artificially created structures. Subsequently, to demonstrate the feasibility of the proposed approach in vivo, our network was trained on respiratory motion-corrupted images in an unpaired manner and was tested on volunteer and patient data. In the simulation study, mean structural similarity index scores for the synthesized motion-corrupted images and motion-corrected images were 0.76 and 0.93 (out of 1), respectively. The proposed method increased the Tenengrad focus measure of the motion-corrupted images by 12% in the simulation study and by 7% in the in vivo study. The average overall subjective image quality scores for the motion-corrupted images, motion-corrected images and breath-held images were 2.5, 3.5 and 4.1 (out of 5.0), respectively. Nonparametric-paired comparisons showed that there was significant difference between the image quality scores of the motion-corrupted and breath-held images (P < .05); however, after correction there was no significant difference between the image quality scores of the motion-corrected and breath-held images. This feasibility study demonstrates the potential of an adversarial autoencoder network for correcting respiratory motion-related image artifacts without requiring paired data.
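The Tenengrad focus measure used above is a gradient-energy sharpness metric; a brief sketch of one common formulation (Sobel gradients, mean aggregation, no threshold; details may differ from the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def tenengrad(image: np.ndarray) -> float:
    """Gradient-energy sharpness measure: mean of the squared Sobel gradient magnitude."""
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    return float(np.mean(gx**2 + gy**2))

# A sharper (less motion-blurred) image yields a larger Tenengrad value, which is
# how the study quantifies the improvement after motion correction.
blurred = ndimage.gaussian_filter(np.random.rand(128, 128), sigma=3)
sharp = np.random.rand(128, 128)
print(tenengrad(blurred) < tenengrad(sharp))  # expected: True
```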
Affiliation(s)
- Vahid Ghodrati
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Mark Bydder
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Fadil Ali
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Chang Gao
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Ashley Prosper
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Kim-Lien Nguyen
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Department of Medicine, Division of Cardiology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Peng Hu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, CA, USA
- Correspondence to: Peng Hu, PhD, Department of Radiological Sciences, 300 UCLA Medical Plaza Suite B119, Los Angeles, CA 90095,
158
Muhammad K, Khan S, Ser JD, Albuquerque VHCD. Deep Learning for Multigrade Brain Tumor Classification in Smart Healthcare Systems: A Prospective Survey. IEEE Trans Neural Netw Learn Syst 2021; 32:507-522. PMID: 32603291; DOI: 10.1109/tnnls.2020.2995800.
Abstract
Brain tumors are among the most dangerous cancers in people of all ages, and their grade recognition is a challenging problem for radiologists in health monitoring and automated diagnosis. Recently, numerous deep learning-based methods have been presented in the literature for brain tumor classification (BTC) in order to assist radiologists with better diagnostic analysis. In this overview, we present an in-depth review of the surveys published so far and of recent deep learning-based methods for BTC. Our survey covers the main steps of deep learning-based BTC methods, including preprocessing, feature extraction, and classification, along with their achievements and limitations. We also investigate the state-of-the-art convolutional neural network models for BTC by performing extensive experiments using transfer learning with and without data augmentation. Furthermore, this overview describes available benchmark datasets used for the evaluation of BTC. Finally, this survey not only reviews the past literature on the topic but also builds on it to outline the future of this area, enumerating research directions that should be pursued, especially for personalized and smart healthcare.
159
Raza K, Singh NK. A Tour of Unsupervised Deep Learning for Medical Image Analysis. Curr Med Imaging 2021; 17:1059-1077. PMID: 33504314; DOI: 10.2174/1573405617666210127154257.
Abstract
BACKGROUND Interpretation of medical images for the diagnosis and treatment of complex diseases from high-dimensional and heterogeneous data remains a key challenge in transforming healthcare. In the last few years, both supervised and unsupervised deep learning have achieved promising results in the area of medical image analysis. Several reviews of supervised deep learning have been published, but hardly any rigorous review of unsupervised deep learning for medical image analysis is available. OBJECTIVES The objective of this review is to systematically present various unsupervised deep learning models, tools, and benchmark datasets applied to medical image analysis. The discussed models include autoencoders and their variants, restricted Boltzmann machines (RBM), deep belief networks (DBN), deep Boltzmann machines (DBM), and generative adversarial networks (GAN). Further, future research opportunities and challenges of unsupervised deep learning techniques for medical image analysis are also discussed. CONCLUSION Currently, interpretation of medical images for diagnostic purposes is usually performed by human experts, who may be replaced by computer-aided diagnosis owing to advances in machine learning techniques, including deep learning, and the availability of cheap computing infrastructure through cloud computing. Both supervised and unsupervised machine learning approaches are widely applied in medical image analysis, each with its own pros and cons. Since human supervision is not always available, or may be inadequate or biased, unsupervised learning algorithms hold considerable promise for biomedical image analysis.
Affiliation(s)
- Khalid Raza
- Department of Computer Science, Jamia Millia Islamia, New Delhi. India
160
High quality and fast compressed sensing MRI reconstruction via edge-enhanced dual discriminator generative adversarial network. Magn Reson Imaging 2021; 77:124-136. PMID: 33359427; DOI: 10.1016/j.mri.2020.12.011.
Abstract
Generative adversarial networks (GAN) are widely used for fast compressed sensing magnetic resonance imaging (CSMRI) reconstruction. However, most existing methods struggle to make an effective trade-off between abstract global high-level features and edge features, which easily leads to problems such as significant residual aliasing artifacts and clearly over-smoothed reconstruction details. To tackle these issues, we propose a novel edge-enhanced dual discriminator generative adversarial network architecture called EDDGAN for high-quality CSMRI reconstruction. In this model, we extract effective edge features by fusing edge information from different depths. Then, leveraging the relationship between abstract global high-level features and edge features, a three-player game is introduced to control the hallucination of details and stabilize the training process. The resulting EDDGAN places more emphasis on edge restoration and de-aliasing. Extensive experimental results demonstrate that our method consistently outperforms state-of-the-art methods and yields reconstructed images with rich edge details. In addition, our method shows remarkable generalization, and its reconstruction time for each 256 × 256 image is approximately 8.39 ms.
161
Kimanius D, Zickert G, Nakane T, Adler J, Lunz S, Schönlieb CB, Öktem O, Scheres SHW. Exploiting prior knowledge about biological macromolecules in cryo-EM structure determination. IUCrJ 2021; 8:60-75. PMID: 33520243; PMCID: PMC7793004; DOI: 10.1107/s2052252520014384.
Abstract
Three-dimensional reconstruction of the electron-scattering potential of biological macromolecules from electron cryo-microscopy (cryo-EM) projection images is an ill-posed problem. The most popular cryo-EM software solutions to date rely on a regularization approach that is based on the prior assumption that the scattering potential varies smoothly over three-dimensional space. Although this approach has been hugely successful in recent years, the amount of prior knowledge that it exploits compares unfavorably with the knowledge about biological structures that has been accumulated over decades of research in structural biology. Here, a regularization framework for cryo-EM structure determination is presented that exploits prior knowledge about biological structures through a convolutional neural network that is trained on known macromolecular structures. This neural network is inserted into the iterative cryo-EM structure-determination process through an approach that is inspired by regularization by denoising. It is shown that the new regularization approach yields better reconstructions than the current state of the art for simulated data, and options to extend this work for application to experimental cryo-EM data are discussed.
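For context, a hedged sketch of the regularization-by-denoising (RED) objective that inspired this approach (following Romano, Elad and Milanfar; the paper's network-based variant differs in detail):

```latex
\[
\rho_{\mathrm{RED}}(x) \;=\; \tfrac{1}{2}\, x^{\mathsf{T}}\bigl(x - D(x)\bigr),
\qquad
\nabla \rho_{\mathrm{RED}}(x) \;=\; x - D(x),
\]
```

where D is a denoiser (here, a convolutional neural network trained on known macromolecular structures) and the gradient expression holds under local-homogeneity and Jacobian-symmetry assumptions on D.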
Affiliation(s)
- Dari Kimanius
- MRC Laboratory of Molecular Biology, Cambridge, United Kingdom
- Gustav Zickert
- Department of Mathematics, Royal Institute of Technology (KTH), Sweden
- Takanori Nakane
- MRC Laboratory of Molecular Biology, Cambridge, United Kingdom
- Sebastian Lunz
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom
- Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom
- Ozan Öktem
- Department of Mathematics, Royal Institute of Technology (KTH), Sweden
162
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. PMID: 33305538; PMCID: PMC7856512; DOI: 10.1002/acm2.13121.
Abstract
This paper reviewed the deep learning-based studies for medical imaging synthesis and its clinical application. Specifically, we summarized the recent developments of deep learning-based methods in inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances with related clinical applications on representative studies. The challenges among the reviewed studies were then summarized with discussion.
Affiliation(s)
- Tonghe Wang
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Winship Cancer InstituteEmory UniversityAtlantaGAUSA
- Yang Lei
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Yabo Fu
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Jacob F. Wynne
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Walter J. Curran
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Winship Cancer InstituteEmory UniversityAtlantaGAUSA
- Tian Liu
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Winship Cancer InstituteEmory UniversityAtlantaGAUSA
- Xiaofeng Yang
- Department of Radiation OncologyEmory UniversityAtlantaGAUSA
- Winship Cancer InstituteEmory UniversityAtlantaGAUSA
164
Lei K, Mardani M, Pauly JM, Vasanawala SS. Wasserstein GANs for MR Imaging: From Paired to Unpaired Training. IEEE Trans Med Imaging 2021; 40:105-115. PMID: 32915728; PMCID: PMC7797774; DOI: 10.1109/tmi.2020.3022968.
Abstract
Lack of ground-truth MR images impedes the common supervised training of neural networks for image reconstruction. To cope with this challenge, this article leverages unpaired adversarial training for reconstruction networks, where the inputs are undersampled k-space and naively reconstructed images from one dataset, and the labels are high-quality images from another dataset. The reconstruction networks consist of a generator which suppresses the input image artifacts, and a discriminator using a pool of (unpaired) labels to adjust the reconstruction quality. The generator is an unrolled neural network - a cascade of convolutional and data consistency layers. The discriminator is also a multilayer CNN that plays the role of a critic scoring the quality of reconstructed images based on the Wasserstein distance. Our experiments with knee MRI datasets demonstrate that the proposed unpaired training enables diagnostic-quality reconstruction when high-quality image labels are not available for the input types of interest, or when the amount of labels is small. In addition, our adversarial training scheme can achieve better image quality (as rated by expert radiologists) compared with the paired training schemes with pixel-wise loss.
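A hedged reminder of the Wasserstein objective underlying the critic described above, adapted to the unpaired setting; the paper's data-consistency layers and Lipschitz enforcement are not reproduced here:

```latex
\[
\min_{G}\;\max_{D\ \text{1-Lipschitz}}\;
\mathbb{E}_{x \sim p_{\mathrm{labels}}}\!\bigl[D(x)\bigr]
\;-\;
\mathbb{E}_{\tilde{x} \sim p_{\mathrm{inputs}}}\!\bigl[D\bigl(G(\tilde{x})\bigr)\bigr],
\]
```

where G is the unrolled reconstruction network acting on undersampled inputs and the two expectations run over different (unpaired) datasets.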
165
Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. DOI: 10.1016/b978-0-12-816386-3.00057-0.
166
Li G, Lv J, Tong X, Wang C, Yang G. High-Resolution Pelvic MRI Reconstruction Using a Generative Adversarial Network With Attention and Cyclic Loss. IEEE Access 2021; 9:105951-105964. DOI: 10.1109/access.2021.3099695.
167
Sandino CM, Lai P, Vasanawala SS, Cheng JY. Accelerating cardiac cine MRI using a deep learning-based ESPIRiT reconstruction. Magn Reson Med 2021; 85:152-167. PMID: 32697891; PMCID: PMC7722220; DOI: 10.1002/mrm.28420.
Abstract
PURPOSE To propose a novel combined parallel imaging and deep learning-based reconstruction framework for robust reconstruction of highly accelerated 2D cardiac cine MRI data. METHODS We propose DL-ESPIRiT, an unrolled neural network architecture that utilizes an extended coil sensitivity model to address SENSE-related field-of-view (FOV) limitations in previously proposed deep learning-based reconstruction frameworks. Additionally, we propose a novel neural network design based on (2+1)D spatiotemporal convolutions to produce more accurate dynamic MRI reconstructions than conventional 3D convolutions. The network is trained on fully sampled 2D cardiac cine datasets collected from 11 healthy volunteers with IRB approval. DL-ESPIRiT is compared against a state-of-the-art parallel imaging and compressed sensing method known as ℓ1-ESPIRiT. The reconstruction accuracy of both methods is evaluated on retrospectively undersampled datasets (R = 12) with respect to standard image quality metrics as well as automatic deep learning-based segmentations of left ventricular volumes. Feasibility of DL-ESPIRiT is demonstrated on two prospectively undersampled datasets acquired in a single heartbeat per slice. RESULTS The (2+1)D DL-ESPIRiT method produces higher fidelity image reconstructions when compared to ℓ1-ESPIRiT reconstructions with respect to standard image quality metrics (P < .001). As a result of improved image quality, segmentations made from (2+1)D DL-ESPIRiT images are also more accurate than segmentations from ℓ1-ESPIRiT images. CONCLUSIONS DL-ESPIRiT synergistically combines a robust parallel imaging model and deep learning-based priors to produce high-fidelity reconstructions of retrospectively undersampled 2D cardiac cine data acquired with reduced FOV. Although a proof-of-concept is shown, further experiments are necessary to determine the efficacy of DL-ESPIRiT in prospectively undersampled data.
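A small sketch of the (2+1)D factorization named above, written as a PyTorch module; the channel counts, intermediate width, and activation are assumptions, and this is not the DL-ESPIRiT network itself:

```python
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    """Factor a k x k x k spatiotemporal conv into a 1 x k x k spatial conv
    followed by a k x 1 x 1 temporal conv (inputs shaped N, C, T, H, W)."""
    def __init__(self, in_ch: int, out_ch: int, mid_ch: int = 32, k: int = 3):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2))
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(k // 2, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.temporal(self.act(self.spatial(x)))

# Example: 2-channel (real/imaginary) cine input with 20 cardiac phases.
x = torch.randn(1, 2, 20, 64, 64)
y = Conv2Plus1D(2, 16)(x)
print(y.shape)  # torch.Size([1, 16, 20, 64, 64])
```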
Affiliation(s)
- Peng Lai
- Applied Sciences Laboratory, GE Healthcare, Menlo Park, CA, USA
- Joseph Y Cheng
- Department of Radiology, Stanford University, Stanford, CA, USA
168
Ran M, Xia W, Huang Y, Lu Z, Bao P, Liu Y, Sun H, Zhou J, Zhang Y. MD-Recon-Net: A Parallel Dual-Domain Convolutional Neural Network for Compressed Sensing MRI. IEEE Trans Radiat Plasma Med Sci 2021. DOI: 10.1109/trpms.2020.2991877.
169
Edupuganti V, Mardani M, Vasanawala S, Pauly J. Uncertainty Quantification in Deep MRI Reconstruction. IEEE Trans Med Imaging 2021; 40:239-250. PMID: 32956045; PMCID: PMC7837266; DOI: 10.1109/tmi.2020.3025065.
Abstract
Reliable MRI is crucial for accurate interpretation in therapeutic and diagnostic tasks. However, undersampling during MRI acquisition as well as the overparameterized and non-transparent nature of deep learning (DL) leaves substantial uncertainty about the accuracy of DL reconstruction. With this in mind, this study aims to quantify the uncertainty in image recovery with DL models. To this end, we first leverage variational autoencoders (VAEs) to develop a probabilistic reconstruction scheme that maps out (low-quality) short scans with aliasing artifacts to the diagnostic-quality ones. The VAE encodes the acquisition uncertainty in a latent code and naturally offers a posterior of the image from which one can generate pixel variance maps using Monte-Carlo sampling. Accurately predicting risk requires knowledge of the bias as well, for which we leverage Stein's Unbiased Risk Estimator (SURE) as a proxy for mean-squared-error (MSE). A range of empirical experiments is performed for Knee MRI reconstruction under different training losses (adversarial and pixel-wise) and unrolled recurrent network architectures. Our key observations indicate that: 1) adversarial losses introduce more uncertainty; and 2) recurrent unrolled nets reduce the prediction uncertainty and risk.
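A minimal sketch of the Monte-Carlo pixel-variance idea described above; the decoder and latent statistics are hypothetical stand-ins, not the paper's trained VAE:

```python
import numpy as np

def mc_variance_map(decoder, mu, log_var, n_samples=64, seed=None):
    """Sample the VAE posterior z ~ N(mu, exp(log_var)) repeatedly, decode each
    sample to an image, and return per-pixel mean and variance maps."""
    rng = np.random.default_rng(seed)
    std = np.exp(0.5 * log_var)
    images = []
    for _ in range(n_samples):
        z = mu + std * rng.standard_normal(mu.shape)
        images.append(decoder(z))
    images = np.stack(images)                 # (n_samples, H, W)
    return images.mean(axis=0), images.var(axis=0)

# Toy stand-in decoder: projects a latent code onto a fixed image basis.
basis = np.random.default_rng(1).standard_normal((8, 32, 32))
toy_decoder = lambda z: np.tensordot(z, basis, axes=1)
mean_img, var_map = mc_variance_map(toy_decoder, mu=np.zeros(8), log_var=np.zeros(8))
print(mean_img.shape, var_map.shape)  # (32, 32) (32, 32)
```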
170
Shen D, Ghosh S, Haji-Valizadeh H, Pathrose A, Schiffers F, Lee DC, Freed BH, Markl M, Cossairt OS, Katsaggelos AK, Kim D. Rapid reconstruction of highly undersampled, non-Cartesian real-time cine k-space data using a perceptual complex neural network (PCNN). NMR Biomed 2021; 34:e4405. PMID: 32875668; PMCID: PMC8793037; DOI: 10.1002/nbm.4405.
Abstract
Highly accelerated real-time cine MRI using compressed sensing (CS) is a promising approach to achieve high spatio-temporal resolution and clinically acceptable image quality in patients with arrhythmia and/or dyspnea. However, its lengthy image reconstruction time may hinder its clinical translation. The purpose of this study was to develop a neural network for reconstruction of non-Cartesian real-time cine MRI k-space data faster (<1 min per slice with 80 frames) than graphics processing unit (GPU)-accelerated CS reconstruction, without significant loss in image quality or accuracy in left ventricular (LV) functional parameters. We introduce a perceptual complex neural network (PCNN) that trains on complex-valued MRI signal and incorporates a perceptual loss term to suppress incoherent image details. This PCNN was trained and tested with multi-slice, multi-phase, cine images from 40 patients (20 for training, 20 for testing), where the zero-filled images were used as input and the corresponding CS reconstructed images were used as practical ground truth. The resulting images were compared using quantitative metrics (structural similarity index (SSIM) and normalized root mean square error (NRMSE)) and visual scores (conspicuity, temporal fidelity, artifacts, and noise scores), individually graded on a five-point scale (1, worst; 3, acceptable; 5, best), and LV ejection fraction (LVEF). The mean processing time per slice with 80 frames for PCNN was 23.7 ± 1.9 s for pre-processing (Step 1, same as CS) and 0.822 ± 0.004 s for dealiasing (Step 2, 166 times faster than CS). Our PCNN produced higher data fidelity metrics (SSIM = 0.88 ± 0.02, NRMSE = 0.014 ± 0.004) compared with CS. While all the visual scores were significantly different (P < 0.05), the median scores were all 4.0 or higher for both CS and PCNN. LVEFs measured from CS and PCNN were strongly correlated (R2 = 0.92) and in good agreement (mean difference = -1.4% [2.3% of mean]; limit of agreement = 10.6% [17.6% of mean]). The proposed PCNN is capable of rapid reconstruction (25 s per slice with 80 frames) of non-Cartesian real-time cine MRI k-space data, without significant loss in image quality or accuracy in LV functional parameters.
Affiliation(s)
- Daming Shen
- Biomedical Engineering, Northwestern University, McCormick School of Engineering and Applied Science, Evanston, Illinois, United States
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, United States
- Sushobhan Ghosh
- Department of Computer Science, Northwestern University, McCormick School of Engineering and Applied Science, Evanston, Illinois, United States
- Hassan Haji-Valizadeh
- Biomedical Engineering, Northwestern University, McCormick School of Engineering and Applied Science, Evanston, Illinois, United States
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, United States
- Ashitha Pathrose
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, United States
- Florian Schiffers
- Department of Computer Science, Northwestern University, McCormick School of Engineering and Applied Science, Evanston, Illinois, United States
- Daniel C Lee
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, United States
- Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, United States
- Benjamin H Freed
- Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, United States
- Michael Markl
- Biomedical Engineering, Northwestern University, McCormick School of Engineering and Applied Science, Evanston, Illinois, United States
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, United States
- Oliver S. Cossairt
- Department of Computer Science, Northwestern University, McCormick School of Engineering and Applied Science, Evanston, Illinois, United States
- Aggelos K. Katsaggelos
- Department of Electrical and Computer Engineering, McCormick School of Engineering and Applied Science, Northwestern University, Evanston, Illinois, United States
- Daniel Kim
- Biomedical Engineering, Northwestern University, McCormick School of Engineering and Applied Science, Evanston, Illinois, United States
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, United States
171
Liu R, Zhang Y, Cheng S, Luo Z, Fan X. A Deep Framework Assembling Principled Modules for CS-MRI: Unrolling Perspective, Convergence Behaviors, and Practical Modeling. IEEE Trans Med Imaging 2020; 39:4150-4163. PMID: 32746155; DOI: 10.1109/tmi.2020.3014193.
Abstract
Compressed Sensing Magnetic Resonance Imaging (CS-MRI) significantly accelerates MR acquisition at a sampling rate much lower than the Nyquist criterion. A major challenge for CS-MRI lies in solving the severely ill-posed inverse problem to reconstruct aliasing-free MR images from the sparse k-space data. Conventional methods typically optimize an energy function and produce restorations of high quality, but their iterative numerical solvers unavoidably incur very long reconstruction times. Recent deep techniques provide fast restoration by either learning a direct prediction of the final reconstruction or plugging learned modules into the energy optimizer. Nevertheless, these data-driven predictors cannot guarantee that the reconstruction follows the principled constraints underlying the domain knowledge, so the reliability of their reconstruction process is questionable. In this paper, we propose a deep framework assembling principled modules for CS-MRI that fuses a learning strategy with the iterative solver of a conventional reconstruction energy. This framework embeds an optimal-condition checking mechanism, fostering efficient and reliable reconstruction. We also apply the framework to three practical tasks, i.e., complex-valued data reconstruction, parallel imaging, and reconstruction with Rician noise. Extensive experiments on both benchmark and manufacturer-testing images demonstrate that the proposed method reliably converges to the optimal solution more efficiently and accurately than the state of the art in various scenarios.
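For orientation, one standard unrolled iteration that such principled frameworks build on is the proximal-gradient update for the CS-MRI energy (a generic sketch; the paper's specific modules and optimality checks are not reproduced):

```latex
\[
x^{(k+1)} \;=\; \operatorname{prox}_{\alpha_k \lambda \mathcal{R}}
\!\Bigl(x^{(k)} \;-\; \alpha_k\, A^{H}\bigl(Ax^{(k)} - y\bigr)\Bigr),
\qquad A = \mathcal{M}\,\mathcal{F},
\]
```

where y is the acquired k-space data, M the undersampling mask, F the Fourier (encoding) operator, and R the regularizer; in learned unrolling the proximal step, and possibly the step sizes, are replaced by trained modules.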
172
Liu F, Kijowski R, Feng L, El Fakhri G. High-performance rapid MR parameter mapping using model-based deep adversarial learning. Magn Reson Imaging 2020; 74:152-160. PMID: 32980503; PMCID: PMC7669737; DOI: 10.1016/j.mri.2020.09.021.
Abstract
PURPOSE To develop and evaluate a deep adversarial learning-based image reconstruction approach for rapid and efficient MR parameter mapping. METHODS The proposed method provides an image reconstruction framework by combining the end-to-end convolutional neural network (CNN) mapping, adversarial learning, and MR physical models. The CNN performs direct image-to-parameter mapping by transforming a series of undersampled images directly into MR parameter maps. Adversarial learning is used to improve image sharpness and enable better texture restoration during the image-to-parameter conversion. An additional pathway concerning the MR signal model is added between the estimated parameter maps and undersampled k-space data to ensure the data consistency during network training. The proposed framework was evaluated on T2 mapping of the brain and the knee at an acceleration rate R = 8 and was compared with other state-of-the-art reconstruction methods. Global and regional quantitative assessments were performed to demonstrate the reconstruction performance of the proposed method. RESULTS The proposed adversarial learning approach achieved accurate T2 mapping up to R = 8 in brain and knee joint image datasets. Compared to conventional reconstruction approaches that exploit image sparsity and low-rankness, the proposed method yielded lower errors and higher similarity to the reference and better image sharpness in the T2 estimation. The quantitative metrics were normalized root mean square error of 3.6% for brain and 7.3% for knee, structural similarity index of 85.1% for brain and 83.2% for knee, and tenengrad measures of 9.2% for brain and 10.1% for the knee. The adversarial approach also achieved better performance for maintaining greater image texture and sharpness in comparison to the CNN approach without adversarial learning. CONCLUSION The proposed framework by incorporating the efficient end-to-end CNN mapping, adversarial learning, and physical model enforced data consistency is a promising approach for rapid and efficient reconstruction of quantitative MR parameters.
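The MR signal model enforced for T2 mapping is, to a good approximation, a mono-exponential decay; a small sketch of fitting it conventionally for a single voxel (simulated values; not the paper's network-based estimator):

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_signal(te_ms, s0, t2_ms):
    """Mono-exponential T2 decay: S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te_ms / t2_ms)

# Simulated multi-echo signal for one voxel (S0 = 1000 a.u., T2 = 45 ms) plus noise.
te = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
signal = t2_signal(te, 1000.0, 45.0) + np.random.default_rng(0).normal(0, 5, te.size)

(s0_fit, t2_fit), _ = curve_fit(t2_signal, te, signal, p0=(signal[0], 50.0))
print(f"fitted T2 = {t2_fit:.1f} ms")
```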
Affiliation(s)
- Fang Liu
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Richard Kijowski
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, USA
- Li Feng
- Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, USA
- Georges El Fakhri
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
173
Yaman B, Hosseini SAH, Moeller S, Ellermann J, Uğurbil K, Akçakaya M. Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med 2020; 84:3172-3191. PMID: 32614100; PMCID: PMC7811359; DOI: 10.1002/mrm.28378.
Abstract
PURPOSE To develop a strategy for training a physics-guided MRI reconstruction neural network without a database of fully sampled data sets. METHODS Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions available measurements into two disjoint sets, one of which is used in the data consistency (DC) units in the unrolled network and the other is used to define the loss for training. The proposed training without fully sampled data is compared with fully supervised training with ground-truth data, as well as conventional compressed-sensing and parallel imaging methods using the publicly available fastMRI knee database. The same physics-guided neural network is used for both proposed SSDU and supervised training. The SSDU training is also applied to prospectively two-fold accelerated high-resolution brain data sets at different acceleration rates, and compared with parallel imaging. RESULTS Results on five different knee sequences at an acceleration rate of 4 shows that the proposed self-supervised approach performs closely with supervised learning, while significantly outperforming conventional compressed-sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. The results on prospectively subsampled brain data sets, in which supervised learning cannot be used due to lack of ground-truth reference, show that the proposed self-supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at acquisition acceleration. CONCLUSION The proposed SSDU approach allows training of physics-guided deep learning MRI reconstruction without fully sampled data, while achieving comparable results with supervised deep learning MRI trained on fully sampled data.
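A minimal sketch of the SSDU-style partition of acquired k-space locations into disjoint data-consistency and loss sets; the uniform random split and the 0.4 ratio are illustrative assumptions, not necessarily the paper's selection distribution:

```python
import numpy as np

def partition_mask(sampling_mask: np.ndarray, loss_fraction: float = 0.4, seed: int = 0):
    """Split acquired k-space locations into a data-consistency set and a loss set."""
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(sampling_mask)
    loss_idx = rng.choice(acquired, size=int(loss_fraction * acquired.size), replace=False)
    loss_mask = np.zeros_like(sampling_mask)
    loss_mask.flat[loss_idx] = 1
    dc_mask = sampling_mask - loss_mask          # disjoint by construction
    return dc_mask, loss_mask

mask = (np.random.default_rng(1).random((256, 256)) < 0.25).astype(int)
dc_mask, loss_mask = partition_mask(mask)
assert not np.any(dc_mask * loss_mask) and np.array_equal(dc_mask + loss_mask, mask)
```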
Affiliation(s)
- Burhaneddin Yaman
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Seyed Amir Hossein Hosseini
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Jutta Ellermann
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Kâmil Uğurbil
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Mehmet Akçakaya
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
174
Lv J, Wang P, Tong X, Wang C. Parallel imaging with a combination of sensitivity encoding and generative adversarial networks. Quant Imaging Med Surg 2020; 10:2260-2273. PMID: 33269225; PMCID: PMC7596399; DOI: 10.21037/qims-20-518.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) has the limitation of low imaging speed. Acceleration methods using under-sampled k-space data have been widely exploited to improve data acquisition without reducing the image quality. Sensitivity encoding (SENSE) is the most commonly used method for multi-channel imaging. However, SENSE has the drawback of severe g-factor artifacts when the under-sampling factor is high. This paper applies generative adversarial networks (GAN) to remove g-factor artifacts from SENSE reconstructions. METHODS Our method was evaluated on a public knee database containing 20 healthy participants. We compared our method with conventional GAN using zero-filled (ZF) images as input. Structural similarity (SSIM), peak signal to noise ratio (PSNR), and normalized mean square error (NMSE) were calculated for the assessment of image quality. A paired Student's t-test was conducted to compare the image quality metrics between the different methods. Statistical significance was considered at P<0.01. RESULTS The proposed method outperformed SENSE, variational network (VN), and ZF + GAN methods in terms of SSIM (SENSE + GAN: 0.81±0.06, SENSE: 0.40±0.07, VN: 0.79±0.06, ZF + GAN: 0.77±0.06), PSNR (SENSE + GAN: 31.90±1.66, SENSE: 22.70±1.99, VN: 31.35±2.01, ZF + GAN: 29.95±1.59), and NMSE (×10⁻⁷) (SENSE + GAN: 0.95±0.34, SENSE: 4.81±1.33, VN: 0.97±0.30, ZF + GAN: 1.60±0.84) with an under-sampling factor of up to 6-fold. CONCLUSIONS This study demonstrated the feasibility of using GAN to improve the performance of SENSE reconstruction. The improvement in reconstruction is more obvious for higher under-sampling rates, which shows great potential for many clinical applications.
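A small sketch of the paired comparison described in the methods, using SciPy; the per-subject SSIM values are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject SSIM scores for two reconstruction methods (paired).
ssim_sense_gan = np.array([0.82, 0.79, 0.84, 0.80, 0.83, 0.78, 0.81, 0.85])
ssim_zf_gan    = np.array([0.78, 0.75, 0.80, 0.77, 0.79, 0.74, 0.77, 0.81])

t_stat, p_value = stats.ttest_rel(ssim_sense_gan, ssim_zf_gan)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.01: {p_value < 0.01}")
```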
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Peng Wang
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
175
Lim H, Chun IY, Dewaraja YK, Fessler JA. Improved Low-Count Quantitative PET Reconstruction With an Iterative Neural Network. IEEE Trans Med Imaging 2020; 39:3512-3522. PMID: 32746100; PMCID: PMC7685233; DOI: 10.1109/tmi.2020.2998480.
Abstract
Image reconstruction in low-count PET is particularly challenging because gammas from natural radioactivity in Lu-based crystals cause high random fractions that lower the measurement signal-to-noise-ratio (SNR). In model-based image reconstruction (MBIR), using more iterations of an unregularized method may increase the noise, so incorporating regularization into the image reconstruction is desirable to control the noise. New regularization methods based on learned convolutional operators are emerging in MBIR. We modify the architecture of an iterative neural network, BCD-Net, for PET MBIR, and demonstrate the efficacy of the trained BCD-Net using XCAT phantom data that simulates the low true coincidence count-rates with high random fractions typical for Y-90 PET patient imaging after Y-90 microsphere radioembolization. Numerical results show that the proposed BCD-Net significantly improves CNR and RMSE of the reconstructed images compared to MBIR methods using non-trained regularizers, total variation (TV) and non-local means (NLM). Moreover, BCD-Net successfully generalizes to test data that differs from the training data. Improvements were also demonstrated for the clinically relevant phantom measurement data where we used training and testing datasets having very different activity distributions and count-levels.
176
Chaudhari AS, Kogan F, Pedoia V, Majumdar S, Gold GE, Hargreaves BA. Rapid Knee MRI Acquisition and Analysis Techniques for Imaging Osteoarthritis. J Magn Reson Imaging 2020; 52:1321-1339. PMID: 31755191; PMCID: PMC7925938; DOI: 10.1002/jmri.26991.
Abstract
Osteoarthritis (OA) of the knee is a major source of disability that has no known treatment or cure. Morphological and compositional MRI is commonly used for assessing the bone and soft tissues in the knee to enhance the understanding of OA pathophysiology. However, it is challenging to extend these imaging methods and their subsequent analysis techniques to study large population cohorts due to slow and inefficient imaging acquisition and postprocessing tools. This can create a bottleneck in assessing early OA changes and evaluating the responses of novel therapeutics. The purpose of this review article is to highlight recent developments in tools for enhancing the efficiency of knee MRI methods useful to study OA. Advances in efficient MRI data acquisition and reconstruction tools for morphological and compositional imaging, efficient automated image analysis tools, and hardware improvements to further drive efficient imaging are discussed in this review. For each topic, we discuss the current challenges as well as potential future opportunities to alleviate these challenges. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 3.
Affiliation(s)
- Feliks Kogan
- Department of Radiology, Stanford University, Stanford, California, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Center of Digital Health Innovation (CDHI), University of California San Francisco, San Francisco, California, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Center of Digital Health Innovation (CDHI), University of California San Francisco, San Francisco, California, USA
- Garry E. Gold
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Orthopaedic Surgery, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
| | - Brian A. Hargreaves
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
| |
Collapse
|
177
|
Fu Z, Mandava S, Keerthivasan MB, Li Z, Johnson K, Martin DR, Altbach MI, Bilgin A. A multi-scale residual network for accelerated radial MR parameter mapping. Magn Reson Imaging 2020; 73:152-162. [PMID: 32882339 PMCID: PMC7580302 DOI: 10.1016/j.mri.2020.08.013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 07/17/2020] [Accepted: 08/20/2020] [Indexed: 01/04/2023]
Abstract
A deep learning MR parameter mapping framework which combines accelerated radial data acquisition with a multi-scale residual network (MS-ResNet) for image reconstruction is proposed. The proposed supervised learning strategy uses input image patches from multi-contrast images with radial undersampling artifacts and target image patches from artifact-free multi-contrast images. Subspace filtering is used during pre-processing to denoise input patches. For each anatomy and relaxation parameter, an individual network is trained. In vivo T1 mapping results are obtained on brain and abdomen datasets and in vivo T2 mapping results are obtained on brain and knee datasets. Quantitative results for the T2 mapping of the knee show that MS-ResNet trained using either fully sampled or undersampled data outperforms conventional model-based compressed sensing methods. This is significant because obtaining fully sampled training data is not possible in many applications. In vivo brain and abdomen results for T1 mapping and in vivo brain results for T2 mapping demonstrate that MS-ResNet yields contrast-weighted images and parameter maps that are comparable to those achieved by model-based iterative methods while offering two orders of magnitude reduction in reconstruction times. The proposed approach enables recovery of high-quality contrast-weighted images and parameter maps from highly accelerated radial data acquisitions. The rapid image reconstructions enabled by the proposed approach make it a good candidate for routine clinical use.
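As a rough illustration of the multi-scale residual idea (parallel convolution branches with different receptive fields plus a skip connection), a minimal PyTorch block might look as follows; the kernel sizes, channel counts, and class name are assumptions for illustration, not the published MS-ResNet architecture.

```python
import torch
import torch.nn as nn

class MultiScaleResBlock(nn.Module):
    """Toy multi-scale residual block: parallel 3x3 and 5x5 branches are fused
    by a 1x1 convolution and added back to the input (illustrative only)."""
    def __init__(self, ch):
        super().__init__()
        self.conv3 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(ch, ch, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.act(self.conv3(x)), self.act(self.conv5(x))], dim=1)
        return x + self.fuse(y)          # residual connection
```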
Collapse
Affiliation(s)
- Zhiyang Fu
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Sagar Mandava
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Mahesh B Keerthivasan
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Zhitao Li
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Kevin Johnson
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Diego R Martin
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Maria I Altbach
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA; Department of Biomedical Engineering, University of Arizona, Tucson, AZ, USA
| | - Ali Bilgin
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA; Department of Biomedical Engineering, University of Arizona, Tucson, AZ, USA.
| |
Collapse
|
178
|
Shiyam Sundar LK, Muzik O, Buvat I, Bidaut L, Beyer T. Potentials and caveats of AI in hybrid imaging. Methods 2020; 188:4-19. [PMID: 33068741 DOI: 10.1016/j.ymeth.2020.10.004] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 10/05/2020] [Accepted: 10/07/2020] [Indexed: 12/18/2022] Open
Abstract
State-of-the-art patient management frequently mandates the investigation of both the anatomy and physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT and SPECT/CT have the ability to provide both structural and functional information about the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new problems arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches for extracting the maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies that has shown promise in facilitating highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in the medical imaging field has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges in using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.
Collapse
Affiliation(s)
- Lalith Kumar Shiyam Sundar
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
| | | | - Irène Buvat
- Laboratoire d'Imagerie Translationnelle en Oncologie, Inserm, Institut Curie, Orsay, France
| | - Luc Bidaut
- College of Science, University of Lincoln, Lincoln, UK
| | - Thomas Beyer
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria.
| |
Collapse
|
179
|
Pattern Classification Approaches for Breast Cancer Identification via MRI: State-Of-The-Art and Vision for the Future. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10207201] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multi-dimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours when these are observed at various stages of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised deep learning and self-supervised strategies, as well as generative adversarial networks and algorithms using generative adversarial learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform that is based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generative adversarial learning methodology that can be used for tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions about the rate of proliferation of the disease become possible. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.
Collapse
|
180
|
Using Deep Learning to Accelerate Knee MRI at 3 T: Results of an Interchangeability Study. AJR Am J Roentgenol 2020; 215:1421-1429. [PMID: 32755163 DOI: 10.2214/ajr.20.23313] [Citation(s) in RCA: 111] [Impact Index Per Article: 22.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
OBJECTIVE. Deep learning (DL) image reconstruction has the potential to disrupt the current state of MRI by significantly decreasing the time required for MRI examinations. Our goal was to use DL to accelerate MRI to allow a 5-minute comprehensive examination of the knee without compromising image quality or diagnostic accuracy. MATERIALS AND METHODS. A DL model for image reconstruction using a variational network was optimized. The model was trained using dedicated multisequence training, in which a single reconstruction model was trained with data from multiple sequences with different contrast and orientations. After training, data from 108 patients were retrospectively undersampled in a manner that would correspond with a net 3.49-fold acceleration of fully sampled data acquisition and a 1.88-fold acceleration compared with our standard twofold accelerated parallel acquisition. An interchangeability study was performed, in which the ability of six readers to detect internal derangement of the knee was compared for clinical and DL-accelerated images. RESULTS. We found a high degree of interchangeability between standard and DL-accelerated images. In particular, results showed that interchanging the sequences would produce discordant clinical opinions no more than 4% of the time for any feature evaluated. Moreover, the accelerated sequence was judged by all six readers to have better quality than the clinical sequence. CONCLUSION. An optimized DL model allowed acceleration of knee images that performed interchangeably with standard images for detection of internal derangement of the knee. Importantly, readers preferred the quality of accelerated images to that of standard clinical images.
Collapse
|
181
|
Du T, Zhang Y, Shi X, Chen S. Multiple Slice k-space Deep Learning for Magnetic Resonance Imaging Reconstruction. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:1564-1567. [PMID: 33018291 DOI: 10.1109/embc44109.2020.9175642] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Magnetic resonance imaging (MRI) has been one of the most powerful and valuable imaging methods for medical diagnosis and staging of disease. Due to the long scan time of MRI acquisition, k-space under-sampling is required during acquisition. Thus, MRI reconstruction, which transforms undersampled k-space data into high-quality magnetic resonance images, becomes an important and meaningful task. There have been many explorations of k-space interpolation for MRI reconstruction. However, most of these methods ignore the strong correlation between the target slice and its adjacent slices. Inspired by this, we propose a fully data-driven deep learning algorithm for k-space interpolation, utilizing the correlation information between the target slice and its neighboring slices. A novel network is proposed, which models the inter-dependencies between different slices. In addition, the network is easily implemented and extended. Experiments show that our method consistently surpasses existing image-domain and k-space-domain magnetic resonance imaging reconstruction methods.
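A minimal sketch of the underlying idea, feeding the target slice's undersampled k-space together with its neighbours as input channels, could look like this; the slice count, channel layout, and layer choices are illustrative assumptions rather than the network proposed in the paper.

```python
import torch
import torch.nn as nn

class MultiSliceKSpaceNet(nn.Module):
    """Toy multi-slice k-space interpolation net: the target slice plus its two
    neighbours are stacked as channels (real/imag per slice) and the net
    predicts the target slice's completed k-space (illustrative only)."""
    def __init__(self, n_slices=3, feats=64):
        super().__init__()
        in_ch = 2 * n_slices            # real + imaginary channels per slice
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, 2, 3, padding=1),   # real/imag of the target slice
        )

    def forward(self, k_neighbourhood):           # (B, 2*n_slices, H, W)
        return self.net(k_neighbourhood)
```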
Collapse
|
182
|
Lai KW, Aggarwal M, van Zijl P, Li X, Sulam J. Learned Proximal Networks for Quantitative Susceptibility Mapping. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12262:125-135. [PMID: 33163993 DOI: 10.1007/978-3-030-59713-9_13] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Quantitative Susceptibility Mapping (QSM) estimates tissue magnetic susceptibility distributions from Magnetic Resonance (MR) phase measurements by solving an ill-posed dipole inversion problem. Conventional single orientation QSM methods usually employ regularization strategies to stabilize such inversion, but may suffer from streaking artifacts or over-smoothing. Multiple orientation QSM such as calculation of susceptibility through multiple orientation sampling (COSMOS) can give well-conditioned inversion and an artifact-free solution but has high acquisition costs. On the other hand, Convolutional Neural Networks (CNN) show great potential for medical image reconstruction, albeit often with limited interpretability. Here, we present a Learned Proximal Convolutional Neural Network (LP-CNN) for solving the ill-posed QSM dipole inversion problem in an iterative proximal gradient descent fashion. This approach combines the strengths of data-driven restoration priors and the clear interpretability of iterative solvers that can take into account the physical model of dipole convolution. During training, our LP-CNN learns an implicit regularizer via its proximal operator, enabling the decoupling between the forward operator and the data-driven parameters in the reconstruction algorithm. More importantly, this framework is believed to be the first deep learning QSM approach that can naturally handle an arbitrary number of phase input measurements without the need for any ad-hoc rotation or re-training. We demonstrate that the LP-CNN provides state-of-the-art reconstruction results compared to both traditional and deep learning methods while allowing for more flexibility in the reconstruction process.
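The iterative proximal-gradient structure can be sketched compactly: a gradient step on the dipole-convolution data term followed by the learned proximal mapping. The snippet below is a simplified single-orientation sketch under assumed tensor shapes; `prox_net` stands in for the trained proximal CNN and none of this is the authors' implementation.

```python
import torch

def lp_cnn_reconstruct(phi, dipole_kernel, prox_net, n_iters=10, step=1.0):
    """Sketch of proximal-gradient QSM with a learned proximal operator.
    phi: measured local field map, real tensor (B, 1, H, W, D);
    dipole_kernel: k-space dipole kernel D of matching shape;
    prox_net: assumed trained CNN acting as the proximal (regularization) step."""
    chi = torch.zeros_like(phi)
    phi_k = torch.fft.fftn(phi.to(torch.complex64), dim=(-3, -2, -1))
    for _ in range(n_iters):
        # gradient of 0.5 * || D * F(chi) - F(phi) ||^2, evaluated in k-space
        chi_k = torch.fft.fftn(chi.to(torch.complex64), dim=(-3, -2, -1))
        residual = dipole_kernel * chi_k - phi_k
        grad = torch.real(torch.fft.ifftn(torch.conj(dipole_kernel) * residual,
                                          dim=(-3, -2, -1)))
        chi = prox_net(chi - step * grad)   # learned proximal (denoising) update
    return chi
```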
Collapse
Affiliation(s)
- Kuo-Wei Lai
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD 21205, USA
| | - Manisha Aggarwal
- Department of Radiology and Radiological Sciences, Johns Hopkins University, Baltimore, MD 21205, USA
| | - Peter van Zijl
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD 21205, USA
- Department of Radiology and Radiological Sciences, Johns Hopkins University, Baltimore, MD 21205, USA
| | - Xu Li
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD 21205, USA
- Department of Radiology and Radiological Sciences, Johns Hopkins University, Baltimore, MD 21205, USA
| | - Jeremias Sulam
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| |
Collapse
|
183
|
Henschel L, Conjeti S, Estrada S, Diers K, Fischl B, Reuter M. FastSurfer - A fast and accurate deep learning based neuroimaging pipeline. Neuroimage 2020; 219:117012. [PMID: 32526386 PMCID: PMC7898243 DOI: 10.1016/j.neuroimage.2020.117012] [Citation(s) in RCA: 239] [Impact Index Per Article: 47.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2020] [Revised: 05/29/2020] [Accepted: 05/31/2020] [Indexed: 02/01/2023] Open
Abstract
Traditional neuroimage analysis pipelines involve computationally intensive, time-consuming optimization steps, and thus, do not scale well to large cohort studies with thousands or tens of thousands of individuals. In this work we propose a fast and accurate deep learning based neuroimaging pipeline for the automated processing of structural human brain MRI scans, replicating FreeSurfer's anatomical segmentation including surface reconstruction and cortical parcellation. To this end, we introduce an advanced deep learning architecture capable of whole-brain segmentation into 95 classes. The network architecture incorporates local and global competition via competitive dense blocks and competitive skip pathways, as well as multi-slice information aggregation that specifically tailor network performance towards accurate segmentation of both cortical and subcortical structures. Further, we perform fast cortical surface reconstruction and thickness analysis by introducing a spectral spherical embedding and by directly mapping the cortical labels from the image to the surface. This approach provides a full FreeSurfer alternative for volumetric analysis (in under 1 min) and surface-based thickness analysis (within only around 1 h runtime). To establish the reliability of this approach we perform extensive validation: we demonstrate high segmentation accuracy on several unseen datasets, measure generalizability, and show increased test-retest reliability as well as high sensitivity to group differences in dementia.
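The "competitive" blocks mentioned above replace dense concatenation with an element-wise maximum so the channel count stays fixed; a toy version of such a unit (layer choices assumed, not FastSurferCNN's exact block) is shown below.

```python
import torch
import torch.nn as nn

class CompetitiveDenseUnit(nn.Module):
    """Toy competitive unit: incoming and newly computed feature maps compete
    through an element-wise maximum (maxout) instead of being concatenated,
    keeping the number of channels constant (illustrative only)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(ch),
            nn.PReLU(),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.maximum(x, self.conv(x))   # competitive (maxout) merge
```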
Collapse
Affiliation(s)
- Leonie Henschel
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Sailesh Conjeti
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Santiago Estrada
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Kersten Diers
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Bruce Fischl
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
| | - Martin Reuter
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
184
|
Accelerating quantitative MR imaging with the incorporation of B1 compensation using deep learning. Magn Reson Imaging 2020; 72:78-86. [DOI: 10.1016/j.mri.2020.06.011] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2020] [Revised: 05/20/2020] [Accepted: 06/13/2020] [Indexed: 11/21/2022]
|
185
|
Aggarwal HK, Jacob M. J-MoDL: Joint Model-Based Deep Learning for Optimized Sampling and Reconstruction. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING 2020; 14:1151-1162. [PMID: 33613806 PMCID: PMC7893809 DOI: 10.1109/jstsp.2020.3004094] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Modern MRI schemes, which rely on compressed sensing or deep learning algorithms to recover MRI data from undersampled multichannel Fourier measurements, are widely used to reduce the scan time. The image quality of these approaches is heavily dependent on the sampling pattern. We introduce a continuous strategy to optimize the sampling pattern and the network parameters jointly. We use a multichannel forward model, consisting of a non-uniform Fourier transform with continuously defined sampling locations, to realize the data consistency block within a model-based deep learning image reconstruction scheme. This approach facilitates the joint and continuous optimization of the sampling pattern and the CNN parameters to improve image quality. We observe that the joint optimization of the sampling patterns and the reconstruction module significantly improves the performance of most deep learning reconstruction algorithms. The source code is available at https://github.com/hkaggarwal/J-MoDL.
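The key idea, making the sampling locations themselves trainable parameters next to the CNN weights, can be illustrated with a strongly simplified Cartesian stand-in for the paper's continuous non-uniform Fourier model; the soft mask, class name, and interfaces below are assumptions for illustration only (the actual source code is linked above).

```python
import torch
import torch.nn as nn

class JointSamplingRecon(nn.Module):
    """Toy joint optimization of sampling and reconstruction: continuous
    phase-encode locations are trainable alongside an assumed reconstruction
    CNN, coupled through a differentiable soft sampling mask."""
    def __init__(self, n_lines, img_size, cnn):
        super().__init__()
        self.locs = nn.Parameter(torch.rand(n_lines) * img_size)  # sampling locations
        self.cnn = cnn                                            # assumed recon network

    def soft_mask(self, img_size, sharpness=10.0):
        grid = torch.arange(img_size, dtype=torch.float32)
        d = (grid[None, :] - self.locs[:, None]).abs()            # distance to each line
        return torch.sigmoid(sharpness * (0.5 - d)).amax(dim=0)   # (img_size,)

    def forward(self, kspace_full):                               # (B, 1, H, W) complex
        mask = self.soft_mask(kspace_full.shape[-1]).to(kspace_full.device)
        k_us = kspace_full * mask                                 # differentiable undersampling
        zero_filled = torch.fft.ifft2(k_us).abs()
        return self.cnn(zero_filled)
```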
Collapse
Affiliation(s)
- Hemant Kumar Aggarwal
- Department of Electrical and Computer Engineering, University of Iowa, IA, USA, 52242
| | - Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, IA, USA, 52242
| |
Collapse
|
186
|
Liu X, Zhou T, Lu M, Yang Y, He Q, Luo J. Deep Learning for Ultrasound Localization Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3064-3078. [PMID: 32286964 DOI: 10.1109/tmi.2020.2986781] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Ultrasound localization microscopy (ULM), which localizes microbubbles (MBs) in the vasculature, has recently been proposed; it greatly improves the spatial resolution of ultrasound (US) imaging and will be helpful for clinical diagnosis. Nevertheless, several challenges remain in fast ULM imaging. The main problems are that current localization methods used to implement fast ULM imaging, e.g., a previously reported localization method based on sparse recovery (CS-ULM), suffer from long data-processing times and exhaustive parameter tuning (optimization). To address these problems, in this paper, we propose a ULM method based on deep learning, which is achieved by using a modified sub-pixel convolutional neural network (CNN), termed mSPCN-ULM. Simulations and in vivo experiments are performed to evaluate the performance of mSPCN-ULM. Simulation results show that even under a high-density condition (6.4 MBs/mm2), a high localization precision ( [Formula: see text] in the lateral direction and [Formula: see text] in the axial direction) and a high localization reliability (Jaccard index of 0.66) can be obtained by mSPCN-ULM, compared to CS-ULM. The in vivo experimental results indicate that with a plane-wave scan at a transmit center frequency of 15.625 MHz, microvessels with diameters of [Formula: see text] can be detected and adjacent microvessels with a distance of [Formula: see text] can be separated. Furthermore, when using GPU acceleration, the data-processing time of mSPCN-ULM can be shortened to ~6 sec/frame in the simulations and ~23 sec/frame in the in vivo experiments, which is 3-4 orders of magnitude faster than CS-ULM. Finally, once the network is trained, mSPCN-ULM does not need parameter tuning to implement ULM. As a result, mSPCN-ULM opens the door to implementing ULM with fast data-processing speed, high imaging accuracy, short data-acquisition time, and high flexibility (robustness to parameters).
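A sub-pixel CNN in this spirit maps a low-resolution contrast-enhanced US frame to a localization map on a finer grid via PixelShuffle; the toy network below only illustrates that mechanism and is not the published mSPCN.

```python
import torch
import torch.nn as nn

class SubPixelLocalizer(nn.Module):
    """Toy sub-pixel localizer: a small CNN whose output is rearranged with
    PixelShuffle so microbubble positions can be mapped on a grid 'up' times
    finer than the input frame (illustrative only)."""
    def __init__(self, up=4, feats=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, up * up, 3, padding=1),
            nn.PixelShuffle(up),                 # output: (B, 1, H*up, W*up)
        )

    def forward(self, frame):                    # frame: (B, 1, H, W)
        return self.net(frame)                   # high-resolution localization map
```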
Collapse
|
187
|
Munsch F, Taso M, Zhao L, Lebel RM, Guidon A, Detre JA, Alsop DC. Rotated spiral RARE for high spatial and temporal resolution volumetric arterial spin labeling acquisition. Neuroimage 2020; 223:117371. [PMID: 32931943 PMCID: PMC9470008 DOI: 10.1016/j.neuroimage.2020.117371] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Revised: 09/08/2020] [Accepted: 09/09/2020] [Indexed: 12/29/2022] Open
Abstract
Background: Arterial Spin Labeling (ASL) MRI can provide quantitative images that are sensitive to both time averaged blood flow and its temporal fluctuations. 3D image acquisitions for ASL are desirable because they are more readily compatible with background suppression to reduce noise, can reduce signal loss and distortion, and provide uniform flow sensitivity across the brain. However, single-shot 3D acquisition for maximal temporal resolution typically involves degradation of image quality through blurring or noise amplification by parallel imaging. Here, we report a new approach to accelerate a common stack-of-spirals 3D image acquisition by pseudo golden-angle rotation and compressed sensing reconstruction without any degradation of time averaged blood flow images. Methods: 28 healthy volunteers were imaged at 3T with background-suppressed unbalanced pseudo-continuous ASL combined with a pseudo golden-angle Stack-of-Spirals 3D RARE readout. A fully-sampled perfusion-weighted volume was reconstructed by 3D non-uniform Fast Fourier Transform (nuFFT) followed by sum-of-squares combination of the 32 individual channels. Coil sensitivities were estimated, followed by reconstruction of the 39 single-shot volumes using an L1-wavelet Compressed-Sensing reconstruction. Finally, brain connectivity analyses were performed in regions where the BOLD signal suffers from a low signal-to-noise ratio and susceptibility artifacts. Results: The image quality of the full time averaged blood flow maps, assessed with a no-reference 3D blurring metric, was comparable to that of a conventional interleaved acquisition. The temporal resolution provided by the acceleration enabled identification and quantification of resting-state networks even in inferior regions such as the amygdala and inferior frontal lobes, where susceptibility artifacts can degrade conventional resting-state fMRI acquisitions. Conclusion: This approach can provide measures of blood flow modulations and resting-state networks for free within any research or clinical protocol employing ASL for resting blood flow.
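The L1-wavelet compressed-sensing reconstruction mentioned in the Methods amounts to iterating a data-consistency gradient step with wavelet soft-thresholding; the ISTA-style sketch below assumes user-supplied operators (`A`, `AH`, `W`, `WH`) and is not the pipeline used in the study.

```python
import numpy as np

def soft_threshold(x, lam, eps=1e-12):
    """Magnitude soft-thresholding, valid for real or complex coefficients."""
    mag = np.abs(x)
    return x * np.maximum(mag - lam, 0.0) / np.maximum(mag, eps)

def ista_l1_wavelet(y, A, AH, W, WH, lam=1e-3, step=1.0, n_iters=50):
    """Sketch of L1-wavelet compressed-sensing reconstruction (ISTA).
    A / AH: assumed (non-uniform) Fourier sampling operator and its adjoint,
    W / WH: assumed wavelet transform and its inverse (callables)."""
    x = AH(y)                                     # adjoint (gridding) initialisation
    for _ in range(n_iters):
        x = x - step * AH(A(x) - y)               # gradient step on the data term
        x = WH(soft_threshold(W(x), lam * step))  # proximal step: wavelet soft-thresholding
    return x
```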
Collapse
Affiliation(s)
- Fanny Munsch
- Division of MRI Research, Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA 02215, USA.
| | - Manuel Taso
- Division of MRI Research, Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA 02215, USA
| | - Li Zhao
- Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
| | - R Marc Lebel
- Global MR Applications and Workflow, GE Healthcare, Calgary, AB, Canada
| | - Arnaud Guidon
- Global MR Applications and Workflow, GE Healthcare, Boston, MA, USA
| | - John A Detre
- Departments of Neurology and Radiology, University of Pennsylvania, Philadelphia, PA, USA
| | - David C Alsop
- Division of MRI Research, Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA 02215, USA
| |
Collapse
|
188
|
Dual-domain cascade of U-nets for multi-channel magnetic resonance image reconstruction. Magn Reson Imaging 2020; 71:140-153. [DOI: 10.1016/j.mri.2020.06.002] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2020] [Revised: 05/20/2020] [Accepted: 06/09/2020] [Indexed: 11/17/2022]
|
189
|
Polak D, Cauley S, Bilgic B, Gong E, Bachert P, Adalsteinsson E, Setsompop K. Joint multi-contrast variational network reconstruction (jVN) with application to rapid 2D and 3D imaging. Magn Reson Med 2020; 84:1456-1469. [PMID: 32129529 PMCID: PMC7539238 DOI: 10.1002/mrm.28219] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2019] [Revised: 01/20/2020] [Accepted: 01/29/2020] [Indexed: 12/14/2022]
Abstract
PURPOSE To improve the image quality of highly accelerated multi-channel MRI data by learning a joint variational network that reconstructs multiple clinical contrasts jointly. METHODS Data from our multi-contrast acquisition were embedded into the variational network architecture where shared anatomical information is exchanged by mixing the input contrasts. Complementary k-space sampling across imaging contrasts and Bunch-Phase/Wave-Encoding were used for data acquisition to improve the reconstruction at high accelerations. At 3T, our joint variational network approach across T1w, T2w and T2-FLAIR-weighted brain scans was tested for retrospective under-sampling at R = 6 (2D) and R = 4 × 4 (3D) acceleration. Prospective acceleration was also performed for 3D data where the combined acquisition time for whole brain coverage at 1 mm isotropic resolution across three contrasts was less than 3 min. RESULTS Across all test datasets, our joint multi-contrast network better preserved fine anatomical details with reduced image-blurring when compared to the corresponding single-contrast reconstructions. Improvement in image quality was also obtained through complementary k-space sampling and Bunch-Phase/Wave-Encoding where the synergistic combination yielded the overall best performance as evidenced by exemplary slices and quantitative error metrics. CONCLUSION By leveraging shared anatomical structures across the jointly reconstructed scans, our joint multi-contrast approach learnt more efficient regularizers, which helped to retain natural image appearance and avoid over-smoothing. When synergistically combined with advanced encoding techniques, the performance was further improved, enabling up to R = 16-fold acceleration with good image quality. This should help pave the way to very rapid high-resolution brain exams.
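Conceptually, each joint block lets the contrasts exchange shared anatomy by stacking them as channels of one regularizer, followed by per-contrast data consistency; the block below is a heavily simplified real-valued, single-coil sketch with assumed names, not the published jVN.

```python
import torch
import torch.nn as nn

class JointContrastBlock(nn.Module):
    """Toy joint multi-contrast block: T1w/T2w/FLAIR images are stacked as
    channels of a shared learned regularizer, then a soft data-consistency
    step is applied per contrast in k-space (illustrative only)."""
    def __init__(self, n_contrasts=3, feats=32):
        super().__init__()
        self.reg = nn.Sequential(
            nn.Conv2d(n_contrasts, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, n_contrasts, 3, padding=1),
        )

    def forward(self, x, kspace, mask, lam=0.5):
        # x: real images (B, C, H, W); kspace: measured data (B, C, H, W) complex;
        # mask: sampling mask broadcastable to kspace
        x = x - self.reg(x)                                   # learned regularizer update
        k = torch.fft.fft2(x.to(torch.complex64))
        k = torch.where(mask.bool(), (k + lam * kspace) / (1 + lam), k)  # soft data consistency
        return torch.fft.ifft2(k).real
```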
Collapse
Affiliation(s)
- Daniel Polak
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Siemens Healthcare GmbH, Erlangen, Germany
| | - Stephen Cauley
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Berkin Bilgic
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | - Peter Bachert
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Elfar Adalsteinsson
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Kawin Setsompop
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
190
|
Singhal V, Majumdar A. Reconstructing multi-echo magnetic resonance images via structured deep dictionary learning. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.107] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
191
|
|
192
|
Yan J, Chen S, Zhang Y, Li X. Neural Architecture Search for compressed sensing Magnetic Resonance image reconstruction. Comput Med Imaging Graph 2020; 85:101784. [PMID: 32860972 DOI: 10.1016/j.compmedimag.2020.101784] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2020] [Revised: 07/26/2020] [Accepted: 08/15/2020] [Indexed: 01/04/2023]
Abstract
Recent works have demonstrated that deep learning (DL) based compressed sensing (CS) implementations can accelerate Magnetic Resonance (MR) Imaging by reconstructing MR images from sub-sampled k-space data. However, the network architectures adopted in previous methods are all designed by hand. Neural Architecture Search (NAS) algorithms can automatically build neural network architectures which have outperformed human-designed ones in several vision tasks. Inspired by this, here we propose a novel and efficient network for the MR image reconstruction problem via NAS instead of manual design. In particular, a specific cell structure, which was integrated into the model-driven MR reconstruction pipeline, was automatically searched from a flexible pre-defined operation search space in a differentiable manner. Experimental results show that our searched network can produce better reconstruction results compared to previous state-of-the-art methods in terms of PSNR and SSIM with 4-6 times fewer computational resources. Extensive experiments were conducted to analyze how hyper-parameters affect reconstruction performance and the searched structures. The generalizability of the searched architecture was also evaluated on different organ MR datasets. Our proposed method achieves a better trade-off between computational cost and reconstruction performance for the MR reconstruction problem, with good generalizability, and offers insights for designing neural networks for other medical imaging applications. The evaluation code will be available at https://github.com/yjump/NAS-for-CSMRI.
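Differentiable search over a pre-defined operation space is commonly realized DARTS-style, blending candidate operations with softmax-weighted architecture parameters; the cell-edge sketch below illustrates that mechanism with assumed operations and is not the authors' search space (their evaluation code is linked above).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Toy differentiable NAS edge: candidate operations are blended by
    softmax-weighted architecture parameters alpha, so both alpha and the
    operation weights can be optimized by gradient descent."""
    def __init__(self, ch):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.Conv2d(ch, ch, 5, padding=2),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
```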
Collapse
Affiliation(s)
- Jiangpeng Yan
- Department of Automation, Tsinghua University, Beijing 100091, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
| | - Shou Chen
- Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100091, China
| | - Yongbing Zhang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
| | - Xiu Li
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China.
| |
Collapse
|
193
|
Küstner T, Fuin N, Hammernik K, Bustin A, Qi H, Hajhosseiny R, Masci PG, Neji R, Rueckert D, Botnar RM, Prieto C. CINENet: deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions. Sci Rep 2020; 10:13710. [PMID: 32792507 PMCID: PMC7426830 DOI: 10.1038/s41598-020-70551-8] [Citation(s) in RCA: 124] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2020] [Accepted: 07/31/2020] [Indexed: 11/29/2022] Open
Abstract
Cardiac CINE magnetic resonance imaging is the gold-standard for the assessment of cardiac function. Imaging accelerations have been shown to enable 3D CINE with left ventricular (LV) coverage in a single breath-hold. However, 3D imaging remains limited to anisotropic resolution and long reconstruction times. Recently, deep learning has shown promising results for computationally efficient reconstructions of highly accelerated 2D CINE imaging. In this work, we propose a novel 4D (3D + time) deep learning-based reconstruction network, termed 4D CINENet, for prospectively undersampled 3D Cartesian CINE imaging. CINENet is based on (3 + 1)D complex-valued spatio-temporal convolutions and multi-coil data processing. We trained and evaluated the proposed CINENet on in-house acquired 3D CINE data of 20 healthy subjects and 15 patients with suspected cardiovascular disease. The proposed CINENet network outperforms iterative reconstructions in visual image quality and contrast (+ 67% improvement). We found good agreement in LV function (bias ± 95% confidence) in terms of end-systolic volume (0 ± 3.3 ml), end-diastolic volume (− 0.4 ± 2.0 ml) and ejection fraction (0.1 ± 3.2%) compared to clinical gold-standard 2D CINE, enabling single breath-hold isotropic 3D CINE with less than 10 s scan time and ~5 s reconstruction time.
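A (3 + 1)D factorized convolution applies a 3D spatial kernel per cardiac phase and then a 1D temporal kernel per voxel; the real-valued sketch below (tensor layout and channel handling assumed) illustrates only the factorization, whereas CINENet additionally uses complex-valued kernels and multi-coil processing.

```python
import torch
import torch.nn as nn

class SpatioTemporal3Plus1D(nn.Module):
    """Toy (3+1)D factorized convolution for dynamic 3D data laid out as
    (B, C, T, D, H, W): 3D spatial conv per time frame, then 1D temporal
    conv per voxel (real-valued, illustrative only)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.temporal = nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                                   # (B, C, T, D, H, W)
        b, c, t, d, h, w = x.shape
        y = x.transpose(1, 2).reshape(b * t, c, d, h, w)    # fold time into batch
        y = self.act(self.spatial(y))                       # 3D spatial convolution
        oc = y.shape[1]
        y = y.reshape(b, t, oc, d, h, w).permute(0, 3, 4, 5, 2, 1)
        y = y.reshape(b * d * h * w, oc, t)                 # fold space into batch
        y = self.act(self.temporal(y))                      # 1D temporal convolution
        y = y.reshape(b, d, h, w, oc, t).permute(0, 4, 5, 1, 2, 3)
        return y                                            # back to (B, C, T, D, H, W)
```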
Collapse
Affiliation(s)
- Thomas Küstner
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK.
| | - Niccolo Fuin
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK
| | | | - Aurelien Bustin
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK
| | - Haikun Qi
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK
| | - Reza Hajhosseiny
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK
| | - Pier Giorgio Masci
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK
| | - Radhouene Neji
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK.,MR Research Collaborations, Siemens Healthcare Limited, Frimley, UK
| | - Daniel Rueckert
- Department of Computing, Imperial College London, London, UK
| | - René M Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK.,Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK.,Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
| |
Collapse
|
194
|
Gu Y, Zeng Z, Chen H, Wei J, Zhang Y, Chen B, Li Y, Qin Y, Xie Q, Jiang Z, Lu Y. MedSRGAN: medical images super-resolution using generative adversarial networks. MULTIMEDIA TOOLS AND APPLICATIONS 2020; 79:21815-21840. [DOI: 10.1007/s11042-020-08980-w] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2019] [Revised: 03/12/2020] [Accepted: 04/22/2020] [Indexed: 01/03/2025]
|
195
|
Skin lesion segmentation via generative adversarial networks with dual discriminators. Med Image Anal 2020; 64:101716. [DOI: 10.1016/j.media.2020.101716] [Citation(s) in RCA: 85] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2019] [Revised: 03/26/2020] [Accepted: 04/24/2020] [Indexed: 11/21/2022]
|
196
|
Lui YW, Chang PD, Zaharchuk G, Barboriak DP, Flanders AE, Wintermark M, Hess CP, Filippi CG. Artificial Intelligence in Neuroradiology: Current Status and Future Directions. AJNR Am J Neuroradiol 2020; 41:E52-E59. [PMID: 32732276 PMCID: PMC7658873 DOI: 10.3174/ajnr.a6681] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Fueled by new techniques, computational tools, and broader availability of imaging data, artificial intelligence has the potential to transform the practice of neuroradiology. The recent exponential increase in publications related to artificial intelligence and the central focus on artificial intelligence at recent professional and scientific radiology meetings underscore its importance. There is growing momentum behind leveraging artificial intelligence techniques to improve workflow, diagnosis, and treatment, and to enhance the value of quantitative imaging techniques. This article explores the reasons why neuroradiologists should care about the investments in new artificial intelligence applications, highlights current activities and the roles neuroradiologists are playing, and offers a few predictions regarding the near future of artificial intelligence in neuroradiology.
Collapse
Affiliation(s)
- Y W Lui
- From the Department of Radiology (Y.W.L.), New York University Langone Medical Center, New York, New York
| | - P D Chang
- Department of Radiology (P.D.C.), University of California Irvine Health Medical Center, Orange, California
| | - G Zaharchuk
- Department of Neuroradiology (G.Z., M.W.), Stanford University, Stanford, California
| | - D P Barboriak
- Department of Radiology (D.P.B.), Duke University Medical Center, Durham, North Carolina
| | - A E Flanders
- Department of Radiology (A.E.F.), Thomas Jefferson University Hospital, Philadelphia, Pennsylvania
| | - M Wintermark
- Department of Neuroradiology (G.Z., M.W.), Stanford University, Stanford, California
| | - C P Hess
- Department of Radiology and Biomedical Imaging (C.P.H.), University of California, San Francisco, San Francisco, California
| | - C G Filippi
- Department of Radiology (C.G.F.), Northwell Health, New York, New York.
| |
Collapse
|
197
|
Lin E, Lin CH, Lane HY. Relevant Applications of Generative Adversarial Networks in Drug Design and Discovery: Molecular De Novo Design, Dimensionality Reduction, and De Novo Peptide and Protein Design. Molecules 2020; 25:3250. [PMID: 32708785 PMCID: PMC7397124 DOI: 10.3390/molecules25143250] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 07/11/2020] [Accepted: 07/14/2020] [Indexed: 01/16/2023] Open
Abstract
A growing body of evidence now suggests that artificial intelligence and machine learning techniques can serve as an indispensable foundation for the process of drug design and discovery. In light of the latest advancements in computing technologies, deep learning algorithms are being created during the development of clinically useful drugs for the treatment of a number of diseases. In this review, we focus on the latest developments in three particular arenas of drug design and discovery research that use deep learning approaches such as generative adversarial network (GAN) frameworks. Firstly, we review drug design and discovery studies that leverage various GAN techniques for one main application, molecular de novo design. In addition, we describe various GAN models used to fulfill the dimensionality-reduction task on single-cell data in the preclinical stage of the drug development pipeline. Furthermore, we depict several studies of de novo peptide and protein design using GAN frameworks. Moreover, we outline the limitations of previous drug design and discovery studies that used GAN models. Finally, we present a discussion of directions and challenges for future research.
Collapse
Affiliation(s)
- Eugene Lin
- Department of Biostatistics, University of Washington, Seattle, WA 98195, USA;
- Department of Electrical & Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
| | - Chieh-Hsin Lin
- Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
- Department of Psychiatry, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 83301, Taiwan
- School of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
| | - Hsien-Yuan Lane
- Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
- Department of Psychiatry, China Medical University Hospital, Taichung 40447, Taiwan
- Brain Disease Research Center, China Medical University Hospital, Taichung 40447, Taiwan
- Department of Psychology, College of Medical and Health Sciences, Asia University, Taichung 41354, Taiwan
| |
Collapse
|
198
|
El-Rewaidy H, Neisius U, Mancio J, Kucukseymen S, Rodriguez J, Paskavitz A, Menze B, Nezafat R. Deep complex convolutional network for fast reconstruction of 3D late gadolinium enhancement cardiac MRI. NMR IN BIOMEDICINE 2020; 33:e4312. [PMID: 32352197 DOI: 10.1002/nbm.4312] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2019] [Revised: 03/19/2020] [Accepted: 03/24/2020] [Indexed: 06/11/2023]
Abstract
Several deep-learning models have been proposed to shorten MRI scan time. Prior deep-learning models that utilize real-valued kernels have limited capability to learn rich representations of complex MRI data. In this work, we utilize a complex-valued convolutional network (ℂNet) for fast reconstruction of highly under-sampled MRI data and evaluate its ability to rapidly reconstruct 3D late gadolinium enhancement (LGE) data. ℂNet preserves the complex nature and optimal combination of real and imaginary components of MRI data throughout the reconstruction process by utilizing complex-valued convolution, novel radial batch normalization, and complex activation function layers in a U-Net architecture. A prospectively under-sampled 3D LGE cardiac MRI dataset of 219 patients (17 003 images) at acceleration rates R = 3 through R = 5 was used to evaluate ℂNet. The dataset was further retrospectively under-sampled to a maximum of R = 8 to simulate higher acceleration rates. We created three reconstructions of the 3D LGE dataset using (1) ℂNet, (2) a compressed-sensing-based low-dimensional-structure self-learning and thresholding algorithm (LOST), and (3) a real-valued U-Net (realNet) with the same number of parameters as ℂNet. LOST-reconstructed data were considered the reference for training and evaluation of all models. The reconstructed images were quantitatively evaluated using mean-squared error (MSE) and the structural similarity index measure (SSIM), and subjectively evaluated by three independent readers. Quantitatively, ℂNet-reconstructed images had significantly improved MSE and SSIM values compared with realNet (MSE, 0.077 versus 0.091; SSIM, 0.876 versus 0.733, respectively; p < 0.01). Subjective quality assessment showed that ℂNet-reconstructed image quality was similar to that of compressed sensing and significantly better than that of realNet. ℂNet reconstruction was also more than 300 times faster than compressed sensing. Retrospective under-sampled images demonstrate the potential of ℂNet at higher acceleration rates. ℂNet enables fast reconstruction of highly accelerated 3D MRI with superior performance to real-valued networks, and achieves faster reconstruction than compressed sensing.
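A complex-valued convolution can be emulated with two real-valued kernels applied to the real and imaginary parts, following (a + ib) * (w_r + i w_i) = (a * w_r − b * w_i) + i(a * w_i + b * w_r); the minimal layer below shows only this building block (names and sizes assumed), not ℂNet's radial batch normalization, complex activations, or full U-Net.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Minimal complex-valued convolution built from two real-valued kernels.
    Input and output are complex tensors of shape (B, C, H, W)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        a, b = x.real, x.imag
        real = self.conv_r(a) - self.conv_i(b)   # a*w_r - b*w_i
        imag = self.conv_i(a) + self.conv_r(b)   # a*w_i + b*w_r
        return torch.complex(real, imag)
```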
Collapse
Affiliation(s)
- Hossam El-Rewaidy
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
- Department of Computer Science, Technical University of Munich, Munich, Germany
| | - Ulf Neisius
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
| | - Jennifer Mancio
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
| | - Selcuk Kucukseymen
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
| | - Jennifer Rodriguez
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
| | - Amanda Paskavitz
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
| | - Bjoern Menze
- Department of Computer Science, Technical University of Munich, Munich, Germany
| | - Reza Nezafat
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
199
|
A multi-scale variational neural network for accelerating motion-compensated whole-heart 3D coronary MR angiography. Magn Reson Imaging 2020; 70:155-167. [DOI: 10.1016/j.mri.2020.04.007] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Revised: 02/03/2020] [Accepted: 04/12/2020] [Indexed: 11/22/2022]
|
200
|
Chen XL, Yan TY, Wang N, von Deneen KM. Rising role of artificial intelligence in image reconstruction for biomedical imaging. Artif Intell Med Imaging 2020; 1:1-5. [DOI: 10.35711/aimi.v1.i1.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 06/09/2020] [Accepted: 06/16/2020] [Indexed: 02/06/2023] Open
Abstract
In this editorial, we review recent progress in the application of artificial intelligence (AI) to image reconstruction for biomedical imaging. Because it abandons the hand-crafted prior information of traditional designs and adopts a completely data-driven mode that learns deeper prior information, AI technology plays an increasingly important role in biomedical image reconstruction. The combination of AI technology with biomedical image reconstruction methods has become a hotspot in the field. With AI, the performance of biomedical image reconstruction has improved in terms of accuracy, resolution, and imaging speed. We specifically focus on how AI technology can be used to improve the performance of biomedical image reconstruction, and propose possible future directions in this field.
Collapse
Affiliation(s)
- Xue-Li Chen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Tian-Yu Yan
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Nan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| | - Karen M von Deneen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
| |
Collapse
|