201
Yan J, Chen S, Zhang Y, Li X. Neural Architecture Search for compressed sensing Magnetic Resonance image reconstruction. Comput Med Imaging Graph 2020; 85:101784. [PMID: 32860972] [DOI: 10.1016/j.compmedimag.2020.101784]
Abstract
Recent work has demonstrated that deep learning (DL) based compressed sensing (CS) can accelerate magnetic resonance (MR) imaging by reconstructing MR images from sub-sampled k-space data. However, the network architectures adopted in previous methods were all designed by hand. Neural Architecture Search (NAS) algorithms can automatically build neural network architectures that have outperformed human-designed ones in several vision tasks. Inspired by this, we propose a novel and efficient network for MR image reconstruction found via NAS rather than manual design. In particular, a specific cell structure, integrated into a model-driven MR reconstruction pipeline, was searched automatically and in a differentiable manner from a flexible pre-defined operation search space. Experimental results show that the searched network produces better reconstruction results than previous state-of-the-art methods in terms of PSNR and SSIM, with 4-6 times fewer computational resources. Extensive experiments were conducted to analyze how hyper-parameters affect reconstruction performance and the searched structures. The generalizability of the searched architecture was also evaluated on MR datasets of different organs. The proposed method reaches a better trade-off between computational cost and reconstruction performance, generalizes well, and offers insights for designing neural networks for other medical imaging applications. The evaluation code is available at https://github.com/yjump/NAS-for-CSMRI.
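The differentiable search described above relaxes the discrete choice among candidate operations into a softmax-weighted mixture, in the style of DARTS. A minimal NumPy sketch of that relaxation (the candidate operations and weights below are illustrative stand-ins, not the paper's actual search space):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Illustrative candidate operations on a 1-D signal (stand-ins for the
# conv/dilated-conv/identity ops a real cell search space would contain).
ops = [
    lambda x: x,                                       # identity
    lambda x: np.convolve(x, np.ones(3) / 3, "same"),  # 3-tap smoothing
    lambda x: np.convolve(x, [1, -1, 0], "same"),      # finite difference
]

def mixed_op(x, alpha):
    """Continuous relaxation: softmax(alpha)-weighted sum of all candidates."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
alpha = np.array([8.0, 0.0, 0.0])  # architecture parameters, favoring identity
y = mixed_op(x, alpha)
```

After the search converges, the operation with the largest architecture weight is kept and the others are pruned, which is how a discrete cell is read off the continuous relaxation.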
Affiliation(s)
- Jiangpeng Yan
- Department of Automation, Tsinghua University, Beijing 100091, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Shou Chen
- Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100091, China
- Yongbing Zhang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Xiu Li
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
202
Skin lesion segmentation via generative adversarial networks with dual discriminators. Med Image Anal 2020; 64:101716. [DOI: 10.1016/j.media.2020.101716]
203
Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) - A Systematic Review. Acad Radiol 2020; 27:1175-1185. [PMID: 32035758] [DOI: 10.1016/j.acra.2019.12.024]
Abstract
RATIONALE AND OBJECTIVES Generative adversarial networks (GANs) are deep learning models aimed at generating realistic-looking synthetic images. These novel models have made a great impact on the computer vision field. Our study aims to review the literature on GAN applications in radiology. MATERIALS AND METHODS This systematic review followed the PRISMA guidelines. Electronic databases were searched for studies describing applications of GANs in radiology. We included studies published up to September 2019. RESULTS Data were extracted from 33 studies published between 2017 and 2019. Eighteen studies focused on CT image generation, ten on MRI, three on PET/MRI and PET/CT, one on ultrasound, and one on X-ray. Applications in radiology included image reconstruction and denoising for dose and scan time reduction (fourteen studies), data augmentation (six studies), transfer between modalities (eight studies), and image segmentation (five studies). All studies reported that generated images improved the performance of the developed algorithms. CONCLUSION GANs are increasingly studied for various radiology applications. They enable the creation of new data, which can be used to improve clinical care, education, and research.
204
Eo T, Shin H, Jun Y, Kim T, Hwang D. Accelerating Cartesian MRI by domain-transform manifold learning in phase-encoding direction. Med Image Anal 2020; 63:101689. [DOI: 10.1016/j.media.2020.101689]
205
Shao W, Du Y. Microwave Imaging by Deep Learning Network: Feasibility and Training Method. IEEE Trans Antennas Propag 2020; 68:5626-5635. [PMID: 34113046] [PMCID: PMC8189033] [DOI: 10.1109/tap.2020.2978952]
Abstract
Microwave image reconstruction based on a deep-learning method is investigated in this paper. The neural network is capable of converting measured microwave signals acquired from a 24×24 antenna array at 4 GHz into a 128×128 image. To reduce the training difficulty, we first developed an autoencoder by which high-resolution images (128×128) were represented with 256×1 vectors; we then developed a second neural network to map the microwave signals to the compressed features (256×1 vectors). Once both are successfully trained, the two networks are combined into a full network to perform reconstruction. This two-stage training method reduces the difficulty of training deep learning networks (DLN) for inverse reconstruction. The developed neural network was validated with simulation examples and experimental data featuring objects of different shapes and sizes, placed at different locations, with dielectric constants ranging from 2 to 6. Comparisons with two conventional approaches, the distorted Born iterative method (DBIM) and the phase confocal method (PCM), are also provided.
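The two-stage strategy can be illustrated with purely linear stand-ins: an SVD-based "autoencoder" that compresses images to short codes (stage 1), and a least-squares "mapper" from signals to codes (stage 2); composing the two yields the full reconstruction network. Everything below (shapes, data, the linear models) is synthetic and illustrative, not the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "images" (64 pixels) lying on an 8-dim subspace,
# and "measured signals" (32 channels) related linearly to the images.
n, d_img, d_sig, k = 200, 64, 32, 8
basis = rng.standard_normal((d_img, k))
images = rng.standard_normal((n, k)) @ basis.T
A = rng.standard_normal((d_sig, d_img))      # forward model: image -> signal
signals = images @ A.T

# Stage 1: linear "autoencoder" via SVD -- the encoder compresses each image
# to a k-dim code, and the decoder maps codes back to images.
U, s, Vt = np.linalg.svd(images, full_matrices=False)
decoder = Vt[:k]                             # (k, d_img)
encode = lambda x: x @ decoder.T             # image  -> code
decode = lambda z: z @ decoder               # code   -> image

# Stage 2: train a "mapper" from signals to the compressed codes
# (least squares plays the role of the second neural network).
mapper, *_ = np.linalg.lstsq(signals, encode(images), rcond=None)

# Combined full network: signal -> code -> image.
recon = decode(signals @ mapper)
err = np.linalg.norm(recon - images) / np.linalg.norm(images)
```

Because the toy data are exactly low-rank and the maps are linear, the composed network reconstructs the training images almost perfectly; the real networks learn nonlinear versions of the same two maps.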
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, MD 21287 USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, MD 21287 USA
206
A multi-scale variational neural network for accelerating motion-compensated whole-heart 3D coronary MR angiography. Magn Reson Imaging 2020; 70:155-167. [DOI: 10.1016/j.mri.2020.04.007]
207
Feasibility of new fat suppression for breast MRI using pix2pix. Jpn J Radiol 2020; 38:1075-1081. [PMID: 32613357] [DOI: 10.1007/s11604-020-01012-5]
Abstract
PURPOSE To generate and evaluate fat-saturated T1-weighted (FST1W) image synthesis of breast magnetic resonance imaging (MRI) using pix2pix. MATERIALS AND METHODS We collected pairs of noncontrast-enhanced T1-weighted and FST1W breast MRI images as training data (2112 pairs from 15 patients), validation data (428 pairs from three patients), and test data (90 pairs from 30 patients). From the original images, 90 synthetic images were generated with 50, 100, and 200 epochs using pix2pix. Two breast radiologists evaluated the synthetic images (from 1 = excellent to 5 = very poor) for quality of fat suppression, anatomic structures, artifacts, etc. The average score was analyzed for each epoch and breast density. RESULTS The synthetic images were scored from 2.95 to 3.60; artifacts were reduced most at 100 epochs. The average overall quality scores for fat suppression were 3.63 at 50 epochs, 3.24 at 100 epochs, and 3.12 at 200 epochs. In the analysis by breast density, each score was significantly better for nondense breasts than for dense breasts; the average score was 2.88-3.18 for nondense breasts and 3.03-3.42 for dense breasts (P = 0.000-0.042). CONCLUSION Pix2pix has the potential to generate synthetic FST1W images for breast MRI.
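pix2pix trains its generator on a conditional-GAN objective plus an L1 fidelity term weighted by λ (λ = 100 in the original pix2pix formulation). A toy NumPy sketch of that composite generator loss; the discriminator logit here is a placeholder scalar, not a trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pix2pix_generator_loss(d_logit_fake, fake, target, lam=100.0):
    """Generator loss = BCE(D(fake) vs "real") + lam * L1(fake, target).

    d_logit_fake: discriminator logits for the generated image (placeholder
    values here; in pix2pix they come from a PatchGAN discriminator).
    """
    adv = -np.mean(np.log(sigmoid(d_logit_fake) + 1e-12))  # fool the critic
    l1 = np.mean(np.abs(fake - target))                    # pixel fidelity
    return adv + lam * l1

target = np.zeros((4, 4))          # stand-in for a true fat-suppressed image
fake = target + 0.01               # nearly perfect generated output
loss = pix2pix_generator_loss(np.array([3.0]), fake, target)
```

The large λ is what pushes the generator toward pixel-accurate fat suppression, while the adversarial term sharpens texture that pure L1 training would blur.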
208
Sun L, Wu Y, Shu B, Ding X, Cai C, Huang Y, Paisley J. A dual-domain deep lattice network for rapid MRI reconstruction. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.01.063]
209
Theek B, Magnuska Z, Gremse F, Hahn H, Schulz V, Kiessling F. Automation of data analysis in molecular cancer imaging and its potential impact on future clinical practice. Methods 2020; 188:30-36. [PMID: 32615232] [DOI: 10.1016/j.ymeth.2020.06.019]
Abstract
Digitalization, especially the use of machine learning and computational intelligence, is expected to dramatically shape medical procedures in the near future. In the field of cancer diagnostics, radiomics, the extraction of multiple quantitative image features and their clustered analysis, is gaining increasing attention as a way to obtain more detailed, reproducible, and meaningful information about the disease entity, its prognosis, and the ideal therapeutic option. In this context, automation of diagnostic procedures can improve the entire pipeline, which comprises patient registration, planning and performing an imaging examination at the scanner, image reconstruction, image analysis, and feeding the diagnostic information from various sources into decision support systems. With a focus on cancer diagnostics, this review article reports and discusses how computer assistance can be integrated into diagnostic procedures and which benefits and challenges arise from it. Besides a strong focus on classical imaging modalities such as X-ray, CT, MRI, ultrasound, PET, SPECT, and hybrid imaging devices, it outlines how imaging data can be combined with data derived from patient anamnesis, clinical chemistry, pathology, and different omics. The article also discusses the IT infrastructure required to realize this integration in clinical routine. Although many challenges remain before automated and integrated data analysis is comprehensively implemented in molecular cancer imaging, the authors conclude that we are entering a new era of medical diagnostics and precision medicine.
Affiliation(s)
- Benjamin Theek
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Zuzanna Magnuska
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany
- Felix Gremse
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Institute of Medical Informatics, RWTH Aachen University, Pauwelsstrasse 30, 52074 Aachen, Germany
- Horst Hahn
- Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Volkmar Schulz
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany; Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany
- Fabian Kiessling
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
210
Chen XL, Yan TY, Wang N, von Deneen KM. Rising role of artificial intelligence in image reconstruction for biomedical imaging. Artif Intell Med Imaging 2020; 1:1-5. [DOI: 10.35711/aimi.v1.i1.1]
Abstract
In this editorial, we review recent progress in the application of artificial intelligence (AI) to image reconstruction for biomedical imaging. Because it abandons the hand-crafted priors of traditional designs and adopts a completely data-driven mode that learns deeper prior information from data, AI technology plays an increasingly important role in biomedical image reconstruction. The combination of AI technology with biomedical image reconstruction methods has become a hotspot in the field. Aided by AI, the performance of biomedical image reconstruction has improved in terms of accuracy, resolution, imaging speed, etc. We specifically focus on how AI technology can improve the performance of biomedical image reconstruction, and propose possible future directions for the field.
Affiliation(s)
- Xue-Li Chen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
- Tian-Yu Yan
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
- Nan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
- Karen M von Deneen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
211
Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Mach Learn Sci Technol 2020. [DOI: 10.1088/2632-2153/ab869f]
212
Zhu R, Yu H, Tan Z, Lu R, Han S, Huang Z, Wang J. Ghost imaging based on Y-net: a dynamic coding and decoding approach. Opt Express 2020; 28:17556-17569. [PMID: 32679962] [DOI: 10.1364/oe.395000]
Abstract
Ghost imaging incorporating deep learning technology has recently attracted much attention in the optical imaging field. However, deterministic illumination and multiple exposures are still essential in most scenarios. Here we propose a ghost imaging scheme based on a novel dynamic decoding deep learning framework (Y-net), which works well under both deterministic and indeterministic illumination. Benefiting from the end-to-end character of our network, the image of a sample can be obtained directly from the data collected by the detector. The sample is illuminated only once in the experiment, and the spatial distribution of the speckle encoding the sample can be completely different from that of the simulated speckle used in training, as long as the statistical characteristics of the speckle remain unchanged. This approach is particularly important for high-resolution x-ray ghost imaging applications because of its potential for improving image quality and reducing radiation damage.
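For context, the classical (non-learned) reconstruction that such networks aim to replace correlates the bucket-detector values with the speckle illumination patterns, G = ⟨bS⟩ − ⟨b⟩⟨S⟩, and needs many exposures. A toy NumPy version with random speckle (purely illustrative; the paper's Y-net replaces this correlation step and needs only one exposure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy object (a bright square on a dark background), flattened to 1-D.
obj = np.zeros((16, 16))
obj[5:11, 5:11] = 1.0
obj = obj.ravel()

# Many random speckle illumination patterns and their bucket signals:
# a single-pixel detector records the total transmitted light per pattern.
m = 20000
speckles = rng.random((m, obj.size))
bucket = speckles @ obj

# Correlation reconstruction: G = <b * S> - <b><S>, then normalize.
g = (bucket[:, None] * speckles).mean(0) - bucket.mean() * speckles.mean(0)
g = (g - g.min()) / (g.max() - g.min())
```

The contrast of `g` grows only as the number of exposures increases, which is exactly the cost the learned single-shot scheme is designed to avoid.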
213
Abstract
Transform learning is a new representation learning framework in which we learn an operator/transform that analyses the data to generate the coefficients/representation. We propose a variant called graph transform learning, in which we explicitly account for correlation in the dataset via the graph Laplacian. We give two variants: in the first, the graph is computed from the data and fixed during operation; in the second, the graph is learnt iteratively from the data during operation. The first technique is applied to clustering, and the second to solving inverse problems.
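The fixed-graph variant presupposes a Laplacian built from the data before the transform is learned. A common construction (an illustrative assumption here, not necessarily the authors' exact recipe) uses Gaussian edge weights between samples and L = D − W:

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Build L = D - W from the rows of X, with Gaussian edge weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.exp(-d2 / (2 * sigma ** 2))                   # similarity graph
    np.fill_diagonal(W, 0.0)                             # no self-loops
    return np.diag(W.sum(1)) - W

# Two nearby samples and one distant one.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
L = graph_laplacian(X)
```

The quadratic form xᵀLx = ½Σ wᵢⱼ(xᵢ − xⱼ)² is the smoothness regularizer such a graph contributes to the transform-learning objective: it is zero for constant signals and penalizes signals that differ across strongly connected samples.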
214
Shao W, Pomper MG, Du Y. A Learned Reconstruction Network for SPECT Imaging. IEEE Trans Radiat Plasma Med Sci 2020; 5:26-34. [PMID: 33403244] [DOI: 10.1109/trpms.2020.2994041]
Abstract
A neural network designed specifically for SPECT image reconstruction was developed. The network reconstructs activity images directly from SPECT projection data. Training was performed on a corpus of digital phantoms generated with custom software and the corresponding projection data obtained from simulation. To reconstruct an image, input projection data are first fed to two fully connected (FC) layers that perform a basic reconstruction. The output of the FC layers and an attenuation map are then delivered to five convolutional layers for signal-decay compensation and image optimization. To validate the system, data not used in training, simulated data from the Zubal human brain phantom, and clinical patient data were used to test reconstruction performance. Reconstructed images from the developed network proved closer to the truth, with higher resolution and quantitative accuracy, than those from conventional OS-EM reconstruction. To better understand the operation of the network, intermediate results from the hidden layers were investigated at each step of the processing. The network was also retrained with noisy projection data and compared with the version developed with noise-free data; the retrained network proved even more robust, having learned to filter noise. Finally, we showed that the network still provided sharp images when using reduced-view projection data (after retraining with reduced-view data).
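The conventional baseline in the comparison, OS-EM, is built from multiplicative ML-EM updates, x ← x · Aᵀ(y / Ax) / Aᵀ1, applied over subsets of the projections. A minimal NumPy version of plain ML-EM on a toy system matrix (no attenuation modeling or subsets):

```python
import numpy as np

rng = np.random.default_rng(2)

def mlem(A, y, n_iter=200):
    """ML-EM: x <- x * A^T (y / (A x)) / A^T 1, starting from a flat image."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy "SPECT": 40 projection bins viewing a 16-pixel activity image.
A = rng.random((40, 16))
x_true = rng.random(16) + 0.1
y = A @ x_true                                 # noise-free projections
x_hat = mlem(A, y)
```

The multiplicative form keeps the estimate non-negative at every iteration, one reason EM-type algorithms remain the clinical standard that learned reconstructions are benchmarked against.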
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287 USA
- Martin G Pomper
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287 USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287 USA
215
Ongie G, Jalal A, Metzler CA, Baraniuk RG, Dimakis AG, Willett R. Deep Learning Techniques for Inverse Problems in Imaging. IEEE J Sel Areas Inf Theory 2020. [DOI: 10.1109/jsait.2020.2991563]
216
Exploring linearity of deep neural network trained QSM: QSMnet+. Neuroimage 2020; 211:116619. [DOI: 10.1016/j.neuroimage.2020.116619]
217
Wang S, Cheng H, Ying L, Xiao T, Ke Z, Zheng H, Liang D. DeepcomplexMRI: Exploiting deep residual network for fast parallel MR imaging with complex convolution. Magn Reson Imaging 2020; 68:136-147. [DOI: 10.1016/j.mri.2020.02.002]
218
219
Visual and Quantitative Evaluation of Amyloid Brain PET Image Synthesis with Generative Adversarial Network. Appl Sci (Basel) 2020. [DOI: 10.3390/app10072628]
Abstract
Conventional data augmentation (DA) techniques, which have been used to improve the performance of predictive models trained on unbalanced data sets, entail an effort to define the proper repeating operations (e.g., rotation and mirroring) according to the target class distribution. Although DA using a generative adversarial network (GAN) has the potential to overcome the disadvantages of conventional DA, this technique has not often been applied to medical images, and in particular, quantitative evaluation has rarely been used to determine whether the generated images have enough realism and diversity to be used for DA. In this study, we synthesized 18F-Florbetaben (FBB) images using a conditional GAN. The generated images were evaluated using various measures, and we present the state of the images and the quantitative similarity values at which generated images can be expected to successfully augment a dataset. The method includes (1) a conditional WGAN-GP to learn the axial image distribution extracted from pre-processed 3D FBB images, (2) a pre-trained DenseNet121 and model-agnostic metrics for visual and quantitative measurement of the generated image distribution, and (3) a machine learning model for observing improvement in generalization performance from the generated dataset. The Visual Turing test showed similarity in the descriptions of typical patterns of amyloid deposition for each of the generated images. However, differences in similarity and classification performance per axial level were observed that did not agree with the visual evaluation. Experimental results demonstrated that quantitative measurements detect the similarity between two distributions, and reveal mode collapse, better than the Visual Turing test and t-SNE.
220
221
Feigin M, Freedman D, Anthony BW. A Deep Learning Framework for Single-Sided Sound Speed Inversion in Medical Ultrasound. IEEE Trans Biomed Eng 2020; 67:1142-1151. [DOI: 10.1109/tbme.2019.2931195]
222
Vu T, Li M, Humayun H, Zhou Y, Yao J. A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer. Exp Biol Med (Maywood) 2020; 245:597-605. [PMID: 32208974] [DOI: 10.1177/1535370220914285]
Abstract
With balanced spatial resolution, penetration depth, and imaging speed, photoacoustic computed tomography (PACT) is promising for clinical translation, such as breast cancer screening, functional brain imaging, and surgical guidance. Typically using a linear ultrasound (US) transducer array, PACT has great flexibility for hand-held applications. However, the linear US transducer array has a limited detection angle range and frequency bandwidth, resulting in limited-view and limited-bandwidth artifacts in the reconstructed PACT images. These artifacts significantly reduce imaging quality, and existing solutions often pay the price of system complexity, cost, and/or imaging speed. Here, we propose a deep-learning-based method that explores the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) to reduce the limited-view and limited-bandwidth artifacts in PACT. Compared with existing reconstruction and convolutional neural network approaches, our model shows improvement in imaging quality and resolution. Our results on simulation, phantom, and in vivo data collectively demonstrate the feasibility of applying WGAN-GP to improve PACT image quality without any modification to the current imaging set-up. Impact statement: This study offers a promising solution for removing limited-view and limited-bandwidth artifacts in PACT with a linear-array transducer and conventional image reconstruction, which have long hindered its clinical translation. Our solution shows unprecedented artifact-removal ability for in vivo images, which may enable important applications such as imaging tumor angiogenesis and hypoxia, and the study reports, for the first time, the use of an advanced deep-learning model based on a stabilized generative adversarial network. Our results demonstrate its superiority over other state-of-the-art deep-learning methods.
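WGAN-GP stabilizes adversarial training by penalizing the critic's gradient norm at random interpolates between real and generated samples, λ·(‖∇D(x̂)‖₂ − 1)². With a linear critic D(x) = w·x the gradient is w everywhere, which makes the penalty easy to verify in NumPy (the linear critic is an illustrative simplification, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)

def gradient_penalty(w, real, fake, lam=10.0):
    """WGAN-GP term for a linear critic D(x) = w @ x.

    For a linear critic the gradient at any interpolate x_hat is w itself,
    so the penalty reduces to lam * (||w|| - 1)^2 per sample.
    """
    eps = rng.random((real.shape[0], 1))
    x_hat = eps * real + (1 - eps) * fake   # random interpolates
    grad = np.broadcast_to(w, x_hat.shape)  # d(w @ x)/dx = w everywhere
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

real = rng.standard_normal((8, 4))
fake = rng.standard_normal((8, 4))
w_unit = np.array([1.0, 0.0, 0.0, 0.0])  # ||w|| = 1 -> zero penalty
w_big = 3.0 * w_unit                     # ||w|| = 3 -> penalty 10*(3-1)^2
```

Keeping the critic's gradient norm near 1 enforces the 1-Lipschitz constraint of the Wasserstein objective without the weight clipping of the original WGAN, which is why the penalized variant trains so much more stably.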
Affiliation(s)
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Mucong Li
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Hannah Humayun
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Yuan Zhou
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; IBM Research-China, ZPark, Beijing 100085, China
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
223
Compressed-Sensing Magnetic Resonance Image Reconstruction Using an Iterative Convolutional Neural Network Approach. Appl Sci (Basel) 2020. [DOI: 10.3390/app10061902]
Abstract
Convolutional neural networks (CNNs) demonstrate excellent performance when employed to reconstruct images obtained by compressed-sensing magnetic resonance imaging (CS-MRI). Our study aimed to enhance image quality by developing a novel iterative reconstruction approach that combines image-based CNNs with a k-space correction that preserves the original k-space data. In the proposed method, the CNNs represent a priori information about the image space. First, the CNNs are trained to map zero-filled images onto the corresponding fully sampled images, thereby recovering the unfilled part of the k-space data. Subsequently, a k-space correction, in which the acquired k-space samples are re-imposed on the network output, is applied to preserve the original measurements. These processes are applied iteratively. The performance of the proposed method was validated using a T2-weighted brain-image dataset, and experiments were conducted with several sampling masks. Finally, the proposed method was compared with other noniterative approaches to demonstrate its effectiveness. The aliasing artifacts in images reconstructed with the proposed approach were reduced compared with those from other state-of-the-art techniques, and quantitative results in terms of the peak signal-to-noise ratio and structural similarity index confirmed its effectiveness. The proposed CS-MRI method enhanced MR image quality in high-throughput examinations.
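The k-space correction step has a compact form: transform the current CNN estimate to k-space, overwrite it with the measured samples wherever data were actually acquired, and transform back. A NumPy sketch with a random sampling mask (illustrative shapes and mask, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)

def kspace_correction(estimate, measured_k, mask):
    """Re-impose the acquired k-space samples onto a CNN image estimate."""
    k_est = np.fft.fft2(estimate)
    k_corr = np.where(mask, measured_k, k_est)  # keep measured data as-is
    return np.fft.ifft2(k_corr)

# Fully sampled "ground truth" and an undersampling mask (about 30%).
img = rng.random((32, 32))
full_k = np.fft.fft2(img)
mask = rng.random((32, 32)) < 0.3
measured_k = np.where(mask, full_k, 0.0)

zero_filled = np.fft.ifft2(measured_k).real     # naive starting image
corrected = kspace_correction(zero_filled, measured_k, mask)
```

After this step the reconstruction is exactly consistent with the acquired data, so each CNN pass can only alter the unmeasured k-space locations, which is what makes the iteration safe to repeat.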
224
Benchmarking MRI Reconstruction Neural Networks on Large Public Datasets. Appl Sci (Basel) 2020. [DOI: 10.3390/app10051816]
Abstract
Deep learning is starting to offer promising results for reconstruction in Magnetic Resonance Imaging (MRI). Many networks are being developed, but comparisons remain difficult because studies use different frameworks and datasets, and the networks are not always properly re-trained. The recent release of a public dataset of raw k-space data, fastMRI, encouraged us to write a consistent benchmark of several deep neural networks for MR image reconstruction. This paper shows the results obtained for this benchmark, allowing the networks to be compared, and links to open-source Keras implementations of all of them. The main finding of this benchmark is that it is beneficial to perform more iterations between the image and measurement spaces than to use a deeper per-space network.
225
Sun L, Wu Y, Fan Z, Ding X, Huang Y, Paisley J. A deep error correction network for compressed sensing MRI. BMC Biomed Eng 2020; 2:4. [PMID: 32903379] [PMCID: PMC7422575] [DOI: 10.1186/s42490-020-0037-5]
Abstract
BACKGROUND CS-MRI (compressed sensing for magnetic resonance imaging) exploits image sparsity properties to reconstruct MRI from very few Fourier k-space measurements. Due to imperfect modeling of the inverse imaging problem, state-of-the-art CS-MRI methods tend to leave structural reconstruction errors; compensating for such errors could further improve reconstruction quality. RESULTS In this work, we propose a DECN (deep error correction network) for CS-MRI. The DECN model consists of three modules: a guide (or template) module, an error correction module, and a data fidelity module. Existing CS-MRI algorithms can serve as the template module for guiding the reconstruction. Using this template as a guide, the error correction module learns a CNN (convolutional neural network) that maps the k-space data in a way that adjusts for the reconstruction error of the template image. Our experimental results show the proposed DECN framework can considerably improve upon existing inversion algorithms by supplementing them with an error-correcting CNN. CONCLUSIONS In the proposed framework, any off-the-shelf CS-MRI algorithm can be used to generate the template, and a deep neural network then compensates for the reconstruction errors. The promising experimental results validate the effectiveness and utility of the proposed framework.
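The guide-and-correct idea can be sketched with linear stand-ins: a crude adjoint "template" reconstruction plus a correction model, fitted here by least squares in place of the CNN, that predicts the template's residual error. All data and models below are synthetic assumptions for illustration, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic problem: y = A x, with a crude "template" recon x_t = A^T y.
n, d, m = 300, 20, 40
A = rng.standard_normal((m, d)) / np.sqrt(m)
X = rng.standard_normal((n, d))     # ground-truth images (one per row)
Y = X @ A.T                         # measurements
X_template = Y @ A                  # template module (adjoint reconstruction)

# Error-correction module: fit a linear map (CNN stand-in) from the
# template to its residual error against ground truth, then add it back.
resid = X - X_template
C, *_ = np.linalg.lstsq(X_template, resid, rcond=None)
X_corrected = X_template + X_template @ C

err_before = np.linalg.norm(X - X_template)
err_after = np.linalg.norm(X - X_corrected)
```

The point of the decomposition is that the correction model only has to learn the template's systematic error, a much easier target than the full inverse map; here the toy error happens to be linear, so the fitted correction removes it almost entirely.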
Affiliation(s)
- Liyan Sun: Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, China
- Yawen Wu: Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, China
- Zhiwen Fan: Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, China
- Xinghao Ding: Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, China
- Yue Huang: Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, China
- John Paisley: Department of Electrical Engineering, Columbia University, New York, USA

226
Bustin A, Fuin N, Botnar RM, Prieto C. From Compressed-Sensing to Artificial Intelligence-Based Cardiac MRI Reconstruction. Front Cardiovasc Med 2020; 7:17. [PMID: 32158767 PMCID: PMC7051921 DOI: 10.3389/fcvm.2020.00017] [Citation(s) in RCA: 77] [Impact Index Per Article: 15.4] [Received: 09/30/2019] [Accepted: 01/31/2020] [Indexed: 12/28/2022]
Abstract
Cardiac magnetic resonance (CMR) imaging is an important tool for the non-invasive assessment of cardiovascular disease. However, CMR suffers from long acquisition times due to the need to obtain images with high temporal and spatial resolution, different contrasts, and/or whole-heart coverage. In addition, both cardiac and respiratory-induced motion of the heart during the acquisition must be accounted for, further increasing the scan time. Several undersampling reconstruction techniques have been proposed over the last decades to speed up CMR acquisition. These techniques rely on acquiring less data than needed and estimating the non-acquired data by exploiting some form of prior information. Parallel imaging and compressed sensing undersampling reconstruction techniques have revolutionized the field, enabling 2- to 3-fold scan time accelerations to become standard in clinical practice. Recent scientific advances in CMR reconstruction hinge on the thriving field of artificial intelligence. Machine learning reconstruction approaches have recently been proposed to learn the non-linear optimization process employed in CMR reconstruction. Unlike analytical methods, for which the reconstruction problem is explicitly defined in the optimization process, machine learning techniques use large data sets to learn the key reconstruction parameters and priors. In particular, deep learning techniques promise to use deep neural networks (DNN) to learn the reconstruction process from existing datasets in advance, providing a fast and efficient reconstruction that can be applied to all newly acquired data. However, before machine learning and DNN can realize their full potential and enter widespread clinical routine for CMR image reconstruction, several technical hurdles need to be addressed. In this article, we provide an overview of the recent developments in the area of artificial intelligence for CMR image reconstruction.
The underlying assumptions of established techniques such as compressed sensing and low-rank reconstruction are briefly summarized, while a greater focus is given to recent advances in dictionary learning and deep learning based CMR reconstruction. In particular, approaches that exploit neural networks as implicit or explicit priors are discussed for 2D dynamic cardiac imaging and 3D whole-heart CMR imaging. Current limitations, challenges, and potential future directions of these techniques are also discussed.
Affiliation(s)
- Aurélien Bustin: Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Niccolo Fuin: Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- René M. Botnar: Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
- Claudia Prieto: Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile

227
Malavé MO, Baron CA, Koundinyan SP, Sandino CM, Ong F, Cheng JY, Nishimura DG. Reconstruction of undersampled 3D non-Cartesian image-based navigators for coronary MRA using an unrolled deep learning model. Magn Reson Med 2020; 84:800-812. [PMID: 32011021 DOI: 10.1002/mrm.28177] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Received: 08/09/2019] [Revised: 12/04/2019] [Accepted: 12/27/2019] [Indexed: 12/28/2022]
Abstract
PURPOSE To rapidly reconstruct undersampled 3D non-Cartesian image-based navigators (iNAVs) using an unrolled deep learning (DL) model, enabling nonrigid motion correction in coronary magnetic resonance angiography (CMRA). METHODS An end-to-end unrolled network is trained to reconstruct beat-to-beat 3D iNAVs acquired during a CMRA sequence. The unrolled model incorporates a nonuniform FFT operator in TensorFlow to perform the data-consistency operation, and the regularization term is learned by a convolutional neural network (CNN) based on the proximal gradient descent algorithm. The training set includes 6,000 3D iNAVs acquired from 7 different subjects and 11 scans using a variable-density (VD) cones trajectory. For testing, 3D iNAVs from 4 additional subjects are reconstructed using the unrolled model. To validate reconstruction accuracy, global and localized motion estimates from DL model-based 3D iNAVs are compared with those extracted from 3D iNAVs reconstructed with l1-ESPIRiT. Then, the high-resolution coronary MRA images motion corrected with autofocusing using the l1-ESPIRiT and DL model-based 3D iNAVs are assessed for differences. RESULTS 3D iNAVs reconstructed using the DL model-based approach and conventional l1-ESPIRiT generate similar global and localized motion estimates and provide equivalent coronary image quality. Reconstruction with the unrolled network completes in a fraction of the time required by CPU and GPU implementations of l1-ESPIRiT (20× and 3× speed increases, respectively). CONCLUSIONS We have developed a deep neural network architecture to reconstruct undersampled 3D non-Cartesian VD cones iNAVs. Our approach decreases the reconstruction time of 3D iNAVs while preserving the nonrigid motion information they provide for correction.
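The unrolled structure in this abstract alternates a data-consistency gradient step with a learned regularizer. A minimal sketch, assuming a Cartesian masked FFT in place of the paper's non-uniform FFT for cones trajectories, and a hypothetical `prox_cnn` callable standing in for the trained CNN:

```python
import numpy as np

def A(x, mask):
    """Forward model stand-in: Fourier transform, then undersample."""
    return np.fft.fft2(x) * mask

def AH(y, mask):
    """Adjoint: zero-fill the missing samples, then inverse transform."""
    return np.fft.ifft2(y * mask)

def unrolled_pgd(y, mask, prox_cnn, n_iter=5, step=1.0):
    """Sketch of an unrolled proximal-gradient reconstruction. Each
    iteration takes a gradient step on the data-consistency term
    ||A x - y||^2 and applies a learned proximal operator; `prox_cnn`
    and the Cartesian operators are illustrative assumptions."""
    x = AH(y, mask)                       # zero-filled initial estimate
    for _ in range(n_iter):
        grad = AH(A(x, mask) - y, mask)   # gradient of data-consistency term
        x = prox_cnn(x - step * grad)     # learned regularization step
    return x
```

With full sampling and an identity proximal operator, each gradient is zero and the zero-filled estimate is already exact, which is a quick sanity check of the operators.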
Affiliation(s)
- Mario O Malavé: Magnetic Resonance Systems Research Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA
- Corey A Baron: Department of Medical Biophysics, Western University, London, ON, Canada
- Srivathsan P Koundinyan: Magnetic Resonance Systems Research Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA
- Christopher M Sandino: Magnetic Resonance Systems Research Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA
- Frank Ong: Magnetic Resonance Systems Research Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA
- Joseph Y Cheng: Magnetic Resonance Systems Research Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA; Department of Radiology, Stanford University, Stanford, CA
- Dwight G Nishimura: Magnetic Resonance Systems Research Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA

228
Do W, Seo S, Han Y, Ye JC, Choi SH, Park S. Reconstruction of multicontrast MR images through deep learning. Med Phys 2020; 47:983-997. [DOI: 10.1002/mp.14006] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Received: 04/22/2019] [Revised: 12/23/2019] [Accepted: 12/23/2019] [Indexed: 12/31/2022]
Affiliation(s)
- Won-Joon Do: Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
- Sunghun Seo: Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
- Yoseob Han: Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
- Jong Chul Ye: Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
- Seung Hong Choi: Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Sung-Hong Park: Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea

229
Johnson PM, Recht MP, Knoll F. Improving the Speed of MRI with Artificial Intelligence. Semin Musculoskelet Radiol 2020; 24:12-20. [PMID: 31991448 DOI: 10.1055/s-0039-3400265] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Indexed: 01/21/2023]
Abstract
Magnetic resonance imaging (MRI) is a leading imaging modality for the assessment of musculoskeletal (MSK) injuries and disorders. A significant drawback, however, is the lengthy data acquisition, which has motivated the development of methods to improve the speed of MRI. The field of artificial intelligence (AI) for accelerated MRI, although in its infancy, has seen tremendous progress over the past 3 years. Promising approaches include deep learning methods for reconstructing undersampled MRI data and for generating high-resolution images from low-resolution data. Preliminary studies show the promise of the variational network, a state-of-the-art technique, to generalize to many different anatomical regions and to achieve diagnostic accuracy comparable to conventional methods. This article discusses the state-of-the-art methods and considerations for clinical applicability, followed by future perspectives for the field.
Affiliation(s)
- Patricia M Johnson: Center for Biomedical Imaging, NYU Langone Health, Radiology Department, New York, New York
- Michael P Recht: Center for Biomedical Imaging, NYU Langone Health, Radiology Department, New York, New York
- Florian Knoll: Center for Biomedical Imaging, NYU Langone Health, Radiology Department, New York, New York

230
Wang H, Ying L, Liang D, Cheng J, Jia S, Qiu Z, Shi C, Zou L, Su S, Chang Y, Zhu Y. Accelerating MR Imaging via Deep Chambolle-Pock Network. Annu Int Conf IEEE Eng Med Biol Soc 2019:6818-6821. [PMID: 31947406 DOI: 10.1109/embc.2019.8857141] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Indexed: 11/09/2022]
Abstract
Compressed sensing (CS) has been introduced to accelerate data acquisition in MR imaging. However, CS-MRI methods suffer from detail loss at large acceleration factors and from complicated parameter selection. To address these limitations, a model-driven MR reconstruction method is proposed that trains a deep network, named CP-net, derived from the Chambolle-Pock algorithm, to reconstruct in vivo MR images of human brains from highly undersampled complex k-space data acquired on different types of MR scanners. The proposed deep network learns the proximal operators and parameters of the Chambolle-Pock algorithm. All of the experiments show that the proposed CP-net achieves more accurate MR reconstruction results, outperforming state-of-the-art methods across various quantitative metrics.
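For reference, the classical Chambolle-Pock primal-dual iteration that CP-net unrolls looks like the following sketch; in CP-net the proximal operators and step sizes become learned modules rather than the hand-chosen callables used here.

```python
import numpy as np

def chambolle_pock(K, KT, prox_fs, prox_g, x0, n_iter=100, sigma=0.25, tau=0.25):
    """Textbook Chambolle-Pock primal-dual iteration for min_x F(Kx) + G(x).
    `prox_fs` is the proximal operator of the convex conjugate F*, `prox_g`
    that of G; CP-net replaces these (and sigma, tau) with learned modules."""
    x = x0.copy()
    x_bar = x0.copy()
    y = np.zeros_like(K(x0))
    for _ in range(n_iter):
        y = prox_fs(y + sigma * K(x_bar), sigma)  # dual ascent step
        x_new = prox_g(x - tau * KT(y), tau)      # primal descent step
        x_bar = 2 * x_new - x                     # over-relaxation
        x = x_new
    return x
```

As a toy check, with K the identity, F(z) = ||z - b||^2 / 2, and G = 0, the iteration converges to b.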
231
Dar SUH, Özbey M, Çatlı AB, Çukur T. A Transfer-Learning Approach for Accelerated MRI Using Deep Neural Networks. Magn Reson Med 2020; 84:663-685. [DOI: 10.1002/mrm.28148] [Citation(s) in RCA: 67] [Impact Index Per Article: 13.4] [Received: 09/27/2019] [Revised: 11/12/2019] [Accepted: 12/06/2019] [Indexed: 01/31/2023]
Affiliation(s)
- Salman Ul Hassan Dar: Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Muzaffer Özbey: Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Ahmet Burak Çatlı: Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Tolga Çukur: Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey

232
Xie H, Shan H, Cong W, Liu C, Zhang X, Liu S, Ning R, Wang GE. Deep Efficient End-to-end Reconstruction (DEER) Network for Few-view Breast CT Image Reconstruction. IEEE Access 2020; 8:196633-196646. [PMID: 33251081 PMCID: PMC7695229 DOI: 10.1109/access.2020.3033795] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Indexed: 05/20/2023]
Abstract
Breast CT provides image volumes with isotropic resolution and high contrast, enabling detection of small calcifications (down to a few hundred microns in size) and subtle density differences. Since the breast is sensitive to x-ray radiation, dose reduction is an important topic for breast CT, and few-view scanning is a main approach for this purpose. In this article, we propose a Deep Efficient End-to-end Reconstruction (DEER) network for few-view breast CT image reconstruction. The major merits of our network include high dose efficiency, excellent image quality, and low model complexity. By design, the proposed network can learn the reconstruction process with as few as O(N) parameters, where N is the side length of the image to be reconstructed, representing an orders-of-magnitude improvement relative to state-of-the-art deep-learning-based reconstruction methods that map raw data directly to tomographic images. Validated on a cone-beam breast CT dataset prepared by Koning Corporation on a commercial scanner, our method also demonstrates competitive performance against state-of-the-art reconstruction networks in terms of image quality. The source code of this paper is available at: https://github.com/HuidongXie/DEER.
Affiliation(s)
- Huidong Xie: Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Hongming Shan: Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Wenxiang Cong: Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Ruola Ning: Koning Corporation, West Henrietta, NY, USA
- G E Wang: Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA

233
Liang D, Cheng J, Ke Z, Ying L. Deep Magnetic Resonance Image Reconstruction: Inverse Problems Meet Neural Networks. IEEE Signal Processing Magazine 2020; 37:141-151. [PMID: 33746470 PMCID: PMC7977031 DOI: 10.1109/msp.2019.2950557] [Citation(s) in RCA: 140] [Impact Index Per Article: 28.0] [Indexed: 05/17/2023]
Abstract
Image reconstruction from undersampled k-space data has played an important role in fast MRI. Recently, deep learning has demonstrated tremendous success in various fields and has also shown potential to significantly accelerate MRI reconstruction with fewer measurements. This article provides an overview of deep learning-based image reconstruction methods for MRI. Two types of approaches are reviewed: those based on unrolled algorithms and those that are not, and the main structure of each is explained. Several signal processing issues for maximizing the potential of deep reconstruction in fast MRI are discussed, which may facilitate further development of the networks and analysis of their performance from a theoretical point of view.
Affiliation(s)
- Ziwen Ke: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China

234
Guo Y, Wang C, Zhang H, Yang G. Deep Attentive Wasserstein Generative Adversarial Networks for MRI Reconstruction with Recurrent Context-Awareness. Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020. [DOI: 10.1007/978-3-030-59713-9_17] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Indexed: 12/15/2022]
235
Ravishankar S, Ye JC, Fessler JA. Image Reconstruction: From Sparsity to Data-adaptive Methods and Machine Learning. Proceedings of the IEEE 2020; 108:86-109. [PMID: 32095024 PMCID: PMC7039447 DOI: 10.1109/jproc.2019.2936204] [Citation(s) in RCA: 91] [Impact Index Per Article: 18.2] [Indexed: 05/05/2023]
Abstract
The field of medical image reconstruction has seen roughly four types of methods. The first type tended to be analytical methods, such as filtered back-projection (FBP) for X-ray computed tomography (CT) and the inverse Fourier transform for magnetic resonance imaging (MRI), based on simple mathematical models for the imaging systems. These methods are typically fast, but have suboptimal properties such as poor resolution-noise trade-off for CT. A second type is iterative reconstruction methods based on more complete models for the imaging system physics and, where appropriate, models for the sensor statistics. These iterative methods improved image quality by reducing noise and artifacts. The FDA-approved methods among these have been based on relatively simple regularization models. A third type of methods has been designed to accommodate modified data acquisition methods, such as reduced sampling in MRI and CT to reduce scan time or radiation dose. These methods typically involve mathematical image models involving assumptions such as sparsity or low-rank. A fourth type of methods replaces mathematically designed models of signals and systems with data-driven or adaptive models inspired by the field of machine learning. This paper focuses on the two most recent trends in medical image reconstruction: methods based on sparsity or low-rank models, and data-driven methods based on machine learning techniques.
Affiliation(s)
- Saiprasad Ravishankar: Departments of Computational Mathematics, Science and Engineering, and Biomedical Engineering, Michigan State University, East Lansing, MI, 48824, USA
- Jong Chul Ye: Department of Bio and Brain Engineering and Department of Mathematical Sciences, Korea Advanced Institute of Science & Technology (KAIST), Daejeon, South Korea
- Jeffrey A Fessler: Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA

236
Wang S, Chen Y, Xiao T, Zhang L, Liu X, Zheng H. LANTERN: Learn analysis transform network for dynamic magnetic resonance imaging. Inverse Probl Imaging 2020. [DOI: 10.3934/ipi.2020051] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Indexed: 11/18/2022]
237

238
Unpaired Low-Dose CT Denoising Network Based on Cycle-Consistent Generative Adversarial Network with Prior Image Information. Comput Math Methods Med 2019; 2019:8639825. [PMID: 31885686 PMCID: PMC6925923 DOI: 10.1155/2019/8639825] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Received: 07/01/2019] [Revised: 09/02/2019] [Accepted: 09/16/2019] [Indexed: 01/22/2023]
Abstract
The widespread application of X-ray computed tomography (CT) in clinical diagnosis has led to increasing public concern regarding the radiation dose administered to patients. However, reducing the radiation dose inevitably causes severe noise that affects radiologists' judgment and confidence. Hence, low-dose CT (LDCT) image reconstruction methods must be developed to improve image quality. Over the past two years, deep learning-based approaches have shown impressive performance in noise reduction for LDCT images. Most existing approaches require a paired training dataset in which the LDCT images correspond one-to-one to normal-dose CT (NDCT) images, but acquiring well-paired datasets requires multiple scans and thus an increased radiation dose, so such datasets are not readily available. To resolve this problem, this paper proposes an unpaired LDCT image denoising network based on cycle-consistent generative adversarial networks (CycleGAN) with prior image information, which does not require a one-to-one paired training dataset. In this method, the cyclic loss, an important trick in unpaired image-to-image translation, promises to map the distribution from LDCT to NDCT using unpaired training data. Furthermore, to guarantee accurate correspondence of image content between the output and NDCT, prior information obtained by preprocessing the LDCT image is integrated into the network to supervise the generation of content. Given the distribution mapping through the cyclic loss and the content supervision through the prior image loss, the proposed method can not only reduce image noise but also retain critical information. Real-data experiments were carried out to test the performance of the proposed method. The peak signal-to-noise ratio (PSNR) improves by more than 3 dB, and the structural similarity (SSIM) increases, when compared with the original CycleGAN without prior information. The real LDCT data experiment demonstrates the superiority of the proposed method according to both visual inspection and quantitative evaluation.
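The two losses discussed above can be written down compactly. The sketch below uses hypothetical callables `G` (LDCT to NDCT generator) and `F` (NDCT to LDCT generator) and illustrative loss weights; the paper's exact networks and weighting differ.

```python
import numpy as np

def cycle_loss(ld, nd, G, F):
    """Cycle-consistency loss: F(G(ld)) should return to ld and G(F(nd))
    to nd, so unpaired LDCT/NDCT images can supervise each other."""
    return np.abs(F(G(ld)) - ld).mean() + np.abs(G(F(nd)) - nd).mean()

def prior_image_loss(denoised, prior):
    """Prior-image loss stand-in: keeps the generator output close to a
    prior image preprocessed from the LDCT input, anchoring image content."""
    return np.abs(denoised - prior).mean()

def total_generator_loss(ld, nd, prior, G, F, adv_term, lam_cyc=10.0, lam_pri=1.0):
    """Illustrative weighting of adversarial, cycle, and prior terms
    (hypothetical weights, not the paper's)."""
    return (adv_term
            + lam_cyc * cycle_loss(ld, nd, G, F)
            + lam_pri * prior_image_loss(G(ld), prior))
```

With identity generators and the LDCT image itself as the prior, both penalty terms vanish and only the adversarial term remains, which is a convenient correctness check.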
239
Sun L, Fan Z, Fu X, Huang Y, Ding X, Paisley J. A Deep Information Sharing Network for Multi-Contrast Compressed Sensing MRI Reconstruction. IEEE Trans Image Process 2019; 28:6141-6153. [PMID: 31295112 DOI: 10.1109/tip.2019.2925288] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Indexed: 06/09/2023]
Abstract
Compressed sensing (CS) theory can accelerate multi-contrast magnetic resonance imaging (MRI) by sampling fewer measurements within each contrast. However, conventional optimization-based reconstruction models suffer several limitations, including a strict assumption of shared sparse support, time-consuming optimization, and "shallow" models that have difficulty encoding the patterns contained in massive MRI data. In this paper, we propose the first deep learning model for multi-contrast CS-MRI reconstruction. We achieve information sharing through feature sharing units, which significantly reduces the number of model parameters. Each feature sharing unit is combined with a data fidelity unit to form an inference block, and these blocks are cascaded with dense connections, allowing efficient information transmission across different depths of the network. Experiments on various multi-contrast MRI datasets show that the proposed model outperforms both state-of-the-art single-contrast and multi-contrast MRI methods in accuracy and efficiency. We demonstrate that the improved reconstruction quality can benefit subsequent medical image analysis. Furthermore, the robustness of the proposed model to misregistration shows its potential in real MRI applications.
240

241
Chen Y, Fang Z, Hung SC, Chang WT, Shen D, Lin W. High-resolution 3D MR Fingerprinting using parallel imaging and deep learning. Neuroimage 2019; 206:116329. [PMID: 31689536 DOI: 10.1016/j.neuroimage.2019.116329] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Received: 08/09/2019] [Revised: 10/10/2019] [Accepted: 10/30/2019] [Indexed: 12/16/2022]
Abstract
MR Fingerprinting (MRF) is a relatively new imaging framework capable of providing accurate and simultaneous quantification of multiple tissue properties for improved tissue characterization and disease diagnosis. While 2D MRF has been widely available, extending the method to 3D MRF has been an actively pursued area of research as a 3D approach can provide a higher spatial resolution and better tissue characterization with an inherently higher signal-to-noise ratio. However, 3D MRF with a high spatial resolution requires lengthy acquisition times, especially for a large volume, making it impractical for most clinical applications. In this study, a high-resolution 3D MR Fingerprinting technique, combining parallel imaging and deep learning, was developed for rapid and simultaneous quantification of T1 and T2 relaxation times. Parallel imaging was first applied along the partition-encoding direction to reduce the amount of acquired data. An advanced convolutional neural network was then integrated with the MRF framework to extract features from the MRF signal evolution for improved tissue characterization and accelerated mapping. A modified 3D-MRF sequence was also developed in the study to acquire data to train the deep learning model that can be directly applied to prospectively accelerate 3D MRF scans. Our results of quantitative T1 and T2 maps demonstrate that improved tissue characterization can be achieved using the proposed method as compared to prior methods. With the integration of parallel imaging and deep learning techniques, whole-brain (26 × 26 × 18 cm³) quantitative T1 and T2 mapping with 1-mm isotropic resolution were achieved in ~7 min. In addition, a ~7-fold improvement in processing time to extract tissue properties was also accomplished with the deep learning approach as compared to the standard template matching method. All of these improvements make high-resolution whole-brain quantitative MR imaging feasible for clinical applications.
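For context, the standard template-matching step that the deep model accelerates assigns each voxel the (T1, T2) of the best-correlated dictionary atom. A small NumPy sketch, with illustrative shapes and values rather than the paper's actual dictionary:

```python
import numpy as np

def mrf_dictionary_match(signals, dictionary, t1t2_entries):
    """Standard MRF template matching: each measured fingerprint (row of
    `signals`) gets the (T1, T2) of the dictionary atom with the highest
    normalized inner product. `t1t2_entries[i]` holds the parameters that
    generated dictionary atom i."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    idx = np.argmax(np.abs(s @ d.conj().T), axis=1)  # best atom per voxel
    return t1t2_entries[idx]
```

Because the match is exhaustive over the dictionary, its cost grows with dictionary size, which is the bottleneck the CNN-based mapping replaces.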
Affiliation(s)
- Yong Chen: Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, North Carolina, USA
- Zhenghan Fang: Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, North Carolina, USA
- Sheng-Che Hung: Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, North Carolina, USA
- Wei-Tang Chang: Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, North Carolina, USA
- Dinggang Shen: Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, North Carolina, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Weili Lin: Department of Radiology, University of North Carolina, Chapel Hill, NC, USA; Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, North Carolina, USA

242
Liu F, Samsonov A, Chen L, Kijowski R, Feng L. SANTIS: Sampling-Augmented Neural neTwork with Incoherent Structure for MR image reconstruction. Magn Reson Med 2019; 82:1890-1904. [PMID: 31166049 PMCID: PMC6660404 DOI: 10.1002/mrm.27827] [Citation(s) in RCA: 62] [Impact Index Per Article: 10.3] [Received: 02/26/2019] [Revised: 05/02/2019] [Accepted: 05/03/2019] [Indexed: 12/23/2022]
Abstract
PURPOSE To develop and evaluate a novel deep learning-based reconstruction framework called SANTIS (Sampling-Augmented Neural neTwork with Incoherent Structure) for efficient MR image reconstruction with improved robustness against sampling pattern discrepancy. METHODS In addition to combining a data cycle-consistent adversarial network, end-to-end convolutional neural network mapping, and data fidelity enforcement for reconstructing undersampled MR data, SANTIS utilizes a sampling-augmented training strategy that extensively varies the undersampling patterns during training, so that the network learns various aliasing structures and thereby removes undersampling artifacts more effectively and robustly. The performance of SANTIS was demonstrated for accelerated knee imaging and liver imaging using a Cartesian trajectory and a golden-angle radial trajectory, respectively. Quantitative metrics were used to assess its performance against different references. The feasibility of SANTIS in reconstructing dynamic contrast-enhanced images was also demonstrated using transfer learning. RESULTS Compared to conventional reconstruction that exploits image sparsity, SANTIS achieved consistently improved reconstruction performance (lower errors and greater image sharpness). Compared to standard learning-based methods without sampling augmentation (e.g., training with a fixed undersampling pattern), SANTIS provides comparable reconstruction performance but significantly improved robustness against sampling pattern discrepancy. SANTIS also achieved encouraging results for reconstructing liver images acquired at different contrast phases. CONCLUSION By extensively varying undersampling patterns, the sampling-augmented training strategy in SANTIS removes undersampling artifacts more robustly. The concept behind SANTIS can be particularly useful for improving the robustness of deep learning-based image reconstruction against discrepancies between training and inference, an important but currently less explored topic.
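The sampling-augmentation idea can be sketched as drawing a fresh undersampling mask for every training example. The mask generator below is a simplified Cartesian stand-in with hypothetical parameters, not the paper's exact variable-density scheme:

```python
import numpy as np

def random_mask(shape, accel, rng, center_frac=0.08):
    """Draw a fresh Cartesian line mask: always keep a fully sampled
    center band, then sample the remaining phase-encode lines at roughly
    the target acceleration rate (illustrative parameters)."""
    ny, nx = shape
    mask = (rng.random((ny, 1)) < 1.0 / accel).astype(float)
    c = max(1, int(ny * center_frac / 2))
    mask[ny // 2 - c: ny // 2 + c] = 1.0   # fully sampled k-space center
    return np.repeat(mask, nx, axis=1)

def sampling_augmented_batch(images, accel, rng):
    """SANTIS-style sampling augmentation sketch: every training example is
    undersampled with a different randomly drawn mask, so the network sees
    many aliasing structures instead of a single fixed pattern."""
    inputs, masks = [], []
    for img in images:
        m = random_mask(img.shape, accel, rng)
        inputs.append(np.fft.ifft2(np.fft.fft2(img) * m))  # aliased input
        masks.append(m)
    return np.stack(inputs), np.stack(masks)
```

Training on such per-example masks is what decouples the learned artifact removal from any one undersampling pattern.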
Affiliation(s)
- Fang Liu
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Alexey Samsonov
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Lihua Chen
- Department of Radiology, Southwest Hospital, Chongqing, China
- Richard Kijowski
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Li Feng
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
|
243
|
Zhang C, Hosseini SAH, Weingärtner S, Uǧurbil K, Moeller S, Akçakaya M. Optimized fast GPU implementation of robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction. PLoS One 2019; 14:e0223315. [PMID: 31644542 PMCID: PMC6808331 DOI: 10.1371/journal.pone.0223315] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Accepted: 09/18/2019] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Robust Artificial-neural-networks for k-space Interpolation (RAKI) is a recently proposed deep-learning-based reconstruction algorithm for parallel imaging. Its main premise is to perform k-space interpolation using convolutional neural networks (CNNs) trained on subject-specific autocalibration signal (ACS) data. Since training is performed individually for each subject, the reconstruction time is longer than that of approaches that pre-train on databases. In this study, we sought to reduce the computational time of RAKI. METHODS RAKI was implemented using CPU multi-processing and process pooling to maximize the utility of GPU resources. We also proposed an alternative CNN architecture that interpolates all output channels jointly for specific skipped k-space lines. This new architecture was compared to the original CNN architecture in RAKI, as well as to GRAPPA, in phantom, brain and knee MRI datasets, both qualitatively and quantitatively. RESULTS The optimized GPU implementations were approximately 2-to-5-fold faster than a simple GPU implementation. The new CNN architecture further reduced the computational time by 4-to-5-fold compared to the optimized GPU implementation using the original RAKI CNN architecture. It also provided significant improvement over GRAPPA both visually and quantitatively, although it performed slightly worse than the original RAKI CNN architecture. CONCLUSIONS The proposed implementations of RAKI bring the computational time towards clinically acceptable ranges. The new CNN architecture yields faster training, albeit at a slight performance loss, which may be acceptable for faster visualization in some settings.
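The k-space interpolation idea behind RAKI can be illustrated through its linear ancestor, GRAPPA: fit interpolation weights on the fully sampled ACS region, then apply them to fill the skipped lines. The single-coil, rate-2, two-neighbour toy below is a sketch under those simplifying assumptions (RAKI replaces this linear kernel with a small subject-specific CNN trained on the same ACS data; all names here are ours):

```python
import numpy as np

def fit_interp_weights(acs, R=2):
    """GRAPPA-style fit on fully sampled ACS data: learn two weights that
    predict each skipped line from its sampled neighbours above and below.
    Written for R = 2 (one skipped line between sampled neighbours)."""
    up, down, tgt = [], [], []
    for i in range(0, acs.shape[0] - R, R):
        up.append(acs[i])
        down.append(acs[i + R])
        tgt.append(acs[i + 1])  # the skipped line between them
    A = np.stack([np.concatenate(up), np.concatenate(down)], axis=1)
    b = np.concatenate(tgt)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def interpolate(kspace_under, w, R=2):
    """Fill the skipped lines of an undersampled k-space with the weights."""
    out = kspace_under.copy()
    for i in range(0, out.shape[0] - R, R):
        out[i + 1] = w[0] * out[i] + w[1] * out[i + R]
    return out
```

Because the fit and the interpolation both run on the same subject's data, this mirrors the subject-specific training that makes RAKI slower than database-pretrained methods.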
Affiliation(s)
- Chi Zhang
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Seyed Amir Hossein Hosseini
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Sebastian Weingärtner
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Department of Imaging Physics, Delft University of Technology, Delft, Netherlands
- Kâmil Uǧurbil
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Mehmet Akçakaya
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
|
244
|
Dar SU, Yurt M, Karacan L, Erdem A, Erdem E, Cukur T. Image Synthesis in Multi-Contrast MRI With Conditional Generative Adversarial Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2375-2388. [PMID: 30835216 DOI: 10.1109/tmi.2019.2901750] [Citation(s) in RCA: 233] [Impact Index Per Article: 38.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet scan time limitations may prohibit the acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. For multi-contrast synthesis, current methods learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks; these methods can in turn suffer from a loss of structural details in the synthesized images. In this paper, we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high-frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections is utilized to further improve synthesis quality. Demonstrations on T1- and T2-weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to previous state-of-the-art methods. Our synthesis approach can help improve the quality and versatility of multi-contrast MRI exams without the need for prolonged or repeated examinations.
|
245
|
Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: A review. Med Image Anal 2019; 58:101552. [PMID: 31521965 DOI: 10.1016/j.media.2019.101552] [Citation(s) in RCA: 597] [Impact Index Per Article: 99.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2018] [Revised: 08/23/2019] [Accepted: 08/30/2019] [Indexed: 01/30/2023]
Abstract
Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique.
Affiliation(s)
- Xin Yi
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
- Ekta Walia
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
- Philips Canada, 281 Hillmount Road, Markham, ON L6C 2S3, Canada
- Paul Babyn
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
|
246
|
Munir K, Elahi H, Ayub A, Frezza F, Rizzi A. Cancer Diagnosis Using Deep Learning: A Bibliographic Review. Cancers (Basel) 2019; 11:E1235. [PMID: 31450799 PMCID: PMC6770116 DOI: 10.3390/cancers11091235] [Citation(s) in RCA: 137] [Impact Index Per Article: 22.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2019] [Revised: 06/30/2019] [Accepted: 08/14/2019] [Indexed: 01/06/2023] Open
Abstract
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, with a general audience in mind, the basic evaluation criteria are also discussed, including the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed; artificial intelligence is gaining attention as a way to build better diagnostic tools, and deep neural networks in particular can be successfully used for intelligent image analysis. The basic framework of how such machine learning operates on medical images is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, allowing interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the successfully applied deep learning models for different types of cancer. Given the length of the manuscript, we restrict the discussion to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of state-of-the-art achievements.
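Two of the evaluation criteria listed in this abstract, the Dice coefficient and the Jaccard index, have compact closed forms for binary segmentation masks: Dice = 2|A∩B| / (|A| + |B|) and Jaccard = |A∩B| / |A∪B|. A minimal NumPy sketch (helper names are ours, not from the reviewed paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jaccard_index(a, b):
    """Jaccard index |A∩B| / |A∪B| for boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

Both conventionally return 1.0 when both masks are empty, since two empty segmentations agree perfectly.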
Affiliation(s)
- Khushboo Munir
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Hassan Elahi
- Department of Mechanical and Aerospace Engineering (DIMA), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Afsheen Ayub
- Department of Basic and Applied Science for Engineering (SBAI), Sapienza University of Rome, Via Antonio Scarpa 14/16, 00161 Rome, Italy
- Fabrizio Frezza
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Antonello Rizzi
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
|
247
|
Zhu G, Jiang B, Tong L, Xie Y, Zaharchuk G, Wintermark M. Applications of Deep Learning to Neuro-Imaging Techniques. Front Neurol 2019; 10:869. [PMID: 31474928 PMCID: PMC6702308 DOI: 10.3389/fneur.2019.00869] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Accepted: 07/26/2019] [Indexed: 12/12/2022] Open
Abstract
Many deep learning-based clinical applications have been proposed and studied in radiology for classification, risk assessment, segmentation, diagnosis, prognosis, and even prediction of therapy response. There are many other innovative applications of AI in the technical aspects of medical imaging, particularly in image acquisition, including removing image artifacts, normalizing/harmonizing images, improving image quality, lowering radiation and contrast dose, and shortening the duration of imaging studies. This article addresses this topic and seeks to present an overview of deep learning applied to neuroimaging techniques.
Affiliation(s)
- Max Wintermark
- Neuroradiology Section, Department of Radiology, Stanford Healthcare, Stanford, CA, United States
|
248
|
Zhang Q, Ruan G, Yang W, Liu Y, Zhao K, Feng Q, Chen W, Wu EX, Feng Y. MRI Gibbs-ringing artifact reduction by means of machine learning using convolutional neural networks. Magn Reson Med 2019; 82:2133-2145. [DOI: 10.1002/mrm.27894] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2018] [Revised: 06/11/2019] [Accepted: 06/14/2019] [Indexed: 12/27/2022]
Affiliation(s)
- Qianqian Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Guohui Ruan
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Yilong Liu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong SAR, China
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Kaixuan Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Wufan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Ed X. Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong SAR, China
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Yanqiu Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
|
249
|
Bao L, Ye F, Cai C, Wu J, Zeng K, van Zijl PCM, Chen Z. Undersampled MR image reconstruction using an enhanced recursive residual network. JOURNAL OF MAGNETIC RESONANCE (SAN DIEGO, CALIF. : 1997) 2019; 305:232-246. [PMID: 31323504 DOI: 10.1016/j.jmr.2019.07.020] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 06/24/2019] [Accepted: 07/08/2019] [Indexed: 06/10/2023]
Abstract
When aggressive undersampling is used, it is difficult to recover a high-quality image with reliably fine features. In this paper, we propose an enhanced recursive residual network (ERRN) that improves the basic recursive residual network with high-frequency feature guidance, an error-correction unit, and dense connections. The feature guidance is designed to predict the underlying anatomy based on an image prior learned from the label data, playing a complementary role to the residual learning. The ERRN is adapted for two important applications, compressed sensing (CS) MRI and super-resolution (SR) MRI, with an application-specific error-correction unit added into the framework: data consistency for CS-MRI and back projection for SR-MRI, owing to their different sampling schemes. Our proposed network was evaluated on a real-valued brain dataset, a complex-valued knee dataset, pathological brain data, and in vivo rat brain data with different undersampling masks and rates. Experimental results demonstrated that ERRN produced superior reconstructions in all cases, with distinctly restored structural features and the highest image quality metrics, compared to both state-of-the-art convolutional neural networks and conventional optimization-based methods, particularly for undersampling rates over 5-fold. Thus, an excellent framework design can endow the network with a flexible architecture, fewer parameters, outstanding performance for various undersampling schemes, and reduced overfitting in generalization, which will facilitate real-time reconstruction on MRI scanners.
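The data-consistency error-correction unit mentioned for CS-MRI has a standard form: transform the network output to k-space, overwrite the sampled locations with the measured data, and transform back, so the result never contradicts the actual acquisition. A minimal single-coil NumPy sketch (the function name and interface are our assumptions):

```python
import numpy as np

def data_consistency(x_rec, k_meas, mask):
    """Data-consistency step for single-coil CS-MRI: replace the sampled
    k-space locations of the reconstruction with the measured values,
    keeping the network's prediction only where nothing was measured."""
    k_rec = np.fft.fft2(x_rec)
    k_rec[mask] = k_meas[mask]  # trust measured samples exactly
    return np.fft.ifft2(k_rec)
```

In SR-MRI the analogous unit is a back-projection step, since the forward model is a blur/downsampling operator rather than a sampling mask.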
Affiliation(s)
- Lijun Bao
- Department of Electronic Science, Xiamen University, Xiamen 361000, China
- Fuze Ye
- Department of Electronic Science, Xiamen University, Xiamen 361000, China
- Congbo Cai
- Department of Electronic Science, Xiamen University, Xiamen 361000, China
- Jian Wu
- Department of Electronic Science, Xiamen University, Xiamen 361000, China
- Kun Zeng
- Department of Electronic Science, Xiamen University, Xiamen 361000, China
- Peter C M van Zijl
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD 21205, USA
- Zhong Chen
- Department of Electronic Science, Xiamen University, Xiamen 361000, China
|
250
|
Johnson PM, Drangova M. Conditional generative adversarial network for 3D rigid-body motion correction in MRI. Magn Reson Med 2019; 82:901-910. [PMID: 31006909 DOI: 10.1002/mrm.27772] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Revised: 03/09/2019] [Accepted: 03/22/2019] [Indexed: 11/10/2022]
Abstract
PURPOSE Subject motion in MRI remains an unsolved problem; motion during image acquisition may cause blurring and artifacts that severely degrade image quality. In this work, we approach motion correction as an image-to-image translation problem, which refers to the approach of training a deep neural network to predict an image in 1 domain from an image in another domain. Specifically, the purpose of this work was to develop and train a conditional generative adversarial network to predict artifact-free brain images from motion-corrupted data. METHODS An open source MRI data set comprising T2*-weighted, FLASH magnitude, and phase brain images for 53 patients was used to generate complex image data for motion simulation. To simulate rigid motion, rotations and translations were applied to the image data based on randomly generated motion profiles. A conditional generative adversarial network, comprising generator and discriminator networks, was trained using the motion-corrupted and corresponding ground truth (original) images as training pairs. RESULTS The images predicted by the conditional generative adversarial network have improved image quality compared to the motion-corrupted images. The mean absolute error between the motion-corrupted and ground-truth images of the test set was 16.4% of the image mean value, whereas the mean absolute error between the conditional generative adversarial network-predicted and ground-truth images was 10.8%. The network output also demonstrated improved peak SNR and structural similarity index for all test-set images. CONCLUSION The images predicted by the conditional generative adversarial network have quantitatively and qualitatively improved image quality compared to the motion-corrupted images.
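The rigid-motion simulation described in this abstract can be sketched for the translation-only case using the Fourier shift theorem: each phase-encode line is effectively acquired with the object in a different motion state, so each k-space row picks up the linear phase of that row's shift. A toy single-slice NumPy version (rotations omitted; names and interface are ours):

```python
import numpy as np

def simulate_translation_corruption(img, shifts_y):
    """Motion-corrupt a 2D image for a per-line translation profile:
    row `line` of k-space is acquired while the object is shifted by
    shifts_y[line] pixels along y, which by the Fourier shift theorem
    multiplies that row by a linear phase."""
    ny, _ = img.shape
    k_full = np.fft.fft2(img)
    ky = np.fft.fftfreq(ny)  # cycles/pixel along the phase-encode axis
    k_out = np.empty_like(k_full)
    for line in range(ny):
        phase = np.exp(-2j * np.pi * ky[line] * shifts_y[line])
        k_out[line] = k_full[line] * phase
    return np.fft.ifft2(k_out)
```

With a time-varying `shifts_y` profile this produces the characteristic ghosting of motion-corrupted acquisitions, giving paired corrupted/clean images for network training; a constant profile reduces to a simple bulk shift.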
Affiliation(s)
- Patricia M Johnson
- Imaging Research Laboratories, Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Department of Medical Biophysics, Schulich School of Medicine & Dentistry, The University of Western Ontario, London, Ontario, Canada
- Maria Drangova
- Imaging Research Laboratories, Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Department of Medical Biophysics, Schulich School of Medicine & Dentistry, The University of Western Ontario, London, Ontario, Canada
|