151
Abuzaid MM, Elshami W, McConnell J, Tekin HO. An extensive survey of radiographers from the Middle East and India on artificial intelligence integration in radiology practice. Health and Technology 2021; 11:1045-1050. [PMID: 34377625] [PMCID: PMC8342654] [DOI: 10.1007/s12553-021-00583-1]
Abstract
Assessing the current artificial intelligence (AI) situation is a crucial step towards its implementation in radiology practice. The study aimed to assess radiographers' willingness to accept AI in radiology work practice and the impact of AI on work performance. An exploratory cross-sectional online survey of radiographers working within the Middle East and India was conducted from May to August 2020. A previously validated survey was used to obtain radiographers' demographics, knowledge, perceptions, organizational readiness, and the challenges of integrating AI into radiology. The survey was accessible to radiographers and distributed through the societies' pages. The survey was completed by 549 radiographers, of whom 77.6% (n = 426) were from the Middle East and 22.4% (n = 123) from India. A majority (86%, n = 773) agreed that AI currently plays an important role in radiology, and 88.0% (n = 483) expected that AI would play a role in radiology practice and image production. The main challenges for AI implementation in practice were developing AI skills (42.8%, n = 235) and AI knowledge (37.0%, n = 203). Participants showed high interest in integrating AI into undergraduate and postgraduate curricula. There is excitement about what AI could offer, but education is a prerequisite. Fears were expressed about job security and about how radiology may work across all ages and educational backgrounds. Radiographers are becoming aware of AI's role and challenges, and this awareness can be improved by education and training.
Affiliation(s)
- Mohamed M Abuzaid: Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, UAE
- Wiam Elshami: Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, UAE
- Jonathan McConnell: Radiology Department, NHS Greater Glasgow and Clyde, Glasgow, Scotland, UK
- H O Tekin: Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, UAE
152
Liu X, Wang J, Lin S, Crozier S, Liu F. Optimizing multicontrast MRI reconstruction with shareable feature aggregation and selection. NMR in Biomedicine 2021; 34:e4540. [PMID: 33974306] [DOI: 10.1002/nbm.4540]
Abstract
This paper proposes a new method for optimizing feature sharing in deep neural network-based, rapid, multicontrast magnetic resonance imaging (MC-MRI). Using the shareable information of MC images for accelerated MC-MRI reconstruction, current algorithms stack the MC images or features without optimizing the sharing protocols, leading to suboptimal reconstruction results. In this paper, we propose a novel feature aggregation and selection scheme in a deep neural network to better leverage the MC features and improve the reconstruction results. First, we propose to extract and use the shareable information by mapping the MC images into multiresolution feature maps with multilevel layers of the neural network. In this way, the extracted features capture complementary image properties, including local patterns from the shallow layers and semantic information from the deep layers. Then, an explicit selection module is designed to compile the extracted features optimally. That is, larger weights are learned to incorporate the constructive, shareable features; and smaller weights are assigned to the unshareable information. We conduct comparative studies on publicly available T2-weighted and T2-weighted fluid attenuated inversion recovery brain images, and the results show that the proposed network consistently outperforms existing algorithms. In addition, the proposed method can recover the images with high fidelity under 16 times acceleration. The ablation studies are conducted to evaluate the effectiveness of the proposed feature aggregation and selection mechanism. The results and the visualization of the weighted features show that the proposed method does effectively improve the usage of the useful features and suppress useless information, leading to overall enhanced reconstruction results. Additionally, the selection module can zero-out repeated and redundant features and improve network efficiency.
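To make the selection idea concrete, the following is a minimal PyTorch sketch of an explicit feature-selection module that learns per-channel weights so that constructive, shareable features are emphasized and unshareable ones are suppressed. It is an illustration of the general mechanism, not the authors' implementation; all layer sizes, names, and inputs are hypothetical.

```python
import torch
import torch.nn as nn

class FeatureSelection(nn.Module):
    """Toy selection module: learns per-channel weights in [0, 1] applied to
    the aggregated multi-contrast feature maps before reconstruction."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global context per channel
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, channels),
            nn.Sigmoid(),                            # weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.gate(self.pool(x).view(b, c))       # (b, c) channel weights
        return x * w.view(b, c, 1, 1)                # re-weight aggregated features

# Aggregate multi-contrast features by concatenation, then select.
t2w_feat = torch.randn(1, 32, 64, 64)     # hypothetical T2-weighted feature maps
flair_feat = torch.randn(1, 32, 64, 64)   # hypothetical T2-FLAIR feature maps
selected = FeatureSelection(64)(torch.cat([t2w_feat, flair_feat], dim=1))
print(selected.shape)  # torch.Size([1, 64, 64, 64])
```

In this sketch, weights near zero effectively zero-out repeated or redundant channels, which is the behavior the ablation studies attribute to the selection module.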
Affiliation(s)
- Xinwen Liu: School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Jing Wang: School of Information and Communication Technology, Griffith University, Brisbane, Australia
- Suzhen Lin: School of Data Science and Technology, North University of China, Taiyuan, China; The Key Laboratory of Biomedical Imaging and Big Data Processing in Shanxi Province, Shanxi, China
- Stuart Crozier: School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Feng Liu: School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
153
Xie Y, Jiang H, Du H, Xu J, Qiu B. Fasu-Net: Fast Alzheimer’s Disease Screening with Undersampled MRI Using Convolutional Neural Networks. Journal of Medical Imaging and Health Informatics 2021. [DOI: 10.1166/jmihi.2021.3829]
Abstract
Alzheimer’s Disease (AD) is a progressive and irreversible neurodegenerative condition that results in dementia. Mild Cognitive Impairment (MCI) is an intermediate state between normal aging and AD. Instead of the traditional questionnaire method, magnetic resonance imaging (MRI) can be used by radiologists to diagnose and screen for AD, but the long acquisition time is not conducive to screening for AD and MCI. To solve this problem, we develop a Fasu-Net (Fast Alzheimer’s disease Screening neural network with Undersampled MRI) for AD and MCI clinical classification. The network uses undersampled structural MRI with a shorter acquisition time to improve the efficiency of AD screening and diagnosis. To achieve the best classification result, three axial planes of brain MR images were fed into the Fasu-Net with a transfer learning method. Experimental results on an undersampled 3D T1-weighted image database (ADNI) show that, in the AD versus MCI versus HC (Healthy Controls) classification, the Fasu-Net achieved an accuracy of 91.41% and can therefore be a potential method for fast clinical screening of AD.
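As background for how undersampling shortens the acquisition that feeds such a classifier, the following NumPy sketch retrospectively undersamples a fully sampled 2D image in k-space with a simple line mask. The mask pattern, acceleration factor, and image are illustrative assumptions, not the sequence used in the paper.

```python
import numpy as np

def undersample_kspace(image: np.ndarray, accel: int = 4, center_frac: float = 0.08):
    """Keep every `accel`-th phase-encode line plus a fully sampled center band,
    then return the zero-filled reconstruction and the sampling mask."""
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros(ny, dtype=bool)
    mask[::accel] = True                              # regular undersampling
    c = int(ny * center_frac / 2)
    mask[ny // 2 - c: ny // 2 + c] = True             # keep low frequencies
    kspace_us = kspace * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
    return zero_filled, mask

phantom = np.random.rand(128, 128)                    # stand-in for a brain slice
recon, mask = undersample_kspace(phantom, accel=4)
print(recon.shape, int(mask.sum()), "of", mask.size, "lines kept")
```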
Affiliation(s)
- Yuanbo Xie: Hefei National Lab for Physical Sciences at the Microscale and the Centers for Biomedical Engineering, University of Science and Technology of China, Hefei 230027, China
- Haitao Jiang: Hefei National Lab for Physical Sciences at the Microscale and the Centers for Biomedical Engineering, University of Science and Technology of China, Hefei 230027, China
- Hongwei Du: Hefei National Lab for Physical Sciences at the Microscale and the Centers for Biomedical Engineering, University of Science and Technology of China, Hefei 230027, China
- Jinzhang Xu: School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, Anhui 230009, China
- Bensheng Qiu: Hefei National Lab for Physical Sciences at the Microscale and the Centers for Biomedical Engineering, University of Science and Technology of China, Hefei 230027, China
154
Zhou J, Meng M, Xing J, Xiong Y, Xu X, Zhang Y. Iterative feature refinement with network-driven prior for image restoration. Pattern Anal Appl 2021. [DOI: 10.1007/s10044-021-01006-7]
155
Cui J, Gong K, Guo N, Wu C, Kim K, Liu H, Li Q. Populational and individual information based PET image denoising using conditional unsupervised learning. Phys Med Biol 2021; 66. [PMID: 34198277] [DOI: 10.1088/1361-6560/ac108e]
Abstract
Our study aims to improve the signal-to-noise ratio of positron emission tomography (PET) imaging using conditional unsupervised learning. The proposed method does not require low- and high-quality pairs for network training and can therefore be easily applied to existing PET/computed tomography (CT) and PET/magnetic resonance (MR) datasets. The method consists of two steps: populational training and individual fine-tuning. For populational training, a network was first pre-trained with a group of patients' noisy PET images and the corresponding anatomical prior images from CT or MR. For individual fine-tuning, a new network with initial parameters inherited from the pre-trained network was fine-tuned with the test patient's noisy PET image and the corresponding anatomical prior image. Only the last few layers were fine-tuned to take advantage of the populational information and the pre-training efforts. Both networks shared the same structure and took the CT or MR images as the network input so that the network output was conditioned on the patient's anatomical prior information. The noisy PET images were used as the training and fine-tuning labels. The proposed method was evaluated on a 68Ga-PPRGD2 PET/CT dataset and an 18F-FDG PET/MR dataset. For the PET/CT dataset, with the original noisy PET image as the baseline, the proposed method has a significantly higher contrast-to-noise ratio (CNR) improvement (71.85% ± 27.05%) than the Gaussian filter (12.66% ± 6.19%, P = 0.002), the nonlocal mean (NLM) method (22.60% ± 13.11%, P = 0.002) and the conditional deep image prior (CDIP) method (52.94% ± 21.79%, P = 0.0039). For the PET/MR dataset, compared to Gaussian (18.73% ± 9.98%, P < 0.0001), NLM (26.01% ± 19.40%, P < 0.0001) and CDIP (47.48% ± 25.36%, P < 0.0001), the CNR improvement ratio of the proposed method (58.07% ± 28.45%) is the highest. In addition, the denoised images from both datasets showed that the proposed method can accurately restore tumor structures while also smoothing out the noise.
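The two-step scheme (populational pre-training followed by individual fine-tuning of only the last few layers, with the anatomical prior as input and the noisy PET as label) can be sketched as follows in PyTorch. The network, checkpoint name, number of steps, and optimizer settings are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                      # stand-in for the pre-trained network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),       # "last few layers" to be fine-tuned
)
# net.load_state_dict(torch.load("populational_pretrained.pt"))  # hypothetical checkpoint

# Freeze everything except the final layer for individual fine-tuning.
for p in net.parameters():
    p.requires_grad = False
for p in net[-1].parameters():
    p.requires_grad = True

anatomical_prior = torch.randn(1, 1, 128, 128)   # CT or MR image of the test patient
noisy_pet = torch.randn(1, 1, 128, 128)          # used as the fine-tuning label

opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad], lr=1e-4)
for step in range(200):                          # early stopping controls over-fitting
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(anatomical_prior), noisy_pet)
    loss.backward()
    opt.step()
denoised = net(anatomical_prior).detach()        # output conditioned on the anatomy
```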
Affiliation(s)
- Jianan Cui: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, People's Republic of China; Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Kuang Gong: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Ning Guo: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Chenxi Wu: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Kyungsang Kim: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Huafeng Liu: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, People's Republic of China
- Quanzheng Li: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
156
Chandra SS, Bran Lorenzana M, Liu X, Liu S, Bollmann S, Crozier S. Deep learning in magnetic resonance image reconstruction. J Med Imaging Radiat Oncol 2021; 65:564-577. [PMID: 34254448] [DOI: 10.1111/1754-9485.13276]
Abstract
Magnetic resonance (MR) imaging visualises soft tissue contrast in exquisite detail without harmful ionising radiation. In this work, we provide a state-of-the-art review on the use of deep learning in MR image reconstruction from different image acquisition types involving compressed sensing techniques, parallel image acquisition and multi-contrast imaging. Publications with deep learning-based image reconstruction for MR imaging were identified from the literature (PubMed and Google Scholar), and a comprehensive description of each of the works was provided. A detailed comparison that highlights the differences, the data used and the performance of each of these works was also made. A discussion of the potential use cases for each of these methods is provided. The sparse image reconstruction methods were found to be most popular in using deep learning for improved performance, accelerating acquisitions by around 4-8 times. Multi-contrast image reconstruction methods rely on at least one pre-acquired image, but can achieve 16-fold, and even up to 32- to 50-fold acceleration depending on the set-up. Parallel imaging provides frameworks to be integrated in many of these methods for additional speed-up potential. The successful use of compressed sensing techniques and multi-contrast imaging with deep learning and parallel acquisition methods could yield significant MR acquisition speed-ups within clinical routines in the near future.
Affiliation(s)
- Shekhar S Chandra: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Marlon Bran Lorenzana: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Xinwen Liu: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Siyu Liu: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Steffen Bollmann: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Stuart Crozier: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
157
Wang S, Xiao T, Liu Q, Zheng H. Deep learning for fast MR imaging: A review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102579]
158
Lv J, Zhu J, Yang G. Which GAN? A comparative study of generative adversarial network-based fast MRI reconstruction. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 2021; 379:20200203. [PMID: 33966462] [DOI: 10.1098/rsta.2020.0203]
Abstract
Fast magnetic resonance imaging (MRI) is crucial for clinical applications because it can alleviate motion artefacts and increase patient throughput. K-space undersampling is an obvious approach to accelerate MR acquisition. However, undersampling of k-space data can result in blurring and aliasing artefacts in the reconstructed images. Recently, several studies have proposed deep learning-based data-driven models for MRI reconstruction and have obtained promising results. However, the comparison of these methods remains limited because the models have not been trained on the same datasets and the validation strategies may differ. The purpose of this work is to conduct a comparative study of generative adversarial network (GAN)-based models for MRI reconstruction. We reimplemented and benchmarked four widely used GAN-based architectures: DAGAN, ReconGAN, RefineGAN and KIGAN. These four frameworks were trained and tested on brain, knee and liver MRI images using twofold, fourfold and sixfold accelerations, respectively, with a random undersampling mask. Both quantitative evaluations and qualitative visualization have shown that the RefineGAN method achieved superior reconstruction performance, with better accuracy and perceptual quality than the other GAN-based methods. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
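The four benchmarked frameworks share a common recipe: a generator maps the zero-filled undersampled image to a reconstruction and is trained with an image-fidelity loss plus an adversarial loss from a discriminator. Below is a minimal PyTorch sketch of that shared loss structure only; the toy networks, loss weight, and data are hypothetical and do not correspond to any one of the compared models.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))                 # toy generator
D = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Flatten(), nn.Linear(32 * 32 * 32, 1))        # toy discriminator
bce = nn.BCEWithLogitsLoss()

zero_filled = torch.randn(4, 1, 64, 64)    # undersampled (zero-filled) inputs
fully_sampled = torch.randn(4, 1, 64, 64)  # ground-truth targets

# Generator objective: pixel-wise fidelity plus an adversarial term (weight is a free choice).
recon = G(zero_filled)
g_loss = nn.functional.l1_loss(recon, fully_sampled) \
         + 0.01 * bce(D(recon), torch.ones(4, 1))

# Discriminator objective: real images labelled 1, reconstructions labelled 0.
d_loss = bce(D(fully_sampled), torch.ones(4, 1)) + bce(D(recon.detach()), torch.zeros(4, 1))
print(float(g_loss), float(d_loss))
```

The compared methods differ mainly in the generator architecture, in whether they operate in the image domain, the k-space domain, or both, and in refinement stages added on top of this basic objective.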
Affiliation(s)
- Jun Lv: School of Computer and Control Engineering, Yantai University, Yantai, People's Republic of China
- Jin Zhu: Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK
- Guang Yang: Cardiovascular Research Centre, Royal Brompton Hospital, SW3 6NP London, UK; National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
159
A deep cascade of ensemble of dual domain networks with gradient-based T1 assistance and perceptual refinement for fast MRI reconstruction. Comput Med Imaging Graph 2021; 91:101942. [PMID: 34087612] [DOI: 10.1016/j.compmedimag.2021.101942]
Abstract
Deep learning networks have shown promising results in fast magnetic resonance imaging (MRI) reconstruction. In our work, we develop deep networks to further improve the quantitative and the perceptual quality of reconstruction. To begin with, we propose reconsynergynet (RSN), a network that combines the complementary benefits of independently operating on both the image and the Fourier domain. For a single-coil acquisition, we introduce deep cascade RSN (DC-RSN), a cascade of RSN blocks interleaved with data fidelity (DF) units. Secondly, we improve the structure recovery of DC-RSN for T2 weighted Imaging (T2WI) through assistance of T1 weighted imaging (T1WI), a sequence with short acquisition time. T1 assistance is provided to DC-RSN through a gradient of log feature (GOLF) fusion. Furthermore, we propose perceptual refinement network (PRN) to refine the reconstructions for better visual information fidelity (VIF), a metric highly correlated to radiologist's opinion on the image quality. Lastly, for multi-coil acquisition, we propose variable splitting RSN (VS-RSN), a deep cascade of blocks, each block containing RSN, multi-coil DF unit, and a weighted average module. We extensively validate our models DC-RSN and VS-RSN for single-coil and multi-coil acquisitions and report the state-of-the-art performance. We obtain a SSIM of 0.768, 0.923, and 0.878 for knee single-coil-4x, multi-coil-4x, and multi-coil-8x in fastMRI, respectively. We also conduct experiments to demonstrate the efficacy of GOLF based T1 assistance and PRN.
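The data fidelity (DF) units interleaved between the network blocks enforce consistency with the acquired k-space samples. A minimal NumPy sketch of a hard data-consistency step for single-coil Cartesian data is shown below; the mask, images, and shapes are hypothetical and the real DF unit for multi-coil data additionally handles coil sensitivities.

```python
import numpy as np

def data_fidelity(recon_image, acquired_kspace, mask):
    """Replace the network's k-space values with the measured ones wherever
    data were actually acquired, then return to the image domain."""
    k = np.fft.fft2(recon_image)
    k = np.where(mask, acquired_kspace, k)      # keep measured samples exactly
    return np.fft.ifft2(k)

image_from_net = np.random.rand(64, 64)                      # output of a network block
mask = np.random.rand(64, 64) < 0.25                         # 4x undersampling pattern
acquired = np.fft.fft2(np.random.rand(64, 64)) * mask        # measured k-space samples
consistent = data_fidelity(image_from_net, acquired, mask)
print(consistent.shape)
```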
160
Ouchi S, Ito S. Reconstruction of Compressed-sensing MR Imaging Using Deep Residual Learning in the Image Domain. Magn Reson Med Sci 2021; 20:190-203. [PMID: 32611937] [PMCID: PMC8203484] [DOI: 10.2463/mrms.mp.2019-0139]
Abstract
Purpose: A deep residual learning convolutional neural network (DRL-CNN) was applied to improve image quality and speed up the reconstruction of compressed sensing magnetic resonance imaging. The reconstruction performance of the proposed method was compared with iterative reconstruction methods. Methods: The proposed method adopted a DRL-CNN to learn the residual component between the input and output images (i.e., the aliasing artifacts) for image reconstruction. The CNN-based reconstruction was compared with iterative reconstruction methods. To clarify the reconstruction performance of the proposed method, reconstruction experiments using 1D-random and 2D-random under-sampling, as well as sampling patterns that mix random and non-random under-sampling, were executed. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) were examined for various numbers of training images, sampling rates, and numbers of training epochs. Results: The experimental results demonstrated that reconstruction time is drastically reduced to 0.022 s per image compared with that for conventional iterative reconstruction. The PSNR and SSIM improved as the coherence of the sampling pattern increased. These results indicate that a deep CNN can learn coherent artifacts and is especially effective for cases where the randomness of k-space sampling is rather low. Simulation studies showed that variable density non-random under-sampling was a promising sampling pattern in 1D-random under-sampling of 2D image acquisition. Conclusion: A DRL-CNN can recognize and predict aliasing artifacts with low incoherence. It was demonstrated that reconstruction time is significantly reduced and that the improvement in PSNR and SSIM is higher for 1D-random under-sampling than for 2D. The requirement of incoherence for aliasing artifacts is different from that for iterative reconstruction.
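Residual learning here means the CNN predicts the aliasing-artifact component rather than the clean image, and the de-aliased estimate is the input minus that prediction. The following PyTorch sketch illustrates this training setup with a hypothetical, much smaller architecture than the one in the paper.

```python
import torch
import torch.nn as nn

class ResidualArtifactCNN(nn.Module):
    """Learns the residual (aliasing artifacts) between the zero-filled input
    and the fully sampled target; the clean estimate is input minus residual."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, zero_filled):
        artifacts = self.body(zero_filled)     # predicted aliasing component
        return zero_filled - artifacts         # de-aliased image

net = ResidualArtifactCNN()
zero_filled = torch.randn(1, 1, 128, 128)      # image reconstructed from undersampled data
target = torch.randn(1, 1, 128, 128)           # fully sampled reference
loss = nn.functional.mse_loss(net(zero_filled), target)   # standard supervised training
```

Because the residual (the artifact pattern) is simpler and more coherent than the full image content, this formulation is what makes low-incoherence sampling patterns work well with the network.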
Affiliation(s)
- Shohei Ouchi: Department of Innovation Systems Engineering, Graduate School of Engineering, Utsunomiya University
- Satoshi Ito: Department of Innovation Systems Engineering, Graduate School of Engineering, Utsunomiya University
161
Liu F, Kijowski R, El Fakhri G, Feng L. Magnetic resonance parameter mapping using model-guided self-supervised deep learning. Magn Reson Med 2021; 85:3211-3226. [PMID: 33464652] [PMCID: PMC9185837] [DOI: 10.1002/mrm.28659]
Abstract
PURPOSE To develop a model-guided self-supervised deep learning MRI reconstruction framework called reference-free latent map extraction (RELAX) for rapid quantitative MR parameter mapping. METHODS Two physical models are incorporated for network training in RELAX: the inherent MR imaging model and a quantitative model that is used to fit parameters in quantitative MRI. By enforcing these physical model constraints, RELAX eliminates the need for the fully sampled reference data sets that are required in standard supervised learning. Meanwhile, RELAX also enables direct reconstruction of the corresponding MR parameter maps from undersampled k-space. Generic sparsity constraints used in conventional iterative reconstruction, such as the total variation constraint, can additionally be included in the RELAX framework to improve reconstruction quality. The performance of RELAX was tested for accelerated T1 and T2 mapping in both simulated and actually acquired MRI data sets and was compared with supervised learning and conventional constrained reconstruction for suppressing noise and/or undersampling-induced artifacts. RESULTS In the simulated data sets, RELAX generated good T1/T2 maps in the presence of noise and/or undersampling artifacts, comparable to the artifact/noise-free ground truth. The inclusion of a spatial total variation constraint helps improve image quality. For the in vivo T1/T2 mapping data sets, RELAX achieved superior reconstruction quality compared with conventional iterative reconstruction, and similar reconstruction performance to supervised deep learning reconstruction. CONCLUSION This work has demonstrated the initial feasibility of rapid quantitative MR parameter mapping based on self-supervised deep learning. The RELAX framework may also be further extended to other quantitative MRI applications by incorporating the corresponding quantitative imaging models.
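The key idea behind training without fully sampled references is to let the network output parameter maps, re-synthesize the weighted images through the known signal model, and compare them with the acquired undersampled k-space only at sampled locations. The NumPy sketch below illustrates such a model-guided self-supervised loss for a mono-exponential T2 model; all shapes, echo times, and the mask are illustrative assumptions and this is not the RELAX implementation.

```python
import numpy as np

def t2_model(s0, t2, echo_times):
    """Mono-exponential signal model: S(TE) = S0 * exp(-TE / T2)."""
    return s0[None] * np.exp(-echo_times[:, None, None] / t2[None])

def self_supervised_loss(s0, t2, acquired_kspace, mask, echo_times):
    """Penalize the mismatch between model-synthesized k-space and the
    acquired samples, only where data were actually measured."""
    images = t2_model(s0, t2, echo_times)                 # (n_echoes, ny, nx)
    kspace = np.fft.fft2(images, axes=(-2, -1))
    residual = (kspace - acquired_kspace) * mask          # sampled locations only
    return np.sum(np.abs(residual) ** 2)

ny = nx = 64
echo_times = np.array([10.0, 30.0, 50.0, 70.0])           # ms, hypothetical
s0_est = np.ones((ny, nx))                                 # current parameter estimates
t2_est = np.full((ny, nx), 80.0)                           # (network outputs in practice)
mask = np.random.rand(len(echo_times), ny, nx) < 0.3       # undersampling pattern
acquired = np.fft.fft2(t2_model(s0_est * 0.9, t2_est * 1.1, echo_times),
                       axes=(-2, -1)) * mask               # simulated measurements
print(self_supervised_loss(s0_est, t2_est, acquired, mask, echo_times))
```

In a deep learning setting this loss would be written with differentiable tensor operations so its gradient can update the network that produces the parameter maps.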
Affiliation(s)
- Fang Liu: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Richard Kijowski: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Georges El Fakhri: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Li Feng: Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, USA
162
Image Denoising Using a Novel Deep Generative Network with Multiple Target Images and Adaptive Termination Condition. Applied Sciences-Basel 2021. [DOI: 10.3390/app11114803]
Abstract
Image denoising, a classic ill-posed problem, aims to recover a latent image from a noisy measurement. Over the past few decades, a considerable number of denoising methods have been studied extensively. Among these methods, supervised deep convolutional networks have garnered increasing attention, and their superior performance is attributed to their capability to learn realistic image priors from a large amount of paired noisy and clean images. However, if the image to be denoised is significantly different from the training images, it could lead to inferior results, and the networks may even produce hallucinations by using inappropriate image priors to handle an unseen noisy image. Recently, deep image prior (DIP) was proposed, and it overcame this drawback to some extent. The structure of the DIP generator network is capable of capturing the low-level statistics of a natural image using an unsupervised method with no training images other than the image itself. Compared with a supervised denoising model, the unsupervised DIP is more flexible when processing image content that must be denoised. Nevertheless, the denoising performance of DIP is usually inferior to the current supervised learning-based methods using deep convolutional networks, and it is susceptible to the over-fitting problem. To solve these problems, we propose a novel deep generative network with multiple target images and an adaptive termination condition. Specifically, we utilized mainstream denoising methods to generate two clear target images to be used with the original noisy image, enabling better guidance during the convergence process and improving the convergence speed. Moreover, we adopted the noise level estimation (NLE) technique to set a more reasonable adaptive termination condition, which can effectively solve the problem of over-fitting. Extensive experiments demonstrated that, according to the denoising results, the proposed approach significantly outperforms the original DIP method in tests on different databases. Specifically, the average peak signal-to-noise ratio (PSNR) performance of our proposed method on four databases at different noise levels is increased by 1.90 to 4.86 dB compared to the original DIP method. Moreover, our method achieves superior performance against state-of-the-art methods in terms of popular metrics, which include the structural similarity index (SSIM) and feature similarity index measurement (FSIM). Thus, the proposed method lays a good foundation for subsequent image processing tasks, such as target detection and super-resolution.
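A compact PyTorch sketch of the core loop follows: a generator is fitted to the noisy image together with pre-denoised target images, and iteration stops once the fit residual reaches the noise power predicted by a noise-level estimation (NLE) step. The generator, the target weighting, and the stopping rule below are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

g = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 1, 3, padding=1))          # toy DIP-style generator
z = torch.randn(1, 32, 128, 128)                           # fixed random input code

noisy = torch.randn(1, 1, 128, 128)                        # image to denoise
target_a = noisy * 0.8                                     # stand-ins for two pre-denoised
target_b = noisy * 0.9                                     # "target" images
sigma_hat = 0.05                                           # noise level from an NLE step

opt = torch.optim.Adam(g.parameters(), lr=1e-3)
for it in range(5000):
    opt.zero_grad()
    out = g(z)
    loss = (nn.functional.mse_loss(out, noisy)
            + 0.5 * nn.functional.mse_loss(out, target_a)
            + 0.5 * nn.functional.mse_loss(out, target_b))
    loss.backward()
    opt.step()
    # Adaptive termination: stop once the residual reaches the estimated noise power,
    # which is what prevents the generator from over-fitting the noise.
    if nn.functional.mse_loss(out, noisy).item() <= sigma_hat ** 2:
        break
denoised = g(z).detach()
```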
163
Bogner W, Otazo R, Henning A. Accelerated MR spectroscopic imaging-a review of current and emerging techniques. NMR in Biomedicine 2021; 34:e4314. [PMID: 32399974] [PMCID: PMC8244067] [DOI: 10.1002/nbm.4314]
Abstract
Over more than 30 years, in vivo MR spectroscopic imaging (MRSI) has undergone an enormous evolution from theoretical concepts in the early 1980s to the robust imaging technique that it is today. The development of both fast and efficient sampling and reconstruction techniques has played a fundamental role in this process. State-of-the-art MRSI has grown from a slow, purely phase-encoded acquisition technique to a method that today combines the benefits of different acceleration techniques. These include shortening of repetition times, spatial-spectral encoding, undersampling of k-space and time domain, and use of spatial-spectral prior knowledge in the reconstruction. In this way in vivo MRSI has considerably advanced in terms of spatial coverage, spatial resolution, acquisition speed, artifact suppression, number of detectable metabolites and quantification precision. Acceleration not only has been the enabling factor in high-resolution whole-brain 1H-MRSI, but today is also common in non-proton MRSI (31P, 2H and 13C) and applied in many different organs. In this process, MRSI techniques had to constantly adapt, but have also benefitted from the significant increase of magnetic field strength boosting the signal-to-noise ratio, along with high gradient fidelity and high-density receive arrays. In combination with recent trends in image reconstruction and much improved computation power, these advances led to a number of novel developments with respect to MRSI acceleration. Today MRSI allows for non-invasive and non-ionizing mapping of the spatial distribution of various metabolites' tissue concentrations in animals or humans, is applied for clinical diagnostics and has been established as an important tool for neuro-scientific and metabolism research. This review highlights the developments of the last five years and puts them into the context of earlier MRSI acceleration techniques. In addition to 1H-MRSI it also includes other relevant nuclei and is not limited to certain body regions or specific applications.
Affiliation(s)
- Wolfgang Bogner: High-Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Ricardo Otazo: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Anke Henning: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Advanced Imaging Research Center, UT Southwestern Medical Center, Dallas, Texas, USA
164
Rahim T, Novamizanti L, Apraz Ramatryana IN, Shin SY. Compressed medical imaging based on average sparsity model and reweighted analysis of multiple basis pursuit. Comput Med Imaging Graph 2021; 90:101927. [PMID: 33930735] [DOI: 10.1016/j.compmedimag.2021.101927]
Abstract
In medical imaging and its applications, efficient image sampling and transfer are among the key fields of research. The compressed sensing (CS) theory has shown that such compression can be performed during the data retrieval process and that the uncompressed image can be retrieved using a computationally flexible optimization method. The objective of this study is to propose compressed medical imaging for different types of medical images, based on the combination of the average sparsity model and the reweighted analysis of multiple basis pursuit (M-BP) reconstruction methods, referred to as multiple basis reweighted analysis (M-BRA). The proposed algorithm includes joint multiple sparsity averaging to improve the signal sparsity in M-BP. In this study, four types of medical images are selected to fill the gap left by the lack of a detailed analysis of M-BRA on medical images. The medical dataset consists of magnetic resonance imaging (MRI) data, computed tomography (CT) data, colonoscopy data, and endoscopy data. Employing the proposed approach, a signal-to-noise ratio (SNR) of 30 dB was achieved for MRI data at a sampling ratio of M/N = 0.3. SNRs of 34, 30, and 34 dB were obtained for CT, colonoscopy, and endoscopy data, respectively, at a sampling ratio of M/N = 0.15. The proposed M-BRA performance indicates the potential for compressed medical imaging analysis with high reconstruction image quality.
Affiliation(s)
- Tariq Rahim: Department of IT Convergence Engineering, Kumoh National Institute of Technology (KIT), Gumi 39177, South Korea
- Ledya Novamizanti: School of Electrical Engineering, Telkom University, Bandung 40257, Indonesia
- I Nyoman Apraz Ramatryana: Department of IT Convergence Engineering, Kumoh National Institute of Technology (KIT), Gumi 39177, South Korea
- Soo Young Shin: Department of IT Convergence Engineering, Kumoh National Institute of Technology (KIT), Gumi 39177, South Korea
165
Sheng J, Shi Y, Zhang Q. Improved parallel magnetic resonance imaging reconstruction with multiple variable density sampling. Sci Rep 2021; 11:9005. [PMID: 33903702] [PMCID: PMC8076203] [DOI: 10.1038/s41598-021-88567-z]
Abstract
Generalized auto-calibrating partially parallel acquisitions (GRAPPA) and other parallel magnetic resonance imaging (pMRI) methods restore the unacquired data in k-space by linearly combining the acquired data around the missing points. In order to obtain the weights of this linear combination, a small number of auto-calibration signal (ACS) lines need to be sampled at the center of k-space. Therefore, the sampling pattern used in this type of method fully samples the central region and undersamples the outer k-space with nominal reduction factors. In this paper, we propose a novel reconstruction method with multiple variable density sampling (MVDS) that differs from traditional sampling patterns. Our method can significantly improve image quality by using multiple reduction factors with fewer ACS lines. Specifically, the traditional sampling pattern uses only a single reduction factor to uniformly undersample the data in the region outside the ACS, whereas we use multiple reduction factors. When sampling the k-space data, we keep the ACS lines unchanged, use a smaller reduction factor for undersampling data near the ACS lines and a larger reduction factor for the outermost part of k-space. The reconstruction error is lower in the region undersampled with the smaller reduction factor. The experimental results show that, with the same amount of sampled data, using NL-GRAPPA to reconstruct k-space data sampled by our method results in lower noise and fewer artifacts than traditional methods. In particular, our method is extremely effective when the number of ACS lines is small.
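The sampling scheme itself can be sketched directly: full sampling for the ACS band, a small reduction factor just outside it, and a larger reduction factor at the periphery of k-space. Below is a NumPy sketch with hypothetical sizes and factors; the actual choices in the paper may differ.

```python
import numpy as np

def mvds_mask(n_pe=256, n_acs=16, r_inner=2, r_outer=4, inner_width=64):
    """Phase-encode mask with multiple reduction factors: fully sampled ACS,
    R = r_inner near the center, R = r_outer in the outer k-space."""
    mask = np.zeros(n_pe, dtype=bool)
    center = n_pe // 2
    mask[center - n_acs // 2: center + n_acs // 2] = True        # ACS lines
    inner = np.arange(center - inner_width, center + inner_width)
    mask[inner[::r_inner]] = True                                 # small R near the ACS
    outer = np.concatenate([np.arange(0, center - inner_width),
                            np.arange(center + inner_width, n_pe)])
    mask[outer[::r_outer]] = True                                 # large R at the edges
    return mask

mask = mvds_mask()
print(int(mask.sum()), "of", mask.size, "phase-encode lines acquired")
```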
Affiliation(s)
- Jinhua Sheng: College of Computer Science, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, 310018, Zhejiang, China
- Yuchen Shi: College of Computer Science, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, 310018, Zhejiang, China
- Qiao Zhang: Beijing Hospital, Beijing, 100730, China
166
Li Y, Ye H, Ye F, Liu Y, Lv L, Zhang P, Zhang X, Zhou Y. The Current Situation and Future Prospects of Simulators in Dental Education. J Med Internet Res 2021; 23:e23635. [PMID: 33830059] [PMCID: PMC8063092] [DOI: 10.2196/23635]
Abstract
The application of virtual reality has become increasingly extensive as this technology has developed. In dental education, virtual reality is mainly used to assist or replace traditional methods of teaching clinical skills in preclinical training for several subjects, such as endodontics, prosthodontics, periodontics, implantology, and dental surgery. The application of dental simulators in teaching can make up for the deficiency of traditional teaching methods and reduce the teaching burden, improving convenience for both teachers and students. However, because of the technology limitations of virtual reality and force feedback, dental simulators still have many hardware and software disadvantages that have prevented them from being an alternative to traditional dental simulators as a primary skill training method. In the future, when combined with big data, cloud computing, 5G, and deep learning technology, dental simulators will be able to give students individualized learning assistance, and their functions will be more diverse and suitable for preclinical training. The purpose of this review is to provide an overview of current dental simulators on related technologies, advantages and disadvantages, methods of evaluating effectiveness, and future directions for development.
Affiliation(s)
- Yaning Li: Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; NHC Key Laboratory of Digital Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China
- Hongqiang Ye: Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; NHC Key Laboratory of Digital Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China
- Fan Ye: The State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China
- Yunsong Liu: Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; NHC Key Laboratory of Digital Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China
- Longwei Lv: Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; NHC Key Laboratory of Digital Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China
- Ping Zhang: Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; NHC Key Laboratory of Digital Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China
- Xiao Zhang: Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; NHC Key Laboratory of Digital Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China
- Yongsheng Zhou: Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China; National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, Beijing, China; National Engineering Laboratory for Digital and Material Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing, China; NHC Key Laboratory of Digital Technology of Stomatology, Peking University School and Hospital of Stomatology, Beijing, China
167
Li Y, Wang Y, Qi H, Hu Z, Chen Z, Yang R, Qiao H, Sun J, Wang T, Zhao X, Guo H, Chen H. Deep learning-enhanced T1 mapping with spatial-temporal and physical constraint. Magn Reson Med 2021; 86:1647-1661. [PMID: 33821529] [DOI: 10.1002/mrm.28793]
Abstract
PURPOSE To propose a reconstruction framework to generate accurate T1 maps for a fast MR T1 mapping sequence. METHODS A deep learning-enhanced T1 mapping method with spatial-temporal and physical constraint (DAINTY) was proposed. This method explicitly imposed low-rank and sparsity constraints on the multiframe T1-weighted images to exploit the spatial-temporal correlation. A deep neural network was used to efficiently perform T1 mapping as well as denoise and reduce undersampling artifacts. Additionally, the physical constraint was used to build a bridge between the low-rank and sparsity constraints and the deep learning prior, so that the benefits of constrained reconstruction and deep learning are both available. The DAINTY method was trained on simulated brain data sets but tested on real acquired phantom data, 6 healthy volunteers, and 7 atherosclerosis patients, and compared with the narrow-band k-space-weighted image contrast filter conjugate-gradient SENSE (NK-CS) method, the kt-sparse-SENSE (kt-SS) method, and the low-rank plus sparsity (L+S) method with least-squares T1 fitting and direct deep learning mapping. RESULTS The DAINTY method can generate more accurate T1 maps and higher-quality T1-weighted images compared with other methods. For atherosclerosis patients, intraplaque hemorrhage can be successfully detected. The computation speed of DAINTY was 10 times faster than traditional methods. Meanwhile, DAINTY can reconstruct images with comparable quality using only 50% of the k-space data. CONCLUSION The proposed method can provide accurate T1 maps and good-quality T1-weighted images with high efficiency.
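One standard way to impose the low-rank constraint on the stack of multiframe T1-weighted images is singular value thresholding of the space-time (Casorati) matrix. The NumPy sketch below is a generic illustration of that single step, not the DAINTY implementation, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def singular_value_threshold(frames, tau=5.0):
    """Soft-threshold the singular values of the Casorati matrix
    (pixels x frames) to enforce spatial-temporal low-rankness."""
    n_frames, ny, nx = frames.shape
    casorati = frames.reshape(n_frames, -1).T           # (ny*nx, n_frames)
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                          # soft thresholding
    low_rank = (u * s) @ vh
    return low_rank.T.reshape(n_frames, ny, nx)

frames = np.random.rand(8, 64, 64)                        # multiframe T1-weighted images
print(singular_value_threshold(frames).shape)             # (8, 64, 64)
```

In a full reconstruction, a step like this alternates with sparsity, data-consistency, and (here) network-based updates, with the physical relaxation model linking the frames to the final T1 map.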
Affiliation(s)
- Yuze Li: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Yajie Wang: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Haikun Qi: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Zhangxuan Hu: GE Healthcare, MR Research China, Beijing, China
- Zhensen Chen: Vascular Imaging Lab and BioMolecular Imaging Center, Department of Radiology, University of Washington, Seattle, Washington, USA
- Runyu Yang: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Huiyu Qiao: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Jie Sun: GE Healthcare, MR Research China, Beijing, China
- Tao Wang: Department of Neurology, Peking University Third Hospital, Beijing, China
- Xihai Zhao: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hua Guo: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Huijun Chen: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
168
Lin DJ, Johnson PM, Knoll F, Lui YW. Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians. J Magn Reson Imaging 2021; 53:1015-1028. [PMID: 32048372] [PMCID: PMC7423636] [DOI: 10.1002/jmri.27078]
Abstract
Artificial intelligence (AI) shows tremendous promise in the field of medical imaging, with recent breakthroughs applying deep-learning models for data acquisition, classification problems, segmentation, image synthesis, and image reconstruction. With an eye towards clinical applications, we summarize the active field of deep-learning-based MR image reconstruction. We review the basic concepts of how deep-learning algorithms aid in the transformation of raw k-space data to image data, and specifically examine accelerated imaging and artifact suppression. Recent efforts in these areas show that deep-learning-based algorithms can match and, in some cases, eclipse conventional reconstruction methods in terms of image quality and computational efficiency across a host of clinical imaging applications, including musculoskeletal, abdominal, cardiac, and brain imaging. This article is an introductory overview aimed at clinical radiologists with no experience in deep-learning-based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
Affiliation(s)
- Dana J. Lin: Department of Radiology, NYU School of Medicine / NYU Langone Health
- Florian Knoll: New York University School of Medicine, Center for Biomedical Imaging
- Yvonne W. Lui: Department of Radiology, NYU School of Medicine / NYU Langone Health
169
Zhang Y, She H, Du YP. Dynamic MRI of the abdomen using parallel non-Cartesian convolutional recurrent neural networks. Magn Reson Med 2021; 86:964-973. [PMID: 33749023] [DOI: 10.1002/mrm.28774]
Abstract
PURPOSE To improve the image quality and reduce the computational time for the reconstruction of undersampled non-Cartesian abdominal dynamic parallel MR data using a deep learning approach. METHODS An algorithm of parallel non-Cartesian convolutional recurrent neural networks (PNCRNNs) was developed to enable the use of the redundant information in both the spatial and temporal domains, and to achieve data fidelity for the reconstruction of non-Cartesian parallel MR data. The performance of PNCRNNs was evaluated for various acceleration rates, motion patterns, and imaging applications in comparison with state-of-the-art dynamic imaging algorithms, including extra-dimensional golden-angle radial sparse parallel MRI (XD-GRASP), low-rank plus sparse matrix decomposition (L+S), blind compressive sensing (BCS), and 3D convolutional neural networks (3D CNNs). RESULTS PNCRNNs increased the peak SNR by 9.07 dB compared with XD-GRASP, 9.26 dB compared with L+S, 3.48 dB compared with BCS, and 3.14 dB compared with 3D CNN at R = 16. The reconstruction time was 18 ms for each bin, which was two orders of magnitude faster than XD-GRASP, L+S, and BCS. PNCRNNs provided good reconstruction for various motion patterns, k-space trajectories, and imaging applications. CONCLUSION The proposed PNCRNN provides substantial improvement of the image quality for dynamic golden-angle radial imaging of the abdomen in comparison with XD-GRASP, L+S, BCS, and 3D CNN. The reconstruction time of PNCRNN can be as fast as 50 bins per second, due to the use of the highly computationally efficient Toeplitz approach.
Affiliation(s)
- Yufei Zhang: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Huajun She: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiping P Du: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
170
Su T, Deng X, Yang J, Wang Z, Fang S, Zheng H, Liang D, Ge Y. DIR-DBTnet: Deep iterative reconstruction network for three-dimensional digital breast tomosynthesis imaging. Med Phys 2021; 48:2289-2300. [PMID: 33594671] [DOI: 10.1002/mp.14779]
Abstract
PURPOSE The goal of this study is to develop a three-dimensional (3D) iterative reconstruction framework based on the deep learning (DL) technique to improve digital breast tomosynthesis (DBT) imaging performance. METHODS In this work, the DIR-DBTnet is developed for DBT image reconstruction by mapping the conventional iterative reconstruction (IR) algorithm to a deep neural network. By design, the DIR-DBTnet learns and optimizes the regularizer and the iteration parameters automatically during network training with a large amount of simulated DBT data. Numerical, experimental, and clinical data are used to evaluate its performance. Quantitative metrics such as the artifact spread function (ASF), breast density, and the signal difference to noise ratio (SDNR) are measured to assess the image quality. RESULTS Results show that the proposed DIR-DBTnet is able to reduce the in-plane shadow artifacts and the out-of-plane signal leaking artifacts compared to the filtered backprojection (FBP) and the total variation (TV)-based IR methods. Quantitatively, the full width at half maximum (FWHM) of the measured ASF from the clinical data is 27.1% and 23.0% smaller than that obtained with the FBP and TV methods, respectively, while the SDNR is increased by 194.5% and 21.8%, respectively. In addition, the breast density obtained from the DIR-DBTnet network is more accurate and consistent with the ground truth. CONCLUSIONS In conclusion, a deep iterative reconstruction network, DIR-DBTnet, has been proposed for 3D DBT image reconstruction. Both qualitative and quantitative analyses of the numerical, experimental, and clinical results demonstrate that the DIR-DBTnet achieves superior DBT imaging performance compared with the conventional algorithms.
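Mapping an iterative reconstruction to a network typically means unrolling a fixed number of iterations, each combining a physics-based data term with a learned regularizer whose parameters (and the step size) are trained end to end. Below is a minimal PyTorch sketch of one such unrolled iteration for a generic linear forward operator; the downsampling operator stands in for the DBT projector, and everything here is an illustrative assumption rather than the DIR-DBTnet architecture.

```python
import torch
import torch.nn as nn

class UnrolledIteration(nn.Module):
    """x_{k+1} = x_k - step * A^T (A x_k - y) - R(x_k), with a learned step
    size and a small CNN acting as the learned regularizer R."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))
        self.reg = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x, y, A, AT):
        grad = AT(A(x) - y)                      # gradient of the data-fidelity term
        return x - self.step * grad - self.reg(x)

# Toy forward model: 2x downsampling as a stand-in for the projection operator.
A = lambda x: nn.functional.avg_pool2d(x, 2)
AT = lambda r: nn.functional.interpolate(r, scale_factor=2, mode="nearest")

y = torch.randn(1, 1, 32, 32)                    # measured projections (toy data)
x = AT(y)                                        # initial estimate
blocks = nn.ModuleList([UnrolledIteration() for _ in range(4)])   # 4 unrolled iterations
for block in blocks:
    x = block(x, y, A, AT)
print(x.shape)
```

Training such a network end to end on paired simulated data is what replaces the hand-tuned regularizer and iteration parameters of a conventional IR algorithm.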
Affiliation(s)
- Ting Su: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xiaolei Deng: College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Jiecheng Yang: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhenwei Wang: Shanghai United Imaging Healthcare Co, Ltd, Shanghai, 201807, China
- Shibo Fang: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yongshuai Ge: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
171
Mauer MAD, Well EJV, Herrmann J, Groth M, Morlock MM, Maas R, Säring D. Automated age estimation of young individuals based on 3D knee MRI using deep learning. Int J Legal Med 2021; 135:649-663. [PMID: 33331995] [PMCID: PMC7870623] [DOI: 10.1007/s00414-020-02465-z]
Abstract
Age estimation is a crucial element of forensic medicine for assessing the chronological age of living individuals who lack valid legal documentation. Methods used in practice are labor-intensive, subjective, and frequently involve radiation exposure. Recently, non-invasive methods using magnetic resonance imaging (MRI) have also been evaluated and have confirmed a correlation between growth plate ossification in long bones and the chronological age of young subjects. However, automated and user-independent approaches are required to perform reliable assessments on large datasets. The aim of this study was to develop a fully automated, computer-based method for age estimation based on 3D knee MRIs using machine learning. The proposed solution is based on three parts: image preprocessing, bone segmentation, and age estimation. A total of 185 coronal and 404 sagittal MR volumes from Caucasian male subjects in the age range of 13 to 21 years were available. The best result of the fivefold cross-validation was a mean absolute error of 0.67 ± 0.49 years in age regression and an accuracy of 90.9%, a sensitivity of 88.6%, and a specificity of 94.2% in classification (18-year age limit) using a combination of convolutional neural networks and tree-based machine learning algorithms. The potential of deep learning for age estimation is reflected in the results and can be further improved if it is trained on even larger and more diverse datasets.
Collapse
Affiliation(s)
- Markus Auf der Mauer
- Medical and Industrial Image Processing, University of Applied Sciences of Wedel, Feldstraße 143, 22880 Wedel, Germany
| | - Eilin Jopp-van Well
- Department of Legal Medicine, University Medical Center Hamburg-Eppendorf (UKE), Butenfeld 34, 22529 Hamburg, Germany
| | - Jochen Herrmann
- Section of Pediatric Radiology, Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf (UKE), Martinistr. 52, 20246 Hamburg, Germany
| | - Michael Groth
- Section of Pediatric Radiology, Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf (UKE), Martinistr. 52, 20246 Hamburg, Germany
| | - Michael M. Morlock
- Institute of Biomechanics M3, Hamburg University of Technology (TUHH), Denickestraße 15, 21073 Hamburg, Germany
| | - Rainer Maas
- Radiologie Raboisen 38, Raboisen 38, 20095 Hamburg, Germany
| | - Dennis Säring
- Medical and Industrial Image Processing, University of Applied Sciences of Wedel, Feldstraße 143, 22880 Wedel, Germany
| |
Collapse
|
172
|
Gong Y, Shan H, Teng Y, Tu N, Li M, Liang G, Wang G, Wang S. Parameter-Transferred Wasserstein Generative Adversarial Network (PT-WGAN) for Low-Dose PET Image Denoising. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:213-223. [PMID: 35402757 PMCID: PMC8993163 DOI: 10.1109/trpms.2020.3025071] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/27/2023]
Abstract
Due to the widespread use of positron emission tomography (PET) in clinical practice, the potential risk of PET-associated radiation dose to patients needs to be minimized. However, with the reduction in the radiation dose, the resultant images may suffer from noise and artifacts that compromise diagnostic performance. In this paper, we propose a parameter-transferred Wasserstein generative adversarial network (PT-WGAN) for low-dose PET image denoising. The contributions of this paper are twofold: i) a PT-WGAN framework is designed to denoise low-dose PET images without compromising structural details, and ii) a task-specific initialization based on transfer learning is developed to train PT-WGAN using trainable parameters transferred from a pretrained model, which significantly improves the training efficiency of PT-WGAN. The experimental results on clinical data show that the proposed network can suppress image noise more effectively while preserving better image fidelity than recently published state-of-the-art methods. We make our code available at https://github.com/90n9-yu/PT-WGAN.
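The task-specific initialization described above can be illustrated with a short, hedged sketch: copy trainable parameters from a pretrained generator into the adversarially trained generator before fine-tuning. Module names and sizes here are hypothetical; the authors' actual implementation is in the linked repository.

```python
# Illustrative sketch of parameter transfer (PyTorch); the generator here is a
# stand-in. See the authors' repository for the actual PT-WGAN code.
import torch
import torch.nn as nn

def make_generator():
    # Stand-in denoising generator; the real one is a deeper CNN.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )

# 1) Pretrain a generator on a related task (e.g., supervised denoising).
pretrained = make_generator()
# ... pretraining loop omitted ...
torch.save(pretrained.state_dict(), "pretrained_generator.pt")

# 2) Initialize the adversarial generator with the transferred parameters,
#    then fine-tune it within the WGAN framework on low-dose PET data.
generator = make_generator()
state = torch.load("pretrained_generator.pt")
missing, unexpected = generator.load_state_dict(state, strict=False)
print("transferred; missing:", missing, "unexpected:", unexpected)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
# ... adversarial training (critic updates, generator updates) continues here ...
```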
Collapse
Affiliation(s)
- Yu Gong
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, and Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China, and the Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China
| | - Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, and the Key Laboratory of Intelligent Computing in Medical Images, Ministry of Education, Shenyang 110169, China
| | - Ning Tu
- PET-CT/MRI Center and Molecular Imaging Center, Wuhan University Renmin Hospital, Wuhan, 430060, China
| | - Ming Li
- Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
| | - Guodong Liang
- Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
| | - Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180 USA
| | - Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| |
Collapse
|
173
|
Han Y, Jang J, Cha E, Lee J, Chung H, Jeong M, Kim TG, Chae BG, Kim HG, Jun S, Hwang S, Lee E, Ye JC. Deep learning STEM-EDX tomography of nanocrystals. NAT MACH INTELL 2021. [DOI: 10.1038/s42256-020-00289-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
174
|
Zhang Y, Jiang K, Jiang W, Wang N, Wright AJ, Liu A, Wang J. Multi-task convolutional neural network-based design of radio frequency pulse and the accompanying gradients for magnetic resonance imaging. NMR IN BIOMEDICINE 2021; 34:e4443. [PMID: 33200468 DOI: 10.1002/nbm.4443] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/23/2020] [Revised: 10/21/2020] [Accepted: 10/21/2020] [Indexed: 06/11/2023]
Abstract
Modern MRI systems usually load predesigned RF pulses and the accompanying gradients during clinical scans, with minimal adaptation to the specific requirements of each scan. Here, we describe a neural network-based method for real-time design of excitation RF pulses and the accompanying gradient waveforms to achieve spatially two-dimensional selectivity. Nine thousand sets of radio frequency (RF) and gradient waveforms with two-dimensional spatial selectivity were generated as the training dataset using the Shinnar-Le Roux (SLR) method. Neural networks were created and trained with five strategies (TS-1 to TS-5). The neural network-designed RF and gradients were compared with their SLR-designed counterparts and underwent Bloch simulation and phantom imaging to investigate their performance in spin manipulation. We demonstrate a convolutional neural network (TS-5) with multi-task learning that yields both the RF pulse and the accompanying two channels of gradient waveforms that comply with the SLR design; these design results also provide excitation spatial profiles comparable with SLR pulses in both simulation (normalized root mean square error [NRMSE] of 0.0075 ± 0.0038 over the 400 sets of testing data between TS-5 and SLR) and phantom imaging. The output RF and gradient waveforms of the neural network and SLR methods were also compared, and the joint NRMSE, with both the RF and the two channels of gradient waveforms considered, was 0.0098 ± 0.0024 between TS-5 and SLR. The RF and gradients were generated on a commercially available workstation, which took ~130 ms for TS-5. In conclusion, we present a convolutional neural network with multi-task learning, trained with SLR transformation pairs, that is capable of simultaneously generating the RF pulse and two channels of gradient waveforms, given the desired spatially two-dimensional excitation profiles.
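As a rough illustration of the multi-task design described above, the sketch below shows a shared encoder that maps a desired 2D excitation profile to three output heads (RF waveform plus two gradient channels). All layer names and sizes are illustrative assumptions, not the network reported in the paper.

```python
# Minimal multi-task sketch (PyTorch): one shared encoder maps a desired 2D
# excitation profile to three output waveforms (RF, Gx, Gy). Sizes are
# illustrative, not those used in the paper.
import torch
import torch.nn as nn

class MultiTaskPulseNet(nn.Module):
    def __init__(self, n_time=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 512), nn.ReLU(),
        )
        # Separate heads share the trunk (multi-task learning).
        self.rf_head = nn.Linear(512, 2 * n_time)   # complex RF as (real, imag)
        self.gx_head = nn.Linear(512, n_time)
        self.gy_head = nn.Linear(512, n_time)

    def forward(self, profile):                      # profile: (N, 1, 64, 64)
        z = self.encoder(profile)
        return self.rf_head(z), self.gx_head(z), self.gy_head(z)

net = MultiTaskPulseNet()
target_profile = torch.rand(4, 1, 64, 64)            # desired 2D selectivity
rf, gx, gy = net(target_profile)
print(rf.shape, gx.shape, gy.shape)                   # supervised against SLR designs
```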
Collapse
Affiliation(s)
- Yajing Zhang
- MR Clinical Science, Philips Healthcare (Suzhou), Suzhou, China
| | - Ke Jiang
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China
| | - Weiwei Jiang
- MR Clinical Science, Philips Healthcare (Suzhou), Suzhou, China
| | - Nan Wang
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
| | - Alan J Wright
- Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Cambridge, UK
| | - Ailian Liu
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
| | - Jiazheng Wang
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China
| |
Collapse
|
175
|
Zhao D, Huang Y, Zhao F, Qin B, Zheng J. Reference-Driven Undersampled MR Image Reconstruction Using Wavelet Sparsity-Constrained Deep Image Prior. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:8865582. [PMID: 33552232 PMCID: PMC7846397 DOI: 10.1155/2021/8865582] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Revised: 12/17/2020] [Accepted: 12/31/2020] [Indexed: 11/29/2022]
Abstract
Deep learning has shown potential to significantly improve performance for undersampled magnetic resonance (MR) image reconstruction. However, one challenge for the application of deep learning to clinical scenarios is the requirement of large, high-quality, patient-based datasets for network training. In this paper, we propose a novel deep learning-based method for undersampled MR image reconstruction that does not require a pre-training procedure or pre-training datasets. The proposed reference-driven method using a wavelet sparsity-constrained deep image prior (RWS-DIP) is based on the DIP framework and thereby reduces the dependence on datasets. Moreover, RWS-DIP explores and introduces structure and sparsity priors into network learning to improve the efficiency of learning. By employing a high-resolution reference image as the network input, RWS-DIP incorporates structural information into the network. RWS-DIP also uses wavelet sparsity to further enrich the implicit regularization of traditional DIP by formulating the training of the network parameters as a constrained optimization problem, which is solved using the alternating direction method of multipliers (ADMM) algorithm. Experiments on in vivo MR scans have demonstrated that the RWS-DIP method can reconstruct MR images more accurately and preserve features and textures from undersampled k-space measurements.
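One ingredient of the ADMM splitting mentioned above can be sketched in isolation: the wavelet-sparsity subproblem reduces to soft-thresholding of wavelet coefficients. The snippet below shows only that proximal step (using PyWavelets), under assumed wavelet and threshold choices; it is not the full RWS-DIP algorithm.

```python
# One ingredient of the ADMM splitting, sketched in isolation: the z-update is a
# soft-thresholding of wavelet coefficients (the proximal operator of the l1
# wavelet penalty). This is not the full RWS-DIP algorithm.
import numpy as np
import pywt

def wavelet_soft_threshold(image, thresh, wavelet="db4", level=3):
    """prox of thresh*||W x||_1: soft-threshold the detail coefficients."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    shrunk = [approx]
    for (cH, cV, cD) in details:
        shrunk.append(tuple(pywt.threshold(c, thresh, mode="soft")
                            for c in (cH, cV, cD)))
    return pywt.waverec2(shrunk, wavelet=wavelet)

# Example: shrinkage of a noisy 2D array (a stand-in for the network output
# plus the scaled ADMM dual variable).
x = np.random.randn(128, 128)
z = wavelet_soft_threshold(x, thresh=0.5)
print(z.shape)
```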
Collapse
Affiliation(s)
- Di Zhao
- Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
- School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
| | - Yanhu Huang
- School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
| | - Feng Zhao
- Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
| | - Binyi Qin
- Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
- School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
| | - Jincun Zheng
- Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
- School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
| |
Collapse
|
176
|
Zhou W, Du H, Mei W, Fang L. Efficient structurally-strengthened generative adversarial network for MRI reconstruction. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.09.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
177
|
Cha E, Chung H, Kim EY, Ye JC. Unpaired Training of Deep Learning tMRA for Flexible Spatio-Temporal Resolution. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:166-179. [PMID: 32915733 DOI: 10.1109/tmi.2020.3023620] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Time-resolved MR angiography (tMRA) has been widely used for dynamic contrast enhanced MRI (DCE-MRI) due to its highly accelerated acquisition. In tMRA, the periphery of the k-space data is sparsely sampled so that neighbouring frames can be merged to construct one temporal frame. However, this view-sharing scheme fundamentally limits the temporal resolution, and it is not possible to change the view-sharing number to achieve different spatio-temporal resolution trade-offs. Although many deep learning approaches have been recently proposed for MR reconstruction from sparse samples, the existing approaches usually require matched fully sampled k-space reference data for supervised training, which is not suitable for tMRA due to the lack of high spatio-temporal resolution ground-truth images. To address this problem, here we propose a novel unpaired training scheme for deep learning using an optimal transport driven cycle-consistent generative adversarial network (cycleGAN). In contrast to the conventional cycleGAN with two pairs of generator and discriminator, the new architecture requires just a single pair of generator and discriminator, which makes the training much simpler but still improves the performance. Reconstruction results using in vivo tMRA and simulation data sets confirm that the proposed method can immediately generate high quality reconstruction results at various choices of view-sharing numbers, allowing us to exploit a better trade-off between spatial and temporal resolution in time-resolved MR angiography.
Collapse
|
178
|
Davendralingam N, Sebire NJ, Arthurs OJ, Shelmerdine SC. Artificial intelligence in paediatric radiology: Future opportunities. Br J Radiol 2021; 94:20200975. [PMID: 32941736 PMCID: PMC7774693 DOI: 10.1259/bjr.20200975] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Accepted: 09/04/2020] [Indexed: 12/13/2022] Open
Abstract
Artificial intelligence (AI) has received widespread and growing interest in healthcare as a method to save time and cost and to improve efficiency. The high performance statistics and diagnostic accuracies reported by using AI algorithms (with respect to predefined reference standards), particularly from image pattern recognition studies, have resulted in extensive applications proposed for clinical radiology, especially for enhanced image interpretation. Whilst certain subspecialty areas in radiology, such as those relating to cancer screening, have received widespread attention in the media and scientific community, children's imaging has hitherto been neglected. In this article, we discuss a variety of possible 'use cases' in paediatric radiology from a patient pathway perspective where AI has either been implemented or shown early-stage feasibility, while also taking inspiration from the adult literature to propose potential areas for future development. We aim to demonstrate how a 'future, enhanced paediatric radiology service' could operate and to stimulate further discussion with avenues for research.
Collapse
Affiliation(s)
- Natasha Davendralingam
- Department of Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK
| | | | | | | |
Collapse
|
179
|
Ran M, Xia W, Huang Y, Lu Z, Bao P, Liu Y, Sun H, Zhou J, Zhang Y. MD-Recon-Net: A Parallel Dual-Domain Convolutional Neural Network for Compressed Sensing MRI. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.2991877] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
180
|
Zhou W, Du H, Mei W, Fang L. Spatial orthogonal attention generative adversarial network for MRI reconstruction. Med Phys 2020; 48:627-639. [PMID: 33111361 DOI: 10.1002/mp.14509] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 07/12/2020] [Accepted: 08/24/2020] [Indexed: 11/08/2022] Open
Abstract
PURPOSE Recent studies have shown that self-attention modules can better solve vision understanding problems by capturing long-range dependencies. However, very few works have designed a lightweight self-attention module to improve the quality of MRI reconstruction. Furthermore, several important self-attention modules (e.g., the non-local block) cause high computational complexity and require a huge amount of GPU memory when the size of the input feature is large. The purpose of this study is to design a lightweight yet effective spatial orthogonal attention module (SOAM) to capture long-range dependencies, and to develop a novel spatial orthogonal attention generative adversarial network, termed SOGAN, to achieve more accurate MRI reconstruction. METHODS We first develop a lightweight SOAM, which generates two small attention maps to effectively aggregate the long-range contextual information in the vertical and horizontal directions, respectively. Then, we embed the proposed SOAMs into the concatenated convolutional autoencoders to form the generator of the proposed SOGAN. RESULTS The experimental results demonstrate that the proposed SOAMs effectively improve the quality of the reconstructed MR images by capturing long-range dependencies. Besides, compared with state-of-the-art deep learning-based CS-MRI methods, the proposed SOGAN reconstructs MR images more accurately, but with fewer model parameters. CONCLUSIONS The proposed SOAM is a lightweight yet effective self-attention module for capturing long-range dependencies and can thus improve the quality of MRI reconstruction to a large extent. Besides, with the help of SOAMs, the proposed SOGAN outperforms the state-of-the-art deep learning-based CS-MRI methods.
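The following is one possible reading of the directional attention idea described above: self-attention is computed separately along rows and columns, so the attention maps are only W×W and H×H rather than (HW)×(HW) as in the non-local block. This is an interpretation with assumed layer names and sizes, not the authors' exact SOAM.

```python
# A sketch of directional self-attention in the spirit of SOAM (an
# interpretation, not the authors' exact module): long-range context is
# aggregated separately along the horizontal and vertical axes, so the two
# attention maps are only WxW and HxH instead of the (HW)x(HW) non-local map.
import torch
import torch.nn as nn

class DirectionalAttention(nn.Module):
    def __init__(self, channels, reduced=8):
        super().__init__()
        self.q = nn.Conv2d(channels, reduced, 1)
        self.k = nn.Conv2d(channels, reduced, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))    # residual scaling

    @staticmethod
    def _axis_attention(q, k, v):
        # q, k: (B, L, Cr); v: (B, L, C); attention over the length-L axis.
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v                               # (B, L, C)

    def forward(self, x):                             # x: (N, C, H, W)
        n, c, h, w = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Horizontal: attend across W within each row.
        qh = q.permute(0, 2, 3, 1).reshape(n * h, w, -1)
        kh = k.permute(0, 2, 3, 1).reshape(n * h, w, -1)
        vh = v.permute(0, 2, 3, 1).reshape(n * h, w, -1)
        oh = self._axis_attention(qh, kh, vh).reshape(n, h, w, c).permute(0, 3, 1, 2)
        # Vertical: attend across H within each column.
        qv = q.permute(0, 3, 2, 1).reshape(n * w, h, -1)
        kv = k.permute(0, 3, 2, 1).reshape(n * w, h, -1)
        vv = v.permute(0, 3, 2, 1).reshape(n * w, h, -1)
        ov = self._axis_attention(qv, kv, vv).reshape(n, w, h, c).permute(0, 3, 2, 1)
        return x + self.gamma * (oh + ov)

x = torch.randn(2, 32, 64, 48)
print(DirectionalAttention(32)(x).shape)              # torch.Size([2, 32, 64, 48])
```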
Collapse
Affiliation(s)
- Wenzhong Zhou
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Huiqian Du
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Wenbo Mei
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Liping Fang
- School of Mathematics and Statistics, Beijing Institute of Technology, Beijing, 100081, China
| |
Collapse
|
181
|
Oh C, Kim D, Chung JY, Han Y, Park H. A k-space-to-image reconstruction network for MRI using recurrent neural network. Med Phys 2020; 48:193-203. [PMID: 33128235 DOI: 10.1002/mp.14566] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 10/06/2020] [Accepted: 10/23/2020] [Indexed: 11/08/2022] Open
Abstract
PURPOSE Reconstructing images from undersampled k-space data is an ill-posed inverse problem. As a solution to this problem, we propose a method to reconstruct magnetic resonance (MR) images directly from k-space data using a recurrent neural network. METHODS A novel neural network architecture named "ETER-net" is developed as a unified solution to reconstruct MR images from undersampled k-space data, where two bi-RNNs and a convolutional neural network (CNN) are utilized to perform domain transformation and de-aliasing. To demonstrate the practicality of the proposed method, we conducted model optimization, cross-validation, and network pruning using in-house data from a 3T MRI scanner and the public dataset "fastMRI." RESULTS The experimental results showed that the proposed method could be utilized for accurate image reconstruction from undersampled k-space data. The size of the proposed model was optimized and cross-validation was performed to show the robustness of the proposed method. For the in-house dataset (R = 4), the proposed method provided nMSE = 1.09% and SSIM = 0.938. For the "fastMRI" dataset, the proposed method provided nMSE = 1.05% and SSIM = 0.931 for R = 4, and nMSE = 3.12% and SSIM = 0.884 for R = 8. The performance of the pruned model trained with a loss function that includes L2 regularization was consistent for pruning ratios of up to 70%. CONCLUSIONS The proposed method is an end-to-end MR image reconstruction method based on recurrent neural networks. It performs a direct mapping from the input k-space data to the reconstructed images, operating as a unified solution that is applicable to various scanning trajectories.
Collapse
Affiliation(s)
- Changheun Oh
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea.,Gachon University, 191 Hambakmoe-ro, Yeonsu-gu, Incheon, 21565, Republic of Korea
| | - Dongchan Kim
- Gachon University, 191 Hambakmoe-ro, Yeonsu-gu, Incheon, 21565, Republic of Korea
| | - Jun-Young Chung
- Gachon University, 191 Hambakmoe-ro, Yeonsu-gu, Incheon, 21565, Republic of Korea
| | - Yeji Han
- Gachon University, 191 Hambakmoe-ro, Yeonsu-gu, Incheon, 21565, Republic of Korea
| | - HyunWook Park
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
| |
Collapse
|
182
|
|
183
|
Xie N, Gong K, Guo N, Qin Z, Wu Z, Liu H, Li Q. Penalized-likelihood PET Image Reconstruction Using 3D Structural Convolutional Sparse Coding. IEEE Trans Biomed Eng 2020; 69:4-14. [PMID: 33284746 DOI: 10.1109/tbme.2020.3042907] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Positron emission tomography (PET) is widely used for clinical diagnosis. As PET suffers from low resolution and high noise, numerous efforts have been made to incorporate anatomical priors into PET image reconstruction, especially with the development of hybrid PET/CT and PET/MRI systems. In this work, we propose a cube-based 3D structural convolutional sparse coding (CSC) concept for penalized-likelihood PET image reconstruction, named 3D PET-CSC. The proposed 3D PET-CSC takes advantage of the convolutional operation and manages to incorporate anatomical priors without the need for registration or supervised training. As 3D PET-CSC codes the whole 3D PET image, instead of patches, it alleviates the staircase artifacts commonly present in traditional patch-based sparse coding methods. Compared with traditional coding methods in the Fourier domain, the proposed method extends 3D CSC to a straightforward approach based on the pursuit of localized cubes. Moreover, we developed residual-image and ordered-subset mechanisms to further reduce the computational cost and accelerate the convergence of the proposed 3D PET-CSC method. Experiments based on computer simulations and clinical datasets demonstrate the superiority of 3D PET-CSC compared with other reference methods.
Collapse
|
184
|
Ke Z, Cheng J, Ying L, Zheng H, Zhu Y, Liang D. An unsupervised deep learning method for multi-coil cine MRI. Phys Med Biol 2020; 65:235041. [DOI: 10.1088/1361-6560/abaffa] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
185
|
Liu R, Zhang Y, Cheng S, Luo Z, Fan X. A Deep Framework Assembling Principled Modules for CS-MRI: Unrolling Perspective, Convergence Behaviors, and Practical Modeling. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4150-4163. [PMID: 32746155 DOI: 10.1109/tmi.2020.3014193] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Compressed Sensing Magnetic Resonance Imaging (CS-MRI) significantly accelerates MR acquisition at a sampling rate much lower than the Nyquist criterion. A major challenge for CS-MRI lies in solving the severely ill-posed inverse problem to reconstruct aliasing-free MR images from the sparse k-space data. Conventional methods typically optimize an energy function and produce restorations of high quality, but their iterative numerical solvers unavoidably incur extremely large time consumption. Recent deep techniques provide fast restoration by either learning a direct prediction of the final reconstruction or plugging learned modules into the energy optimizer. Nevertheless, these data-driven predictors cannot guarantee that the reconstruction follows the principled constraints underlying the domain knowledge, so the reliability of their reconstruction process is questionable. In this paper, we propose a deep framework assembling principled modules for CS-MRI that fuses a learning strategy with the iterative solver of a conventional reconstruction energy. This framework embeds an optimal condition checking mechanism, fostering efficient and reliable reconstruction. We also apply the framework to three practical tasks, i.e., complex-valued data reconstruction, parallel imaging, and reconstruction with Rician noise. Extensive experiments on both benchmark and manufacturer-testing images demonstrate that the proposed method reliably converges to the optimal solution more efficiently and accurately than the state-of-the-art in various scenarios.
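As a generic illustration of fusing a learned module into an iterative CS-MRI solver, the sketch below unrolls a proximal-gradient-style iteration with a small CNN as the learned step. It is a simplified stand-in under assumed operators (single-coil Cartesian sampling) and does not include the paper's optimal-condition checking mechanism.

```python
# Generic sketch of fusing a learned module into an iterative CS-MRI solver
# (unrolled proximal-gradient style); an illustration of the idea, not the
# authors' specific framework.
import torch
import torch.nn as nn

def fft2c(x):  return torch.fft.fft2(x, norm="ortho")
def ifft2c(k): return torch.fft.ifft2(k, norm="ortho")

class LearnedProx(nn.Module):
    """Small CNN acting as a data-driven proximal/denoising step (real+imag channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )
    def forward(self, x):                       # x: complex (N, H, W)
        t = torch.stack([x.real, x.imag], dim=1)
        t = t + self.net(t)                     # residual refinement
        return torch.complex(t[:, 0], t[:, 1])

def unrolled_pgd(y, mask, prox_modules, step=1.0):
    """y: undersampled k-space (N, H, W); mask: (N, H, W) binary sampling mask."""
    x = ifft2c(y)                               # zero-filled initialization
    for prox in prox_modules:
        grad = ifft2c(mask * (fft2c(x) - y))    # gradient of 0.5*||M F x - y||^2
        x = prox(x - step * grad)               # data consistency, then learned prox
    return x

mask = (torch.rand(1, 64, 64) > 0.7).float()
y = mask * torch.randn(1, 64, 64, dtype=torch.complex64)
stages = nn.ModuleList([LearnedProx() for _ in range(5)])
print(unrolled_pgd(y, mask, stages).shape)
```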
Collapse
|
186
|
Lu J, Millioz F, Garcia D, Salles S, Liu W, Friboulet D. Reconstruction for Diverging-Wave Imaging Using Deep Convolutional Neural Networks. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:2481-2492. [PMID: 32286972 DOI: 10.1109/tuffc.2020.2986166] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In recent years, diverging wave (DW) ultrasound imaging has become a very promising methodology for cardiovascular imaging due to its high temporal resolution. However, if they are limited in number, DW transmits provide lower image quality compared with classical focused schemes. A conventional reconstruction approach consists in summing series of ultrasound signals coherently, at the expense of frame rate, data volume, and computation time. To deal with this limitation, we propose a convolutional neural network (CNN) architecture, Inception for DW Network (IDNet), for high-quality reconstruction of DW ultrasound images using a small number of transmissions. In order to cope with the specificities induced by the sectorial geometry associated with DW imaging, we adopted the inception model composed of the concatenation of multiscale convolution kernels. Incorporating inception modules aims at capturing different image features with multiscale receptive fields. A mapping between low-quality images and corresponding high-quality compounded reconstruction was learned by training the network using in vitro and in vivo samples. The performance of the proposed approach was evaluated in terms of contrast ratio (CR), contrast-to-noise ratio (CNR), and lateral resolution (LR), and compared with standard compounding method and conventional CNN methods. The results demonstrated that our method could produce high-quality images using only 3 DWs, yielding an image quality equivalent to that obtained with compounding of 31 DWs and outperforming more conventional CNN architectures in terms of complexity, inference time, and image quality.
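A minimal sketch of the inception-style building block described above (parallel convolutions with different kernel sizes whose outputs are concatenated) is given below; channel counts are assumptions, and this is only a block, not the full IDNet.

```python
# Minimal inception-style block (PyTorch): parallel convolutions with different
# receptive fields are concatenated, as in the multiscale modules described
# above. This is only a building-block sketch, not the full IDNet.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU()

    def forward(self, x):
        # Multiscale receptive fields captured in parallel, then concatenated.
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

x = torch.randn(2, 8, 96, 96)        # e.g., low-quality DW input channels
print(InceptionBlock(8)(x).shape)    # torch.Size([2, 48, 96, 96])
```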
Collapse
|
187
|
Yaman B, Hosseini SAH, Moeller S, Ellermann J, Uğurbil K, Akçakaya M. Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med 2020; 84:3172-3191. [PMID: 32614100 PMCID: PMC7811359 DOI: 10.1002/mrm.28378] [Citation(s) in RCA: 127] [Impact Index Per Article: 25.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Revised: 05/21/2020] [Accepted: 05/22/2020] [Indexed: 12/25/2022]
Abstract
PURPOSE To develop a strategy for training a physics-guided MRI reconstruction neural network without a database of fully sampled data sets. METHODS Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions the available measurements into two disjoint sets, one of which is used in the data consistency (DC) units in the unrolled network and the other of which is used to define the loss for training. The proposed training without fully sampled data is compared with fully supervised training with ground-truth data, as well as conventional compressed-sensing and parallel imaging methods, using the publicly available fastMRI knee database. The same physics-guided neural network is used for both the proposed SSDU and supervised training. The SSDU training is also applied to prospectively two-fold accelerated high-resolution brain data sets at different acceleration rates, and compared with parallel imaging. RESULTS Results on five different knee sequences at an acceleration rate of 4 show that the proposed self-supervised approach performs comparably to supervised learning, while significantly outperforming conventional compressed-sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. The results on prospectively subsampled brain data sets, in which supervised learning cannot be used due to the lack of a ground-truth reference, show that the proposed self-supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at the acquisition acceleration. CONCLUSION The proposed SSDU approach allows training of physics-guided deep learning MRI reconstruction without fully sampled data, while achieving results comparable to those of supervised deep learning MRI trained on fully sampled data.
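The core self-supervised split can be sketched in a few lines: the acquired k-space locations are partitioned into two disjoint sets, one feeding the data-consistency units and one held out for the loss. The snippet below is a simplified illustration; the actual SSDU selection strategy (for example, how points are drawn) differs in detail.

```python
# Core idea of the self-supervised split, sketched with NumPy: the acquired
# k-space locations are partitioned into two disjoint sets, one used in the
# data-consistency units and one held out to define the training loss. The
# actual SSDU selection scheme differs in detail (e.g., how points are weighted).
import numpy as np

def split_acquired_mask(mask, loss_fraction=0.4, seed=0):
    """mask: binary sampling mask (H, W). Returns (dc_mask, loss_mask), disjoint."""
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(mask)
    n_loss = int(loss_fraction * acquired.size)
    loss_idx = rng.choice(acquired, size=n_loss, replace=False)
    loss_mask = np.zeros_like(mask)
    loss_mask.flat[loss_idx] = 1
    dc_mask = mask - loss_mask
    return dc_mask, loss_mask

mask = (np.random.rand(256, 256) < 0.25).astype(np.int64)   # toy sampling pattern
dc_mask, loss_mask = split_acquired_mask(mask)
assert np.all(dc_mask + loss_mask == mask)                   # disjoint union
# Training: the network sees k-space data masked by dc_mask; the loss compares
# its k-space output to the measurements at the loss_mask locations only.
```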
Collapse
Affiliation(s)
- Burhaneddin Yaman
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
| | - Seyed Amir Hossein Hosseini
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
| | - Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
| | - Jutta Ellermann
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
| | - Kâmil Uğurbil
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
| | - Mehmet Akçakaya
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
| |
Collapse
|
188
|
Shin D, Ji S, Lee D, Lee J, Oh SH, Lee J. Deep Reinforcement Learning Designed Shinnar-Le Roux RF Pulse Using Root-Flipping: DeepRF SLR. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4391-4400. [PMID: 32833629 DOI: 10.1109/tmi.2020.3018508] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
A novel approach applying deep reinforcement learning to RF pulse design is introduced. This method, referred to as DeepRFSLR, is designed to minimize the peak amplitude or, equivalently, minimize the pulse duration of a multiband refocusing pulse generated by the Shinnar-Le Roux (SLR) algorithm. In the method, the root pattern of the SLR polynomial, which determines the RF pulse shape, is optimized by iterative applications of deep reinforcement learning and greedy tree search. When tested on the design of multiband pulses with three and seven slices, DeepRFSLR demonstrated improved performance compared to conventional methods, generating shorter-duration RF pulses in shorter computational time. In the experiments, the RF pulse from DeepRFSLR produced a slice profile similar to that of the minimum-phase SLR RF pulse, and the profiles matched those of the computer simulation. Our approach suggests a new way of designing an RF pulse by applying a machine learning algorithm, demonstrating a "machine-designed" MRI sequence.
Collapse
|
189
|
Goyal M, Knackstedt T, Yan S, Hassanpour S. Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities. Comput Biol Med 2020; 127:104065. [PMID: 33246265 PMCID: PMC8290363 DOI: 10.1016/j.compbiomed.2020.104065] [Citation(s) in RCA: 125] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 10/15/2020] [Accepted: 10/15/2020] [Indexed: 01/13/2023]
Abstract
Recently, there has been great interest in developing Artificial Intelligence (AI) enabled computer-aided diagnostics solutions for the diagnosis of skin cancer. With the increasing incidence of skin cancers, low awareness among a growing population, and a lack of adequate clinical expertise and services, there is an immediate need for AI systems to assist clinicians in this domain. A large number of skin lesion datasets are available publicly, and researchers have developed AI solutions, particularly deep learning algorithms, to distinguish malignant skin lesions from benign lesions in different image modalities such as dermoscopic, clinical, and histopathology images. Despite the various claims of AI systems achieving higher accuracy than dermatologists in the classification of different skin lesions, these AI systems are still in the very early stages of clinical application in terms of being ready to aid clinicians in the diagnosis of skin cancers. In this review, we discuss advancements in the digital image-based AI solutions for the diagnosis of skin cancer, along with some challenges and future opportunities to improve these AI systems to support dermatologists and enhance their ability to diagnose skin cancer.
Collapse
Affiliation(s)
- Manu Goyal
- Department of Biomedical Data Science, Dartmouth College, Hanover, NH, USA.
| | - Thomas Knackstedt
- Department of Dermatology, Metrohealth System and School of Medicine, Case Western Reserve University, Cleveland, OH, USA
| | - Shaofeng Yan
- Section of Dermatopathology, Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Geisel School of Medicine at Dartmouth, Lebanon, NH, USA
| | - Saeed Hassanpour
- Departments of Biomedical Data Science, Computer Science, and Epidemiology, Dartmouth College, Hanover, NH, USA
| |
Collapse
|
190
|
Lv J, Wang P, Tong X, Wang C. Parallel imaging with a combination of sensitivity encoding and generative adversarial networks. Quant Imaging Med Surg 2020; 10:2260-2273. [PMID: 33269225 PMCID: PMC7596399 DOI: 10.21037/qims-20-518] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 09/04/2020] [Indexed: 12/26/2022]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) has the limitation of low imaging speed. Acceleration methods using under-sampled k-space data have been widely exploited to improve data acquisition without reducing the image quality. Sensitivity encoding (SENSE) is the most commonly used method for multi-channel imaging. However, SENSE has the drawback of severe g-factor artifacts when the under-sampling factor is high. This paper applies generative adversarial networks (GAN) to remove g-factor artifacts from SENSE reconstructions. METHODS Our method was evaluated on a public knee database containing 20 healthy participants. We compared our method with a conventional GAN using zero-filled (ZF) images as input. Structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and normalized mean square error (NMSE) were calculated for the assessment of image quality. A paired Student's t-test was conducted to compare the image quality metrics between the different methods. Statistical significance was considered at P<0.01. RESULTS The proposed method outperformed the SENSE, variational network (VN), and ZF + GAN methods in terms of SSIM (SENSE + GAN: 0.81±0.06, SENSE: 0.40±0.07, VN: 0.79±0.06, ZF + GAN: 0.77±0.06), PSNR (SENSE + GAN: 31.90±1.66, SENSE: 22.70±1.99, VN: 31.35±2.01, ZF + GAN: 29.95±1.59), and NMSE (×10⁻⁷) (SENSE + GAN: 0.95±0.34, SENSE: 4.81±1.33, VN: 0.97±0.30, ZF + GAN: 1.60±0.84) with an under-sampling factor of up to 6-fold. CONCLUSIONS This study demonstrated the feasibility of using GAN to improve the performance of SENSE reconstruction. The improvement in reconstruction is more pronounced at higher under-sampling rates, which shows great potential for many clinical applications.
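The evaluation described above (SSIM, PSNR, NMSE, and a paired t-test across cases) can be sketched as follows with scikit-image and SciPy; the data here are synthetic placeholders, not the study's knee database.

```python
# Sketch of the evaluation described above (SSIM, PSNR, NMSE and a paired
# t-test), using scikit-image and SciPy; illustrative only, not the authors'
# evaluation code.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
from scipy.stats import ttest_rel

def nmse(ref, rec):
    return np.sum((ref - rec) ** 2) / np.sum(ref ** 2)

# Hypothetical reference images and two competing reconstructions.
rng = np.random.default_rng(0)
refs = rng.random((10, 128, 128))
recon_a = refs + 0.01 * rng.standard_normal(refs.shape)   # e.g., SENSE + GAN
recon_b = refs + 0.05 * rng.standard_normal(refs.shape)   # e.g., plain SENSE

def scores(refs, recs):
    ssim = [structural_similarity(r, x, data_range=r.max() - r.min())
            for r, x in zip(refs, recs)]
    psnr = [peak_signal_noise_ratio(r, x, data_range=r.max() - r.min())
            for r, x in zip(refs, recs)]
    return np.array(ssim), np.array(psnr), np.array([nmse(r, x) for r, x in zip(refs, recs)])

ssim_a, psnr_a, nmse_a = scores(refs, recon_a)
ssim_b, psnr_b, nmse_b = scores(refs, recon_b)
t, p = ttest_rel(ssim_a, ssim_b)          # paired comparison across cases
print(f"SSIM A {ssim_a.mean():.3f} vs B {ssim_b.mean():.3f}, paired t-test p={p:.4f}")
```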
Collapse
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | - Peng Wang
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | - Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | - Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
| |
Collapse
|
191
|
Yuan Z, Jiang M, Wang Y, Wei B, Li Y, Wang P, Menpes-Smith W, Niu Z, Yang G. SARA-GAN: Self-Attention and Relative Average Discriminator Based Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. Front Neuroinform 2020; 14:611666. [PMID: 33324189 PMCID: PMC7726262 DOI: 10.3389/fninf.2020.611666] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 11/05/2020] [Indexed: 11/17/2022] Open
Abstract
Research on undersampled magnetic resonance image (MRI) reconstruction can increase the speed of MR imaging and reduce patient suffering. In this paper, an undersampled MRI reconstruction method based on Generative Adversarial Networks with a Self-Attention mechanism and a Relative Average discriminator (SARA-GAN) is proposed. In our SARA-GAN, relative average discriminator theory is applied to make full use of the prior knowledge that half of the input data of the discriminator is real and half is fake. At the same time, a self-attention mechanism is incorporated into the high-level layers of the generator to build long-range dependencies across the image, which overcomes the problem of limited convolution kernel size. Besides, spectral normalization is employed to stabilize the training process. Compared with three widely used GAN-based MRI reconstruction methods, i.e., DAGAN, DAWGAN, and DAWGAN-GP, the proposed method obtains a higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), and the details of the reconstructed images are more abundant and more realistic for further clinical scrutiny and diagnostic tasks.
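Two of the ingredients named above, the relative average discriminator loss and spectral normalization, can be sketched as follows. The critic here is a stand-in with assumed layer sizes; only the loss formulation and the use of spectral normalization follow the description.

```python
# Sketch of the relativistic average discriminator (RaD) losses and spectral
# normalization mentioned above; the critic is a stand-in, not SARA-GAN.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import spectral_norm

critic = nn.Sequential(                      # spectral norm stabilizes training
    spectral_norm(nn.Conv2d(1, 32, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(32, 64, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(64 * 16 * 16, 1)),
)

def rad_losses(c_real, c_fake):
    """Relativistic average losses: each sample is judged relative to the
    average critic score of the opposite class."""
    ones, zeros = torch.ones_like(c_real), torch.zeros_like(c_real)
    d_loss = F.binary_cross_entropy_with_logits(c_real - c_fake.mean(), ones) + \
             F.binary_cross_entropy_with_logits(c_fake - c_real.mean(), zeros)
    g_loss = F.binary_cross_entropy_with_logits(c_fake - c_real.mean(), ones) + \
             F.binary_cross_entropy_with_logits(c_real - c_fake.mean(), zeros)
    return d_loss, g_loss

real = torch.randn(8, 1, 64, 64)             # fully sampled reconstructions
fake = torch.randn(8, 1, 64, 64)             # generator outputs (stand-in)
d_loss, g_loss = rad_losses(critic(real), critic(fake))
print(d_loss.item(), g_loss.item())
```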
Collapse
Affiliation(s)
- Zhenmou Yuan
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
| | - Mingfeng Jiang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
| | - Yaming Wang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
| | - Bo Wei
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
| | - Yongming Li
- College of Communication Engineering, Chongqing University, Chongqing, China
| | - Pin Wang
- College of Communication Engineering, Chongqing University, Chongqing, China
| | | | - Zhangming Niu
- Aladdin Healthcare Technologies Ltd., London, United Kingdom
| | - Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, United Kingdom
- National Heart and Lung Institute, Imperial College London, London, United Kingdom
| |
Collapse
|
192
|
Fu Z, Mandava S, Keerthivasan MB, Li Z, Johnson K, Martin DR, Altbach MI, Bilgin A. A multi-scale residual network for accelerated radial MR parameter mapping. Magn Reson Imaging 2020; 73:152-162. [PMID: 32882339 PMCID: PMC7580302 DOI: 10.1016/j.mri.2020.08.013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 07/17/2020] [Accepted: 08/20/2020] [Indexed: 01/04/2023]
Abstract
A deep learning MR parameter mapping framework which combines accelerated radial data acquisition with a multi-scale residual network (MS-ResNet) for image reconstruction is proposed. The proposed supervised learning strategy uses input image patches from multi-contrast images with radial undersampling artifacts and target image patches from artifact-free multi-contrast images. Subspace filtering is used during pre-processing to denoise input patches. For each anatomy and relaxation parameter, an individual network is trained. in vivo T1 mapping results are obtained on brain and abdomen datasets and in vivo T2 mapping results are obtained on brain and knee datasets. Quantitative results for the T2 mapping of the knee show that MS-ResNet trained using either fully sampled or undersampled data outperforms conventional model-based compressed sensing methods. This is significant because obtaining fully sampled training data is not possible in many applications. in vivo brain and abdomen results for T1 mapping and in vivo brain results for T2 mapping demonstrate that MS-ResNet yields contrast-weighted images and parameter maps that are comparable to those achieved by model-based iterative methods while offering two orders of magnitude reduction in reconstruction times. The proposed approach enables recovery of high-quality contrast-weighted images and parameter maps from highly accelerated radial data acquisitions. The rapid image reconstructions enabled by the proposed approach makes it a good candidate for routine clinical use.
Collapse
Affiliation(s)
- Zhiyang Fu
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Sagar Mandava
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Mahesh B Keerthivasan
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Zhitao Li
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Kevin Johnson
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Diego R Martin
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
| | - Maria I Altbach
- Department of Medical Imaging, University of Arizona, Tucson, AZ, USA; Department of Biomedical Engineering, University of Arizona, Tucson, AZ, USA
| | - Ali Bilgin
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA; Department of Biomedical Engineering, University of Arizona, Tucson, AZ, USA.
| |
Collapse
|
193
|
Intracellular Sodium Changes in Cancer Cells Using a Microcavity Array-Based Bioreactor System and Sodium Triple-Quantum MR Signal. Processes (Basel) 2020. [DOI: 10.3390/pr8101267] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
The sodium triple-quantum (TQ) magnetic resonance (MR) signal created by interactions of sodium ions with macromolecules has been demonstrated to be a valuable biomarker for cell viability. The aim of this study was to monitor a cellular response using the sodium TQ signal during inhibition of Na/K-ATPase in living cancer cells (HepG2). The cells were dynamically investigated after exposure to 1 mM ouabain or K⁺-free medium for 60 min using an MR-compatible bioreactor system. An improved TQ time proportional phase incrementation (TQTPPI) pulse sequence with an almost fourfold gain in TQ signal-to-noise ratio (SNR) allowed for conducting experiments with 12–14 × 10⁶ cells using a 9.4 T MR scanner. During cell intervention experiments, the sodium TQ signal increased to 138.9 ± 4.1% and 183.4 ± 8.9% for 1 mM ouabain (n = 3) and K⁺-free medium (n = 3), respectively. During reperfusion with normal medium, the sodium TQ signal further increased to 169.2 ± 5.3% for the ouabain experiment, while it recovered to 128.5 ± 6.8% for the K⁺-free experiment. These sodium TQ signal increases agree with an influx of sodium ions during Na/K-ATPase inhibition and hence a reduced cell viability. The improved TQ signal detection combined with this MR-compatible bioreactor system provides the capability to investigate the cellular response of a variety of cells using the sodium TQ MR signal.
Collapse
|
194
|
Hosseini SAH, Yaman B, Moeller S, Hong M, Akçakaya M. Dense Recurrent Neural Networks for Accelerated MRI: History-Cognizant Unrolling of Optimization Algorithms. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING 2020; 14:1280-1291. [PMID: 33747334 PMCID: PMC7978039 DOI: 10.1109/jstsp.2020.3003170] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Inverse problems for accelerated MRI typically incorporate domain-specific knowledge about the forward encoding operator in a regularized reconstruction framework. Recently, physics-driven deep learning (DL) methods have been proposed to use neural networks for data-driven regularization. These methods unroll iterative optimization algorithms to solve the inverse problem objective function, by alternating between domain-specific data consistency and data-driven regularization via neural networks. The whole unrolled network is then trained end-to-end to learn the parameters of the network. Due to the simplicity of data consistency updates with gradient descent steps, proximal gradient descent (PGD) is a common approach to unroll physics-driven DL reconstruction methods. However, PGD methods have slow convergence rates, necessitating a higher number of unrolled iterations, leading to memory issues in training and slower reconstruction times in testing. Inspired by efficient variants of PGD methods that use a history of the previous iterates, we propose a history-cognizant unrolling of the optimization algorithm with dense connections across iterations for improved performance. In our approach, the gradient descent steps are calculated at a trainable combination of the outputs of all the previous regularization units. We also apply this idea to unrolling variable splitting methods with quadratic relaxation. Our results in reconstruction of the fastMRI knee dataset show that the proposed history-cognizant approach reduces residual aliasing artifacts compared to its conventional unrolled counterpart without requiring extra computational power or increasing reconstruction time.
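A hedged sketch of the history-cognizant idea follows: at each unroll, the data-consistency gradient step is taken at a trainable combination of all previous regularizer outputs. The mixing scheme, operators, and network sizes below are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of the history-cognizant idea: the data-consistency gradient step at
# unroll k is taken at a trainable combination of ALL previous regularizer
# outputs (dense connections across iterations). Illustrative only.
import torch
import torch.nn as nn

def fft2c(x):  return torch.fft.fft2(x, norm="ortho")
def ifft2c(k): return torch.fft.ifft2(k, norm="ortho")

class Regularizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 2, 3, padding=1))
    def forward(self, x):                              # x: complex (N, H, W)
        t = torch.stack([x.real, x.imag], dim=1)
        t = t + self.net(t)
        return torch.complex(t[:, 0], t[:, 1])

class DenseUnrolledNet(nn.Module):
    def __init__(self, n_unrolls=5, step=1.0):
        super().__init__()
        self.regs = nn.ModuleList(Regularizer() for _ in range(n_unrolls))
        # Trainable mixing weights over previous outputs (lower-triangular use).
        self.mix = nn.Parameter(torch.eye(n_unrolls))
        self.step = step

    def forward(self, y, mask):
        history = [self.regs[0](ifft2c(y))]            # first regularizer output
        for k in range(1, len(self.regs)):
            w = torch.softmax(self.mix[k, :k], dim=0)
            x = sum(wi * hi for wi, hi in zip(w, history))     # dense combination
            x = x - self.step * ifft2c(mask * (fft2c(x) - y))  # data consistency
            history.append(self.regs[k](x))
        return history[-1]

mask = (torch.rand(1, 64, 64) > 0.6).float()
y = mask * torch.randn(1, 64, 64, dtype=torch.complex64)
print(DenseUnrolledNet()(y, mask).shape)
```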
Collapse
Affiliation(s)
- Seyed Amir Hossein Hosseini
- Department of Electrical and Computer Engineering, and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, 55455
| | - Burhaneddin Yaman
- Department of Electrical and Computer Engineering, and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, 55455
| | - Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, 55455
| | - Mingyi Hong
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN, 55455
| | - Mehmet Akçakaya
- Department of Electrical and Computer Engineering, and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, 55455
| |
Collapse
|
195
|
Lu T, Zhang X, Huang Y, Guo D, Huang F, Xu Q, Hu Y, Ou-Yang L, Lin J, Yan Z, Qu X. pFISTA-SENSE-ResNet for parallel MRI reconstruction. JOURNAL OF MAGNETIC RESONANCE (SAN DIEGO, CALIF. : 1997) 2020; 318:106790. [PMID: 32759045 DOI: 10.1016/j.jmr.2020.106790] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 07/09/2020] [Accepted: 07/09/2020] [Indexed: 06/11/2023]
Abstract
Magnetic resonance imaging has been widely applied in clinical diagnosis. However, it is limited by its long data acquisition time. Although the imaging can be accelerated by sparse sampling and parallel imaging, achieving promising reconstructed images with a fast computation speed remains a challenge. Recently, deep learning methods have attracted a lot of attention for their encouraging reconstruction results, but they lack proper interpretability of the underlying neural networks. In this work, in order to enable high-quality image reconstruction for parallel magnetic resonance imaging, we design the network structure from the perspective of sparse iterative reconstruction and enhance it with a residual structure. Experimental results on a public knee dataset indicate that, compared with state-of-the-art deep learning-based and optimization-based methods, the proposed network achieves lower reconstruction error and is more robust under different sampling patterns.
Collapse
Affiliation(s)
- Tieyuan Lu
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, China
| | - Xinlin Zhang
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, China
| | - Yihui Huang
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, China
| | - Di Guo
- School of Computer and Information Engineering, Fujian Provincial University Key Laboratory of Internet of Things Application Technology, Xiamen University of Technology, Xiamen 361024, China
| | - Feng Huang
- Neusoft Medical System, Shanghai 200241, China
| | - Qin Xu
- Neusoft Medical System, Shanghai 200241, China
| | - Yuhan Hu
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, China
| | - Lin Ou-Yang
- Department of Medical Imaging of Southeast Hospital, Medical College of Xiamen University, Zhangzhou 363000, China; Institute of Medical Imaging of Medical College of Xiamen University, Zhangzhou 363000, China
| | - Jianzhong Lin
- Magnetic Resonance Center, Zhongshan Hospital Xiamen University, Xiamen 361004, China
| | - Zhiping Yan
- Department of Radiology, Fujian Medical University Xiamen Humanity Hospital, Xiamen 361000, China
| | - Xiaobo Qu
- Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, China.
| |
Collapse
|
196
|
Subhas N, Li H, Yang M, Winalski CS, Polster J, Obuchowski N, Mamoto K, Liu R, Zhang C, Huang P, Gaire SK, Liang D, Shen B, Li X, Ying L. Diagnostic interchangeability of deep convolutional neural networks reconstructed knee MR images: preliminary experience. Quant Imaging Med Surg 2020; 10:1748-1762. [PMID: 32879854 DOI: 10.21037/qims-20-664] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
Abstract
Background MRI acceleration using deep learning (DL) convolutional neural networks (CNNs) is a novel technique with great promise. Increasing the number of convolutional layers may allow for more accurate image reconstruction. Studies evaluating the diagnostic interchangeability of DL-reconstructed knee magnetic resonance (MR) images are scarce. The purpose of this study was to develop a deep CNN (DCNN) with an optimal number of layers for accelerating knee magnetic resonance imaging (MRI) acquisition by 6-fold and to test the diagnostic interchangeability and image quality of nonaccelerated images versus images reconstructed with a 15-layer DCNN or 3-layer CNN. Methods For the feasibility portion of this study, 10 patients were randomly selected from the Osteoarthritis Initiative (OAI) cohort. For the interchangeability portion of the study, 40 patients were randomly selected from the OAI cohort. Three readers assessed meniscal and anterior cruciate ligament (ACL) tears and cartilage defects using DCNN, CNN, and nonaccelerated images. Image quality was subjectively graded as nondiagnostic, poor, acceptable, or excellent. Interchangeability was tested by comparing the frequency of agreement when readers used both accelerated and nonaccelerated images to the frequency of agreement when readers used only nonaccelerated images. A noninferiority margin of 0.10 was used to ensure type I error ≤5% and power ≥80%. A logistic regression model using generalized estimating equations was used to compare proportions; 95% confidence intervals (CIs) were constructed. Results DCNN and CNN images were interchangeable with nonaccelerated images for all structures, with excess disagreement values ranging from -2.5% [95% CI: (-6.1, 1.1)] to 3.0% [95% CI: (-0.1, 6.1)]. The quality of DCNN images was graded higher than that of CNN images but less than that of nonaccelerated images [excellent/acceptable quality: DCNN, 95% of cases (114/120); CNN, 60% (72/120); nonaccelerated, 97.5% (117/120)]. Conclusions Six-fold accelerated knee images reconstructed with a DL technique are diagnostically interchangeable with nonaccelerated images and have acceptable image quality when using a 15-layer CNN.
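The interchangeability criterion described above can be illustrated with a back-of-the-envelope computation of excess disagreement and a simple normal-approximation confidence interval on synthetic reader data. The study itself used a GEE-based logistic regression and real reads; neither is reproduced here, and the simple standard error below ignores within-case correlation.

```python
# Back-of-the-envelope sketch of the interchangeability metric described above:
# excess disagreement = (disagreement when one reading uses accelerated images)
# minus (disagreement between two nonaccelerated readings), with a simple
# normal-approximation 95% CI that ignores within-case correlation. The study
# itself used a GEE logistic regression, which this sketch does not reproduce.
import numpy as np

rng = np.random.default_rng(1)
n = 120                                              # reader-case observations
truth = rng.integers(0, 2, n)                        # e.g., meniscal tear yes/no
read_nonacc_1 = np.where(rng.random(n) < 0.90, truth, 1 - truth)
read_nonacc_2 = np.where(rng.random(n) < 0.90, truth, 1 - truth)
read_accel    = np.where(rng.random(n) < 0.88, truth, 1 - truth)

disagree_ref = (read_nonacc_1 != read_nonacc_2).astype(float)   # nonacc vs nonacc
disagree_new = (read_accel   != read_nonacc_2).astype(float)    # accel  vs nonacc

excess = disagree_new.mean() - disagree_ref.mean()
se = np.sqrt(disagree_new.var(ddof=1) / n + disagree_ref.var(ddof=1) / n)
ci = (excess - 1.96 * se, excess + 1.96 * se)
print(f"excess disagreement = {excess:+.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
# Interchangeability (noninferiority) is supported if the upper CI bound < 0.10.
```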
Collapse
Affiliation(s)
- Naveen Subhas
- Program of Advanced Musculoskeletal Imaging (PAMI), Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
| | - Hongyu Li
- Department of Biomedical Engineering, Department of Electrical Engineering, University at Buffalo, the State University of New York, Buffalo, NY, USA
| | - Mingrui Yang
- Program of Advanced Musculoskeletal Imaging (PAMI), Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
| | - Carl S Winalski
- Program of Advanced Musculoskeletal Imaging (PAMI), Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
| | - Joshua Polster
- Program of Advanced Musculoskeletal Imaging (PAMI), Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
| | - Nancy Obuchowski
- Program of Advanced Musculoskeletal Imaging (PAMI), Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Quantitative Health Sciences, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
| | - Kenji Mamoto
- Program of Advanced Musculoskeletal Imaging (PAMI), Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
| | - Ruiying Liu
- Department of Biomedical Engineering, Department of Electrical Engineering, University at Buffalo, the State University of New York, Buffalo, NY, USA
| | - Chaoyi Zhang
- Department of Biomedical Engineering, Department of Electrical Engineering, University at Buffalo, the State University of New York, Buffalo, NY, USA
| | - Peizhou Huang
- Department of Biomedical Engineering, Department of Electrical Engineering, University at Buffalo, the State University of New York, Buffalo, NY, USA
| | - Sunil Kumar Gaire
- Department of Biomedical Engineering, Department of Electrical Engineering, University at Buffalo, the State University of New York, Buffalo, NY, USA
| | - Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Medical AI Research Center, SIAT, CAS, Shenzhen, China
| | - Bowen Shen
- Department of Computer Science, Virginia Tech, Blacksburg, VA, USA
| | - Xiaojuan Li
- Program of Advanced Musculoskeletal Imaging (PAMI), Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Diagnostic Radiology, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA.,Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
| | - Leslie Ying
- Department of Biomedical Engineering, Department of Electrical Engineering, University at Buffalo, the State University of New York, Buffalo, NY, USA
| |
Collapse
|
197
|
Abstract
Artificial intelligence (AI) has the potential to fundamentally alter the way medicine is practised. AI platforms excel in recognizing complex patterns in medical data and provide a quantitative, rather than purely qualitative, assessment of clinical conditions. Accordingly, AI could have particularly transformative applications in radiation oncology given the multifaceted and highly technical nature of this field of medicine with a heavy reliance on digital data processing and computer software. Indeed, AI has the potential to improve the accuracy, precision, efficiency and overall quality of radiation therapy for patients with cancer. In this Perspective, we first provide a general description of AI methods, followed by a high-level overview of the radiation therapy workflow with discussion of the implications that AI is likely to have on each step of this process. Finally, we describe the challenges associated with the clinical development and implementation of AI platforms in radiation oncology and provide our perspective on how these platforms might change the roles of radiotherapy medical professionals.
Collapse
|
198
|
Guo R, Zhao Y, Li Y, Wang T, Li Y, Sutton B, Liang ZP. Simultaneous QSM and metabolic imaging of the brain using SPICE: Further improvements in data acquisition and processing. Magn Reson Med 2020; 85:970-977. [PMID: 32810319 DOI: 10.1002/mrm.28459] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2020] [Revised: 07/09/2020] [Accepted: 07/12/2020] [Indexed: 01/23/2023]
Abstract
PURPOSE
To achieve high-resolution mapping of brain tissue susceptibility in simultaneous QSM and metabolic imaging.
METHODS
Simultaneous QSM and metabolic imaging was first achieved using SPICE (spectroscopic imaging by exploiting spatiospectral correlation), but the QSM maps thus obtained were at relatively low resolution (2.0 × 3.0 × 3.0 mm³). We overcome this limitation using an improved SPICE data acquisition method with the following novel features: 1) sampling (k, t)-space in dual densities, 2) sampling central k-space fully to achieve a nominal spatial resolution of 3.0 × 3.0 × 3.0 mm³ for metabolic imaging, and 3) sampling outer k-space sparsely to achieve a spatial resolution of 1.0 × 1.0 × 1.9 mm³ for QSM. To keep the scan time short, we acquired spatiospectral encodings in echo-planar spectroscopic imaging trajectories in central k-space but in CAIPIRINHA (controlled aliasing in parallel imaging results in higher acceleration) trajectories in outer k-space using blipped phase encodings. For data processing and image reconstruction, a union-of-subspaces model was used, effectively incorporating sensitivity encoding, spatial priors, and spectral priors of individual molecules.
RESULTS
In vivo experiments were carried out to evaluate the feasibility and potential of the proposed method. In a 6-min scan, QSM maps at 1.0 × 1.0 × 1.9 mm³ resolution and metabolic maps at 3.0 × 3.0 × 3.0 mm³ nominal resolution were obtained simultaneously. Compared with the original method, the QSM maps obtained using the new method reveal fine-scale brain structures more clearly.
CONCLUSION
We demonstrated the feasibility of achieving high-resolution QSM simultaneously with metabolic imaging using a modified SPICE acquisition method. The improved capability of SPICE may further enhance its practical utility in brain mapping.
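The dual-density sampling idea described above can be pictured with a short sketch: a ky-kz mask whose central block is fully sampled for the metabolic data and whose outer region is covered by a sparse, CAIPIRINHA-style shifted lattice for the high-resolution water/QSM data. This is an assumed illustration only; the matrix sizes, acceleration factors, and shift pattern below are not taken from the paper.

```python
# Minimal sketch of a dual-density (ky, kz) sampling mask: full sampling of the
# central block (metabolite resolution) plus a sparse CAIPIRINHA-like shifted
# lattice in outer k-space (water signal for high-resolution QSM). All parameters
# are illustrative assumptions, not the published sequence settings.
import numpy as np

def dual_density_mask(n_ky=96, n_kz=48, center_ky=32, center_kz=16,
                      r_ky=3, r_kz=2, caipi_shift=1):
    mask = np.zeros((n_ky, n_kz), dtype=bool)

    # Fully sample the central k-space block.
    ky0, kz0 = (n_ky - center_ky) // 2, (n_kz - center_kz) // 2
    mask[ky0:ky0 + center_ky, kz0:kz0 + center_kz] = True

    # Sparsely sample outer k-space on a shifted (CAIPIRINHA-style) lattice,
    # emulating blipped phase encoding across kz.
    for ky in range(0, n_ky, r_ky):
        shift = (ky // r_ky) * caipi_shift % r_kz
        mask[ky, shift::r_kz] = True

    return mask

mask = dual_density_mask()
print(f"overall sampling fraction: {mask.mean():.2f}")
```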
Collapse
Affiliation(s)
- Rong Guo
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
| | - Yibo Zhao
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
| | - Yudu Li
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
| | - Tianyao Wang
- Department of Radiology, The Fifth People's Hospital of Shanghai, Fudan University, Shanghai, People's Republic of China
| | - Yao Li
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Brad Sutton
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois.,Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
| | - Zhi-Pei Liang
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois.,Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
| |
Collapse
|
199
|
|
200
|
Khan S, Huh J, Ye JC. Adaptive and Compressive Beamforming Using Deep Learning for Medical Ultrasound. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:1558-1572. [PMID: 32149628 DOI: 10.1109/tuffc.2020.2977202] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In ultrasound (US) imaging, various adaptive beamforming techniques have been investigated to improve the resolution and the contrast-to-noise ratio of the delay-and-sum (DAS) beamformer. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and the number of channels decreases. To address this problem, we propose a deep-learning-based beamformer that generates significantly improved images over widely varying measurement conditions and channel subsampling patterns. In particular, our deep neural network is designed to directly process full or subsampled radio frequency (RF) data acquired at various subsampling rates and detector configurations, so that it can generate high-quality US images using a single beamformer. The origin of this input-dependent adaptivity is also theoretically analyzed. Experimental results using B-mode focused US confirm the efficacy of the proposed methods.
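To make the contrast with conventional beamforming concrete, the sketch below places a plain delay-and-sum step next to a tiny learned beamformer that maps delay-aligned (and possibly subsampled) channel data to a beamformed RF line. The architecture, shapes, and synthetic data are illustrative assumptions and not the authors' network.

```python
# Minimal sketch (assumptions, not the published model): DAS simply averages the
# delay-aligned channels, whereas a learned beamformer maps the channel dimension
# to a beamformed sample with a small 1-D CNN and can be trained to tolerate
# missing channels.
import numpy as np
import torch
import torch.nn as nn

def das_beamform(aligned_rf):
    """aligned_rf: (n_channels, n_depth) RF data already delay-corrected per scan line."""
    return aligned_rf.mean(axis=0)             # coherent sum across the aperture

class LearnedBeamformer(nn.Module):
    """Maps delay-aligned channel data to a beamformed RF line."""
    def __init__(self, n_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, aligned_rf):              # (batch, n_channels, n_depth)
        return self.net(aligned_rf).squeeze(1)  # (batch, n_depth)

rf = np.random.randn(64, 2048).astype(np.float32)   # synthetic aligned channel data
das_line = das_beamform(rf)
dl_line = LearnedBeamformer()(torch.from_numpy(rf)[None]).detach().numpy()[0]
print(das_line.shape, dl_line.shape)
```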
Collapse
|