101
Zhang C, Moeller S, Demirel OB, Uğurbil K, Akçakaya M. Residual RAKI: A hybrid linear and non-linear approach for scan-specific k-space deep learning. Neuroimage 2022;256:119248. PMID: 35487456; PMCID: PMC9179026; DOI: 10.1016/j.neuroimage.2022.119248.
Abstract
Parallel imaging is the most clinically used acceleration technique for magnetic resonance imaging (MRI), in part due to its easy inclusion into routine acquisitions. In k-space based parallel imaging reconstruction, sub-sampled k-space data are interpolated using linear convolutions. At high acceleration rates, these methods suffer from inherent noise amplification and reduced image quality. On the other hand, non-linear deep learning methods provide improved image quality at high acceleration, but the availability of training databases for different scans, as well as their interpretability, hinder their adaptation. In this work, we present an extension of Robust Artificial-neural-networks for k-space Interpolation (RAKI), called residual-RAKI (rRAKI), which achieves scan-specific machine learning reconstruction using a hybrid linear and non-linear methodology. In rRAKI, non-linear CNNs are trained jointly with a linear convolution implemented via a skip connection. In effect, the linear part provides a baseline reconstruction, while the non-linear CNN that runs in parallel further reduces the artifacts and noise arising from the linear part. The explicit split between the linear and non-linear aspects of the reconstruction also helps improve interpretability compared with purely non-linear methods. Experiments were conducted on the publicly available fastMRI datasets, as well as high-resolution anatomical imaging, comparing GRAPPA and its variants, compressed sensing, RAKI, Scan Specific Artifact Reduction in K-space (SPARK) and the proposed rRAKI. Additionally, highly accelerated simultaneous multi-slice (SMS) functional MRI reconstructions were performed, in which the proposed rRAKI was compared with Read-out SENSE-GRAPPA and RAKI. Our results show that the proposed rRAKI method substantially improves image quality compared with conventional parallel imaging, and offers sharper images than SPARK and ℓ1-SPIRiT. Furthermore, rRAKI shows improved preservation of time-varying dynamics compared with both parallel imaging and RAKI in highly accelerated SMS fMRI.
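A minimal sketch of the hybrid idea this abstract describes, assuming PyTorch: a single linear convolution (GRAPPA-like) branch and a small non-linear CNN branch whose outputs are summed, so the CNN only has to model the residual artifacts and noise. Channel counts, kernel sizes, and depths are illustrative assumptions, not the values used in the paper.

```python
import torch
import torch.nn as nn

class ResidualRAKISketch(nn.Module):
    def __init__(self, coils: int = 16, hidden: int = 64):
        super().__init__()
        ch = 2 * coils  # real/imag parts of each coil stacked as channels
        # Linear branch: a single convolution, analogous to a GRAPPA kernel.
        self.linear = nn.Conv2d(ch, ch, kernel_size=5, padding=2, bias=False)
        # Non-linear branch: small CNN that models residual artifacts/noise.
        self.cnn = nn.Sequential(
            nn.Conv2d(ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, ch, 3, padding=1),
        )

    def forward(self, zero_filled_kspace: torch.Tensor) -> torch.Tensor:
        base = self.linear(zero_filled_kspace)      # baseline reconstruction
        return base + self.cnn(zero_filled_kspace)  # residual correction

# x = torch.randn(1, 32, 256, 256)  # 16 coils -> 32 real/imag channels
# y = ResidualRAKISketch()(x)
```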
Affiliation(s)
- Chi Zhang: Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Steen Moeller: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Omer Burak Demirel: Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Kâmil Uğurbil: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Mehmet Akçakaya: Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
102
Wang Y, Wu W, Yang Y, Hu H, Yu S, Dong X, Chen F, Liu Q. Deep Learning-Based 3D MRI Contrast-Enhanced Synthesis From a 2D Non-contrast T2Flair Sequence. Med Phys 2022;49:4478-4493. PMID: 35396712; DOI: 10.1002/mp.15636.
Abstract
PURPOSE Gadolinium-based contrast agents (GBCAs) have been successfully applied in magnetic resonance (MR) imaging to facilitate better lesion visualization. However, gadolinium deposition in the human brain has recently raised widespread concern. In addition, although high-resolution three-dimensional (3D) MR images are preferred by most existing medical image processing algorithms, their long scan times and high acquisition costs mean that 2D MR images remain far more common clinically. Developing an alternative to GBCA injection that synthesizes 3D contrast-enhanced MR images is therefore an urgent need. METHODS This study proposes a deep learning framework that produces 3D isotropic full-contrast T2Flair images from 2D anisotropic non-contrast T2Flair image stacks. The super-resolution (SR) and contrast-enhanced (CE) synthesis tasks are completed in sequence by using an identical generative adversarial network (GAN) with the same techniques. To address the problem that intra-modality datasets from different scanners have specific combinations of orientations, contrasts, and resolutions, we apply a region-based data augmentation technique on the fly during training to simulate various clinical imaging protocols. We further improve the network by introducing atrous spatial pyramid pooling, enhanced residual blocks, and deep supervision for better quantitative and qualitative results. RESULTS The proposed method achieved superior CE synthesis performance in quantitative metrics and perceptual evaluation. Specifically, the PSNR, SSIM, and AUC are 32.25 dB, 0.932, and 0.991 in the whole brain and 24.93 dB, 0.851, and 0.929 in tumor regions. The radiologists' evaluations confirmed high diagnostic confidence in the synthesized images. Analysis of generalization ability showed that, benefiting from the proposed data augmentation technique, the network can be applied to 'unseen' datasets with only slight drops in quantitative and qualitative results. CONCLUSION Our work demonstrates the clinical potential of synthesizing diagnostic 3D isotropic CE brain MR images from a single 2D anisotropic non-contrast sequence.
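The atrous spatial pyramid pooling mentioned in the methods can be sketched, under assumptions about dilation rates and channel counts, as parallel dilated convolutions whose outputs are concatenated and fused; this is a generic ASPP block, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, then a 1x1 fuse."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# y = ASPP(64, 64)(torch.randn(1, 64, 128, 128))
```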
Affiliation(s)
- Yulin Wang: Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, 570228, China
- Wenyuan Wu: Department of Radiology, Hainan General Hospital, Haikou, 570311, China
- Yuxin Yang: Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, 570228, China
- Haifeng Hu: Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, 570228, China
- Shangqian Yu: Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, 570228, China
- Xiangjiang Dong: Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Feng Chen: Department of Radiology, Hainan General Hospital, Haikou, 570311, China
- Qian Liu: Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, 570228, China
103
Wang K, Tamir JI, De Goyeneche A, Wollner U, Brada R, Yu SX, Lustig M. High fidelity deep learning-based MRI reconstruction with instance-wise discriminative feature matching loss. Magn Reson Med 2022;88:476-491. DOI: 10.1002/mrm.29227.
Affiliation(s)
- Ke Wang: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA; International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA
- Jonathan I. Tamir: Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Alfredo De Goyeneche: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- Stella X. Yu: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA; International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA
- Michael Lustig: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
104
Zufiria B, Qiu S, Yan K, Zhao R, Wang R, She H, Zhang C, Sun B, Herman P, Du Y, Feng Y. A feature-based convolutional neural network for reconstruction of interventional MRI. NMR Biomed 2022;35:e4231. PMID: 31856431; DOI: 10.1002/nbm.4231.
Abstract
Real-time interventional MRI (I-MRI) could help to visualize the position of the interventional feature, thus improving patient outcomes in MR-guided neurosurgery. In particular, in deep brain stimulation, real-time visualization of the intervention procedure using I-MRI could improve the accuracy of the electrode placement. However, the requirements of a high undersampling rate and fast reconstruction speed for real-time imaging pose a great challenge for reconstruction of the interventional images. Based on recent advances in deep learning (DL), we proposed a feature-based convolutional neural network (FbCNN) for reconstructing interventional images from golden-angle radially sampled data. The method was composed of two stages: (a) reconstruction of the interventional feature and (b) feature refinement and postprocessing. With only five radially sampled spokes, the interventional feature was reconstructed with a cascade CNN. The final interventional image was constructed with a refined feature and a fully sampled reference image. With a comparison of traditional reconstruction techniques and recent DL-based methods, it was shown that only FbCNN could reconstruct the interventional feature and the final interventional image. With a reconstruction time of ~ 500 ms per frame and an acceleration factor of ~ 80, it was demonstrated that FbCNN had the potential for application in real-time I-MRI.
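For orientation on the acquisition this abstract assumes, golden-angle radial sampling advances each spoke by roughly 111.25°, so a small number of consecutive spokes (five per frame in this work) covers k-space fairly uniformly. A minimal sketch follows; the spoke index range is arbitrary.

```python
import numpy as np

GOLDEN_ANGLE_DEG = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0  # ~111.246 degrees

def spoke_angles(n_spokes: int, start_index: int = 0) -> np.ndarray:
    """Angles (radians) of consecutive golden-angle radial spokes."""
    idx = np.arange(start_index, start_index + n_spokes)
    return np.deg2rad((idx * GOLDEN_ANGLE_DEG) % 180.0)

# Five spokes for one interventional frame, as described in the abstract:
# print(np.rad2deg(spoke_angles(5)))
```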
Affiliation(s)
- Blanca Zufiria: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China; KTH School of Engineering Sciences in Chemistry, Biotechnology and Health, KTH Royal Institute of Technology, Stockholm, Sweden
- Suhao Qiu: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Kang Yan: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ruiyang Zhao: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Runke Wang: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Huajun She: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chengcheng Zhang: Department of Functional Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Bomin Sun: Department of Functional Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Pawel Herman: Division of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
- Yiping Du: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yuan Feng: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
105
Karandikar P, Massaad E, Hadzipasic M, Kiapour A, Joshi RS, Shankar GM, Shin JH. Machine Learning Applications of Surgical Imaging for the Diagnosis and Treatment of Spine Disorders: Current State of the Art. Neurosurgery 2022;90:372-382. PMID: 35107085; DOI: 10.1227/neu.0000000000001853.
Abstract
Recent developments in machine learning (ML) methods demonstrate unparalleled potential for application in the spine. The ability for ML to provide diagnostic faculty, produce novel insights from existing capabilities, and augment or accelerate elements of surgical planning and decision making at levels equivalent or superior to humans will tremendously benefit spine surgeons and patients alike. In this review, we aim to provide a clinically relevant outline of ML-based technology in the contexts of spinal deformity, degeneration, and trauma, as well as an overview of commercial-level and precommercial-level surgical assist systems and decisional support tools. Furthermore, we briefly discuss potential applications of generative networks before highlighting some of the limitations of ML applications. We conclude that ML in spine imaging represents a significant addition to the neurosurgeon's armamentarium-it has the capacity to directly address and manifest clinical needs and improve diagnostic and procedural quality and safety-but is yet subject to challenges that must be addressed before widespread implementation.
Affiliation(s)
- Paramesh Karandikar: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; T.H. Chan School of Medicine, University of Massachusetts, Worcester, Massachusetts, USA
- Elie Massaad: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Muhamed Hadzipasic: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Ali Kiapour: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Rushikesh S Joshi: Department of Neurosurgery, University of Michigan, Ann Arbor, Michigan, USA
- Ganesh M Shankar: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- John H Shin: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
106
Pawar K, Chen Z, Shah NJ, Egan GF. Suppressing motion artefacts in MRI using an Inception-ResNet network with motion simulation augmentation. NMR Biomed 2022;35:e4225. PMID: 31865624; DOI: 10.1002/nbm.4225.
Abstract
The suppression of motion artefacts from MR images is a challenging task. The purpose of this paper was to develop a standalone novel technique to suppress motion artefacts in MR images using a data-driven deep learning approach. A simulation framework was developed to generate motion-corrupted images from motion-free images using randomly generated motion profiles. An Inception-ResNet deep learning network architecture was used as the encoder and was augmented with a stack of convolution and upsampling layers to form an encoder-decoder network. The network was trained on simulated motion-corrupted images to identify and suppress those artefacts attributable to motion. The network was validated on unseen simulated datasets and real-world experimental motion-corrupted in vivo brain datasets. The trained network was able to suppress the motion artefacts in the reconstructed images, and the mean structural similarity (SSIM) increased from 0.9058 to 0.9338. The network was also able to suppress the motion artefacts from the real-world experimental dataset, and the mean SSIM increased from 0.8671 to 0.9145. The motion correction of the experimental datasets demonstrated the effectiveness of the motion simulation generation process. The proposed method successfully removed motion artefacts and outperformed an iterative entropy minimization method in terms of the SSIM index and normalized root mean squared error, which were 5-10% better for the proposed method. In conclusion, a novel, data-driven motion correction technique has been developed that can suppress motion artefacts from motion-corrupted MR images. The proposed technique is a standalone, post-processing method that does not interfere with data acquisition or reconstruction parameters, thus making it suitable for routine clinical practice.
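A hedged sketch of the kind of motion simulation described above: translational motion during acquisition can be modelled by applying a different linear phase ramp (an image-domain shift) to blocks of phase-encode lines in k-space. Block size and shift range below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def simulate_translational_motion(image: np.ndarray,
                                  lines_per_block: int = 16,
                                  max_shift_px: float = 3.0,
                                  seed: int = 0) -> np.ndarray:
    """Return a motion-corrupted magnitude image from a motion-free one."""
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]   # cycles/pixel
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    corrupted = kspace.copy()
    for start in range(0, ny, lines_per_block):
        dy, dx = rng.uniform(-max_shift_px, max_shift_px, size=2)
        # Shift theorem: an image shift is a linear phase in k-space.
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        corrupted[start:start + lines_per_block] *= phase[start:start + lines_per_block]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(corrupted)))
```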
Affiliation(s)
- Kamlesh Pawar: Monash Biomedical Imaging, Monash University, Melbourne, Australia; School of Psychological Sciences, Monash University, Melbourne, Australia
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, Australia
- N Jon Shah: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Research Centre Jülich, Institute of Medicine, Jülich, Germany
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, Australia; School of Psychological Sciences, Monash University, Melbourne, Australia
107
Wang S, Ke Z, Cheng H, Jia S, Ying L, Zheng H, Liang D. DIMENSION: Dynamic MR imaging with both k-space and spatial prior knowledge obtained via multi-supervised network training. NMR Biomed 2022;35:e4131. PMID: 31482598; DOI: 10.1002/nbm.4131.
Abstract
Dynamic MR image reconstruction from incomplete k-space data has generated great research interest due to its capability to reduce scan time. Nevertheless, the reconstruction problem remains challenging due to its ill-posed nature. Most existing methods either suffer from long iterative reconstruction times or explore limited prior knowledge. This paper proposes a dynamic MR imaging method with both k-space and spatial prior knowledge integrated via multi-supervised network training, dubbed DIMENSION. Specifically, the DIMENSION architecture consists of a frequential prior network for updating the k-space with its network prediction and a spatial prior network for capturing image structures and details. Furthermore, a multi-supervised network training technique is developed to constrain the frequency-domain and spatial-domain information. Comparisons with classical k-t FOCUSS, k-t SLR, L+S, and a state-of-the-art CNN-based method on in vivo datasets show that our method achieves improved reconstruction results in a shorter time.
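The multi-supervised training idea can be illustrated with a loss that penalises both the k-space sub-network output and the final image output; the use of simple MSE terms and the weights below are assumptions of this sketch, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def multi_supervised_loss(pred_kspace: torch.Tensor,   # complex, from k-space net
                          pred_image: torch.Tensor,    # real-valued final output
                          ref_image: torch.Tensor,     # fully sampled reference
                          lambda_k: float = 0.1,
                          lambda_i: float = 1.0) -> torch.Tensor:
    ref_kspace = torch.fft.fft2(ref_image)
    # Frequency-domain supervision on the intermediate k-space prediction.
    loss_k = F.mse_loss(torch.view_as_real(pred_kspace),
                        torch.view_as_real(ref_kspace))
    # Image-domain supervision on the final reconstruction.
    loss_i = F.mse_loss(pred_image, ref_image)
    return lambda_k * loss_k + lambda_i * loss_i
```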
Affiliation(s)
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ziwen Ke: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Huitao Cheng: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying: Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, NY, USA
- Hairong Zheng: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
108
SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography 2022;8:905-919. PMID: 35448707; PMCID: PMC9027099; DOI: 10.3390/tomography8020073.
Abstract
There is a growing demand for high-resolution (HR) medical images for both clinical and research applications. Image quality is inevitably traded off against acquisition time, which in turn affects patient comfort, examination costs, dose, and motion-induced artifacts. For many image-based tasks, it is common to increase the apparent spatial resolution in the perpendicular (through-plane) direction to produce multi-planar reformats or 3D images. Single-image super-resolution (SR) based on deep learning is a promising technique for increasing the resolution of a 2D image, but there are few reports on 3D SR. Further, perceptual loss has been proposed in the literature to capture textural details and edges better than pixel-wise loss functions, by comparing semantic distances in the high-dimensional feature space of a pre-trained 2D network (e.g., VGG), but it is not clear how to generalize it to 3D medical images. In this paper, we propose a framework called SOUP-GAN: Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (GAN), to produce thinner slices (i.e., higher resolution in the 'Z' direction) with anti-aliasing and deblurring. The proposed method outperforms other conventional resolution-enhancement methods and previous SR work on medical images in both qualitative and quantitative comparisons. Moreover, we examine the model's generalization to arbitrary user-selected SR ratios and imaging modalities. Our model shows promise as a novel 3D SR interpolation technique, with potential for both clinical and research applications.
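One plausible way to extend a 2D perceptual loss to 3D volumes, which is the question this abstract raises, is to apply a frozen 2D feature extractor slice by slice and average the feature-space distance. The choice of extractor and layers is left open here and is not necessarily what SOUP-GAN does.

```python
import torch
import torch.nn.functional as F

def slicewise_perceptual_loss(pred_vol: torch.Tensor,
                              ref_vol: torch.Tensor,
                              feature_net: torch.nn.Module) -> torch.Tensor:
    """Volumes are (N, 1, D, H, W); feature_net expects 3-channel 2D images."""
    losses = []
    for z in range(pred_vol.shape[2]):
        p = pred_vol[:, :, z].repeat(1, 3, 1, 1)   # grey slice -> 3 channels
        r = ref_vol[:, :, z].repeat(1, 3, 1, 1)
        losses.append(F.l1_loss(feature_net(p), feature_net(r)))
    return torch.stack(losses).mean()
```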
109
Liu X, Du H, Xu J, Qiu B. DBGAN: A dual-branch generative adversarial network for undersampled MRI reconstruction. Magn Reson Imaging 2022;89:77-91. PMID: 35339616; DOI: 10.1016/j.mri.2022.03.003.
Abstract
Compressed sensing magnetic resonance imaging (CS-MRI) greatly accelerates the acquisition process and yields reconstructed images of acceptable quality. Deep learning was introduced into CS-MRI to further speed up reconstruction and improve image quality. Recently, generative adversarial networks (GANs) using a two-stage cascaded U-Net generator have proven effective for MRI reconstruction. However, previous cascaded structures offered few channels for feature propagation and could therefore lose information. In this paper, we propose a GAN-based model, DBGAN, for MRI reconstruction from undersampled k-space data. The model uses cross-stage skip connections (CSSC) between two end-to-end cascaded U-Nets in the generator to widen the channels of feature propagation. To avoid a discrepancy between training and inference, classical batch normalization (BN) is replaced with instance normalization (IN). A stage loss is included in the loss function to boost training performance. In addition, a bilinear interpolation decoder branch is introduced in the generator to supplement the information missed by the deconvolution decoder. Tested under five variant patterns with four undersampling rates on MRI data of different modalities, DBGAN achieves mean improvements of 3.65 dB in peak signal-to-noise ratio (PSNR) and 0.016 in normalized mean square error (NMSE) compared with state-of-the-art GAN-based methods on the T1-weighted brain dataset from the MICCAI 2013 grand challenge. Qualitative visual results show that our method reconstructs high-quality images on brain and knee MRI data of different modalities. Furthermore, DBGAN is light and fast: its parameter count is less than half that of state-of-the-art GAN-based methods, and each 256 × 256 image is reconstructed in 60 milliseconds, which is suitable for real-time processing.
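The bilinear-interpolation decoder branch mentioned above can be sketched as a block that sums a transposed-convolution path with a bilinear-upsampling path, with instance normalization in place of batch normalization. Channel sizes and the exact placement of the block are assumptions of this illustration, not the DBGAN architecture itself.

```python
import torch
import torch.nn as nn

class DualUpsampleBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Learned upsampling path (deconvolution).
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        # Fixed bilinear upsampling followed by a convolution.
        self.bilinear = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        )
        self.norm = nn.InstanceNorm2d(out_ch)   # IN rather than BN, as above
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.deconv(x) + self.bilinear(x)))
```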
Affiliation(s)
- Xianzhe Liu: Center for Biomedical Image, University of Science and Technology of China, Hefei, Anhui 230026, China
- Hongwei Du: Center for Biomedical Image, University of Science and Technology of China, Hefei, Anhui 230026, China
- Jinzhang Xu: School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, Anhui 230009, China
- Bensheng Qiu: Center for Biomedical Image, University of Science and Technology of China, Hefei, Anhui 230026, China
110
Zhu L, He Q, Huang Y, Zhang Z, Zeng J, Lu L, Kong W, Zhou F. DualMMP-GAN: Dual-scale multi-modality perceptual generative adversarial network for medical image segmentation. Comput Biol Med 2022;144:105387. PMID: 35305502; DOI: 10.1016/j.compbiomed.2022.105387.
Abstract
Multi-modality magnetic resonance imaging (MRI) can reveal distinct patterns of tissue in the human body and is crucial to clinical diagnosis, but obtaining diverse and plausible multi-modality MR images remains a challenge because of expense, noise, and artifacts. For the same lesion, different MRI modalities differ considerably in contextual information, coarse location, and fine structure. To achieve better generation and segmentation performance, a dual-scale multi-modality perceptual generative adversarial network (DualMMP-GAN) is proposed based on cycle-consistent generative adversarial networks (CycleGAN). Dilated residual blocks are introduced to increase the receptive field while preserving the structure and context information of images. A dual-scale discriminator is constructed, and the generator is optimized by discriminating patches so as to represent lesions of different sizes. A perceptual consistency loss is introduced to learn the mapping between the generated and target modality at different semantic levels. Moreover, generative multi-modality segmentation (GMMS), which combines given modalities with generated modalities, is proposed for brain tumor segmentation. Experimental results show that DualMMP-GAN outperforms CycleGAN and some state-of-the-art methods in terms of PSNR, SSIM, and RMSE in most tasks. In addition, the Dice, sensitivity, specificity, and Hausdorff95 values obtained from segmentation with GMMS are all higher than those obtained from a single modality. The objective indices obtained by the proposed methods are close to the upper bounds obtained from real multiple modalities, indicating that GMMS can achieve effects similar to true multi-modality input. Overall, the proposed methods can serve as an effective aid in clinical brain tumor diagnosis, with promising application potential.
Affiliation(s)
- Li Zhu: School of Information Engineering, Nanchang University, Nanchang, 330031, China
- Qiong He: School of Information Engineering, Nanchang University, Nanchang, 330031, China
- Yue Huang: School of Informatics, Xiamen University, Xiamen, 361005, China
- Zihe Zhang: School of Information Engineering, Nanchang University, Nanchang, 330031, China
- Jiaming Zeng: School of Information Engineering, Nanchang University, Nanchang, 330031, China
- Ling Lu: School of Information Engineering, Nanchang University, Nanchang, 330031, China
- Weiming Kong: Hospital of the Joint Logistics Support Force of the Chinese People's Liberation Army, No. 908, Nanchang, 330002, China
- Fuqing Zhou: Department of Radiology, The First Affiliated Hospital, Nanchang University, Nanchang, 330006, China
111
Yurt M, Özbey M, Dar SUH, Tinaz B, Oguz KK, Çukur T. Progressively Volumetrized Deep Generative Models for Data-Efficient Contextual Learning of MR Image Recovery. Med Image Anal 2022;78:102429. DOI: 10.1016/j.media.2022.102429.
112
Bone and Soft Tissue Tumors. Radiol Clin North Am 2022;60:339-358. DOI: 10.1016/j.rcl.2021.11.011.
113
Duan C, Xiong Y, Cheng K, Xiao S, Lyu J, Wang C, Bian X, Zhang J, Zhang D, Chen L, Zhou X, Lou X. Accelerating susceptibility-weighted imaging with deep learning by complex-valued convolutional neural network (ComplexNet): validation in clinical brain imaging. Eur Radiol 2022;32:5679-5687. PMID: 35182203; DOI: 10.1007/s00330-022-08638-1.
Abstract
OBJECTIVES Susceptibility-weighted imaging (SWI) is crucial for the characterization of intracranial hemorrhage and mineralization, but has the drawback of long acquisition times. We aimed to propose a deep learning model to accelerate SWI, and evaluate the clinical feasibility of this approach. METHODS A complex-valued convolutional neural network (ComplexNet) was developed to reconstruct high-quality SWI from highly accelerated k-space data. ComplexNet can leverage the inherently complex-valued nature of SWI data and learn richer representations by using a complex-valued network. SWI data were acquired from 117 participants who underwent clinical brain MRI examination between 2019 and 2021, including patients with tumor, stroke, hemorrhage, traumatic brain injury, etc. Reconstruction quality was evaluated using quantitative image metrics and image quality scores, including overall image quality, signal-to-noise ratio, sharpness, and artifacts. RESULTS The average reconstruction time of ComplexNet was 19 ms per section (1.33 s per participant). ComplexNet achieved significantly improved quantitative image metrics compared to a conventional compressed sensing method and a real-valued network with acceleration rates of 5 and 8 (p < 0.001). Meanwhile, there was no significant difference between fully sampled and ComplexNet approaches in terms of overall image quality and artifacts (p > 0.05) at both acceleration rates. Furthermore, ComplexNet showed comparable diagnostic performance to the fully sampled SWI for visualizing a wide range of pathology, including hemorrhage, cerebral microbleeds, and brain tumor. CONCLUSIONS ComplexNet can effectively accelerate SWI while providing superior performance in terms of overall image quality and visualization of pathology for routine clinical brain imaging. KEY POINTS • The complex-valued convolutional neural network (ComplexNet) allowed fast and high-quality reconstruction of highly accelerated SWI data, with an average reconstruction time of 19 ms per section. • ComplexNet achieved significantly improved quantitative image metrics compared to a conventional compressed sensing method and a real-valued network with acceleration rates of 5 and 8 (p < 0.001). • ComplexNet showed comparable diagnostic performance to the fully sampled SWI for visualizing a wide range of pathology, including hemorrhage, cerebral microbleeds, and brain tumor.
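A complex-valued convolution of the kind such a network is built from can be realised with two real-valued convolutions via (a + ib)(w_r + i w_i) = (a w_r − b w_i) + i(a w_i + b w_r). The layer below is a generic sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, padding: int = 1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x is a complex tensor; apply the complex multiplication rule.
        real = self.conv_r(x.real) - self.conv_i(x.imag)
        imag = self.conv_i(x.real) + self.conv_r(x.imag)
        return torch.complex(real, imag)

# k = torch.randn(1, 8, 128, 128, dtype=torch.complex64)
# out = ComplexConv2d(8, 8)(k)
```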
Affiliation(s)
- Caohui Duan: Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Yongqin Xiong: Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Kun Cheng: Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Sa Xiao: Department of Neurosurgery, Chinese PLA General Hospital, 28 Fuxing Road, Beijing, 100853, People's Republic of China
- Jinhao Lyu: Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Cheng Wang: Department of Neurosurgery, Chinese PLA General Hospital, 28 Fuxing Road, Beijing, 100853, People's Republic of China
- Xiangbing Bian: Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Jing Zhang: Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Dekang Zhang: Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Ling Chen: Department of Neurosurgery, Chinese PLA General Hospital, 28 Fuxing Road, Beijing, 100853, People's Republic of China
- Xin Zhou: Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Wuhan, 430071, People's Republic of China
- Xin Lou: Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
114
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications. Electronics 2022. DOI: 10.3390/electronics11040586.
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent works that use deep learning methods to solve the CS problem for image reconstruction and medical imaging, including computed tomography (CT), magnetic resonance imaging (MRI), and positron-emission tomography (PET). We propose a novel framework to unify traditional iterative algorithms and deep learning approaches. In short, we define two projection operators, toward the image prior and toward data consistency respectively, and any reconstruction algorithm can be decomposed into these two parts. Although deep learning methods can be divided into several categories, they all fit this framework. We establish the relationships between different deep learning reconstruction methods and connect them to traditional methods through the proposed framework. The framework also indicates that the key to solving the CS problem and its medical applications is how to describe the image prior. Based on the framework, we analyze current deep learning methods and point out some important directions for future research.
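In its simplest form, the two-operator view described here reduces to alternating a data-consistency projection (re-inserting the measured k-space samples) with a step toward the image prior; in a deep-learning method the prior step would be a trained network, whereas below it is just a placeholder argument. This sketch uses a Cartesian MRI-style forward model for concreteness.

```python
import numpy as np

def data_consistency(image: np.ndarray, measured_kspace: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Project onto the set of images consistent with the acquired samples."""
    k = np.fft.fft2(image)
    k = np.where(mask, measured_kspace, k)   # keep acquired samples exactly
    return np.fft.ifft2(k)

def reconstruct(measured_kspace: np.ndarray, mask: np.ndarray,
                prior_op, n_iters: int = 20) -> np.ndarray:
    """Alternate a prior step (denoiser/network) with data consistency."""
    x = np.fft.ifft2(measured_kspace)        # zero-filled initialisation
    for _ in range(n_iters):
        x = prior_op(x)                      # projection toward the image prior
        x = data_consistency(x, measured_kspace, mask)
    return np.abs(x)
```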
115
An Intelligent Music Production Technology Based on Generation Confrontation Mechanism. Comput Intell Neurosci 2022;2022:5083146. PMID: 35186065; PMCID: PMC8853763; DOI: 10.1155/2022/5083146.
Abstract
In recent years, deep neural networks have matured considerably, and since the introduction of the generative adversarial mechanism, academia has produced many results in image, video, and text generation; scholars have therefore begun similar attempts in music generation. Building on existing theory and prior work, this paper studies music production and proposes an intelligent music production technology based on the generative adversarial mechanism, to enrich research in the field of computer music generation. Taking adversarially trained music generation as its topic, the paper mainly studies the following. After reviewing existing music generation models based on generative adversarial networks, a temporal-structure model for maintaining musical coherence is proposed, which avoids manual input during generation and ensures interdependence between tracks. The paper also studies and implements the generation of discrete multi-track music events, including a multi-track correlation model and discretization. The Lakh MIDI dataset is studied and preprocessed to obtain the LMD piano-roll dataset, which is used in the MCT-GAN music generation experiments. For multi-track music generation based on generative adversarial networks, three models are studied and analyzed, and a multi-track music generation method based on CT-GAN is proposed, which mainly improves the existing GAN-based music generation model. Finally, the generation results of MCT-GAN are compared with those of MuseGAN to reflect the improvement achieved by MCT-GAN. Twenty listeners were recruited to listen to the generated and real music and to distinguish between them, and the evaluation results indicate that multi-track music generation based on CT-GAN is improved over prior work.
116
An optimal control framework for joint-channel parallel MRI reconstruction without coil sensitivities. Magn Reson Imaging 2022;89:1-11. DOI: 10.1016/j.mri.2022.01.011.
117
Yu Z, Rahman MA, Jha AK. Investigating the limited performance of a deep-learning-based SPECT denoising approach: An observer-study-based characterization. Proc SPIE Int Soc Opt Eng 2022;12035:120350D. PMID: 35847481; PMCID: PMC9286496; DOI: 10.1117/12.2613134.
Abstract
Multiple studies based on objective assessment of image quality (OAIQ) have reported that several deep-learning (DL)-based denoising methods show limited performance on signal-detection tasks. Our goal was to investigate the reasons for this limited performance. To achieve this goal, we conducted a task-based characterization of a DL-based denoising approach for individual signal properties, in the context of evaluating a DL-based approach for denoising single-photon emission computed tomography (SPECT) images. The training data consisted of signals of different sizes and shapes within a clustered lumpy background, imaged with a 2D parallel-hole-collimator SPECT system. The projections were generated at normal and 20% low-count levels, both of which were reconstructed using an ordered-subsets expectation-maximization (OSEM) algorithm. A convolutional neural network (CNN)-based denoiser was trained to process the low-count images. The performance of this CNN was characterized for five different signal sizes and four different signal-to-background ratios (SBRs) by designing each evaluation as a signal-known-exactly/background-known-statistically (SKE/BKS) signal-detection task. Performance on this task was evaluated using an anthropomorphic channelized Hotelling observer (CHO). As in previous studies, we observed that the DL-based denoising method did not improve performance on signal-detection tasks; the observer-study-based characterization demonstrated that it did not improve performance for any of the signal types. Overall, these results provide new insights into the performance of the DL-based denoising approach as a function of signal size and contrast. More generally, the observer-study-based characterization provides a mechanism to evaluate the sensitivity of the method to specific object properties, and may be explored as an analogue of characterizations such as the modulation transfer function for linear systems. Finally, this work underscores the need for objective task-based evaluation of DL-based denoising approaches.
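For context, a channelized Hotelling observer of the kind used in this evaluation can be computed compactly once a channel matrix is given; building the anthropomorphic channels themselves is omitted, and the snippet below is a generic sketch rather than the authors' code.

```python
import numpy as np

def cho_test_statistics(signal_imgs: np.ndarray,   # (N, P) flattened images
                        noise_imgs: np.ndarray,    # (N, P)
                        channels: np.ndarray):     # (P, C) channel templates
    """Return CHO test statistics for signal-present and signal-absent images."""
    v1 = signal_imgs @ channels          # channel outputs, signal present
    v0 = noise_imgs @ channels           # channel outputs, signal absent
    # Average intra-class channel covariance.
    s = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(s, v1.mean(axis=0) - v0.mean(axis=0))  # Hotelling template
    return v1 @ w, v0 @ w                # feed these to an AUC or SNR estimate
```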
Affiliation(s)
- Zitong Yu: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Md Ashequr Rahman: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA
- Abhinav K. Jha: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA; Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA
118
Calivà F, Namiri NK, Dubreuil M, Pedoia V, Ozhinsky E, Majumdar S. Studying osteoarthritis with artificial intelligence applied to magnetic resonance imaging. Nat Rev Rheumatol 2022;18:112-121. PMID: 34848883; DOI: 10.1038/s41584-021-00719-7.
Abstract
The 3D nature and soft-tissue contrast of MRI makes it an invaluable tool for osteoarthritis research, by facilitating the elucidation of disease pathogenesis and progression. The recent increasing employment of MRI has certainly been stimulated by major advances that are due to considerable investment in research, particularly related to artificial intelligence (AI). These AI-related advances are revolutionizing the use of MRI in clinical research by augmenting activities ranging from image acquisition to post-processing. Automation is key to reducing the long acquisition times of MRI, conducting large-scale longitudinal studies and quantitatively defining morphometric and other important clinical features of both soft and hard tissues in various anatomical joints. Deep learning methods have been used recently for multiple applications in the musculoskeletal field to improve understanding of osteoarthritis. Compared with labour-intensive human efforts, AI-based methods have advantages and potential in all stages of imaging, as well as post-processing steps, including aiding diagnosis and prognosis. However, AI-based methods also have limitations, including the arguably limited interpretability of AI models. Given that the AI community is highly invested in uncovering uncertainties associated with model predictions and improving their interpretability, we envision future clinical translation and progressive increase in the use of AI algorithms to support clinicians in optimizing patient care.
Affiliation(s)
- Francesco Calivà: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Nikan K Namiri: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Maureen Dubreuil: Section of Rheumatology, Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Valentina Pedoia: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Eugene Ozhinsky: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Sharmila Majumdar: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
119
Huang J, Ding W, Lv J, Yang J, Dong H, Del Ser J, Xia J, Ren T, Wong ST, Yang G. Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information. Appl Intell 2022;52:14693-14710. PMID: 36199853; PMCID: PMC9526695; DOI: 10.1007/s10489-021-03092-w.
Abstract
In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, triage, prognosis, and treatment planning. However, MRI suffers from an inherently slow data acquisition process because data are collected sequentially in k-space. In recent years, most MRI reconstruction methods proposed in the literature have focused on holistic image reconstruction rather than on enhancing edge information. This work departs from that general trend by concentrating on the enhancement of edge information. Specifically, we introduce a novel parallel-imaging-coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction that incorporates multi-view information. The dual discriminator design aims to improve the edge information in MRI reconstruction: one discriminator is used for holistic image reconstruction, whereas the other is responsible for enhancing edge information. An improved U-Net with local and global residual learning is proposed for the generator, and frequency channel attention blocks (FCA Blocks) are embedded in the generator to incorporate attention mechanisms. A content loss is introduced to train the generator for better reconstruction quality. We performed comprehensive experiments on the public Calgary-Campinas brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies of residual learning were conducted on the MICCAI13 dataset to validate the proposed modules. Results show that our PIDD-GAN provides high-quality reconstructed MR images with well-preserved edge information. The time for single-image reconstruction is below 5 ms, which meets the demand for faster processing.
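One simple way to feed an edge-focused discriminator such as the second discriminator described above is to hand it edge maps of the reconstruction and reference rather than the images themselves. The Sobel operator below is a standard, illustrative choice and not necessarily the edge extraction used in PIDD-GAN.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """img: (N, 1, H, W) -> Sobel gradient-magnitude map of the same shape."""
    gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    gy = gx.transpose(2, 3)
    ex = F.conv2d(img, gx, padding=1)
    ey = F.conv2d(img, gy, padding=1)
    return torch.sqrt(ex ** 2 + ey ** 2 + 1e-12)

# edge_score = edge_discriminator(sobel_edges(reconstruction))  # hypothetical use
```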
Affiliation(s)
- Jiahao Huang: College of Information Science and Technology, Zhejiang Shuren University, 310015 Hangzhou, China; National Heart and Lung Institute, Imperial College London, London, UK
- Weiping Ding: School of Information Science and Technology, Nantong University, 226019 Nantong, China
- Jun Lv: School of Computer and Control Engineering, Yantai University, 264005 Yantai, China
- Jingwen Yang: Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China
- Hao Dong: Center on Frontiers of Computing Studies, Peking University, Beijing, China
- Javier Del Ser: TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain; University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain
- Jun Xia: Department of Radiology, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University Health Science Center, Shenzhen, China
- Tiaojuan Ren: College of Information Science and Technology, Zhejiang Shuren University, 310015 Hangzhou, China
- Stephen T. Wong: Systems Medicine and Bioengineering Department, Departments of Radiology and Pathology, Houston Methodist Cancer Center, Houston Methodist Hospital, Weill Cornell Medicine, 77030 Houston, TX, USA
- Guang Yang: National Heart and Lung Institute, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
120
Citko W, Sienko W. Inpainted Image Reconstruction Using an Extended Hopfield Neural Network Based Machine Learning System. Sensors (Basel) 2022;22:813. PMID: 35161559; PMCID: PMC8838128; DOI: 10.3390/s22030813.
Abstract
This paper considers the use of a machine learning system for the reconstruction and recognition of distorted or damaged patterns, in particular, images of faces partially covered with masks. The most up-to-date image reconstruction structures are based on constrained optimization algorithms and suitable regularizers. In contrast with the above-mentioned image processing methods, the machine learning system presented in this paper employs the superposition of system vectors setting up asymptotic centers of attraction. The structure of the system is implemented using Hopfield-type neural network-based biorthogonal transformations. The reconstruction property gives rise to a superposition processor and reversible computations. Moreover, this paper's distorted image reconstruction sets up associative memories where images stored in memory are retrieved by distorted/inpainted key images.
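For orientation, a classical Hopfield associative memory (not the extended, biorthogonal variant this paper develops) already exhibits the retrieval behaviour described: stored patterns act as attractors, and a distorted key is iterated toward the nearest stored pattern.

```python
import numpy as np

def store(patterns: np.ndarray) -> np.ndarray:
    """Hebbian storage; patterns is (K, N) with entries in {-1, +1}."""
    W = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def retrieve(W: np.ndarray, key: np.ndarray, n_iters: int = 20) -> np.ndarray:
    """Iterate a distorted/partial key toward the nearest stored pattern."""
    x = key.astype(float).copy()
    for _ in range(n_iters):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x
```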
Affiliation(s)
- Wieslaw Citko: Department of Electrical Engineering, Gdynia Maritime University, Morska 81-87, 81-225 Gdynia, Poland
121
Wei H, Li Z, Wang S, Li R. Undersampled Multi-contrast MRI Reconstruction Based on Double-domain Generative Adversarial Network. IEEE J Biomed Health Inform 2022;26:4371-4377. PMID: 35030086; DOI: 10.1109/jbhi.2022.3143104.
Abstract
Multi-contrast magnetic resonance imaging can provide comprehensive information for clinical diagnosis. However, multi-contrast imaging suffers from long acquisition times, which makes it prohibitive for daily clinical practice. Subsampling k-space is one of the main ways to shorten scan time, but missing k-space samples lead to serious artifacts and noise. Under the assumption that different contrast modalities share some mutual information, it may be possible to exploit this redundancy to accelerate multi-contrast imaging acquisition. Recently, generative adversarial networks have shown superior performance in image reconstruction and synthesis, and some k-space-based reconstruction studies have also exhibited superior performance over conventional state-of-the-art methods. In this study, we propose a cross-domain two-stage generative adversarial network for multi-contrast image reconstruction based on a prior fully sampled contrast and undersampled information. The new approach integrates reconstruction and synthesis: it estimates and completes the missing k-space and then refines the result in image space. It takes fully sampled data from one contrast modality and highly undersampled data from several other modalities as input, and outputs high-quality images for each contrast simultaneously. The network is trained and tested on a public brain dataset from healthy subjects. Quantitative comparisons against the baseline clearly indicate that the proposed method can effectively reconstruct undersampled images. Even at high acceleration, the network can still recover texture details and reduce artifacts.
122
Xue S, Cheng Z, Han G, Sun C, Fang K, Liu Y, Cheng J, Jin X, Bai R. 2D probabilistic undersampling pattern optimization for MR image reconstruction. Med Image Anal 2022;77:102346. PMID: 35030342; DOI: 10.1016/j.media.2021.102346.
Abstract
With 3D magnetic resonance imaging (MRI), there is a tradeoff between higher image quality and shorter scan time. One way to address this problem is to reconstruct high-quality MRI images from undersampled k-space. Many recent studies have explored effective k-space undersampling patterns and MRI reconstruction methods from undersampled k-space, which are two necessary steps. Most studies have considered these two steps separately, although in theory their performance depends on each other. In this study, we propose a joint optimization model, trained end-to-end, that simultaneously optimizes the undersampling pattern in the Fourier domain and the reconstruction model in the image domain. A 2D probabilistic undersampling layer was designed to optimize the undersampling pattern and probability distribution in a differentiable manner. A 2D inverse Fourier transform layer was implemented to connect the Fourier domain and the image domain during forward and back propagation. Finally, we identified an optimized relationship between the probability distribution of the undersampling pattern and its corresponding sampling rate. Further testing was performed using 3D T1-weighted MR images of the brain from the MICCAI 2013 Grand Challenge on Multi-Atlas Labeling dataset, locally acquired brain 3D T1-weighted MR images of healthy volunteers, and contrast-enhanced 3D T1-weighted MR images of high-grade glioma patients. The results showed that the MR images recovered using our 2D probabilistic undersampling pattern (with or without the reconstruction network) significantly outperformed those using existing state-of-the-art undersampling strategies in both qualitative and quantitative comparisons, suggesting the advantages and, to some extent, the generalizability of the proposed method.
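A hedged sketch of a differentiable 2D sampling mask: each k-space location carries a learnable logit, a relaxed Bernoulli sample keeps the mask differentiable during training, and the probabilities are rescaled toward a target sampling rate. The temperature, rescaling, and relaxation details are assumptions of this sketch, not the layer proposed in the paper.

```python
import torch
import torch.nn as nn

class ProbabilisticMask2D(nn.Module):
    def __init__(self, ny: int, nx: int, target_rate: float = 0.25,
                 temperature: float = 0.2):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(ny, nx))  # one logit per k-space point
        self.target_rate = target_rate
        self.temperature = temperature

    def forward(self) -> torch.Tensor:
        prob = torch.sigmoid(self.logits)
        # Rescale so the mean probability matches the target sampling rate.
        prob = (prob * (self.target_rate / prob.mean().clamp_min(1e-8))).clamp(1e-6, 1 - 1e-6)
        # Binary-concrete (relaxed Bernoulli) sample: differentiable soft mask.
        u = torch.rand_like(prob).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)
        return torch.sigmoid((torch.logit(prob) + noise) / self.temperature)

# mask = ProbabilisticMask2D(256, 256)()   # multiply with full k-space during training
```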
Affiliation(s)
- Shengke Xue: College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Zhaowei Cheng: College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Guangxu Han: Department of Physical Medicine and Rehabilitation of The Affiliated Sir Run Run Shaw Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China; Key Laboratory of Biomedical Engineering of Education Ministry, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Chaoliang Sun: Department of Physical Medicine and Rehabilitation of The Affiliated Sir Run Run Shaw Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China; Key Laboratory of Biomedical Engineering of Education Ministry, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Ke Fang: College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yingchao Liu: Department of Neurosurgery, Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Jian Cheng: School of Computer Science and Engineering, Beihang University, Beijing, China
- Xinyu Jin: College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Ruiliang Bai: Department of Physical Medicine and Rehabilitation of The Affiliated Sir Run Run Shaw Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China; Key Laboratory of Biomedical Engineering of Education Ministry, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
123
|
Abstract
Background:
The CRISPR system can quickly achieve the editing of different gene loci by changing a small sequence on a single guide RNA, but off-target events limit its further development. Improving the efficiency and specificity of this technology while minimizing the risk of off-target effects remains a challenge. For genome-wide CRISPR Off-Target cleavage Sites (OTS) prediction, an important issue is data imbalance: the number of true OTS identified is much smaller than the number of all possible nucleotide mismatch loci.
Method:
In this work, positive off-target sequences were generated with a sequence-generating adversarial network (SeqGAN) to augment the off-target gene locus (OTS) dataset of Cpf1. A deep Convolutional Neural Network (CNN) was then trained on the balanced data to obtain a predictor with stronger generalization ability and better performance.
Results:
In 10-fold cross-validation, the AUC value of the CNN classifier after SeqGAN balancing was 0.941, higher than that of the original data (0.863) and of over-sampling (0.929). In independent testing, the AUC value after SeqGAN balancing was 0.841, higher than that of the original data (0.833) and of over-sampling (0.836). The PR value after SeqGAN was 0.722, about 0.16 higher than with the original data and about 0.03 higher than with over-sampling.
Conclusion:
The sequence-generating adversarial network SeqGAN was used for the first time to address data imbalance in CRISPR data. All results showed that SeqGAN can effectively generate positive data for CRISPR off-target sites.
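For readers who want to reproduce the evaluation bookkeeping described above, the sketch below shows 10-fold cross-validated AUC/PR computation with training-fold-only balancing. The random features, a logistic-regression stand-in for the paper's CNN, and noisy duplication standing in for SeqGAN-generated positives are all illustrative assumptions.

```python
# Minimal sketch of the evaluation bookkeeping only (hypothetical data; a
# logistic-regression stand-in for the CNN; naive duplication stands in for
# SeqGAN-generated positives): balance the *training* folds, never the test fold.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.random((2000, 92))                 # e.g. encoded guide/target sequence pairs
y = (rng.random(2000) < 0.05).astype(int)  # ~5% true off-target sites (imbalanced)

aucs, prs = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    X_tr, y_tr = X[train_idx], y[train_idx]
    # Stand-in for generated positives: duplicate minority samples with small noise.
    pos = X_tr[y_tr == 1]
    n_extra = (y_tr == 0).sum() - (y_tr == 1).sum()
    fake = pos[rng.integers(0, len(pos), n_extra)] + 0.01 * rng.standard_normal((n_extra, X.shape[1]))
    X_bal = np.vstack([X_tr, fake])
    y_bal = np.concatenate([y_tr, np.ones(n_extra, dtype=int)])

    clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
    score = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], score))
    prs.append(average_precision_score(y[test_idx], score))

print(f"10-fold AUC: {np.mean(aucs):.3f}, PR (average precision): {np.mean(prs):.3f}")
```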
Collapse
Affiliation(s)
- Wen Li
- Institute of Computing Technology, University of Science and Technology Beijing, Beijing 100083, China
| | - Xiao-Bo Wang
- Institute of Applied Physics and Computational Mathematics, Beijing 100083, China
| | - Yan Xu
- Institute of Computing Technology, University of Science and Technology Beijing, Beijing 100083, China
| |
Collapse
|
124
|
Li Y, Yang H, Xie D, Dreizin D, Zhou F, Wang Z. POCS-Augmented CycleGAN for MR Image Reconstruction. APPLIED SCIENCES (BASEL, SWITZERLAND) 2022; 12:114. [PMID: 37465648 PMCID: PMC10353773 DOI: 10.3390/app12010114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
Recent years have seen increased research interest in replacing the computationally intensive magnetic resonance (MR) image reconstruction process with deep neural networks. We claim in this paper that traditional image reconstruction methods and deep learning (DL) are mutually complementary and can be combined to achieve better image reconstruction quality. To test this hypothesis, a hybrid DL image reconstruction method was proposed by combining a state-of-the-art deep learning network, namely a generative adversarial network with cycle loss (CycleGAN), with a traditional data reconstruction algorithm: Projection Onto Convex Sets (POCS). The output of the CycleGAN's first training iteration was updated by POCS and used as extra training data for the second training iteration of the CycleGAN. The method was validated using sub-sampled magnetic resonance imaging data. Compared with other state-of-the-art DL-based methods (e.g., U-Net, GAN, and RefineGAN) and a traditional method (compressed sensing), our method showed the best reconstruction results.
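A plausible form of the POCS data-consistency step described above is sketched below in NumPy: the network output is projected back onto the set of images whose k-space agrees with the acquired samples. This is an illustrative reading rather than the authors' code; the function name and toy data are assumptions.

```python
# Minimal sketch (assumed form of the POCS step, not the authors' code):
# project a network output onto the set of images whose k-space matches
# the acquired samples, then feed the projected image back as training data.
import numpy as np

def pocs_data_consistency(recon_image, acquired_kspace, mask):
    """recon_image: network output (2D); acquired_kspace: measured k-space;
    mask: 1 where a k-space sample was actually acquired."""
    k = np.fft.fft2(recon_image)
    k = np.where(mask > 0, acquired_kspace, k)   # keep measured samples verbatim
    return np.fft.ifft2(k)

# Toy usage with random data standing in for a CycleGAN output.
mask = (np.random.rand(128, 128) < 0.3).astype(float)
full_k = np.fft.fft2(np.random.rand(128, 128))
net_output = np.random.rand(128, 128)
updated = np.abs(pocs_data_consistency(net_output, full_k * mask, mask))
```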
Collapse
Affiliation(s)
- Yiran Li
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD 21201, USA
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19121, USA
| | - Hanlu Yang
- Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore, MD 21250, USA
| | - Danfeng Xie
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD 21201, USA
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19121, USA
| | - David Dreizin
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD 21201, USA
| | - Fuqing Zhou
- Department of Radiology, The First Affiliated Hospital of Nanchang University, Nanchang 330209, China
| | - Ze Wang
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD 21201, USA
| |
Collapse
|
125
|
Chen Q, Shah NJ, Worthoff WA. Compressed Sensing in Sodium Magnetic Resonance Imaging: Techniques, Applications, and Future Prospects. J Magn Reson Imaging 2021; 55:1340-1356. [PMID: 34918429 DOI: 10.1002/jmri.28029] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 12/01/2021] [Accepted: 12/03/2021] [Indexed: 11/06/2022] Open
Abstract
Sodium (²³Na) yields the second strongest nuclear magnetic resonance (NMR) signal in biological tissues and plays a vital role in cell physiology. Sodium magnetic resonance imaging (MRI) can provide insights into cell integrity and tissue viability relative to pathologies without significant anatomical alterations, and thus it is considered to be a potential surrogate biomarker that provides complementary information for standard hydrogen (¹H) MRI in a noninvasive and quantitative manner. However, sodium MRI suffers from a relatively low signal-to-noise ratio and long acquisition times due to its relatively low NMR sensitivity. Compressed sensing-based (CS-based) methods have been shown to accelerate sodium imaging and/or improve sodium image quality significantly. In this manuscript, the basic concepts of CS and how CS might be applied to improve sodium MRI are described, and the historical milestones of CS-based sodium MRI are briefly presented. Representative advanced techniques and evaluation methods are discussed in detail, followed by an exposé of clinical applications in multiple anatomical regions and diseases as well as thoughts and suggestions on potential future research prospects of CS in sodium MRI. EVIDENCE LEVEL: 5 TECHNICAL EFFICACY: Stage 1.
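To make the compressed-sensing idea above concrete, the following toy NumPy sketch reconstructs an image from randomly undersampled k-space with iterative soft-thresholding (ISTA). Image-domain sparsity stands in for the wavelet or total-variation sparsifiers used in practice, and the threshold and iteration count are arbitrary choices, not values from any of the cited methods.

```python
# Toy sketch of the CS idea (not any paper's implementation): reconstruct an
# image from undersampled k-space by iterative soft-thresholding (ISTA),
# with image-domain sparsity as a stand-in for a wavelet/TV sparsifier.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_cs_recon(kspace, mask, lam=0.01, n_iter=50):
    x = np.fft.ifft2(kspace * mask)           # zero-filled starting point
    for _ in range(n_iter):
        k = np.fft.fft2(x)
        k = np.where(mask > 0, kspace, k)     # enforce data consistency
        x = np.fft.ifft2(k)
        x = soft_threshold(x.real, lam) + 1j * soft_threshold(x.imag, lam)
    return x

mask = (np.random.rand(96, 96) < 0.35).astype(float)  # ~35% random sampling
truth = np.zeros((96, 96))
truth[30:60, 30:60] = 1.0                              # sparse toy "image"
recon = ista_cs_recon(np.fft.fft2(truth) * mask, mask)
print(float(np.abs(recon - truth).mean()))
```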
Collapse
Affiliation(s)
- Qingping Chen
- Institute of Neuroscience and Medicine 4, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany; Faculty of Medicine, RWTH Aachen University, Aachen, Germany; Department of Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia
| | - N Jon Shah
- Institute of Neuroscience and Medicine 4, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany; Institute of Neuroscience and Medicine 11, INM-11, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany; JARA-BRAIN-Translational Medicine, Aachen, Germany; Department of Neurology, RWTH Aachen University, Aachen, Germany
| | - Wieland A Worthoff
- Institute of Neuroscience and Medicine 4, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany
| |
Collapse
|
126
|
Wang Y, Du W, Wang H, Zhao Y. Intelligent Generation Method of Innovative Structures Based on Topology Optimization and Deep Learning. MATERIALS 2021; 14:ma14247680. [PMID: 34947275 PMCID: PMC8706216 DOI: 10.3390/ma14247680] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 12/04/2021] [Accepted: 12/11/2021] [Indexed: 12/23/2022]
Abstract
Computer-aided design has been widely used in structural calculation and analysis, but there are still challenges in generating innovative structures intelligently. Aiming at this issue, a new method was proposed to realize the intelligent generation of innovative structures based on topology optimization and deep learning. Firstly, a large number of structural models obtained from topology optimization under different optimization parameters were extracted to produce the training set images, and the training set labels were defined as the corresponding load cases. Then, the boundary equilibrium generative adversarial networks (BEGAN) deep learning algorithm was applied to generate numerous innovative structures. Finally, the generated structures were evaluated by a series of evaluation indexes, including innovation, aesthetics, machinability, and mechanical performance. Combined with two engineering cases, the application process of the above method is described here in detail. Furthermore, the 3D reconstruction and additive manufacturing techniques were applied to manufacture the structural models. The research results showed that the proposed approach of structural generation based on topology optimization and deep learning is feasible, and can not only generate innovative structures but also optimize the material consumption and mechanical performance further.
Collapse
Affiliation(s)
- Yingqi Wang
- Institute of Steel and Spatial Structures, College of Civil Engineering and Architecture, Henan University, Kaifeng 475004, China; (Y.W.); (H.W.); (Y.Z.)
| | - Wenfeng Du
- Institute of Steel and Spatial Structures, College of Civil Engineering and Architecture, Henan University, Kaifeng 475004, China; (Y.W.); (H.W.); (Y.Z.)
- Henan Provincial Research Center of Engineering Technology on Assembly Buildings, Kaifeng 475004, China
- Correspondence:
| | - Hui Wang
- Institute of Steel and Spatial Structures, College of Civil Engineering and Architecture, Henan University, Kaifeng 475004, China; (Y.W.); (H.W.); (Y.Z.)
| | - Yannan Zhao
- Institute of Steel and Spatial Structures, College of Civil Engineering and Architecture, Henan University, Kaifeng 475004, China; (Y.W.); (H.W.); (Y.Z.)
| |
Collapse
|
127
|
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. FRONTIERS IN RADIOLOGY 2021; 1:781868. [PMID: 37492170 PMCID: PMC10365109 DOI: 10.3389/fradi.2021.781868] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 11/08/2021] [Indexed: 07/27/2023]
Abstract
Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, and their potential applications range from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based methods for image reconstruction are emphasized, with respect to their methodological designs and performance in handling volumetric imaging data. It is expected that this review can help relevant researchers understand how to adapt AI for medical imaging and which advantages can be achieved with the assistance of AI.
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Pengcheng Laboratrory, Shenzhen, China
| | - Guohua Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
| | - Shu Liao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
| | - Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| |
Collapse
|
128
|
Felfeliyan B, Hareendranathan A, Kuntze G, Jaremko J, Ronsky J. MRI Knee Domain Translation for Unsupervised Segmentation By CycleGAN (data from Osteoarthritis initiative (OAI)). ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:4052-4055. [PMID: 34892119 DOI: 10.1109/embc46164.2021.9629705] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Accurate quantification of bone and cartilage features is the key to efficient management of knee osteoarthritis (OA). Bone and cartilage tissues can be accurately segmented from magnetic resonance imaging (MRI) data using supervised Deep Learning (DL) methods. DL training is commonly conducted using large datasets with expert-labeled annotations. DL models perform better if distributions of testing data (target domains) are close to those of training data (source domains). However, in practice, data distributions of images from different MRI scanners and sequences are different, and DL models need to be re-trained on each dataset separately. We propose a domain adaptation (DA) framework using the CycleGAN model for MRI translation that would aid in unsupervised MRI data segmentation. We have validated our pipeline on five scans from the Osteoarthritis Initiative (OAI) dataset. Using this pipeline, we translated TSE Fat Suppressed MRI sequences to pseudo-DESS images. An improved MaskRCNN (I-MaskRCNN) instance segmentation network trained on DESS was used to segment cartilage and femoral head regions in TSE Fat Suppressed sequences. Segmentations of the I-MaskRCNN correlated well with approximated manual segmentation obtained from the nearest DESS slices (DICE = 0.76) without the need for retraining. We anticipate this technique will aid in automatic unsupervised assessment of knee MRI using commonly acquired MRI sequences and save experts' time that would otherwise be required for manual segmentation. Clinical relevance: This technique paves the way to automatically convert one MRI sequence to its equivalent as if acquired by a different protocol or different magnet, facilitating robust, hardware-independent automated analysis. For example, routine clinically acquired knee MRI could be converted to high-resolution high-contrast images suitable for automated detection of cartilage defects.
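The core of such CycleGAN-based domain translation is the cycle-consistency objective. A generic PyTorch sketch is given below with deliberately tiny stand-in generators (not the paper's networks); the adversarial terms from the two discriminators and any structural/identity penalties a full pipeline would add are omitted.

```python
# Generic sketch of the cycle-consistency objective used by CycleGAN-style
# domain translation (illustrative; not the paper's implementation).
import torch
import torch.nn as nn

def tiny_generator():
    # A deliberately small stand-in for a real generator network.
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_tse2dess = tiny_generator()   # TSE-FS -> pseudo-DESS
G_dess2tse = tiny_generator()   # DESS   -> pseudo-TSE-FS
l1 = nn.L1Loss()

tse = torch.rand(4, 1, 64, 64)   # unpaired batches from the two domains
dess = torch.rand(4, 1, 64, 64)

fake_dess = G_tse2dess(tse)
fake_tse = G_dess2tse(dess)
cycle_loss = l1(G_dess2tse(fake_dess), tse) + l1(G_tse2dess(fake_tse), dess)
# A full objective would add adversarial losses from two discriminators and,
# in structure-preserving variants, extra structural/identity penalties.
cycle_loss.backward()
```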
Collapse
|
129
|
Li X, Jiang Y, Rodriguez-Andina JJ, Luo H, Yin S, Kaynak O. When medical images meet generative adversarial network: recent development and research opportunities. DISCOVER ARTIFICIAL INTELLIGENCE 2021; 1:5. [DOI: 10.1007/s44163-021-00006-0] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Accepted: 07/12/2021] [Indexed: 11/27/2022]
Abstract
Deep learning techniques have promoted the rise of artificial intelligence (AI) and performed well in computer vision. Medical image analysis is an important application of deep learning, which is expected to greatly reduce the workload of doctors, contributing to more sustainable health systems. However, most current AI methods for medical image analysis are based on supervised learning, which requires a lot of annotated data. The number of medical images available is usually small and the acquisition of medical image annotations is an expensive process. Generative adversarial network (GAN), an unsupervised method that has become very popular in recent years, can simulate the distribution of real data and reconstruct approximate real data. GAN opens some exciting new ways for medical image generation, expanding the number of medical images available for deep learning methods. Generated data can solve the problem of insufficient data or imbalanced data categories. Adversarial training is another contribution of GAN to medical imaging that has been applied to many tasks, such as classification, segmentation, or detection. This paper investigates the research status of GAN in medical images and analyzes several GAN methods commonly applied in this area. The study addresses GAN application for both medical image synthesis and adversarial learning for other medical image tasks. The open challenges and future research directions are also discussed.
Collapse
|
130
|
Quan C, Zhou J, Zhu Y, Chen Y, Wang S, Liang D, Liu Q. Homotopic Gradients of Generative Density Priors for MR Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3265-3278. [PMID: 34010128 DOI: 10.1109/tmi.2021.3081677] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Deep learning, particularly the generative model, has recently demonstrated tremendous potential to significantly speed up image reconstruction with reduced measurements. Rather than optimizing the density priors as existing generative models often do, in this work, by taking advantage of denoising score matching, homotopic gradients of generative density priors (HGGDP) are exploited for magnetic resonance imaging (MRI) reconstruction. More precisely, to tackle the low-dimensional manifold and low data density region issues in the generative density prior, we estimate the target gradients in a higher-dimensional space. We train a more powerful noise conditional score network by forming a high-dimensional tensor as the network input at the training phase. More artificial noise is also injected in the embedding space. At the reconstruction stage, a homotopy method is employed to pursue the density prior so as to boost the reconstruction performance. Experimental results demonstrate the remarkable performance of HGGDP in terms of reconstruction accuracy: with only 10% of the k-space data, images of high quality can still be generated as effectively as standard MRI reconstructions with fully sampled data.
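The denoising score-matching objective underlying this family of generative priors can be sketched as follows. This is a generic NCSN-style form, not the authors' HGGDP code; the toy network, noise scales, and the omission of sigma-conditioning are simplifications for brevity.

```python
# Generic denoising score-matching loss (NCSN-style sketch, not the authors'
# HGGDP code): the network learns the score of noise-perturbed data at several
# noise scales; reconstruction would then follow these learned scores.
import torch
import torch.nn as nn

score_net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))  # toy stand-in
# (A real score network would also be conditioned on the noise level sigma.)

def dsm_loss(net, x, sigmas):
    losses = []
    for sigma in sigmas:
        noise = torch.randn_like(x) * sigma
        x_noisy = x + noise
        target = -noise / (sigma ** 2)        # score of the Gaussian smoothing kernel
        pred = net(x_noisy)
        losses.append((sigma ** 2) * ((pred - target) ** 2).sum(dim=(1, 2, 3)).mean())
    return torch.stack(losses).mean()

x = torch.rand(8, 1, 32, 32)                  # toy "MR image" batch
loss = dsm_loss(score_net, x, sigmas=[1.0, 0.5, 0.1])
loss.backward()
```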
Collapse
|
131
|
Cheng J, Cui ZX, Huang W, Ke Z, Ying L, Wang H, Zhu Y, Liang D. Learning Data Consistency and its Application to Dynamic MR Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3140-3153. [PMID: 34252025 DOI: 10.1109/tmi.2021.3096232] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Magnetic resonance (MR) image reconstruction from undersampled k-space data can be formulated as a minimization problem involving data consistency and an image prior. Existing deep learning (DL)-based methods for MR reconstruction employ deep networks to exploit the prior information and integrate the prior knowledge into the reconstruction under the explicit constraint of data consistency, without considering the real distribution of the noise. In this work, we propose a new DL-based approach termed Learned DC that implicitly learns the data consistency with deep networks, corresponding to the actual probability distribution of system noise. The data consistency term and the prior knowledge are both embedded in the weights of the networks, which provides an entirely implicit manner of learning the reconstruction model. We evaluated the proposed approach with highly undersampled dynamic data, including dynamic cardiac cine data with up to 24-fold acceleration and dynamic rectum data with an acceleration factor equal to the number of phases. Experimental results demonstrate the superior performance of the Learned DC, both quantitatively and qualitatively, compared to the state-of-the-art.
Collapse
|
132
|
Lahiri A, Wang G, Ravishankar S, Fessler JA. Blind Primed Supervised (BLIPS) Learning for MR Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3113-3124. [PMID: 34191725 PMCID: PMC8672324 DOI: 10.1109/tmi.2021.3093770] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
This paper examines a combined supervised-unsupervised framework involving dictionary-based blind learning and deep supervised learning for MR image reconstruction from under-sampled k-space data. A major focus of the work is to investigate the possible synergy of learned features in traditional shallow reconstruction using adaptive sparsity-based priors and deep prior-based reconstruction. Specifically, we propose a framework that uses an unrolled network to refine a blind dictionary learning-based reconstruction. We compare the proposed method with strictly supervised deep learning-based reconstruction approaches on several datasets of varying sizes and anatomies. We also compare the proposed method to alternative approaches for combining dictionary-based methods with supervised learning in MR image reconstruction. The improvements yielded by the proposed framework suggest that the blind dictionary-based approach preserves fine image details that the supervised approach can iteratively refine, suggesting that the features learned using the two methods are complementary.
Collapse
|
133
|
Khodatars M, Shoeibi A, Sadeghi D, Ghaasemi N, Jafari M, Moridian P, Khadem A, Alizadehsani R, Zare A, Kong Y, Khosravi A, Nahavandi S, Hussain S, Acharya UR, Berk M. Deep learning for neuroimaging-based diagnosis and rehabilitation of Autism Spectrum Disorder: A review. Comput Biol Med 2021; 139:104949. [PMID: 34737139 DOI: 10.1016/j.compbiomed.2021.104949] [Citation(s) in RCA: 100] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 10/02/2021] [Accepted: 10/13/2021] [Indexed: 01/23/2023]
Abstract
Accurate diagnosis of Autism Spectrum Disorder (ASD) followed by effective rehabilitation is essential for the management of this disorder. Artificial intelligence (AI) techniques can aid physicians to apply automatic diagnosis and rehabilitation procedures. AI techniques comprise traditional machine learning (ML) approaches and deep learning (DL) techniques. Conventional ML methods employ various feature extraction and classification techniques, but in DL, the process of feature extraction and classification is accomplished intelligently and integrally. DL methods for diagnosis of ASD have been focused on neuroimaging-based approaches. Neuroimaging techniques are non-invasive disease markers potentially useful for ASD diagnosis. Structural and functional neuroimaging techniques provide physicians substantial information about the structure (anatomy and structural connectivity) and function (activity and functional connectivity) of the brain. Due to the intricate structure and function of the brain, proposing optimum procedures for ASD diagnosis with neuroimaging data without exploiting powerful AI techniques like DL may be challenging. In this paper, studies conducted with the aid of DL networks to distinguish ASD are investigated. Rehabilitation tools provided for supporting ASD patients utilizing DL networks are also assessed. Finally, we will present important challenges in the automated detection and rehabilitation of ASD and propose some future works.
Collapse
Affiliation(s)
- Marjane Khodatars
- Dept. of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
| | - Afshin Shoeibi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran; Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran.
| | - Delaram Sadeghi
- Dept. of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
| | - Navid Ghaasemi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran; Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
| | - Mahboobeh Jafari
- Electrical and Computer Engineering Faculty, Semnan University, Semnan, Iran
| | - Parisa Moridian
- Faculty of Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
| | - Ali Khadem
- Department of Biomedical Engineering, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran.
| | - Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Victoria, 3217, Australia
| | - Assef Zare
- Faculty of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad, Iran
| | - Yinan Kong
- School of Engineering, Macquarie University, Sydney, 2109, Australia
| | - Abbas Khosravi
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Victoria, 3217, Australia
| | - Saeid Nahavandi
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Victoria, 3217, Australia
| | | | - U Rajendra Acharya
- Ngee Ann Polytechnic, Singapore, 599489, Singapore; Dept. of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan; Dept. of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore
| | - Michael Berk
- Deakin University, IMPACT - the Institute for Mental and Physical Health and Clinical Translation, School of Medicine, Barwon Health, Geelong, Australia; Orygen, The National Centre of Excellence in Youth Mental Health, Centre for Youth Mental Health, Florey Institute for Neuroscience and Mental Health and the Department of Psychiatry, The University of Melbourne, Melbourne, Australia
| |
Collapse
|
134
|
Lu H, Zou X, Liao L, Li K, Liu J. Deep Convolutional Neural Network for Compressive Sensing of Magnetic Resonance Images. INT J PATTERN RECOGN 2021. [DOI: 10.1142/s0218001421520194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Compressive Sensing for Magnetic Resonance Imaging (CS-MRI) aims to reconstruct Magnetic Resonance (MR) images from under-sampled raw data. There are two challenges to improve CS-MRI methods, i.e. designing an under-sampling algorithm to achieve optimal sampling, as well as designing fast and small deep neural networks to obtain reconstructed MR images with superior quality. To improve the reconstruction quality of MR images, we propose a novel deep convolutional neural network architecture for CS-MRI named MRCSNet. The MRCSNet consists of three sub-networks, a compressive sensing sampling sub-network, an initial reconstruction sub-network, and a refined reconstruction sub-network. Experimental results demonstrate that MRCSNet generates high-quality reconstructed MR images at various under-sampling ratios, and also meets the requirements of real-time CS-MRI applications. Compared to state-of-the-art CS-MRI approaches, MRCSNet offers a significant improvement in reconstruction accuracies, such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). Besides, it reduces the reconstruction error evaluated by the Normalized Root-Mean-Square Error (NRMSE). The source codes are available at https://github.com/TaihuLight/MRCSNet .
Collapse
Affiliation(s)
- Hong Lu
- College of Computer Science and Technology, Nanjing University, Nanjing University of Science and Technology, Zijin College, Nanjing 210023, P. R. China
| | - Xiaofei Zou
- Information Assurance Department of Airborne Army, Beijing, 100083, P. R. China
- College of Information and Communication, National University of Defense Technology, Wuhan 430019, P. R. China
| | - Longlong Liao
- College of Computer and Data Science, Fuzhou University, Fuzhou, Fujian 350116, P. R. China
| | - Kenli Li
- College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, P. R. China
| | - Jie Liu
- College of Computer, National University of Defense Technology, Changsha 410073, P. R. China
| |
Collapse
|
135
|
Gao L, Xie K, Wu X, Lu Z, Li C, Sun J, Lin T, Sui J, Ni X. Generating synthetic CT from low-dose cone-beam CT by using generative adversarial networks for adaptive radiotherapy. Radiat Oncol 2021; 16:202. [PMID: 34649572 PMCID: PMC8515667 DOI: 10.1186/s13014-021-01928-w] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Accepted: 06/17/2021] [Indexed: 11/10/2022] Open
Abstract
OBJECTIVE To develop a method for generating high-quality synthetic CT (sCT) from low-dose cone-beam CT (CBCT) images by using attention-guided generative adversarial networks (AGGAN) and to apply these images to dose calculations in radiotherapy. METHODS The CBCT/planning CT images of 170 patients undergoing thoracic radiotherapy were used for training and testing. The CBCT images were scanned under a fast protocol with 50% fewer clinical projection frames compared with the standard chest M20 protocol. Training with aligned paired images was performed using conditional adversarial networks (so-called pix2pix), and training with unpaired images was carried out with cycle-consistent adversarial networks (cycleGAN) and AGGAN, through which sCT images were generated. The image quality and Hounsfield unit (HU) values of the sCT images generated by the three neural networks were compared. The treatment plan was designed on CT and copied to the sCT images to calculate dose distributions. RESULTS The image quality of the sCT images from all three methods was significantly improved compared with the original CBCT images. The AGGAN achieved the best image quality in the testing patients, with the smallest mean absolute error (MAE, 43.5 ± 6.69), the largest structural similarity (SSIM, 93.7 ± 3.88), and the highest peak signal-to-noise ratio (PSNR, 29.5 ± 2.36). The sCT images generated by all three methods showed superior dose calculation accuracy, with higher gamma passing rates compared with the original CBCT images. The AGGAN offered the highest gamma passing rates (91.4 ± 3.26) under the strictest criterion of 1 mm/1% compared with the other methods. In the phantom study, the sCT images generated by AGGAN demonstrated the best image quality and the highest dose calculation accuracy. CONCLUSIONS High-quality sCT images were generated from low-dose thoracic CBCT images by using the proposed AGGAN through unpaired CBCT and CT images. The dose distribution could be calculated accurately based on sCT images in radiotherapy.
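The reported image-quality metrics (MAE, SSIM, PSNR) can be computed with scikit-image as sketched below; the HU range, array names, and toy data are assumptions rather than values taken from the paper.

```python
# Sketch of how sCT-vs-planning-CT image-quality metrics are typically computed
# (illustrative; HU range and data are assumptions, not from the paper).
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def sct_quality(sct_hu, ct_hu, hu_range=(-1000.0, 2000.0)):
    data_range = hu_range[1] - hu_range[0]
    mae = float(np.mean(np.abs(sct_hu - ct_hu)))
    ssim = structural_similarity(ct_hu, sct_hu, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct_hu, sct_hu, data_range=data_range)
    return mae, ssim, psnr

ct = np.random.uniform(-1000, 1500, (128, 128))       # toy planning CT (HU)
sct = ct + np.random.normal(0, 40, ct.shape)          # toy synthetic CT
print(sct_quality(sct, ct))
```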
Collapse
Affiliation(s)
- Liugang Gao
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Kai Xie
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Xiaojin Wu
- Oncology Department, Xuzhou No.1 People's Hospital, Xuzhou, 221000, China
| | - Zhengda Lu
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China; School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, 213000, China
| | - Chunying Li
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Jiawei Sun
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Tao Lin
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Jianfeng Sui
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China.
| |
Collapse
|
136
|
|
137
|
Wu M, Chen W, Chen Q, Park H. Noise Reduction for SD-OCT Using a Structure-Preserving Domain Transfer Approach. IEEE J Biomed Health Inform 2021; 25:3460-3472. [PMID: 33822730 DOI: 10.1109/jbhi.2021.3071421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Spectral-domain optical coherence tomography (SD-OCT) images inevitably suffer from multiplicative speckle noise caused by random interference. This study proposes an unsupervised domain adaptation approach for noise reduction by translating the SD-OCT to the corresponding high-quality enhanced depth imaging (EDI)-OCT. We propose a structure-preserving cycle-consistent generative adversarial network for unpaired image-to-image translation, which can be applied to imbalanced unpaired data and can effectively preserve retinal details based on a structure-specific cross-domain description. It also imposes smoothness by penalizing the intensity variation of the low reflective region between consecutive slices. Our approach was tested on a local data set that consisted of 268 SD-OCT volumes and two public independent validation datasets including 20 SD-OCT volumes and 17 B-scans, respectively. Experimental results show that our method can effectively suppress noise and maintain the retinal structure, compared with other traditional approaches and deep learning methods in terms of qualitative and quantitative assessments. Our proposed method shows good performance for speckle noise reduction and can assist downstream tasks of OCT analysis.
Collapse
|
138
|
Richardson ML, Garwood ER, Lee Y, Li MD, Lo HS, Nagaraju A, Nguyen XV, Probyn L, Rajiah P, Sin J, Wasnik AP, Xu K. Noninterpretive Uses of Artificial Intelligence in Radiology. Acad Radiol 2021; 28:1225-1235. [PMID: 32059956 DOI: 10.1016/j.acra.2020.01.012] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Revised: 01/08/2020] [Accepted: 01/09/2020] [Indexed: 12/12/2022]
Abstract
We deem a computer to exhibit artificial intelligence (AI) when it performs a task that would normally require intelligent action by a human. Much of the recent excitement about AI in the medical literature has revolved around the ability of AI models to recognize anatomy and detect pathology on medical images, sometimes at the level of expert physicians. However, AI can also be used to solve a wide range of noninterpretive problems that are relevant to radiologists and their patients. This review summarizes some of the newer noninterpretive uses of AI in radiology.
Collapse
Affiliation(s)
| | - Elisabeth R Garwood
- Department of Radiology, University of Massachusetts, Worcester, Massachusetts
| | - Yueh Lee
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina
| | - Matthew D Li
- Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusets
| | - Hao S Lo
- Department of Radiology, University of Washington, Seattle, Washington
| | - Arun Nagaraju
- Department of Radiology, University of Chicago, Chicago, Illinois
| | - Xuan V Nguyen
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio
| | - Linda Probyn
- Department of Radiology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario
| | - Prabhakar Rajiah
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Jessica Sin
- Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
| | - Ashish P Wasnik
- Department of Radiology, University of Michigan, Ann Arbor, Michigan
| | - Kali Xu
- Department of Medicine, Santa Clara Valley Medical Center, Santa Clara, California
| |
Collapse
|
139
|
Jiang M, Zhi M, Wei L, Yang X, Zhang J, Li Y, Wang P, Huang J, Yang G. FA-GAN: Fused attentive generative adversarial networks for MRI image super-resolution. Comput Med Imaging Graph 2021; 92:101969. [PMID: 34411966 PMCID: PMC8453331 DOI: 10.1016/j.compmedimag.2021.101969] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Revised: 07/03/2021] [Accepted: 08/06/2021] [Indexed: 11/29/2022]
Abstract
High-resolution magnetic resonance images can provide fine-grained anatomical information, but acquiring such data requires a long scanning time. In this paper, a framework called the Fused Attentive Generative Adversarial Network (FA-GAN) is proposed to generate super-resolution MR images from low-resolution magnetic resonance images, which can effectively reduce the scanning time while still providing high-resolution MR images. In the FA-GAN framework, a local fusion feature block, consisting of three parallel sub-networks with different convolution kernels, is proposed to extract image features at different scales. A global feature fusion module, including a channel attention module, a self-attention module, and a fusion operation, is designed to enhance the important features of the MR image. Moreover, spectral normalization is introduced to make the discriminator network stable. Forty sets of 3D magnetic resonance images (each set containing 256 slices) were used to train the network, and 10 sets of images were used to test the proposed method. The experimental results show that the PSNR and SSIM values of the super-resolution magnetic resonance images generated by the proposed FA-GAN method are higher than those of state-of-the-art reconstruction methods.
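Two of the building blocks named above, channel attention and spectral normalization, are shown in generic PyTorch form below. These are standard formulations, not the FA-GAN authors' exact modules; the reduction ratio and layer sizes are arbitrary.

```python
# Generic sketches of two building blocks the abstract mentions (channel
# attention and spectral normalization); standard forms, not FA-GAN's modules.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel re-weighting."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))       # global average pool -> channel weights
        return x * w[:, :, None, None]

# Spectral normalization keeps the discriminator Lipschitz-bounded for stable training.
disc_conv = nn.utils.spectral_norm(nn.Conv2d(1, 64, 4, stride=2, padding=1))

feat = torch.rand(2, 32, 64, 64)
print(ChannelAttention(32)(feat).shape)       # torch.Size([2, 32, 64, 64])
```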
Collapse
Affiliation(s)
- Mingfeng Jiang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China; Corresponding author at: School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China.
| | - Minghao Zhi
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
| | - Liying Wei
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
| | - Xiaocheng Yang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
| | - Jucheng Zhang
- Department of Clinical Engineering, the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310019, China
| | - Yongming Li
- College of Communication Engineering, Chongqing University, Chongqing, China
| | - Pin Wang
- College of Communication Engineering, Chongqing University, Chongqing, China
| | - Jiahao Huang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
| | - Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK; Corresponding author at: National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK.
| |
Collapse
|
140
|
Li GY, Wang CY, Lv J. Current status of deep learning in abdominal image reconstruction. Artif Intell Med Imaging 2021; 2:86-94. [DOI: 10.35711/aimi.v2.i4.86] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 06/24/2021] [Accepted: 08/17/2021] [Indexed: 02/06/2023] Open
Affiliation(s)
- Guang-Yuan Li
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
| | - Cheng-Yan Wang
- Human Phenome Institute, Fudan University, Shanghai 201203, China
| | - Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong Province, China
| |
Collapse
|
141
|
Generating Virtual Short Tau Inversion Recovery (STIR) Images from T1- and T2-Weighted Images Using a Conditional Generative Adversarial Network in Spine Imaging. Diagnostics (Basel) 2021; 11:diagnostics11091542. [PMID: 34573884 PMCID: PMC8467788 DOI: 10.3390/diagnostics11091542] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 08/15/2021] [Accepted: 08/21/2021] [Indexed: 11/17/2022] Open
Abstract
Short tau inversion recovery (STIR) sequences are frequently used in magnetic resonance imaging (MRI) of the spine. However, STIR sequences require a significant amount of scanning time. The purpose of the present study was to generate virtual STIR (vSTIR) images from non-contrast, non-fat-suppressed T1- and T2-weighted images using a conditional generative adversarial network (cGAN). The training dataset comprised 612 studies from 514 patients, and the validation dataset comprised 141 studies from 133 patients. For validation, 100 original STIR and respective vSTIR series were presented to six senior radiologists (blinded for the STIR type) in independent A/B-testing sessions. Additionally, for 141 real or vSTIR sequences, the testers were required to produce a structured report of 15 different findings. In the A/B-test, most testers could not reliably identify the real STIR (mean error of testers 1-6: 41%; 44%; 58%; 48%; 39%; 45%). In the evaluation of the structured reports, vSTIR was equivalent to real STIR in 13 of 15 categories. In the category of the number of STIR-hyperintense vertebral bodies (p = 0.08) and in the diagnosis of bone metastases (p = 0.055), equivalence of the vSTIR only narrowly missed statistical significance. By virtually generating STIR images of diagnostic quality from T1- and T2-weighted images using a cGAN, one can shorten examination times and increase throughput.
Collapse
|
142
|
Kumar PA, Gunasundari R, Aarthi R. Systematic Analysis and Review of Magnetic Resonance Imaging (MRI) Reconstruction Techniques. Curr Med Imaging 2021; 17:943-955. [PMID: 33402090 DOI: 10.2174/1573405616666210105125542] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 10/24/2020] [Accepted: 11/12/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND Magnetic Resonance Imaging (MRI) plays an important role in the field of medical diagnostic imaging as it offers non-invasive acquisition and high soft-tissue contrast. However, the MRI scanning process takes a long time, which results in motion artifacts, degrades image quality, can lead to misinterpretation of the data, and may cause discomfort to the patient. Thus, the main goal of MRI research is to accelerate data acquisition and processing without affecting image quality. INTRODUCTION This paper presents a survey of distinct conventional MRI reconstruction methodologies. In addition, a novel MRI reconstruction strategy is proposed based on weighted Compressive Sensing (CS), a penalty-aided minimization function, and a meta-heuristic optimization technique. METHODS An illustrative analysis is given concerning the adopted methods, datasets used, execution tools, performance measures, and values of evaluation metrics. Moreover, the issues of existing methods and the research gaps in conventional MRI reconstruction schemes are elaborated to guide the development of improved MRI reconstruction techniques. RESULTS The proposed method is expected to reduce conventional aliasing artifact problems and to attain a lower Mean Square Error (MSE), a higher Peak Signal-to-Noise Ratio (PSNR), and a higher Structural SIMilarity (SSIM) index. CONCLUSION The issues of existing methods and the research gaps in conventional MRI reconstruction schemes are elaborated toward devising an improved MRI reconstruction technique.
Collapse
Affiliation(s)
- Penta Anil Kumar
- Department of Electronics and Communication Engineering, Pondicherry Engineering College, Puducherry, India
| | - Ramalingam Gunasundari
- Department of Electronics and Communication Engineering, Pondicherry Engineering College, Puducherry, India
| | | |
Collapse
|
143
|
Domain knowledge augmentation of parallel MR image reconstruction using deep learning. Comput Med Imaging Graph 2021; 92:101968. [PMID: 34390918 DOI: 10.1016/j.compmedimag.2021.101968] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 07/08/2021] [Accepted: 07/28/2021] [Indexed: 10/20/2022]
Abstract
A deep learning (DL) method for accelerated magnetic resonance (MR) imaging is presented that incorporates domain knowledge of parallel MR imaging to augment the DL networks for accurate and stable image reconstruction. The proposed DL method employs a novel loss function consisting of a combination of mean absolute error, structural similarity, and Sobel edge loss. The DL model takes both original measurements and images reconstructed by the parallel imaging method as inputs to the network. The accuracy of the proposed method was evaluated using two anatomical regions and six MRI contrasts and was compared with state-of-the-art parallel imaging and deep learning methods. The proposed method significantly outperformed the other methods for all six contrasts in terms of structural similarity, peak signal-to-noise ratio, and normalized mean squared error. The out-of-sample performance of the proposed method was assessed for a truly "unseen" case in a volunteer scan. The method produced images without any artificial features of the kind that often occur in DL image reconstruction methods. A stability analysis was performed by adding perturbations to the input, which demonstrated that the proposed method is robust and stable with respect to small structural changes and different undersampling ratios. Comprehensive validation on large datasets demonstrated that incorporation of domain knowledge sufficiently regularizes the DL-based image reconstruction and produces accurate and stable image enhancement.
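A combined MAE + SSIM + Sobel-edge loss of the kind described above can be sketched in PyTorch as follows. The weights, the uniform-window SSIM approximation, and the toy tensors are assumptions, not the paper's exact formulation.

```python
# Sketch of a combined MAE + SSIM + Sobel-edge loss (illustrative; weights,
# window, and SSIM form are assumed, not the paper's exact loss).
import torch
import torch.nn.functional as F

def ssim_uniform(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=7):
    # Uniform-window SSIM approximation (a Gaussian window is more common).
    mu_x, mu_y = F.avg_pool2d(x, win, 1, win // 2), F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def sobel_edges(x):
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    ky = kx.transpose(2, 3)
    return F.conv2d(x, kx, padding=1), F.conv2d(x, ky, padding=1)

def combined_loss(pred, target, w_mae=1.0, w_ssim=1.0, w_edge=1.0):
    gx_p, gy_p = sobel_edges(pred)
    gx_t, gy_t = sobel_edges(target)
    edge = (gx_p - gx_t).abs().mean() + (gy_p - gy_t).abs().mean()
    return (w_mae * (pred - target).abs().mean()
            + w_ssim * (1 - ssim_uniform(pred, target))
            + w_edge * edge)

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
combined_loss(pred, target).backward()
```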
Collapse
|
144
|
|
145
|
Dao X, Gao M, Wang Y. Atom selection strategy for signal compressed recovery based on sensing information entropy. ISA TRANSACTIONS 2021; 114:242-250. [PMID: 33422334 DOI: 10.1016/j.isatra.2020.12.050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 12/04/2020] [Accepted: 12/23/2020] [Indexed: 06/12/2023]
Abstract
In greedy pursuit algorithms, atom selection is a topic of common concern for compressed signal recovery. To improve recovery performance, an optimal atom selection strategy that requires no prior information is proposed in this paper. A sensing information entropy is defined to prune possible false atoms from the estimated support set. The proposed strategy requires fewer iterations and can also be applied in cases with a high sparsity level or a low signal-to-noise ratio. Compared with existing representative algorithms, its superiority in recovery error and recovery probability is verified by simulations. Furthermore, the proposed method is applied to recover a real randomly modulated signal. The results show that the recovered signal is more consistent with the original input signal.
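Greedy atom selection of the kind discussed above is illustrated below with plain orthogonal matching pursuit in NumPy. The paper's sensing-information-entropy pruning is only marked by a comment, since its exact definition is not reproduced here.

```python
# Sketch of greedy atom selection (plain orthogonal matching pursuit); the
# paper's entropy-based pruning is indicated by a comment, not implemented.
import numpy as np

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0           # do not reselect chosen atoms
        support.append(int(np.argmax(correlations)))
        # An entropy-based strategy would prune likely-false atoms from
        # `support` here before re-fitting the coefficients.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)                # unit-norm dictionary atoms
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
x_hat = omp(A, A @ x_true, k=5)
print(np.linalg.norm(x_hat - x_true))
```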
Collapse
Affiliation(s)
- Xinyu Dao
- Army Engineering University, Shijiazhuang, China.
| | - Min Gao
- Army Engineering University, Shijiazhuang, China.
| | - Yi Wang
- Army Engineering University, Shijiazhuang, China.
| |
Collapse
|
146
|
Ghodrati V, Bydder M, Bedayat A, Prosper A, Yoshida T, Nguyen KL, Finn JP, Hu P. Temporally aware volumetric generative adversarial network-based MR image reconstruction with simultaneous respiratory motion compensation: Initial feasibility in 3D dynamic cine cardiac MRI. Magn Reson Med 2021; 86:2666-2683. [PMID: 34254363 PMCID: PMC10172149 DOI: 10.1002/mrm.28912] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Revised: 06/02/2021] [Accepted: 06/12/2021] [Indexed: 12/26/2022]
Abstract
PURPOSE Develop a novel three-dimensional (3D) generative adversarial network (GAN)-based technique for simultaneous image reconstruction and respiratory motion compensation of 4D MRI. Our goal was to enable high acceleration factors (10.7X-15.8X), while maintaining robust and diagnostic image quality superior to state-of-the-art self-gating (SG) compressed sensing wavelet (CS-WV) reconstruction at lower acceleration factors (3.5X-7.9X). METHODS Our GAN was trained based on pixel-wise content loss functions, an adversarial loss function, and a novel data-driven temporally aware loss function to maintain anatomical accuracy and temporal coherence. Besides image reconstruction, our network also performs respiratory motion compensation for free-breathing scans. A novel progressive growing-based strategy was adapted to make the training process possible for the proposed GAN-based structure. The proposed method was developed and thoroughly evaluated qualitatively and quantitatively based on 3D cardiac cine data from 42 patients. RESULTS Our proposed method achieved significantly better scores in general image quality and image artifacts at 10.7X-15.8X acceleration than the SG CS-WV approach at 3.5X-7.9X acceleration (4.53 ± 0.540 vs. 3.13 ± 0.681 for general image quality, 4.12 ± 0.429 vs. 2.97 ± 0.434 for image artifacts, P < .05 for both). No spurious anatomical structures were observed in our images. The proposed method enabled similar cardiac-function quantification as conventional SG CS-WV. The proposed method achieved faster central processing unit-based image reconstruction (6 s/cardiac phase) than the SG CS-WV (312 s/cardiac phase). CONCLUSION The proposed method showed promising potential for high-resolution (1 mm³) free-breathing 4D MR data acquisition with simultaneous respiratory motion compensation and fast reconstruction time.
Collapse
Affiliation(s)
- Vahid Ghodrati
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
| | - Mark Bydder
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Arash Bedayat
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Ashley Prosper
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Takegawa Yoshida
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Kim-Lien Nguyen
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA; Department of Medicine (Cardiology), David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - J Paul Finn
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Peng Hu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
| |
Collapse
|
147
|
Lv J, Li G, Tong X, Chen W, Huang J, Wang C, Yang G. Transfer learning enhanced generative adversarial networks for multi-channel MRI reconstruction. Comput Biol Med 2021; 134:104504. [PMID: 34062366 DOI: 10.1016/j.compbiomed.2021.104504] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Revised: 05/17/2021] [Accepted: 05/17/2021] [Indexed: 12/23/2022]
Abstract
Deep learning based generative adversarial networks (GAN) can effectively perform image reconstruction with under-sampled MR data. In general, a large number of training samples are required to improve the reconstruction performance of a certain model. However, in real clinical applications, it is difficult to obtain tens of thousands of raw patient data to train the model since saving k-space data is not in the routine clinical flow. Therefore, enhancing the generalizability of a network based on small samples is urgently needed. In this study, three novel applications were explored based on parallel imaging combined with the GAN model (PI-GAN) and transfer learning. The model was pre-trained with public Calgary brain images and then fine-tuned for use in (1) patients with tumors in our center; (2) different anatomies, including knee and liver; (3) different k-space sampling masks with acceleration factors (AFs) of 2 and 6. As for the brain tumor dataset, the transfer learning results could remove the artifacts found in PI-GAN and yield smoother brain edges. The transfer learning results for the knee and liver were superior to those of the PI-GAN model trained with its own dataset using a smaller number of training cases. However, the learning procedure converged more slowly in the knee datasets compared to the learning in the brain tumor datasets. The reconstruction performance was improved by transfer learning both in the models with AFs of 2 and 6. Of these two models, the one with AF = 2 showed better results. The results also showed that transfer learning with the pre-trained model could solve the problem of inconsistency between the training and test datasets and facilitate generalization to unseen data.
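The pre-train/fine-tune recipe described above follows a standard transfer-learning pattern, sketched below in PyTorch. The stand-in generator, the hypothetical checkpoint file name, the choice of frozen layers, and the learning rate are all illustrative assumptions rather than the authors' settings.

```python
# Generic fine-tuning recipe of the kind the abstract describes (illustrative;
# model, checkpoint name, frozen layers, and learning rate are assumptions).
import torch
import torch.nn as nn

generator = nn.Sequential(                    # stand-in for a PI-GAN generator
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1))

# 1) Load weights pre-trained on the large public (e.g. Calgary brain) dataset.
# generator.load_state_dict(torch.load("pigan_pretrained_brain.pt"))  # hypothetical file

# 2) Optionally freeze early feature layers, then fine-tune with a small learning
#    rate on the small target dataset (tumor / knee / liver, or a new sampling mask).
for p in generator[0].parameters():
    p.requires_grad = False
opt = torch.optim.Adam(filter(lambda p: p.requires_grad, generator.parameters()), lr=1e-5)

x, target = torch.rand(4, 2, 64, 64), torch.rand(4, 2, 64, 64)
loss = nn.functional.l1_loss(generator(x), target)
loss.backward()
opt.step()
```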
Collapse
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | - Guangyuan Li
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | - Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | | | - Jiahao Huang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China.
| | - Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK.
| |
Collapse
|
148
|
Wang S, Xiao T, Liu Q, Zheng H. Deep learning for fast MR imaging: A review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102579] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
149
|
Lv J, Zhu J, Yang G. Which GAN? A comparative study of generative adversarial network-based fast MRI reconstruction. Philos Trans A Math Phys Eng Sci 2021; 379:20200203. [PMID: 33966462 DOI: 10.1098/rsta.2020.0203] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 12/14/2020] [Indexed: 05/03/2023]
Abstract
Fast magnetic resonance imaging (MRI) is crucial for clinical applications, as it can alleviate motion artefacts and increase patient throughput. K-space undersampling is an obvious approach to accelerating MR acquisition; however, undersampling of k-space data can result in blurring and aliasing artefacts in the reconstructed images. Recently, several studies have proposed deep learning-based data-driven models for MRI reconstruction and have obtained promising results. However, comparison of these methods remains limited because the models have not been trained on the same datasets and the validation strategies may differ. The purpose of this work is to conduct a comparative study of generative adversarial network (GAN)-based models for MRI reconstruction. We reimplemented and benchmarked four widely used GAN-based architectures: DAGAN, ReconGAN, RefineGAN and KIGAN. These four frameworks were trained and tested on brain, knee and liver MRI images using twofold, fourfold and sixfold accelerations, respectively, with a random undersampling mask. Both quantitative evaluations and qualitative visualization show that RefineGAN achieved superior reconstruction performance, with better accuracy and perceptual quality than the other GAN-based methods. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
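An illustrative sketch of the kind of shared benchmarking protocol this comparison argues for: a common random undersampling mask at a fixed acceleration factor and common quantitative metrics (PSNR/SSIM) applied to every reconstruction method. The mask parameters are assumptions, and the single "method" shown is a zero-filled baseline standing in for DAGAN/ReconGAN/RefineGAN/KIGAN.

```python
# Common mask + common metrics so different reconstruction methods are comparable.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def random_mask(shape, acceleration=4, center_fraction=0.08, seed=0):
    """1D random phase-encode mask with a fully sampled centre; keeps ~1/acceleration of the lines."""
    rng = np.random.default_rng(seed)
    ny = shape[1]
    n_center = int(round(ny * center_fraction))
    prob = (ny / acceleration - n_center) / (ny - n_center)
    mask_1d = rng.random(ny) < prob
    mask_1d[(ny - n_center) // 2:(ny + n_center) // 2] = True
    return np.broadcast_to(mask_1d, shape)

def zero_filled_recon(image, mask):
    """Baseline: inverse FFT of the masked k-space (placeholder for any GAN reconstruction)."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

reference = np.random.rand(128, 128)            # stand-in fully sampled image
mask = random_mask(reference.shape, acceleration=4)

methods = {"zero_filled": zero_filled_recon}    # real study: one entry per GAN variant
for name, recon in methods.items():
    estimate = recon(reference, mask)
    psnr = peak_signal_noise_ratio(reference, estimate, data_range=reference.max())
    ssim = structural_similarity(reference, estimate, data_range=reference.max())
    print(f"{name}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```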
Collapse
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, People's Republic of China
| | - Jin Zhu
- Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK
| | - Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, SW3 6NP London, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
| |
Collapse
|
150
|
Zhang Y, Andreas Noack M, Vagovic P, Fezzaa K, Garcia-Moreno F, Ritschel T, Villanueva-Perez P. PhaseGAN: a deep-learning phase-retrieval approach for unpaired datasets. Opt Express 2021; 29:19593-19604. [PMID: 34266067 DOI: 10.1364/oe.423222] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Accepted: 05/27/2021] [Indexed: 06/13/2023]
Abstract
Phase retrieval approaches based on deep learning (DL) provide a framework to obtain phase information from an intensity hologram or diffraction pattern robustly and in real time. However, current DL architectures applied to the phase problem have two limitations: (i) they rely on paired datasets, i.e., they are only applicable when a satisfactory solution of the phase problem has already been found, and (ii) most of them ignore the physics of the imaging process. Here, we present PhaseGAN, a new DL approach based on generative adversarial networks, which allows the use of unpaired datasets and includes the physics of image formation. The performance of our approach is enhanced by including the image-formation physics and a novel Fourier loss function, providing phase reconstructions where conventional phase-retrieval algorithms fail, such as in ultra-fast experiments. Thus, PhaseGAN offers the opportunity to address the phase problem in real time when no phase reconstructions are available, but good simulations or data from other experiments are.
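A hedged sketch of the two ingredients highlighted in this abstract: a differentiable image-formation model (here a simple angular-spectrum free-space propagator) and a Fourier-domain loss, so a network's predicted complex field can be compared against a measured intensity hologram during unpaired training. The propagator choice, loss definition, and parameter values are illustrative assumptions, not taken from the PhaseGAN paper.

```python
# Differentiable forward physics + Fourier-domain loss for phase retrieval.
import math
import torch

def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
    """Propagate a complex field (H, W) by `distance` using the angular-spectrum method."""
    h, w = field.shape
    fy = torch.fft.fftfreq(h, d=pixel_size)
    fx = torch.fft.fftfreq(w, d=pixel_size)
    fyy, fxx = torch.meshgrid(fy, fx, indexing="ij")
    # Kernel phase 2*pi*d*sqrt(1/lambda^2 - fx^2 - fy^2); evanescent components clamped to 0.
    kz = torch.sqrt(torch.clamp((1.0 / wavelength) ** 2 - fxx**2 - fyy**2, min=0.0))
    kernel = torch.polar(torch.ones_like(kz), 2 * math.pi * distance * kz)
    return torch.fft.ifft2(torch.fft.fft2(field) * kernel)

def fourier_loss(pred_intensity, meas_intensity):
    """L1 distance between the Fourier magnitudes of predicted and measured holograms."""
    pred_mag = torch.abs(torch.fft.fft2(pred_intensity))
    meas_mag = torch.abs(torch.fft.fft2(meas_intensity))
    return torch.mean(torch.abs(pred_mag - meas_mag))

# Toy usage: a generator network would output amplitude and phase; random tensors stand in.
amplitude = torch.rand(256, 256)
phase = torch.rand(256, 256) * 2 * math.pi
predicted_field = torch.polar(amplitude, phase)

propagated = angular_spectrum_propagate(
    predicted_field, wavelength=1e-10, pixel_size=1e-6, distance=0.05
)
predicted_hologram = torch.abs(propagated) ** 2
measured_hologram = torch.rand(256, 256)  # placeholder measurement

loss = fourier_loss(predicted_hologram, measured_hologram)
print(float(loss))
```

In a full unpaired setup this physics-consistency term would be added to the usual adversarial and cycle-consistency losses, so the generator is penalized whenever its predicted field, once propagated, disagrees with the measured hologram.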
Collapse
|