1
Kuang S, Huang Y, Song J. Unsupervised data imputation with multiple importance sampling variational autoencoders. Sci Rep 2025; 15:3409. [PMID: 39870723] [PMCID: PMC11772574] [DOI: 10.1038/s41598-025-87641-0]
Abstract
Recently, deep latent variable models have made significant progress in dealing with missing data problems, benefiting from their ability to capture intricate and non-linear relationships within the data. In this work, we further investigate the potential of Variational Autoencoders (VAEs) in addressing the uncertainty associated with missing data via a multiple importance sampling strategy. We propose a Missing data Multiple Importance Sampling Variational Auto-Encoder (MMISVAE) method to effectively model incomplete data. Our approach consists of a learning step and an imputation step. During the learning step, the mixture components are represented by multiple separate encoder networks, which are later combined through simple averaging to enhance the latent representation capabilities of the VAEs when dealing with incomplete data. The statistical model and variational distributions are iteratively updated by maximizing the Multiple Importance Sampling Evidence Lower Bound (MISELBO) on the joint log-likelihood. In the imputation step, missing data are estimated using conditional expectation through multiple importance resampling. We propose an efficient imputation algorithm that broadens the scope of the Missing data Importance Weighted Auto-Encoder (MIWAE) by incorporating multiple proposal probability distributions and the resampling schema. One notable characteristic of our method is the completely unsupervised nature of both the learning and imputation processes. Through comprehensive experimental analysis, we present evidence of the effectiveness of our method in improving the imputation accuracy of incomplete data when compared to current state-of-the-art VAEs-based methods.
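The importance-weighted bound with a mixture proposal described in this abstract can be illustrated with a toy sketch: K Gaussian proposals stand in for the separate encoder networks, their simple average forms the mixture proposal, and an importance-weighted bound on log p(x) is estimated. This is a minimal numpy illustration under a 1-D linear-Gaussian model, not the paper's implementation; all function names are hypothetical.

```python
import numpy as np

def log_gauss(x, mu, var):
    """Log density of N(mu, var) evaluated at x (vectorized)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def miselbo_estimate(x, proposals, n_samples=20000, seed=0):
    """Importance-weighted bound on log p(x) using a uniform mixture
    of K Gaussian proposals, which play the role of the K separate
    encoder networks; the mixture density is their simple average.

    Toy model: prior z ~ N(0, 1), likelihood x | z ~ N(z, 1)."""
    rng = np.random.default_rng(seed)
    K = len(proposals)
    # draw equally many latent samples from each mixture component
    zs = np.concatenate([
        rng.normal(mu, np.sqrt(var), n_samples // K)
        for mu, var in proposals
    ])
    # log joint: log p(z) + log p(x | z)
    log_joint = log_gauss(zs, 0.0, 1.0) + log_gauss(x, zs, 1.0)
    # log mixture proposal density: log (1/K) sum_k q_k(z)
    log_q = np.logaddexp.reduce(
        [log_gauss(zs, mu, var) for mu, var in proposals], axis=0
    ) - np.log(K)
    log_w = log_joint - log_q  # log importance weights
    # stable log of the average weight: log (1/M) sum_m w_m
    return np.logaddexp.reduce(log_w) - np.log(len(log_w))
```

For this toy model the marginal likelihood is available in closed form, p(x) = N(x; 0, 2), so the tightness of the bound can be checked directly.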
Affiliation(s)
- Shenfen Kuang
- School of Mathematics and Statistics, Shaoguan University, Shaoguan, 512005, China
- Yewen Huang
- School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou, 510665, China
- Jie Song
- School of Mathematics and Statistics, Shaoguan University, Shaoguan, 512005, China
2
Chen G, Zhang G, Yang Z, Liu W. Multi-scale patch-GAN with edge detection for image inpainting. Appl Intell 2022. [DOI: 10.1007/s10489-022-03577-2]
3
Sui Y, Afacan O, Jaimes C, Gholipour A, Warfield SK. Scan-Specific Generative Neural Network for MRI Super-Resolution Reconstruction. IEEE Trans Med Imaging 2022; 41:1383-1399. [PMID: 35020591] [PMCID: PMC9208763] [DOI: 10.1109/tmi.2022.3142610]
Abstract
The interpretation and analysis of magnetic resonance imaging (MRI) benefit from high spatial resolution. Unfortunately, direct acquisition of high spatial resolution MRI is time-consuming and costly, increases the potential for motion artifacts, and suffers from a reduced signal-to-noise ratio (SNR). Super-resolution reconstruction (SRR) is one of the most widely used methods in MRI since it allows for a trade-off between high spatial resolution, high SNR, and reduced scan times. Deep learning has emerged as a means of improving SRR over conventional methods. However, current deep learning-based SRR methods require large-scale training datasets of high-resolution images, which are practically difficult to obtain at a suitable SNR. We sought to develop a methodology for dataset-free deep learning-based SRR, with which to construct images of higher spatial resolution and higher SNR than can practically be obtained by direct Fourier encoding. We developed a dataset-free learning method that leverages a generative neural network trained for each specific scan or set of scans, which in turn allows for SRR tailored to the individual patient. With SRR from three short-duration scans, we achieved high-quality brain MRI at an isotropic spatial resolution of 0.125 cubic mm with six minutes of imaging time for T2 contrast, and an average increase of 7.2 dB (34.2%) in SNR relative to these short-duration scans. Motion compensation was achieved by aligning the three short-duration scans together. We assessed our technique on simulated MRI data and on clinical data acquired from 15 subjects. Extensive experimental results demonstrate that our approach achieved superior results to state-of-the-art methods while performing at reduced cost compared to scans delivered with direct high-resolution acquisition.
4
Kamran SA, Hossain KF, Moghnieh H, Riar S, Bartlett A, Tavakkoli A, Sanders KM, Baker SA. New open-source software for subcellular segmentation and analysis of spatiotemporal fluorescence signals using deep learning. iScience 2022; 25:104277. [PMID: 35573197] [PMCID: PMC9095751] [DOI: 10.1016/j.isci.2022.104277]
Abstract
Advances in cellular imaging instrumentation, together with readily available optogenetic and fluorescence sensors, have created a profound need for fast, accurate, and standardized analysis. Deep-learning architectures have revolutionized the field of biomedical image analysis and have achieved state-of-the-art accuracy. Despite these advancements, deep-learning architectures for the segmentation of subcellular fluorescence signals are lacking. Cellular dynamic fluorescence signals can be plotted and visualized using spatiotemporal maps (STMaps), but their segmentation and quantification are currently hindered by slow workflow speed and lack of accuracy, especially for large datasets. In this study, we provide a software tool that utilizes a deep-learning methodology to fundamentally overcome signal segmentation challenges. The software framework demonstrates highly optimized and accurate calcium signal segmentation and provides a fast analysis pipeline that can accommodate different patterns of signals across multiple cell types. The software allows seamless data accessibility, quantification, and graphical visualization and enables large-dataset analysis throughput.
Affiliation(s)
- Sharif Amit Kamran
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
- Department of Computer Science and Engineering, University of Nevada, Reno, NV 89557, USA
- Hussein Moghnieh
- Department of Electrical and Computer Engineering, McGill University, Montréal, QC H3A 0E9, Canada
- Sarah Riar
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
- Allison Bartlett
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
- Alireza Tavakkoli
- Department of Computer Science and Engineering, University of Nevada, Reno, NV 89557, USA
- Kenton M. Sanders
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
- Salah A. Baker
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
5
Slice imputation: Multiple intermediate slices interpolation for anisotropic 3D medical image segmentation. Comput Biol Med 2022; 147:105667. [DOI: 10.1016/j.compbiomed.2022.105667]
6
Autoencoding Low-Resolution MRI for Semantically Smooth Interpolation of Anisotropic MRI. Med Image Anal 2022; 78:102393. [DOI: 10.1016/j.media.2022.102393]
7
GAGIN: generative adversarial guider imputation network for missing data. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06862-2]
8
Xia Y, Ravikumar N, Frangi AF. Learning to Complete Incomplete Hearts for Population Analysis of Cardiac MR Images. Med Image Anal 2022; 77:102354. [DOI: 10.1016/j.media.2022.102354]
9
Sui Y, Afacan O, Jaimes C, Gholipour A, Warfield SK. Gradient-Guided Isotropic MRI Reconstruction from Anisotropic Acquisitions. IEEE Trans Comput Imaging 2021; 7:1240-1253. [PMID: 35252479] [PMCID: PMC8896514] [DOI: 10.1109/tci.2021.3128745]
Abstract
The trade-off between image resolution, signal-to-noise ratio (SNR), and scan time in any magnetic resonance imaging (MRI) protocol is unavoidable. Super-resolution reconstruction (SRR) has been shown to be effective in mitigating these factors and has thus become an important approach to addressing the current limitations of MRI. In this work, we developed a novel, image-based MRI SRR approach based on anisotropic acquisition schemes, which utilizes a new gradient guidance regularization method that guides the high-resolution (HR) reconstruction via a spatial gradient estimate. Further, we designed an analytical solution to propagate the spatial gradient fields from the low-resolution (LR) images to the HR image space and exploited these gradient fields over multiple scales with a dynamic update scheme for more accurate edge localization in the reconstruction. We also established a forward model of image formation and inverted it along with the proposed gradient guidance. The proposed SRR method allows subject motion between volumes and is able to incorporate various acquisition schemes where the LR images are acquired with arbitrary orientations and displacements, such as orthogonal and through-plane origin-shifted scans. We assessed our proposed approach on simulated data as well as on data acquired on a Siemens 3T MRI scanner, comprising 45 MRI scans from 14 subjects. Our experimental results demonstrate that our approach achieved superior reconstructions compared to state-of-the-art methods, in terms of both local spatial smoothness and edge preservation, at reduced or equal cost compared to scans delivered with direct HR acquisition.
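The forward model of image formation mentioned in this abstract can be sketched in one dimension: each anisotropic acquisition averages blocks of the HR signal with its own origin shift, and stacking these models gives a linear system that is inverted in the least-squares sense. A minimal numpy sketch under simplifying assumptions (boxcar slice profile, no motion, no gradient-guidance regularization; function names are illustrative):

```python
import numpy as np

def system_matrix(n_hr, factor, shift):
    """Forward model of one anisotropic acquisition: each LR sample
    is a boxcar average over `factor` consecutive HR samples,
    starting at HR offset `shift` (an origin-shifted scan)."""
    n_lr = (n_hr - shift) // factor
    A = np.zeros((n_lr, n_hr))
    for i in range(n_lr):
        A[i, shift + i * factor: shift + (i + 1) * factor] = 1.0 / factor
    return A

def srr_least_squares(acquisitions, n_hr):
    """Stack the forward models of several LR acquisitions, given as
    (y_i, factor_i, shift_i), and invert in the least-squares sense."""
    A = np.vstack([system_matrix(n_hr, f, s) for _, f, s in acquisitions])
    y = np.concatenate([yi for yi, _, _ in acquisitions])
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x
```

With complementary factors and shifts the stacked system becomes full rank, so the HR signal is recovered exactly in the noiseless case; this is the sense in which several cheap anisotropic scans can substitute for one direct HR acquisition.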
Affiliation(s)
- Yao Sui
- Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Onur Afacan
- Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Camilo Jaimes
- Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Ali Gholipour
- Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Simon K Warfield
- Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
10
Iglesias JE, Billot B, Balbastre Y, Tabari A, Conklin J, Gilberto González R, Alexander DC, Golland P, Edlow BL, Fischl B. Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes from clinical MRI exams with scans of different orientation, resolution and contrast. Neuroimage 2021; 237:118206. [PMID: 34048902] [PMCID: PMC8354427] [DOI: 10.1016/j.neuroimage.2021.118206]
Abstract
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution, and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols, even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution, and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions, and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses, and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry.
The source code is publicly available at https://github.com/BBillot/SynthSR.
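The idea of training on synthetic images generated from 3D segmentations can be sketched as contrast randomisation: each label in a segmentation is assigned a randomly drawn Gaussian intensity distribution, producing a training image of arbitrary contrast. A heavily simplified numpy sketch (the actual generative model also randomises resolution, orientation, bias field, and artefacts; the function name and parameter ranges are hypothetical):

```python
import numpy as np

def synth_from_labels(labels, rng=None):
    """Contrast randomisation: assign every label in a segmentation a
    randomly drawn Gaussian intensity distribution and sample voxel
    intensities from it, yielding a synthetic training image whose
    contrast is decoupled from any real acquisition."""
    rng = rng if rng is not None else np.random.default_rng()
    image = np.zeros(labels.shape)
    means = {}
    for lab in np.unique(labels):
        mu = rng.uniform(0.0, 1.0)      # random per-label mean
        sigma = rng.uniform(0.02, 0.1)  # random per-label spread
        mask = labels == lab
        image[mask] = rng.normal(mu, sigma, mask.sum())
        means[lab] = mu
    return image, means
```

Because a fresh contrast is sampled at every training iteration, a network trained on such images never sees a fixed acquisition protocol, which is what makes it robust to arbitrary input contrasts.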
Affiliation(s)
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA.
- Benjamin Billot
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Yaël Balbastre
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Azadeh Tabari
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- John Conklin
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- R Gilberto González
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Neuroradiology Division, Massachusetts General Hospital, Boston, USA
- Daniel C Alexander
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
- Brian L Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA
11
Sui Y, Afacan O, Gholipour A, Warfield SK. Fast and High-Resolution Neonatal Brain MRI Through Super-Resolution Reconstruction From Acquisitions With Variable Slice Selection Direction. Front Neurosci 2021; 15:636268. [PMID: 34220414] [PMCID: PMC8242183] [DOI: 10.3389/fnins.2021.636268]
Abstract
The brain of a neonate is small in comparison to that of an adult. Imaging at typical resolutions such as one cubic mm incurs more partial-volume artifacts in a neonate than in an adult. The interpretation and analysis of MRI of the neonatal brain benefit from a reduction in partial volume averaging that can be achieved with high spatial resolution. Unfortunately, direct acquisition of high spatial resolution MRI is slow, which increases the potential for motion artifacts, and suffers from a reduced signal-to-noise ratio. The purpose of this study was thus to use super-resolution reconstruction in conjunction with fast imaging protocols to construct neonatal brain MRI images at a suitable signal-to-noise ratio and with higher spatial resolution than can be practically obtained by direct Fourier encoding. We achieved high-quality brain MRI at an isotropic spatial resolution of 0.4 mm with 6 min of imaging time, using super-resolution reconstruction from three short-duration scans with variable directions of slice selection. Motion compensation was achieved by aligning the three short-duration scans together. We applied this technique to 20 newborns and assessed the quality of the reconstructed images. Experiments show that our approach to super-resolution reconstruction achieved considerable improvement in spatial resolution and signal-to-noise ratio, while substantially reducing scan times, as compared to direct high-resolution acquisitions. The experimental results demonstrate that our approach allows for fast and high-quality neonatal brain MRI for both scientific research and clinical studies.
Affiliation(s)
- Yao Sui
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Onur Afacan
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Ali Gholipour
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Simon K. Warfield
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
12
Xia Y, Ravikumar N, Greenwood JP, Neubauer S, Petersen SE, Frangi AF. Super-Resolution of Cardiac MR Cine Imaging using Conditional GANs and Unsupervised Transfer Learning. Med Image Anal 2021; 71:102037. [PMID: 33910110] [DOI: 10.1016/j.media.2021.102037]
Abstract
High-resolution (HR), isotropic cardiac magnetic resonance (MR) cine imaging is challenging since it requires long acquisition and patient breath-hold times. Instead, the 2D balanced steady-state free precession (SSFP) sequence is widely used in clinical routine. However, it produces highly anisotropic image stacks, with large through-plane spacing that can hinder subsequent image analysis. To resolve this, we propose a novel, robust adversarial-learning super-resolution (SR) algorithm based on conditional generative adversarial nets (GANs) that incorporates a state-of-the-art optical flow component to generate an auxiliary image to guide image synthesis. The approach is designed for real-world clinical scenarios, requires neither multiple low-resolution (LR) scans with multiple views nor the corresponding HR scans, and is trained in an end-to-end unsupervised transfer-learning fashion. The designed framework effectively incorporates visual properties and relevant structures of input images and can synthesise 3D isotropic, anatomically plausible cardiac MR images consistent with the acquired slices. Experimental results show that the proposed SR method outperforms several state-of-the-art methods both qualitatively and quantitatively. We show that subsequent image analyses, including ventricle segmentation, cardiac quantification, and non-rigid registration, can benefit from the super-resolved, isotropic cardiac MR images to produce more accurate quantitative results without increasing the acquisition time. The average Dice similarity coefficients (DSC) for the left ventricular (LV) cavity and myocardium are 0.95 and 0.81, respectively, between real and synthesised slice segmentations. For non-rigid registration and motion tracking through the cardiac cycle, the proposed method improves the average DSC from 0.75 to 0.86, compared to the original-resolution images.
Affiliation(s)
- Yan Xia
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK; Leeds Institute for Cardiovascular and Metabolic Medicine (LICAMM), School of Medicine, University of Leeds, Leeds, UK.
- Nishant Ravikumar
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK; Leeds Institute for Cardiovascular and Metabolic Medicine (LICAMM), School of Medicine, University of Leeds, Leeds, UK
- John P Greenwood
- Leeds Institute for Cardiovascular and Metabolic Medicine (LICAMM), School of Medicine, University of Leeds, Leeds, UK
- Stefan Neubauer
- Oxford Center for Clinical Magnetic Resonance Research, Division of Cardiovascular Medicine, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Steffen E Petersen
- William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, London, UK; Barts Heart Centre, St Bartholomew's Hospital, Barts Health NHS Trust, London, UK
- Alejandro F Frangi
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK; Leeds Institute for Cardiovascular and Metabolic Medicine (LICAMM), School of Medicine, University of Leeds, Leeds, UK; Medical Imaging Research Center (MIRC), Cardiovascular Science and Electronic Engineering Departments, KU Leuven, Leuven, Belgium
13
Zhao C, Dewey BE, Pham DL, Calabresi PA, Reich DS, Prince JL. SMORE: A Self-Supervised Anti-Aliasing and Super-Resolution Algorithm for MRI Using Deep Learning. IEEE Trans Med Imaging 2021; 40:805-817. [PMID: 33170776] [PMCID: PMC8053388] [DOI: 10.1109/tmi.2020.3037187]
Abstract
High-resolution magnetic resonance (MR) images are desired in many clinical and research applications. Acquiring such images with high signal-to-noise ratio (SNR), however, can require a long scan duration, which is difficult for patient comfort, is more costly, and makes the images susceptible to motion artifacts. A very common practical compromise for both 2D and 3D MR imaging protocols is to acquire volumetric MR images with high in-plane resolution but lower through-plane resolution. In addition to having poor resolution in one orientation, 2D MRI acquisitions will also have aliasing artifacts, which further degrade the appearance of these images. This paper presents an approach, SMORE, based on convolutional neural networks (CNNs) that restores image quality by improving resolution and reducing aliasing in MR images. This approach is self-supervised, requiring no external training data because the high-resolution and low-resolution data present in the image itself are used for training. For 3D MRI, the method consists of only one self-supervised super-resolution (SSR) deep CNN that is trained from the volumetric image data. For 2D MRI, a self-supervised anti-aliasing (SAA) deep CNN precedes the SSR CNN, also trained from the volumetric image data. Both methods were evaluated on a broad collection of MR data, including filtered and downsampled images, so that quantitative metrics could be computed and compared, and actual acquired low-resolution images, for which visual and sharpness measures could be computed and compared. The super-resolution method is shown to be visually and quantitatively superior to previously reported methods.
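The self-supervised training idea can be sketched as follows: because the in-plane data are high resolution, degrading them along one in-plane axis with a slice-profile filter yields paired LR/HR patches from the image itself, with no external training data. A minimal numpy sketch with a boxcar profile (the method itself trains CNNs on such pairs; the function name is illustrative):

```python
import numpy as np

def self_supervised_pairs(volume, factor):
    """Build an (LR, HR) training pair from the volume itself: degrade
    the high-resolution data along one in-plane axis with a boxcar
    slice profile to mimic the poor through-plane resolution, so no
    external training data are needed. Returns the degraded copy
    (upsampled back by repetition) and the original target."""
    n = (volume.shape[0] // factor) * factor
    hr = volume[:n]
    # boxcar-average groups of `factor` rows: a crude model of
    # acquiring thick slices along this axis
    lr_small = hr.reshape(n // factor, factor, *hr.shape[1:]).mean(axis=1)
    lr = np.repeat(lr_small, factor, axis=0)
    return lr, hr
```

A network trained to map `lr` back to `hr` along this axis can then be applied along the genuinely low-resolution through-plane axis of the same volume.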
14
Wang C, Yang G, Papanastasiou G, Tsaftaris SA, Newby DE, Gray C, Macnaught G, MacGillivray TJ. DiCyc: GAN-based deformation invariant cross-domain information fusion for medical image synthesis. Inf Fusion 2021; 67:147-160. [PMID: 33658909] [PMCID: PMC7763495] [DOI: 10.1016/j.inffus.2020.10.015]
Abstract
The cycle-consistent generative adversarial network (CycleGAN) has been widely used for cross-domain medical image synthesis tasks, particularly due to its ability to deal with unpaired data. However, most CycleGAN-based synthesis methods cannot achieve good alignment between the synthesized images and data from the source domain, even with additional image alignment losses. This is because the CycleGAN generator network can encode the relative deformations and noise associated with different domains. This can be detrimental to downstream applications that rely on the synthesized images, such as generating pseudo-CT for PET-MR attenuation correction. In this paper, we present a deformation-invariant cycle-consistency model that can filter out these domain-specific deformations. The deformation is globally parameterized by thin-plate splines (TPS) and locally learned by modified deformable convolutional layers. Robustness to domain-specific deformations has been evaluated through experiments on multi-sequence brain MR data and multi-modality abdominal CT and MR data. Experimental results demonstrate that our method achieves better alignment between the source and target data while maintaining superior image quality, compared to several state-of-the-art CycleGAN-based methods.
Affiliation(s)
- Chengjia Wang
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Corresponding author.
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Sotirios A. Tsaftaris
- Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, UK
- David E. Newby
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Calum Gray
- Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
- Gillian Macnaught
- Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
15
A Multifeature Extraction Method Using Deep Residual Network for MR Image Denoising. Comput Math Methods Med 2020; 2020:8823861. [PMID: 33204301] [PMCID: PMC7665932] [DOI: 10.1155/2020/8823861]
Abstract
To improve the resolution of magnetic resonance (MR) images and reduce the interference of noise, a multifeature-extraction denoising algorithm based on a deep residual network is proposed. First, the feature extraction layer is constructed by combining three different sizes of convolution kernels, which are used to obtain multiple shallow features for fusion and to increase the network's multiscale perception ability. Then, batch normalization and residual learning are combined to accelerate and optimize the deep network, while addressing the problem of internal covariate shift in deep learning. Finally, a joint loss function is defined by combining the perceptual loss with the traditional mean squared error loss. When the network is trained, images can thus be compared not only at the pixel level but also at a higher level of semantic features, to generate a clearer target image. Based on the MATLAB simulation platform, the TCGA-GBM and CH-GBM datasets are used to experimentally evaluate the proposed algorithm. The results show that when the image size is set to 190 × 215 and the optimization algorithm is Adam, the performance of the proposed algorithm is the best, and its denoising effect is significantly better than that of the comparison algorithms. Especially under high-intensity noise levels, the denoising advantage is more prominent.
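The joint loss described in this abstract, combining pixel-level mean squared error with a feature-space perceptual term, can be sketched as follows. This is a minimal numpy illustration; the weighting `alpha` and the feature-extractor stand-in are assumptions, with a fixed pretrained network playing the role of `feat` in practice.

```python
import numpy as np

def joint_loss(pred, target, feat, alpha=0.1):
    """Joint loss: pixel-level mean squared error plus a perceptual
    term computed in a feature space. `feat` stands in for the fixed
    pretrained feature extractor; any callable mapping images to
    feature arrays works here. `alpha` weights the perceptual term."""
    mse = np.mean((pred - target) ** 2)
    perceptual = np.mean((feat(pred) - feat(target)) ** 2)
    return mse + alpha * perceptual
```

The pixel term anchors the output to the clean target, while the feature-space term penalises semantic differences that per-pixel error alone misses, which is what drives the sharper results reported above.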
16
Fan M, Liu Z, Xu M, Wang S, Zeng T, Gao X, Li L. Generative adversarial network-based super-resolution of diffusion-weighted imaging: Application to tumour radiomics in breast cancer. NMR Biomed 2020; 33:e4345. [PMID: 32521567] [DOI: 10.1002/nbm.4345]
Abstract
Diffusion-weighted imaging (DWI) is increasingly used to guide the clinical management of patients with breast tumours. However, accurate tumour characterization with DWI and the corresponding apparent diffusion coefficient (ADC) maps is challenging due to their limited resolution. This study aimed to produce super-resolution (SR) ADC images and to assess their clinical utility by performing a radiomic analysis for predicting the histologic grade and Ki-67 expression status of breast cancer. To this end, 322 samples of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and the corresponding DWI data were collected. An SR generative adversarial network (SRGAN) and an enhanced deep SR (EDSR) network, along with bicubic interpolation, were utilized to generate SR-ADC images from which radiomic features were extracted. The dataset was randomly separated into a development dataset (n = 222) to establish a deep SR model using DCE-MRI and a validation dataset (n = 100) to improve the resolution of ADC images. This random separation was performed 10 times, and the results were averaged. The EDSR method was significantly better than the SRGAN and bicubic methods in terms of objective quality criteria. Univariate and multivariate predictive models of radiomic features were established to determine the area under the receiver operating characteristic curve (AUC). Individual features from the tumour SR-ADC images showed higher performance with the EDSR and SRGAN methods than with the bicubic method or the original images. Multivariate analysis of the collective radiomics showed that the EDSR- and SRGAN-based SR-ADC images performed better than the bicubic method and the original images in predicting either Ki-67 expression levels (AUCs of 0.818 and 0.801, respectively) or the tumour grade (AUCs of 0.826 and 0.828, respectively).
This work demonstrates that in addition to improving the resolution of ADC images, deep SR networks can also improve tumour image-based diagnosis in breast cancer.
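The abstract above compares SR methods by objective quality criteria such as PSNR. A minimal sketch of that evaluation step is shown below; the nearest-neighbour upsampler is a deliberately simplified stand-in for the paper's bicubic/SRGAN/EDSR reconstructions, and the image sizes and `max_val` normalization are illustrative assumptions, not details from the study.

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def upsample_nearest(img, factor=2):
    """Simplified 2x upsampling; stands in for bicubic or a learned SR model."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
hr = rng.random((32, 32))        # stand-in for a high-resolution ADC map
lr = hr[::2, ::2]                # simulated low-resolution acquisition
sr = upsample_nearest(lr, 2)     # naive reconstruction of the HR grid
score = psnr(hr, sr)             # higher is better; compare across methods
```

In the study's setting, the same PSNR (plus SSIM) comparison is run over reconstructions from each SR method, and the method ranking feeds into the downstream radiomic analysis.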
Affiliation(s)
- Ming Fan
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou, China
- Zuhui Liu
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou, China
- Maosheng Xu
- Department of Radiology, First Affiliated Hospital of Zhejiang Chinese Medical University, Zhejiang, Hangzhou, China
- Shiwei Wang
- Department of Radiology, First Affiliated Hospital of Zhejiang Chinese Medical University, Zhejiang, Hangzhou, China
- Tieyong Zeng
- Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong, China
- Xin Gao
- Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Lihua Li
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou, China
|
17
|
Chai Y, Xu B, Zhang K, Lepore N, Wood J. MRI restoration using edge-guided adversarial learning. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:83858-83870. [PMID: 33747672 PMCID: PMC7977797 DOI: 10.1109/access.2020.2992204] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Magnetic resonance imaging (MRI) images acquired as multislice two-dimensional (2D) stacks present challenges when reformatted in orthogonal planes because of sparser sampling in the through-plane direction. Restoring the "missing" through-plane slices, or regions of an MRI image damaged by acquisition artifacts, can be modeled as an image imputation task. In this work, we treat the damaged image regions or missing through-plane slices as image masks and propose an edge-guided generative adversarial network to restore brain MRI images. Inspired by image inpainting, our method decouples image repair into two stages, edge connection and contrast completion, each implemented with a generative adversarial network (GAN). We trained and tested thick-slice imputation on a dataset from the Human Connectome Project, and evaluated artifact correction on clinical and simulated datasets. Our edge-guided GAN achieved superior PSNR, SSIM, conspicuity and signal texture compared with traditional imputation tools, the Context Encoder and the Densely Connected Super-Resolution Network with GAN (DCSRN-GAN). The proposed network may improve the utilization of clinical 2D scans for 3D atlas generation and big-data comparative studies of brain morphometry.
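The abstract frames thick-slice restoration as masked imputation: known slices stay fixed and a model fills the masked rows. A minimal sketch of that masking setup is shown below; the mean-of-neighbours fill is a placeholder for the paper's two-stage GAN, and the mask spacing and helper names (`make_slice_mask`, `impute_missing_rows`) are illustrative assumptions.

```python
import numpy as np

def make_slice_mask(shape, missing_every=2):
    """Mark every k-th through-plane row as missing (False) to mimic
    the sparse sampling of a thick-slice 2D acquisition."""
    mask = np.ones(shape, dtype=bool)
    mask[::missing_every, :] = False
    return mask

def impute_missing_rows(img, mask):
    """Placeholder imputer: fill each missing pixel with the mean of its
    valid vertical neighbours (the paper uses an edge-guided GAN here)."""
    out = img.copy()
    rows, cols = np.where(~mask)
    for r, c in zip(rows, cols):
        above = img[r - 1, c] if r > 0 and mask[r - 1, c] else None
        below = img[r + 1, c] if r + 1 < img.shape[0] and mask[r + 1, c] else None
        vals = [v for v in (above, below) if v is not None]
        out[r, c] = np.mean(vals) if vals else 0.0
    return out

# Demo on a vertical intensity ramp: missing rows are recovered exactly
# because the true signal is linear in the through-plane direction.
vol = np.tile(np.arange(6, dtype=float).reshape(6, 1), (1, 4))
mask = make_slice_mask(vol.shape)
restored = impute_missing_rows(vol, mask)
```

The paper's contribution is what replaces the naive fill: stage one hallucinates plausible edges in the masked region, stage two completes intensities consistent with those edges.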
Affiliation(s)
- Yaqiong Chai
- Department of Biomedical Engineering, University of Southern California, CA, USA
- CIBORG lab, Department of Radiology, Children’s Hospital Los Angeles, CA, USA
- Botian Xu
- Department of Biomedical Engineering, University of Southern California, CA, USA
- Kangning Zhang
- Department of Electrical Engineering, University of Southern California, CA, USA
- Natasha Lepore
- Department of Biomedical Engineering, University of Southern California, CA, USA
- CIBORG lab, Department of Radiology, Children’s Hospital Los Angeles, CA, USA
- John Wood
- Department of Biomedical Engineering, University of Southern California, CA, USA
- Division of Cardiology, Children’s Hospital Los Angeles, CA, USA
|