1. Abbott RE, Nishimwe A, Wiputra H, Breighner RE, Ellingson AM. A super-resolution algorithm to fuse orthogonal CT volumes using OrthoFusion. Sci Rep 2025; 15:1382. PMID: 39779816; PMCID: PMC11711182; DOI: 10.1038/s41598-025-85516-y.
Abstract
OrthoFusion, an intuitive super-resolution algorithm, is presented in this study to enhance the spatial resolution of clinical CT volumes. The efficacy of OrthoFusion is evaluated, relative to high-resolution CT volumes (ground truth), by assessing image volume and derived bone morphological similarity, as well as its performance in specific applications in 2D-3D registration tasks. Results demonstrate that OrthoFusion significantly reduced segmentation time, while improving structural similarity of bone images and relative accuracy of derived bone model geometries. Moreover, it proved beneficial in the context of biplane videoradiography, enhancing the similarity of digitally reconstructed radiographs to radiographic images and improving the accuracy of relative bony kinematics. OrthoFusion's simplicity, ease of implementation, and generalizability make it a valuable tool for researchers and clinicians seeking high spatial resolution from existing clinical CT data. This study opens new avenues for retrospectively utilizing clinical images for research and advanced clinical purposes, while reducing the need for additional scans, mitigating associated costs and radiation exposure.
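The OrthoFusion algorithm itself is not reproduced in this listing. As a purely illustrative sketch of the general idea behind fusing orthogonal acquisitions — two volumes that are high-resolution in orthogonal planes are brought to a common isotropic grid and combined — the following assumes plain linear interpolation and voxel-wise averaging; the helper names and the averaging rule are stand-ins, not the authors' code:

```python
import numpy as np

def upsample_axis(vol, axis, factor):
    """Linearly interpolate a 3D volume along one (low-resolution) axis."""
    n = vol.shape[axis]
    old = np.arange(n)
    new = np.linspace(0, n - 1, n * factor)
    return np.apply_along_axis(lambda line: np.interp(new, old, line), axis, vol)

def fuse_orthogonal(vol_axial, vol_sagittal, factor):
    """Fuse two anisotropic volumes whose coarse axes are orthogonal.

    vol_axial is coarse along axis 0, vol_sagittal along axis 2; both are
    resampled to the same isotropic grid and averaged (a simple stand-in
    for OrthoFusion's actual fusion rule).
    """
    hr_a = upsample_axis(vol_axial, axis=0, factor=factor)
    hr_s = upsample_axis(vol_sagittal, axis=2, factor=factor)
    return 0.5 * (hr_a + hr_s)
```

Each input contributes the in-plane detail of its own high-resolution orientation, which is why the fused volume can recover structure that neither anisotropic volume shows alone.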
Affiliations
- Rebecca E Abbott: Divisions of Physical Therapy and Rehabilitation Science, Department of Family Medicine and Community Health, University of Minnesota, Minneapolis, MN, 55455, USA
- Alain Nishimwe: Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN, 55455, USA
- Hadi Wiputra: Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN, 55455, USA
- Ryan E Breighner: Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY, 10021, USA
- Arin M Ellingson: Divisions of Physical Therapy and Rehabilitation Science, Department of Family Medicine and Community Health, University of Minnesota, Minneapolis, MN, 55455, USA
2. Wang L, Zhang W, Chen W, He Z, Jia Y, Du J. Cross-Modality Reference and Feature Mutual-Projection for 3D Brain MRI Image Super-Resolution. J Imaging Inform Med 2024; 37:2838-2851. PMID: 38829472; PMCID: PMC11612118; DOI: 10.1007/s10278-024-01139-1.
Abstract
High-resolution (HR) magnetic resonance imaging (MRI) can reveal rich anatomical structures for clinical diagnoses. However, due to hardware and signal-to-noise ratio limitations, MRI images are often collected at low resolution (LR), which is not conducive to diagnosing and analyzing clinical diseases. Recently, deep learning super-resolution (SR) methods have demonstrated great potential in enhancing the resolution of MRI images; however, most of them do not fully exploit the cross-modality and internal priors of MR images, which limits SR performance. In this paper, we propose a cross-modality reference and feature mutual-projection (CRFM) method to enhance the spatial resolution of brain MRI images. Specifically, we feed the gradients of HR MRI images from a reference imaging modality into the SR network to transform true clear textures to LR feature maps. Meanwhile, we design a plug-in feature mutual-projection (FMP) method to capture the cross-scale dependency and cross-modality similarity details of MRI images. Finally, we fuse all feature maps with parallel attentions to produce and refine the HR features adaptively. Extensive experiments on MRI images in the image domain and k-space show that our CRFM method outperforms existing state-of-the-art MRI SR methods.
Affiliations
- Lulu Wang: Faculty of Information Engineering and Automation, Kunming University of Science and Technology and Yunnan Key Laboratory of Computer Technologies Application, Kunming, 650500, China
- Wanqi Zhang: College of Computer Science, Chongqing University, Chongqing, 400044, China
- Wei Chen: College of Computer Science, Chongqing University, Chongqing, 400044, China
- Zhongshi He: College of Computer Science, Chongqing University, Chongqing, 400044, China
- Yuanyuan Jia: Medical Data Science Academy and College of Medical Informatics, Chongqing Medical University, Chongqing, 400016, China
- Jinglong Du: Medical Data Science Academy and College of Medical Informatics, Chongqing Medical University, Chongqing, 400016, China
3. Abbott RE, Nishimwe A, Wiputra H, Breighner RE, Ellingson AM. OrthoFusion: A Super-Resolution Algorithm to Fuse Orthogonal CT Volumes. Research Square [preprint] 2024:rs.3.rs-4117386. PMID: 38645068; PMCID: PMC11030529; DOI: 10.21203/rs.3.rs-4117386/v1.
Abstract
OrthoFusion, an intuitive super-resolution algorithm, is presented in this study to enhance the spatial resolution of clinical CT volumes. The efficacy of OrthoFusion is evaluated, relative to high-resolution CT volumes (ground truth), by assessing image volume and derived bone morphological similarity, as well as its performance in specific applications in 2D-3D registration tasks. Results demonstrate that OrthoFusion significantly reduced segmentation time, while improving structural similarity of bone images and relative accuracy of derived bone model geometries. Moreover, it proved beneficial in the context of biplane videoradiography, enhancing the similarity of digitally reconstructed radiographs to radiographic images and improving the accuracy of relative bony kinematics. OrthoFusion's simplicity, ease of implementation, and generalizability make it a valuable tool for researchers and clinicians seeking high spatial resolution from existing clinical CT data. This study opens new avenues for retrospectively utilizing clinical images for research and advanced clinical purposes, while reducing the need for additional scans, mitigating associated costs and radiation exposure.
4. Li H, Jia Y, Zhu H, Han B, Du J, Liu Y. Multi-level feature extraction and reconstruction for 3D MRI image super-resolution. Comput Biol Med 2024; 171:108151. PMID: 38387383; DOI: 10.1016/j.compbiomed.2024.108151.
Abstract
Magnetic resonance imaging (MRI) is an essential radiology technique in clinical diagnosis, but its spatial resolution may not suffice to meet the growing need for precise diagnosis due to hardware limitations and large slice thickness. Therefore, it is crucial to explore suitable methods to increase the resolution of MRI images. Recently, deep learning has yielded many impressive results in MRI image super-resolution (SR) reconstruction. However, current SR networks mainly use convolutions to extract a relatively narrow range of image features, which may not be optimal for further enhancing the quality of image reconstruction. In this work, we propose a multi-level feature extraction and reconstruction (MFER) method to restore the degraded high-resolution details of MRI images. Specifically, to comprehensively extract different types of features, we design a triple-mixed convolution that leverages the strengths and uniqueness of different filter operations. For the features of each level, we then apply deconvolutions to upsample them separately at the tail of the network, followed by feature calibration with spatial and channel attention. Besides, we also use a soft cross-scale residual operation to improve the effectiveness of parameter optimization. Experiments on lesion-free and glioma datasets indicate that our method obtains superior quantitative performance and visual effects when compared with state-of-the-art MRI image SR methods.
Affiliations
- Hongbi Li: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China
- Yuanyuan Jia: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China
- Huazheng Zhu: College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Baoru Han: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China
- Jinglong Du: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China
- Yanbing Liu: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China; Chongqing Municipal Education Commission, Chongqing 400020, China
5. Lu Z, Wang J, Li Z, Ying S, Wang J, Shi J, Shen D. Two-Stage Self-Supervised Cycle-Consistency Transformer Network for Reducing Slice Gap in MR Images. IEEE J Biomed Health Inform 2023; 27:3337-3348. PMID: 37126622; DOI: 10.1109/jbhi.2023.3271815.
Abstract
Magnetic resonance (MR) images are usually acquired with a large slice gap in clinical practice, i.e., low resolution (LR) along the through-plane direction. It is feasible to reduce the slice gap and reconstruct high-resolution (HR) images with deep learning (DL) methods. To this end, paired LR and HR images are generally required to train a DL model in the popular fully supervised manner. However, since HR images are rarely acquired in clinical routine, it is difficult to get sufficient paired samples to train a robust model. Moreover, the widely used convolutional neural network (CNN) still cannot capture long-range image dependencies to combine useful information of similar contents, which are often spatially far away from each other across neighboring slices. To address these issues, a Two-stage Self-supervised Cycle-consistency Transformer Network (TSCTNet) is proposed in this work to reduce the slice gap for MR images. A novel self-supervised learning (SSL) strategy is designed with two stages, for robust network pre-training and specialized network refinement respectively, based on a cycle-consistency constraint. A hybrid Transformer and CNN structure is utilized to build an interpolation model, which explores both local and global slice representations. The experimental results on two public MR image datasets indicate that TSCTNet achieves superior performance over other compared SSL-based algorithms.
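As a point of contrast with the learned transformer interpolation TSCTNet performs, the slice-gap-reduction task itself can be sketched with a model-free baseline: intermediate slices synthesized by linear blending of their neighbors. This illustrates the task only, not the paper's method; the function name is a hypothetical:

```python
import numpy as np

def insert_slices(vol, k=1):
    """Insert k linearly blended slices between each adjacent slice pair
    along axis 0 (a naive stand-in for learned slice interpolation)."""
    slabs = []
    for a, b in zip(vol[:-1], vol[1:]):
        slabs.append(a[None])                      # keep the acquired slice
        for j in range(1, k + 1):
            t = j / (k + 1)                        # blending weight
            slabs.append(((1 - t) * a + t * b)[None])
    slabs.append(vol[-1][None])                    # keep the last slice
    return np.concatenate(slabs, axis=0)
```

A volume with n slices becomes one with n + (n-1)*k slices; learned methods improve on this baseline precisely where anatomy between slices is not a linear blend of its neighbors.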
6. Wu Q, Li Y, Sun Y, Zhou Y, Wei H, Yu J, Zhang Y. An Arbitrary Scale Super-Resolution Approach for 3D MR Images via Implicit Neural Representation. IEEE J Biomed Health Inform 2023; 27:1004-1015. PMID: 37022393; DOI: 10.1109/jbhi.2022.3223106.
Abstract
High-resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis. In magnetic resonance imaging (MRI), restricted by hardware capacity, scan time, and patient cooperation ability, isotropic 3-dimensional (3D) HR image acquisition typically requires long scan times and results in small spatial coverage and low signal-to-noise ratio (SNR). Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images can be recovered from low-resolution (LR) input via single image super-resolution (SISR) algorithms. However, most existing SISR methods tend to learn a scale-specific projection between LR and HR images, so they can only deal with fixed up-sampling rates. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the LR image and the HR image are represented using the same implicit neural voxel function with different sampling rates. Due to the continuity of the learned implicit function, a single ArSSR model is able to achieve arbitrary and infinite up-sampling rate reconstructions of HR images from any input LR image. The SR task is thus converted to approximating the implicit voxel function via deep neural networks from a set of paired HR and LR training examples. The ArSSR model consists of an encoder network and a decoder network: the convolutional encoder network extracts feature maps from the LR input images, and the fully-connected decoder network approximates the implicit voxel function. Experimental results on three datasets show that the ArSSR model can achieve state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model to achieve arbitrary up-sampling scales.
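The learned implicit voxel function at the heart of ArSSR is not reproduced here; the sketch below stands it in with fixed trilinear interpolation to illustrate why a continuous coordinate-to-intensity function decouples reconstruction from any fixed up-sampling factor. Function names are hypothetical, and ArSSR replaces the interpolant with a learned MLP over encoder features:

```python
import numpy as np

def trilinear(vol, coords):
    """Evaluate a volume at continuous (z, y, x) coordinates in [0, dim-1]."""
    out = np.empty(len(coords))
    for i, (z, y, x) in enumerate(coords):
        z0, y0, x0 = int(z), int(y), int(x)
        z1 = min(z0 + 1, vol.shape[0] - 1)
        y1 = min(y0 + 1, vol.shape[1] - 1)
        x1 = min(x0 + 1, vol.shape[2] - 1)
        dz, dy, dx = z - z0, y - y0, x - x0
        c = 0.0
        for zi, wz in ((z0, 1 - dz), (z1, dz)):      # weights sum to 1
            for yi, wy in ((y0, 1 - dy), (y1, dy)):
                for xi, wx in ((x0, 1 - dx), (x1, dx)):
                    c += wz * wy * wx * vol[zi, yi, xi]
        out[i] = c
    return out

def resample(vol, scale):
    """Sample the continuous representation on a grid `scale` times denser."""
    dims = [int(d * scale) for d in vol.shape]
    axes = [np.linspace(0, d - 1, n) for d, n in zip(vol.shape, dims)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    return trilinear(vol, grid).reshape(dims)
```

Because `resample` only ever queries the continuous function at coordinates, any scale — including non-integer ones such as 1.5 — works with the same code, which is the property the paper exploits with a learned implicit function.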
7. Xu Y, Dai S, Song H, Du L, Chen Y. Multi-modal brain MRI images enhancement based on framelet and local weights super-resolution. Math Biosci Eng 2023; 20:4258-4273. PMID: 36899626; DOI: 10.3934/mbe.2023199.
Abstract
Magnetic resonance (MR) image enhancement technology can reconstruct a high-resolution image from a low-resolution image, which is of great significance for clinical application and scientific research. T1 weighting and T2 weighting are the two common magnetic resonance imaging modes, each of which has its own advantages, but the imaging time of T2 is much longer than that of T1. Related studies have shown that T1 and T2 brain images have very similar anatomical structures, so the edge information of high-resolution T1 images, which can be rapidly acquired, can be utilized to enhance the resolution of low-resolution T2 images and thereby shorten the imaging time needed for T2 images. To overcome the inflexibility of traditional methods that use fixed weights for interpolation and the inaccuracy of using a gradient threshold to determine edge regions, we propose a new model based on previous studies of multi-contrast MR image enhancement. Our model uses framelet decomposition to finely separate the edge structure of the T2 brain image, and uses local regression weights calculated from the T1 image to construct a global interpolation matrix, so that the model can not only guide edge reconstruction more accurately where the weights are shared, but also carry out collaborative global optimization for the remaining pixels and their interpolation weights. Experimental results on a set of simulated MR data and two sets of real MR images show that the enhanced images obtained by the proposed method are superior to those of the compared methods in terms of visual sharpness and qualitative indicators.
Affiliations
- Yingying Xu: School of Electronics and Information Engineering, Taizhou University, Taizhou 318000, China
- Songsong Dai: School of Electronics and Information Engineering, Taizhou University, Taizhou 318000, China
- Haifeng Song: School of Electronics and Information Engineering, Taizhou University, Taizhou 318000, China
- Lei Du: School of Electronics and Information Engineering, Taizhou University, Taizhou 318000, China
- Ying Chen: School of Electronics and Information Engineering, Taizhou University, Taizhou 318000, China
8. Liachenko S, Chelonis J, Paule MG, Li M, Sadovova N, Talpos JC. The effects of long-term methylphenidate administration and withdrawal on progressive ratio responding and T2 MRI in the male rhesus monkey. Neurotoxicol Teratol 2022; 93:107119. PMID: 35970252; DOI: 10.1016/j.ntt.2022.107119.
Abstract
Methylphenidate is a frequently prescribed drug treatment for Attention-Deficit/Hyperactivity Disorder. However, methylphenidate has a mode of action similar to amphetamine and cocaine, both powerful drugs of abuse. There is lingering concern over the long-term safety of methylphenidate, especially in a pediatric population, where the drug may be used for years. We performed a long-term evaluation of the effects of chronic methylphenidate use on a behavioral measure of motivation in male rhesus monkeys. Animals were orally administered a sweetened methylphenidate solution (2.5 or 12.5 mg/kg, twice a day, Mon-Fri) or vehicle during adolescence and into adulthood. These animals were assessed on a test of motivation (progressive ratio responding) during methylphenidate treatment and after cessation of use. Moreover, animals were evaluated with quantitative T2 MRI about one year after cessation of use. During the administration phase of the study, animals treated with a clinically relevant dose of methylphenidate generally had a higher rate of responding than the control group, while the high dose group generally had a lower rate of responding. These differences were not statistically significant. In the month after cessation of methylphenidate, responding in both experimental groups dropped compared to their previous level of performance (p = 0.19 for 2.5 mg/kg; p = 0.06 for 12.5 mg/kg), and responding in the control animals was unchanged (p = 0.81). While cessation of methylphenidate was associated with an acute reduction in responding, group differences were not observed in the following months. These data suggest that methylphenidate did not have a significant impact on responding, but withdrawal from methylphenidate did cause a temporary change in motivation. No changes in T2 MRI values were detected when measured about one year after cessation of treatment. These data suggest that long-term methylphenidate use does not have a negative effect on a measure of motivation or on brain function and microstructure as measured by quantitative T2 MRI. However, cessation of use might be associated with temporary cognitive changes, specifically alteration in motivation. Importantly, this study modeled use in healthy individuals, and results may differ if the same work were repeated in a model of ADHD.
Affiliations
- Serguei Liachenko: Division of Neurotoxicology, National Center for Toxicological Research, 3900 NCTR Road, Jefferson, AR 72079, USA
- John Chelonis: Division of Neurotoxicology, National Center for Toxicological Research, 3900 NCTR Road, Jefferson, AR 72079, USA
- Merle G Paule: Division of Neurotoxicology, National Center for Toxicological Research, 3900 NCTR Road, Jefferson, AR 72079, USA
- Mi Li: Division of Neurotoxicology, National Center for Toxicological Research, 3900 NCTR Road, Jefferson, AR 72079, USA
- Natalya Sadovova: Division of Neurotoxicology, National Center for Toxicological Research, 3900 NCTR Road, Jefferson, AR 72079, USA
- John C Talpos: Division of Neurotoxicology, National Center for Toxicological Research, 3900 NCTR Road, Jefferson, AR 72079, USA
9. Sui Y, Afacan O, Jaimes C, Gholipour A, Warfield SK. Scan-Specific Generative Neural Network for MRI Super-Resolution Reconstruction. IEEE Trans Med Imaging 2022; 41:1383-1399. PMID: 35020591; PMCID: PMC9208763; DOI: 10.1109/tmi.2022.3142610.
Abstract
The interpretation and analysis of magnetic resonance imaging (MRI) benefit from high spatial resolution. Unfortunately, direct acquisition of high spatial resolution MRI is time-consuming and costly, increases the potential for motion artifacts, and suffers from reduced signal-to-noise ratio (SNR). Super-resolution reconstruction (SRR) is one of the most widely used methods in MRI since it allows for a trade-off among high spatial resolution, high SNR, and reduced scan times. Deep learning has emerged as a means of improving SRR compared to conventional methods. However, current deep learning-based SRR methods require large-scale training datasets of high-resolution images, which are practically difficult to obtain at a suitable SNR. We sought to develop a methodology that allows for dataset-free deep learning-based SRR, through which to construct images with higher spatial resolution and higher SNR than can be practically obtained by direct Fourier encoding. We developed a dataset-free learning method that leverages a generative neural network trained for each specific scan or set of scans, which, in turn, allows for SRR tailored to the individual patient. With the SRR from three short-duration scans, we achieved high quality brain MRI at an isotropic spatial resolution of 0.125 cubic mm with six minutes of imaging time for T2 contrast, and an average increase of 7.2 dB (34.2%) in SNR over these short-duration scans. Motion compensation was achieved by aligning the three short-duration scans together. We assessed our technique on simulated MRI data and clinical data acquired from 15 subjects. Extensive experimental results demonstrate that our approach achieved superior results to state-of-the-art methods, while performing at reduced cost compared to scans delivered with direct high-resolution acquisition.
10. Harper JR, Cherukuri V, O'Reilly T, Yu M, Mbabazi-Kabachelor E, Mulando R, Sheth KN, Webb AG, Warf BC, Kulkarni AV, Monga V, Schiff SJ. Assessing the utility of low resolution brain imaging: treatment of infant hydrocephalus. Neuroimage Clin 2021; 32:102896. PMID: 34911199; PMCID: PMC8646178; DOI: 10.1016/j.nicl.2021.102896.
Abstract
As low-field MRI technology is being disseminated into clinical settings around the world, it is important to assess the image quality required to properly diagnose and treat a given disease and evaluate the role of machine learning algorithms, such as deep learning, in the enhancement of lower quality images. In this post hoc analysis of an ongoing randomized clinical trial, we assessed the diagnostic utility of reduced-quality and deep learning enhanced images for hydrocephalus treatment planning. CT images of post-infectious infant hydrocephalus were degraded in terms of spatial resolution, noise, and contrast between brain and CSF and enhanced using deep learning algorithms. Both degraded and enhanced images were presented to three experienced pediatric neurosurgeons accustomed to working in low- to middle-income countries (LMIC) for assessment of clinical utility in treatment planning for hydrocephalus. In addition, enhanced images were presented alongside their ground-truth CT counterparts in order to assess whether reconstruction errors caused by the deep learning enhancement routine were acceptable to the evaluators. Results indicate that image resolution and contrast-to-noise ratio between brain and CSF predict the likelihood of an image being characterized as useful for hydrocephalus treatment planning. Deep learning enhancement substantially increases contrast-to-noise ratio improving the apparent likelihood of the image being useful; however, deep learning enhancement introduces structural errors which create a substantial risk of misleading clinical interpretation. We find that images with lower quality than is customarily acceptable can be useful for hydrocephalus treatment planning. Moreover, low quality images may be preferable to images enhanced with deep learning, since they do not introduce the risk of misleading information which could misguide treatment decisions. These findings advocate for new standards in assessing acceptable image quality for clinical use.
Affiliations
- Joshua R Harper: Center for Neural Engineering, Department of Engineering Science and Mechanics, The Pennsylvania State University, University Park, PA, USA
- Venkateswararao Cherukuri: School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, USA
- Tom O'Reilly: Gorter Center for High Field MRI, Leiden University Medical Center, Leiden, the Netherlands
- Mingzhao Yu: School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, USA
- Kevin N Sheth: Department of Neurology, Yale University School of Medicine, New Haven, CT, USA
- Andrew G Webb: Gorter Center for High Field MRI, Leiden University Medical Center, Leiden, the Netherlands
- Benjamin C Warf: Department of Neurosurgery, Boston Children's Hospital, Harvard Medical School, Boston, USA
- Abhaya V Kulkarni: Department of Surgery, Hospital for Sick Children, University of Toronto, Toronto, Canada
- Vishal Monga: School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, USA
- Steven J Schiff: Center for Neural Engineering, Department of Engineering Science and Mechanics, and Departments of Neurosurgery and Physics, The Pennsylvania State University, University Park, PA, USA
11. Sui Y, Afacan O, Jaimes C, Gholipour A, Warfield SK. Gradient-Guided Isotropic MRI Reconstruction from Anisotropic Acquisitions. IEEE Trans Comput Imaging 2021; 7:1240-1253. PMID: 35252479; PMCID: PMC8896514; DOI: 10.1109/tci.2021.3128745.
Abstract
The trade-off between image resolution, signal-to-noise ratio (SNR), and scan time in any magnetic resonance imaging (MRI) protocol is unavoidable. Super-resolution reconstruction (SRR) has been shown to be effective in mitigating these factors and has thus become an important approach to addressing the current limitations of MRI. In this work, we developed a novel, image-based MRI SRR approach based on anisotropic acquisition schemes, which utilizes a new gradient guidance regularization method that guides the high-resolution (HR) reconstruction via a spatial gradient estimate. Further, we designed an analytical solution to propagate the spatial gradient fields from the low-resolution (LR) images to the HR image space and exploited these gradient fields over multiple scales with a dynamic update scheme for more accurate edge localization in the reconstruction. We also established a forward model of image formation and inverted it along with the proposed gradient guidance. The proposed SRR method allows subject motion between volumes and is able to incorporate various acquisition schemes where the LR images are acquired with arbitrary orientations and displacements, such as orthogonal and through-plane origin-shifted scans. We assessed our proposed approach on simulated data as well as on data acquired on a Siemens 3T MRI scanner comprising 45 MRI scans from 14 subjects. Our experimental results demonstrate that our approach achieved superior reconstructions compared to state-of-the-art methods, both in terms of local spatial smoothness and edge preservation, at reduced or equal cost compared to scans delivered with direct HR acquisition.
Affiliations
- Yao Sui: Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Onur Afacan: Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Camilo Jaimes: Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Ali Gholipour: Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Simon K Warfield: Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
12. Wang L, Du J, Gholipour A, Zhu H, He Z, Jia Y. 3D dense convolutional neural network for fast and accurate single MR image super-resolution. Comput Med Imaging Graph 2021; 93:101973. PMID: 34543775; DOI: 10.1016/j.compmedimag.2021.101973.
Abstract
Super-resolution (SR) MR image reconstruction has been shown to be a very promising direction for improving the spatial resolution of low-resolution (LR) MR images. In this paper, we present a novel MR image SR method based on a dense convolutional neural network (DDSR), and an enhanced version called EDDSR. There are three major innovations: first, we re-designed dense modules to extract hierarchical features directly from LR images and propagate the extracted feature maps through dense connections. Therefore, unlike other CNN-based SR MR techniques that upsample LR patches in the initial phase, our methods take the original LR images or patches as input. This effectively reduces computational complexity and speeds up SR reconstruction. Second, a final deconvolution filter in our model automatically learns filters to fuse and upscale all hierarchical feature maps to generate HR MR images. Using this, EDDSR can perform SR reconstruction at different upscale factors using a single model with one fixed-stride deconvolution operation. Third, to further improve SR reconstruction accuracy, we exploited a geometric self-ensemble strategy. Experimental results on three benchmark datasets demonstrate that our methods, DDSR and EDDSR, achieved superior performance compared to state-of-the-art MR image SR methods with less computational load and memory usage.
Collapse
Affiliation(s)
- Lulu Wang
- College of Computer Science, Chongqing University, Chongqing 400044, China.
| | - Jinglong Du
- College of Computer Science, Chongqing University, Chongqing 400044, China.
| | - Ali Gholipour
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA.
| | - Huazheng Zhu
- College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China.
| | - Zhongshi He
- College of Computer Science, Chongqing University, Chongqing 400044, China.
| | - Yuanyuan Jia
- College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China; Medical Data Science Academy, Chongqing Medical University, Chongqing 400016, China.
|
13
|
Zhuang Y, Liu H, Song E, Ma G, Xu X, Hung CC. APRNet: A 3D Anisotropic Pyramidal Reversible Network with Multi-modal Cross-Dimension Attention for Brain Tissue Segmentation in MR Images. IEEE J Biomed Health Inform 2021; 26:749-761. [PMID: 34197331 DOI: 10.1109/jbhi.2021.3093932] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Brain tissue segmentation in multi-modal magnetic resonance (MR) images is important for the clinical diagnosis of brain diseases. Owing to blurred boundaries, low contrast, and intricate anatomical relationships between brain tissue regions, automatic brain tissue segmentation without prior knowledge remains challenging. This paper presents a novel 3D fully convolutional network (FCN) for brain tissue segmentation, called APRNet. First, we propose a 3D anisotropic pyramidal convolutional reversible residual sequence (3DAPC-RRS) module to integrate intra-slice information with inter-slice information without significant memory consumption. Second, we design a multi-modal cross-dimension attention (MCDA) module to automatically capture the effective information in each dimension of multi-modal images. Finally, we apply the 3DAPC-RRS and MCDA modules to a 3D FCN with multiple encoding streams and one decoding stream to constitute the overall architecture of APRNet. We evaluated APRNet on two benchmark challenges, MRBrainS13 and iSeg-2017. The experimental results show that APRNet yields state-of-the-art segmentation results on both benchmark datasets and achieves the best segmentation performance on the cerebrospinal fluid region. Compared with other methods, our approach exploits the complementary information of different modalities to segment brain tissue regions in both adult and infant MR images, achieving average Dice coefficients of 87.22% and 93.03% on the MRBrainS13 and iSeg-2017 test data, respectively. The proposed method is beneficial for quantitative brain analysis in clinical studies, and our code is publicly available.
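The anisotropic kernel decomposition behind the 3DAPC-RRS module, splitting a 3D neighborhood into an in-plane (k x k x 1) component and a through-plane (1 x 1 x k) component, can be sketched as below. This is only an illustration of the kernel-shape idea under stated assumptions: APRNet uses learned anisotropic convolutions inside reversible residual blocks, whereas plain edge-padded averaging stands in for them here.

```python
import numpy as np

def window_mean(vol, axis, k):
    """Average `vol` over a length-k window along `axis` (edge-padded),
    preserving the volume's shape."""
    p = k // 2
    pad = [(0, 0)] * vol.ndim
    pad[axis] = (p, p)
    v = np.pad(vol, pad, mode="edge")
    n = vol.shape[axis]
    return np.mean([np.take(v, range(i, i + n), axis=axis)
                    for i in range(k)], axis=0)

def anisotropic_features(vol, k=3):
    """Intra-slice context: a (k, k, 1) receptive field over the two
    in-plane axes. Inter-slice context: a (1, 1, k) receptive field
    along the slice axis. Returned separately, mirroring how anisotropic
    branches gather the two kinds of context before fusion."""
    intra = window_mean(window_mean(vol, 0, k), 1, k)
    inter = window_mean(vol, 2, k)
    return intra, inter
```

Keeping the in-plane and through-plane operations separate is what lets a network handle anisotropic voxel spacing (thick slices) without the memory cost of full k x k x k kernels.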
|
14
|
Kang SK, Shin SA, Seo S, Byun MS, Lee DY, Kim YK, Lee DS, Lee JS. Deep learning-Based 3D inpainting of brain MR images. Sci Rep 2021; 11:1673. [PMID: 33462321 PMCID: PMC7814079 DOI: 10.1038/s41598-020-80930-w] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Accepted: 12/14/2020] [Indexed: 12/22/2022] Open
Abstract
The detailed anatomical information of the brain provided by 3D magnetic resonance imaging (MRI) enables various kinds of neuroscience research. However, because of the long scan time for 3D MR images, 2D images are mainly acquired in clinical environments. The purpose of this study is to generate 3D images from sparsely sampled 2D images using an inpainting deep neural network with a U-net-like structure and DenseNet sub-blocks. To train the network, both a fidelity loss and a perceptual loss based on the VGG network were used. Several methods were used to assess the overall similarity between the inpainted and original 3D data. In addition, morphological analyses were performed to investigate whether the inpainted data reproduced local features of the original 3D data. The diagnostic utility of the inpainted data was also evaluated by examining the pattern of morphological changes in disease groups. Brain anatomy details were efficiently recovered by the proposed neural network. In voxel-based analyses of gray matter volume and cortical thickness, differences between the inpainted data and the original 3D data were observed only in small clusters. The proposed method will be useful for applying advanced neuroimaging techniques to 2D MRI data.
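The training objective described above, a fidelity term plus a VGG-based perceptual term, has the general shape sketched below. This is a hedged illustration, not the paper's code: `feat_fn` is a hypothetical stand-in for a frozen VGG feature extractor, and the weight `lam` is an assumed parameter, not a value from the paper.

```python
import numpy as np

def combined_loss(pred, target, feat_fn, lam=0.1):
    """Fidelity + perceptual training objective: an L2 term in image
    space plus an L2 term between feature maps of the two images.
    `feat_fn` maps an image to a feature array (VGG stand-in)."""
    fidelity = np.mean((pred - target) ** 2)
    perceptual = np.mean((feat_fn(pred) - feat_fn(target)) ** 2)
    return fidelity + lam * perceptual
```

The fidelity term keeps voxel intensities faithful, while the perceptual term penalizes differences in higher-level structure that plain L2 tends to blur away.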
Affiliation(s)
- Seung Kwan Kang
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Korea
| | - Seong A Shin
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Korea
| | - Seongho Seo
- Department of Electronic Engineering, Pai Chai University, Daejeon, Korea
| | - Min Soo Byun
- Institute of Human Behavioral Medicine, Medical Research Center, Seoul National University, Seoul, Korea
| | - Dong Young Lee
- Department of Psychiatry, Seoul National University College of Medicine, Seoul, Korea
| | - Yu Kyeong Kim
- Department of Nuclear Medicine, SMG-SNU Boramae Medical Center, Seoul, Korea
| | - Dong Soo Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
| | - Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Korea.
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea.
|
15
|
Du J, He Z, Wang L, Gholipour A, Zhou Z, Chen D, Jia Y. Super-resolution reconstruction of single anisotropic 3D MR images using residual convolutional neural network. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.102] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
16
|
|
17
|
Pham CH, Tor-Díez C, Meunier H, Bednarek N, Fablet R, Passat N, Rousseau F. Multiscale brain MRI super-resolution using deep 3D convolutional networks. Comput Med Imaging Graph 2019; 77:101647. [PMID: 31493703 DOI: 10.1016/j.compmedimag.2019.101647] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2018] [Revised: 06/18/2019] [Accepted: 08/01/2019] [Indexed: 10/26/2022]
Abstract
The purpose of super-resolution approaches is to overcome the hardware limitations and clinical requirements of imaging procedures by reconstructing high-resolution images from low-resolution acquisitions using post-processing methods. Super-resolution techniques could have a strong impact on structural magnetic resonance imaging, for instance when focusing on cortical surfaces or fine-scale structure analysis. In this paper, we study deep three-dimensional convolutional neural networks for the super-resolution of brain magnetic resonance imaging data. First, our work examines the relevance of several factors in the performance of purely convolutional neural network-based techniques for monomodal super-resolution: optimization method, weight initialization, network depth, residual learning, filter size in convolution layers, number of filters, training patch size, and number of training subjects. Second, our study highlights that a single network can efficiently handle multiple arbitrary scaling factors based on a multiscale training approach. Third, we further extend our super-resolution networks to multimodal super-resolution using intermodality priors. Fourth, we investigate the impact of transfer learning on super-resolution performance in terms of generalization across datasets. Lastly, the learned models are used to enhance real clinical low-resolution images. The results tend to demonstrate the potential of deep neural networks for practical medical imaging applications.
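The multiscale training idea above, one network exposed to several scaling factors, amounts to pairing each high-resolution patch with a low-resolution version at a randomly drawn scale. The sketch below illustrates only that data-preparation step; the scale set, the block-average degradation, and the helper names are assumptions for illustration, not details from the paper.

```python
import numpy as np

def downsample(x, s):
    """Block-average downsampling of a 2D patch by integer factor s
    (a simple stand-in for the true degradation model)."""
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    return x[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def multiscale_pairs(hr_patches, scales=(2, 3, 4), seed=0):
    """Pair each HR patch with an LR version at a randomly chosen scale,
    so a single network sees all scale factors during training."""
    rng = np.random.default_rng(seed)
    return [(downsample(p, int(rng.choice(scales))), p) for p in hr_patches]
```

Training on such mixed-scale pairs is what lets one set of weights serve multiple upscale factors at inference time, instead of one model per factor.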
Affiliation(s)
- Chi-Hieu Pham
- IMT Atlantique, LaTIM U1101 INSERM, UBL, Brest, France.
| | | | - Hélène Meunier
- Service de médecine néonatale et réanimation pédiatrique, CHU de Reims, France.
| | - Nathalie Bednarek
- Service de médecine néonatale et réanimation pédiatrique, CHU de Reims, France; Université de Reims Champagne Ardenne, CReSTIC EA 3804, 51097 Reims, France.
| | - Ronan Fablet
- IMT Atlantique, LabSTICC UMR CNRS 6285, UBL, Brest, France.
| | - Nicolas Passat
- Université de Reims Champagne Ardenne, CReSTIC EA 3804, 51097 Reims, France.
|
18
|
Zeng K, Zheng H, Cai C, Yang Y, Zhang K, Chen Z. Simultaneous single- and multi-contrast super-resolution for brain MRI images based on a convolutional neural network. Comput Biol Med 2018; 99:133-141. [DOI: 10.1016/j.compbiomed.2018.06.010] [Citation(s) in RCA: 65] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2018] [Revised: 06/12/2018] [Accepted: 06/12/2018] [Indexed: 01/04/2023]
|
19
|
Shi J, Li Z, Ying S, Wang C, Liu Q, Zhang Q, Yan P. MR Image Super-Resolution via Wide Residual Networks With Fixed Skip Connection. IEEE J Biomed Health Inform 2018; 23:1129-1140. [PMID: 29993565 DOI: 10.1109/jbhi.2018.2843819] [Citation(s) in RCA: 44] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Spatial resolution is a critical imaging parameter in magnetic resonance imaging. Image super-resolution (SR) is an effective and cost-efficient alternative technique for improving the spatial resolution of MR images. Over the past several years, convolutional neural network (CNN)-based SR methods have achieved state-of-the-art performance. However, CNNs with very deep network structures usually suffer from degradation and diminishing feature reuse, which make the networks harder to train and degrade their ability to transmit fine details for SR. To address these problems, an SR algorithm based on a progressive wide residual network with a fixed skip connection (named FSCWRN) is proposed to reconstruct MR images; it combines global residual learning with shallow-network-based local residual learning. The strategy of progressively widened networks is adopted in place of deeper networks, which partially alleviates the problems mentioned above, while the fixed skip connection delivers rich high-frequency local details from a fixed shallow layer to the subsequent networks. Experimental results on one simulated MR image database and three real MR image databases show the effectiveness of the proposed FSCWRN SR algorithm, which achieves improved reconstruction performance compared with other algorithms.
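The global residual learning mentioned in this abstract means the network predicts only the high-frequency residual, which is added to an interpolated copy of the low-resolution input. The sketch below shows that skeleton under stated assumptions: `residual_net` is a hypothetical stand-in for the trained network body, and nearest-neighbor repetition stands in for the interpolation actually used.

```python
import numpy as np

def upsample_nn(x, s):
    """Nearest-neighbor upsampling of a 2D image by integer factor s."""
    return np.repeat(np.repeat(x, s, axis=0), s, axis=1)

def sr_with_global_residual(lr, residual_net, s=2):
    """Global residual learning: upsample the LR input, let the network
    predict only the residual detail, and add the two."""
    base = upsample_nn(lr, s)
    return base + residual_net(base)
```

Learning the residual rather than the full image is the standard way to ease training of deep SR networks: the identity part of the mapping is supplied for free by the skip connection, so the network only has to model the missing detail.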
|
20
|
Shi J, Liu Q, Wang C, Zhang Q, Ying S, Xu H. Super-resolution reconstruction of MR image with a novel residual learning network algorithm. Phys Med Biol 2018; 63:085011. [DOI: 10.1088/1361-6560/aab9e9] [Citation(s) in RCA: 64] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|