1
Kageyama H, Yoshida N, Kondo K, Akai H. Dataset augmentation with multiple contrasts images in super-resolution processing of T1-weighted brain magnetic resonance images. Radiol Phys Technol 2025; 18:172-185. [PMID: 39680317 DOI: 10.1007/s12194-024-00871-1]
Abstract
This study investigated the effectiveness of augmenting datasets for super-resolution processing of brain magnetic resonance imaging (MRI) T1-weighted images (T1WIs) using deep learning. By incorporating images with different contrasts from the same subject, this study sought to improve network performance and assess its impact on image quality metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). This retrospective study included 240 patients who underwent brain MRI. Two types of datasets were created: the Pure-Dataset group, comprising T1WIs, and the Mixed-Dataset group, comprising T1WIs, T2-weighted images, and fluid-attenuated inversion recovery images. A U-Net-based network and an Enhanced Deep Super-Resolution network (EDSR) were trained on these datasets. Objective image quality analysis was performed using PSNR and SSIM. Statistical analyses, including the paired t test and Pearson's correlation coefficient, were conducted to evaluate the results. Augmenting datasets with images of different contrasts significantly improved training accuracy as the dataset size increased. PSNR values ranged from 29.84 to 30.26 dB for U-Net trained on mixed datasets, and SSIM values ranged from 0.9858 to 0.9868. Similarly, PSNR values ranged from 32.34 to 32.64 dB for EDSR trained on mixed datasets, and SSIM values ranged from 0.9941 to 0.9945. Significant differences in PSNR and SSIM were observed between models trained on pure and mixed datasets. Pearson's correlation coefficient indicated a strong positive correlation between dataset size and image quality metrics. Using diverse image data obtained from the same subject can improve the performance of deep-learning models in medical image super-resolution tasks.
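Note: the objective quality analysis above rests on PSNR and SSIM. A minimal sketch of how these two metrics are typically computed for a super-resolved slice against its high-resolution reference is given below; the array names and synthetic data are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' pipeline): PSNR and SSIM between a
# super-resolved T1WI slice and its high-resolution reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
hr_slice = rng.random((256, 256)).astype(np.float32)                              # reference slice (stand-in)
sr_slice = hr_slice + 0.01 * rng.standard_normal((256, 256)).astype(np.float32)   # network output (stand-in)

data_range = float(hr_slice.max() - hr_slice.min())
psnr = peak_signal_noise_ratio(hr_slice, sr_slice, data_range=data_range)
ssim = structural_similarity(hr_slice, sr_slice, data_range=data_range)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```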
Affiliation(s)
- Hajime Kageyama
- Department of Radiology, Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan.
- Graduate Division of Health Sciences, Komazawa University, 1-23-1 Komazawa, Setagaya-Ku, Tokyo, 154-8525, Japan.
- Nobukiyo Yoshida
- Department of Radiology, Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan
- Department of Radiological Technology, Faculty of Medical Technology, Niigata University of Health and Welfare, 1398 Shimami-Cho, Kita-Ku, Niigata, 950-3198, Japan
- Keisuke Kondo
- Graduate Division of Health Sciences, Komazawa University, 1-23-1 Komazawa, Setagaya-Ku, Tokyo, 154-8525, Japan
- Hiroyuki Akai
- Department of Radiology, Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan
2
Zhang H, Ma Q, Qiu Y, Lai Z. ACGRHA-Net: Accelerated multi-contrast MR imaging with adjacency complementary graph assisted residual hybrid attention network. Neuroimage 2024; 303:120921. [PMID: 39521395 DOI: 10.1016/j.neuroimage.2024.120921]
Abstract
Multi-contrast magnetic resonance (MR) imaging is an advanced technology used in medical diagnosis, but the long acquisition process can lead to patient discomfort and limit its broader application. Shortening acquisition time by undersampling k-space data introduces noticeable aliasing artifacts. To address this, we propose a method that reconstructs multi-contrast MR images from zero-filled data by utilizing a fully sampled auxiliary contrast MR image as a prior to learn an adjacency complementary graph. This graph is then combined with a residual hybrid attention network, forming the adjacency complementary graph assisted residual hybrid attention network (ACGRHA-Net) for multi-contrast MR image reconstruction. Specifically, the optimal structural similarity is represented by a graph learned from the fully sampled auxiliary image, where the node features and adjacency matrices are designed to precisely capture structural information among different contrast images. This structural similarity enables effective fusion with the target image, improving the detail reconstruction. Additionally, a residual hybrid attention module is designed in parallel with the graph convolution network, allowing it to effectively capture key features and adaptively emphasize these important features in target contrast MR images. This strategy prioritizes crucial information while preserving shallow features, thereby achieving comprehensive feature fusion at deeper levels to enhance multi-contrast MR image reconstruction. Extensive experiments on different datasets, using various sampling patterns and acceleration factors, demonstrate that the proposed method outperforms the current state-of-the-art reconstruction methods.
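Note: the inputs to such a reconstruction network are zero-filled images obtained from undersampled k-space. A small sketch of forming that input is shown below, assuming a simple random Cartesian line mask and a synthetic image; the mask design and acceleration factor are illustrative, not taken from the paper.

```python
# Sketch only: build a zero-filled, aliased input image from undersampled k-space.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((256, 256))                       # stand-in for a fully sampled MR slice
kspace = np.fft.fftshift(np.fft.fft2(image))         # 2D k-space of the slice

accel = 4                                            # nominal acceleration factor
mask = np.zeros((256, 256), dtype=bool)
keep = rng.choice(256, size=256 // accel, replace=False)
mask[:, keep] = True                                 # retain a quarter of the phase-encode lines

zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
print(zero_filled.shape)                             # aliased image that the network must de-alias
```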
Affiliation(s)
- Haotian Zhang
- School of Ocean Information Engineering, Jimei University, Xiamen, China
- Qiaoyu Ma
- School of Ocean Information Engineering, Jimei University, Xiamen, China
- Yiran Qiu
- School of Ocean Information Engineering, Jimei University, Xiamen, China
- Zongying Lai
- School of Ocean Information Engineering, Jimei University, Xiamen, China.
3
Ayaz A, Boonstoppel R, Lorenz C, Weese J, Pluim J, Breeuwer M. Effective deep-learning brain MRI super resolution using simulated training data. Comput Biol Med 2024; 183:109301. [PMID: 39486305 DOI: 10.1016/j.compbiomed.2024.109301]
Abstract
BACKGROUND In the field of medical imaging, high-resolution (HR) magnetic resonance imaging (MRI) is essential for accurate disease diagnosis and analysis. However, HR imaging is prone to artifacts and is not universally available. Consequently, low-resolution (LR) MRI images are typically acquired. Deep learning (DL)-based super-resolution (SR) techniques can transform LR images into HR quality. However, these techniques require paired HR-LR data for training the SR networks. OBJECTIVE This research aims to investigate the potential of simulated brain MRI data to train DL-based SR networks. METHODS We simulated a large set of anatomically diverse, voxel-aligned, and artifact-free brain MRI data at different resolutions. We utilized this simulated data to train four distinct DL-based SR networks and augment their training. The trained networks were then evaluated using real data from various sources. RESULTS With our trained networks, we produced 0.7mm SR images from standard 1mm resolution multi-source T1w brain MRI. Our experimental results demonstrate that the trained networks significantly enhance the sharpness of LR input MR images. For single-source images, the performance of networks trained solely on simulated data is slightly inferior to those trained solely on real data, with an average structural similarity index (SSIM) difference of 0.025. However, networks augmented with simulated data outperform those trained on single-source real data when evaluated across datasets from multiple sources. CONCLUSION Paired HR-LR simulated brain MRI data is suitable for training and augmenting diverse brain MRI SR networks. Augmenting the training data with simulated data can enhance the generalizability of the SR networks across real datasets from multiple sources.
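Note: the key ingredient is voxel-aligned HR-LR training pairs. The study generates both resolutions by MRI simulation; the sketch below only illustrates the simpler, commonly used alternative of degrading an HR volume to obtain an aligned LR counterpart, and the blur kernel and scale factor are assumptions.

```python
# Illustrative sketch: create an aligned low-resolution copy of a high-resolution
# volume by blurring and resampling, giving a paired (lr, hr) training sample.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

hr = np.random.default_rng(2).random((128, 128, 128)).astype(np.float32)  # stand-in HR volume
blurred = gaussian_filter(hr, sigma=1.0)                 # approximate the wider LR point-spread function
lr = zoom(zoom(blurred, 0.5, order=1), 2.0, order=1)     # decimate to a 2x coarser grid, resample back

pair = (lr, hr)                                          # input/target pair for supervised SR training
print(lr.shape, hr.shape)
```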
Affiliation(s)
- Aymen Ayaz
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Rien Boonstoppel
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Josien Pluim
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Marcel Breeuwer
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Healthcare, Best, The Netherlands.
4
Yoon MA, Gold GE, Chaudhari AS. Accelerated Musculoskeletal Magnetic Resonance Imaging. J Magn Reson Imaging 2024; 60:1806-1822. [PMID: 38156716 DOI: 10.1002/jmri.29205]
Abstract
With a substantial growth in the use of musculoskeletal MRI, there has been a growing need to improve MRI workflow, and faster imaging has been suggested as one of the solutions for a more efficient examination process. Consequently, there have been considerable advances in accelerated MRI scanning methods. This article aims to review the basic principles and applications of accelerated musculoskeletal MRI techniques including widely used conventional acceleration methods, more advanced deep learning-based techniques, and new approaches to reduce scan time. Specifically, conventional accelerated MRI techniques, including parallel imaging, compressed sensing, and simultaneous multislice imaging, and deep learning-based accelerated MRI techniques, including undersampled MR image reconstruction, super-resolution imaging, artifact correction, and generation of unacquired contrast images, are discussed. Finally, new approaches to reduce scan time, including synthetic MRI, novel sequences, and new coil setups and designs, are also reviewed. We believe that a deep understanding of these fast MRI techniques and proper use of combined acceleration methods will synergistically improve scan time and MRI workflow in daily practice. EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Min A Yoon
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Garry E Gold
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Orthopaedic Surgery, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
5
Lyu J, Li G, Wang C, Cai Q, Dou Q, Zhang D, Qin J. Multicontrast MRI Super-Resolution via Transformer-Empowered Multiscale Contextual Matching and Aggregation. IEEE Trans Neural Netw Learn Syst 2024; 35:12004-12014. [PMID: 37028326 DOI: 10.1109/tnnls.2023.3250491]
Abstract
Magnetic resonance imaging (MRI) possesses the unique versatility to acquire images under a diverse array of distinct tissue contrasts, which makes multicontrast super-resolution (SR) techniques possible and needful. Compared with single-contrast MRI SR, multicontrast SR is expected to produce higher quality images by exploiting a variety of complementary information embedded in different imaging contrasts. However, existing approaches still have two shortcomings: 1) most of them are convolution-based methods and, hence, weak in capturing long-range dependencies, which are essential for MR images with complicated anatomical patterns; and 2) they fail to make full use of the multicontrast features at different scales and lack effective modules to match and aggregate these features for faithful SR. To address these issues, we develop a novel multicontrast MRI SR network via transformer-empowered multiscale feature matching and aggregation, dubbed McMRSR++. First, we tame transformers to model long-range dependencies in both reference and target images at different scales. Then, a novel multiscale feature matching and aggregation method is proposed to transfer corresponding contexts from reference features at different scales to the target features and interactively aggregate them. Furthermore, a texture-preserving branch and a contrastive constraint are incorporated into our framework for enhancing the textural details in the SR images. Experimental results on both public and clinical in vivo datasets show that McMRSR++ outperforms state-of-the-art methods significantly under peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root mean square error (RMSE) metrics. Visual results demonstrate the superiority of our method in restoring structures, demonstrating its great potential to improve scan efficiency in clinical practice.
6
Gundogdu B, Medved M, Chatterjee A, Engelmann R, Rosado A, Lee G, Oren NC, Oto A, Karczmar GS. Self-supervised multicontrast super-resolution for diffusion-weighted prostate MRI. Magn Reson Med 2024; 92:319-331. [PMID: 38308149 PMCID: PMC11288973 DOI: 10.1002/mrm.30047]
Abstract
PURPOSE This study addresses the challenge of low resolution and signal-to-noise ratio (SNR) in diffusion-weighted images (DWI), which are pivotal for cancer detection. Traditional methods increase SNR at high b-values through multiple acquisitions, but this results in diminished image resolution due to motion-induced variations. Our research aims to enhance spatial resolution by exploiting the global structure within multicontrast DWI scans and millimetric motion between acquisitions. METHODS We introduce a novel approach employing a "Perturbation Network" to learn subvoxel-size motions between scans, trained jointly with an implicit neural representation (INR) network. INR encodes the DWI as a continuous volumetric function, treating voxel intensities of low-resolution acquisitions as discrete samples. By evaluating this function with a finer grid, our model predicts higher-resolution signal intensities for intermediate voxel locations. The Perturbation Network's motion-correction efficacy was validated through experiments on biological phantoms and in vivo prostate scans. RESULTS Quantitative analyses revealed significantly higher structural similarity measures of super-resolution images to ground truth high-resolution images compared to high-order interpolation (p < 0.005). In blind qualitative experiments, 96.1% of super-resolution images were assessed to have superior diagnostic quality compared to interpolated images. CONCLUSION High-resolution details in DWI can be obtained without the need for high-resolution training data. One notable advantage of the proposed method is that it does not require a super-resolution training set. This is important in clinical practice because the proposed method can easily be adapted to images with different scanner settings or body parts, whereas the supervised methods do not offer such an option.
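Note: at the core of the method is an implicit neural representation queried on a finer grid than the acquired voxels. A toy sketch of that idea is below (PyTorch assumed); the motion-learning Perturbation Network and the actual fitting loop are omitted, and all sizes are illustrative.

```python
# Toy INR sketch: an MLP maps continuous (x, y, z) coordinates to signal intensity;
# evaluating it on a finer grid yields intensities at intermediate voxel locations.
import torch
import torch.nn as nn

inr = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

def grid(shape):
    axes = [torch.linspace(-1.0, 1.0, s) for s in shape]
    return torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)

coarse = grid((32, 32, 16))   # voxel centres of a low-resolution acquisition (fit target)
fine = grid((64, 64, 32))     # finer query grid for the super-resolved volume

# training would fit inr(coarse) to the acquired intensities; here we only query the model
with torch.no_grad():
    hires = inr(fine).reshape(64, 64, 32)
print(hires.shape)
```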
Affiliation(s)
- Batuhan Gundogdu
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Milica Medved
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Roger Engelmann
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Avery Rosado
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Grace Lee
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Nisa C Oren
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Aytekin Oto
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
7
Nimitha U, Ameer PM. MRI super-resolution using similarity distance and multi-scale receptive field based feature fusion GAN and pre-trained slice interpolation network. Magn Reson Imaging 2024; 110:195-209. [PMID: 38653336 DOI: 10.1016/j.mri.2024.04.021]
Abstract
Challenges arise in achieving high-resolution Magnetic Resonance Imaging (MRI) to improve disease diagnosis accuracy due to limitations in hardware, patient discomfort, long acquisition times, and high costs. While Convolutional Neural Networks (CNNs) have shown promising results in MRI super-resolution, they often do not exploit the structural similarity and prior information available in consecutive MRI slices. By leveraging information from sequential slices, more robust features can be obtained, potentially leading to higher-quality MRI slices. We propose a multi-slice two-dimensional (2D) MRI super-resolution network that combines a Generative Adversarial Network (GAN) with feature fusion and a pre-trained slice interpolation network to achieve three-dimensional (3D) super-resolution. The proposed model requires three consecutively acquired low-resolution (LR) MRI slices along a specific axis, and achieves the reconstruction of the MRI slices in the remaining two axes. The network effectively enhances both in-plane and out-of-plane resolution along the sagittal axis while addressing computational and memory constraints in 3D super-resolution. The proposed generator has an in-plane and out-of-plane attention (IOA) network that fuses both in-plane and out-of-plane features of MRI dynamically. In terms of out-of-plane attention, the network merges features by considering the similarity distance between features, and for in-plane attention, the network employs a two-level pyramid structure with varying receptive fields to extract features at different scales, ensuring the inclusion of both global and local features. Subsequently, to achieve 3D MRI super-resolution, a pre-trained slice interpolation network is used that takes two consecutive super-resolved MRI slices to generate a new intermediate slice. To further enhance the network performance and perceptual quality, we introduce a feature up-sampling layer and a feature extraction block with Scaled Exponential Linear Unit (SeLU). Moreover, our super-resolution network incorporates VGG loss from a fine-tuned VGG-19 network to provide additional enhancement. Through experimental evaluations on the IXI dataset and BRATS dataset, using the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) and the number of training parameters, we demonstrate the superior performance of our method compared to the existing techniques. Also, the proposed model can be adapted or modified to achieve super-resolution for both 2D and 3D MRI data.
Affiliation(s)
- Nimitha U
- Department of Electronics and Communication Engineering, National Institute of Technology Calicut, Kerala 673601, India.
- Ameer P M
- Department of Electronics and Communication Engineering, National Institute of Technology Calicut, Kerala 673601, India.
8
Fiscone C, Curti N, Ceccarelli M, Remondini D, Testa C, Lodi R, Tonon C, Manners DN, Castellani G. Generalizing the Enhanced-Deep-Super-Resolution Neural Network to Brain MR Images: A Retrospective Study on the Cam-CAN Dataset. eNeuro 2024; 11:ENEURO.0458-22.2023. [PMID: 38729763 PMCID: PMC11140654 DOI: 10.1523/eneuro.0458-22.2023]
Abstract
The Enhanced-Deep-Super-Resolution (EDSR) model is a state-of-the-art convolutional neural network suitable for improving image spatial resolution. It was previously trained with general-purpose pictures and then, in this work, tested on biomedical magnetic resonance (MR) images, comparing the network outcomes with traditional up-sampling techniques. We explored possible changes in the model response when different MR sequences were analyzed. T1w and T2w MR brain images of 70 human healthy subjects (F:M, 40:30) from the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) repository were down-sampled and then up-sampled using EDSR model and BiCubic (BC) interpolation. Several reference metrics were used to quantitatively assess the performance of up-sampling operations (RMSE, pSNR, SSIM, and HFEN). Two-dimensional and three-dimensional reconstructions were evaluated. Different brain tissues were analyzed individually. The EDSR model was superior to BC interpolation on the selected metrics, both for two- and three- dimensional reconstructions. The reference metrics showed higher quality of EDSR over BC reconstructions for all the analyzed images, with a significant difference of all the criteria in T1w images and of the perception-based SSIM and HFEN in T2w images. The analysis per tissue highlights differences in EDSR performance related to the gray-level values, showing a relative lack of outperformance in reconstructing hyperintense areas. The EDSR model, trained on general-purpose images, better reconstructs MR T1w and T2w images than BC, without any retraining or fine-tuning. These results highlight the excellent generalization ability of the network and lead to possible applications on other MR measurements.
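Note: besides RMSE, pSNR, and SSIM, the comparison uses HFEN. A small sketch of a bicubic up-sampling baseline and an HFEN computation is given below; HFEN is taken here as the relative L2 norm of a Laplacian-of-Gaussian-filtered difference, and the filter width is an assumption rather than the paper's exact setting.

```python
# Illustrative sketch: bicubic-style up-sampling baseline plus an HFEN-type metric.
import numpy as np
from scipy.ndimage import gaussian_laplace, zoom

ref = np.random.default_rng(3).random((128, 128))    # stand-in high-resolution slice
lowres = zoom(ref, 0.5, order=3)                      # simulated down-sampling
upsampled = zoom(lowres, 2.0, order=3)                # order-3 spline up-sampling (bicubic-like)

def hfen(x, y, sigma=1.5):
    diff = gaussian_laplace(x - y, sigma)             # high-frequency error content
    return np.linalg.norm(diff) / np.linalg.norm(gaussian_laplace(y, sigma))

rmse = np.sqrt(np.mean((upsampled - ref) ** 2))
print(f"RMSE = {rmse:.4f}, HFEN = {hfen(upsampled, ref):.4f}")
```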
Affiliation(s)
- Cristiana Fiscone
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy
- Nico Curti
- Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy
- Mattia Ceccarelli
- Department of Agricultural and Food Sciences, University of Bologna, Bologna 40127, Italy
- Daniel Remondini
- Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy
- INFN, Bologna 40127, Italy
- Claudia Testa
- Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy
- INFN, Bologna 40127, Italy
- Raffaele Lodi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
- Caterina Tonon
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
- David Neil Manners
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
- Department for Life Quality Studies, University of Bologna, Rimini 47921, Italy
- Gastone Castellani
- Department of Medical and Surgical Sciences, University of Bologna, Bologna 40138, Italy
9
Wang L, Guo T, Wang L, Yang W, Wang J, Nie J, Cui J, Jiang P, Li J, Zhang H. Improving radiomic modeling for the identification of symptomatic carotid atherosclerotic plaques using deep learning-based 3D super-resolution CT angiography. Heliyon 2024; 10:e29331. [PMID: 38644848 PMCID: PMC11033096 DOI: 10.1016/j.heliyon.2024.e29331]
Abstract
Rationale and objectives Radiomic models based on normal-resolution (NR) computed tomography angiography (CTA) images can fail to distinguish between symptomatic and asymptomatic carotid atherosclerotic plaques. This study aimed to explore the effectiveness of a deep learning-based three-dimensional super-resolution (SR) CTA radiomic model for improved identification of symptomatic carotid atherosclerotic plaques. Materials and methods A total of 193 patients with carotid atherosclerotic plaques were retrospectively enrolled and allocated to either a symptomatic (n = 123) or an asymptomatic (n = 70) group. SR CTA images were derived from NR CTA images using deep learning-based three-dimensional SR technology. Handcrafted radiomic features were extracted from both the SR and NR CTA images, and three risk models were developed based on manually measured quantitative CTA characteristics and NR and SR radiomic features. Model performances were assessed via receiver operating characteristic, calibration, and decision curve analyses. Results The SR model exhibited the optimal performance (area under the curve [AUC] 0.820, accuracy 0.802, sensitivity 0.854, F1 score 0.847) in the testing cohort, outperforming the other two models. The calibration curve analyses and Hosmer-Lemeshow test demonstrated that the SR model exhibited the best goodness of fit, and decision curve analysis revealed that the SR model had the highest clinical value and potential patient benefits. Conclusions Deep learning-based three-dimensional SR technology could improve the CTA-based radiomic models in identifying symptomatic carotid plaques, potentially providing more accurate and valuable information to guide clinical decision-making to reduce the risk of ischemic stroke.
Affiliation(s)
- Lingjie Wang
- Department of Medical Imaging, First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, 030001, China
- Tiedan Guo
- Department of Medical Imaging, First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, 030001, China
- Li Wang
- Department of Medical Imaging, First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, 030001, China
- Wentao Yang
- Basic Medical College, Shanxi Medical University, Taiyuan, Shanxi Province, 030001, China
- Jingying Wang
- Department of Endemic Disease Prevention and Control, Shanxi Province Disease Prevention and Control Center, Shanxi Province, 030001, China
- Jianlong Nie
- Shanghai United Imaging Intelligence, Co., Ltd., Shanghai City, 200030, China
- Jingjing Cui
- Shanghai United Imaging Intelligence, Co., Ltd., Shanghai City, 200030, China
- Pengbo Jiang
- Shanghai United Imaging Intelligence, Co., Ltd., Shanghai City, 200030, China
- Junlin Li
- Department of Imaging Medicine, Inner Mongolia Autonomous Region People's Hospital, Hohhot, 010017, China
- Hua Zhang
- Department of Medical Imaging, First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, 030001, China
10
Bottani S, Thibeau-Sutre E, Maire A, Ströer S, Dormont D, Colliot O, Burgos N. Contrast-enhanced to non-contrast-enhanced image translation to exploit a clinical data warehouse of T1-weighted brain MRI. BMC Med Imaging 2024; 24:67. [PMID: 38504179 PMCID: PMC10953143 DOI: 10.1186/s12880-024-01242-3]
Abstract
BACKGROUND Clinical data warehouses provide access to massive amounts of medical images, but these images are often heterogeneous. They can for instance include images acquired both with or without the injection of a gadolinium-based contrast agent. Harmonizing such data sets is thus fundamental to guarantee unbiased results, for example when performing differential diagnosis. Furthermore, classical neuroimaging software tools for feature extraction are typically applied only to images without gadolinium. The objective of this work is to evaluate how image translation can be useful to exploit a highly heterogeneous data set containing both contrast-enhanced and non-contrast-enhanced images from a clinical data warehouse. METHODS We propose and compare different 3D U-Net and conditional GAN models to convert contrast-enhanced T1-weighted (T1ce) into non-contrast-enhanced (T1nce) brain MRI. These models were trained using 230 image pairs and tested on 77 image pairs from the clinical data warehouse of the Greater Paris area. RESULTS Validation using standard image similarity measures demonstrated that the similarity between real and synthetic T1nce images was higher than between real T1nce and T1ce images for all the models compared. The best performing models were further validated on a segmentation task. We showed that tissue volumes extracted from synthetic T1nce images were closer to those of real T1nce images than volumes extracted from T1ce images. CONCLUSION We showed that deep learning models initially developed with research quality data could synthesize T1nce from T1ce images of clinical quality and that reliable features could be extracted from the synthetic images, thus demonstrating the ability of such methods to help exploit a data set coming from a clinical data warehouse.
Affiliation(s)
- Simona Bottani
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Elina Thibeau-Sutre
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Aurélien Maire
- Innovation & Données - Département des Services Numériques, AP-HP, Paris, 75013, France
- Sebastian Ströer
- Hôpital Pitié Salpêtrière, Department of Neuroradiology, AP-HP, Paris, 75012, France
- Didier Dormont
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, DMU DIAMENT, Paris, 75013, France
- Olivier Colliot
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Ninon Burgos
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France.
11
Yang Y, Xiang T, Lv X, Li L, Lui LM, Zeng T. Double Transformer Super-Resolution for Breast Cancer ADC Images. IEEE J Biomed Health Inform 2024; 28:917-928. [PMID: 38079366 DOI: 10.1109/jbhi.2023.3341250]
Abstract
Diffusion-weighted imaging (DWI) has been extensively explored in guiding the clinic management of patients with breast cancer. However, due to the limited resolution, accurately characterizing tumors using DWI and the corresponding apparent diffusion coefficient (ADC) is still a challenging problem. In this paper, we aim to address the issue of super-resolution (SR) of ADC images and evaluate the clinical utility of SR-ADC images through radiomics analysis. To this end, we propose a novel double transformer-based network (DTformer) to enhance the resolution of ADC images. More specifically, we propose a symmetric U-shaped encoder-decoder network with two different types of transformer blocks, named as UTNet, to extract deep features for super-resolution. The basic backbone of UTNet is composed of a locally-enhanced Swin transformer block (LeSwin-T) and a convolutional transformer block (Conv-T), which are responsible for capturing long-range dependencies and local spatial information, respectively. Additionally, we introduce a residual upsampling network (RUpNet) to expand image resolution by leveraging initial residual information from the original low-resolution (LR) images. Extensive experiments show that DTformer achieves superior SR performance. Moreover, radiomics analysis reveals that improving the resolution of ADC images is beneficial for tumor characteristic prediction, such as histological grade and human epidermal growth factor receptor 2 (HER2) status.
12
Kong W, Li B, Wei K, Li D, Zhu J, Yu G. Dual contrast attention-guided multi-frequency fusion for multi-contrast MRI super-resolution. Phys Med Biol 2023; 69:015010. [PMID: 37944482 DOI: 10.1088/1361-6560/ad0b65]
Abstract
Objective. Multi-contrast magnetic resonance (MR) imaging super-resolution (SR) reconstruction is an effective solution for acquiring high-resolution MR images. It utilizes anatomical information from auxiliary contrast images to improve the quality of the target contrast images. However, existing studies have simply explored the relationships between auxiliary contrast and target contrast images but did not fully consider different anatomical information contained in multi-contrast images, resulting in texture details and artifacts unrelated to the target contrast images. Approach. To address these issues, we propose a dual contrast attention-guided multi-frequency fusion (DCAMF) network to reconstruct SR MR images from low-resolution MR images, which adaptively captures relevant anatomical information and processes the texture details and low-frequency information from multi-contrast images in parallel. Specifically, after the feature extraction, a feature selection module based on a dual contrast attention mechanism is proposed to focus on the texture details of the auxiliary contrast images and the low-frequency features of the target contrast images. Then, based on the characteristics of the selected features, a high- and low-frequency fusion decoder is constructed to fuse these features. In addition, a texture-enhancing module is embedded in the high-frequency fusion decoder, to highlight and refine the texture details of the auxiliary contrast and target contrast images. Finally, the high- and low-frequency fusion process is constrained by integrating a deeply-supervised mechanism into the DCAMF network. Main results. The experimental results show that the DCAMF outperforms other state-of-the-art methods. The peak signal-to-noise ratio and structural similarity of DCAMF are 39.02 dB and 0.9771 on the IXI dataset and 37.59 dB and 0.9770 on the BraTS2018 dataset, respectively. The image recovery is further validated in segmentation tasks. Significance. Our proposed SR model can enhance the quality of MR images. The results of the SR study provide a reliable basis for clinical diagnosis and subsequent image-guided treatment.
Affiliation(s)
- Weipeng Kong
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, People's Republic of China
- Baosheng Li
- Department of Radiation Oncology Physics, Shandong Cancer Hospital and Institute, Shandong Cancer Hospital affiliate to Shandong University, Jinan, People's Republic of China
- Kexin Wei
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, People's Republic of China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, People's Republic of China
- Jian Zhu
- Department of Radiation Oncology Physics, Shandong Cancer Hospital and Institute, Shandong Cancer Hospital affiliate to Shandong University, Jinan, People's Republic of China
- Gang Yu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, People's Republic of China
13
Yang G, Zhang L, Liu A, Fu X, Chen X, Wang R. MGDUN: An interpretable network for multi-contrast MRI image super-resolution reconstruction. Comput Biol Med 2023; 167:107605. [PMID: 37925907 DOI: 10.1016/j.compbiomed.2023.107605]
Abstract
Magnetic resonance imaging (MRI) super-resolution (SR) aims to obtain high-resolution (HR) images with more detailed information for precise diagnosis and quantitative image analysis. Deep unfolding networks outperform general MRI SR reconstruction methods by providing better performance and improved interpretability, which enhance the trustworthiness required in clinical practice. Additionally, current SR reconstruction techniques often rely on a single contrast or a simple multi-contrast fusion mechanism, ignoring the complex relationships between different contrasts. To address these issues, in this paper, we propose a Model-Guided multi-contrast interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction, which explicitly incorporates the well-studied multi-contrast MRI observation model into an unfolding iterative network. Specifically, we manually design an objective function for MGDUN that can be iteratively computed by the half-quadratic splitting algorithm. The iterative MGDUN algorithm is unfolded into a special model-guided deep unfolding network that explicitly takes into account both the multi-contrast relationship matrix and the MRI observation matrix during the end-to-end optimization process. Extensive experimental results on the multi-contrast IXI dataset and the BraTS 2019 dataset demonstrate the superiority of our proposed model, with PSNR reaching 37.3366 dB and 35.9690 dB, respectively. Our proposed MGDUN provides a promising solution for multi-contrast MR image super-resolution reconstruction. Code is available at https://github.com/yggame/MGDUN.
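Note: the unfolding is built around a half-quadratic splitting (HQS) iteration. The generic HQS template for a regularized SR objective is sketched below in the usual notation (observation matrix A, regularizer R, penalty weight mu); the paper's actual objective additionally involves the multi-contrast relationship matrix, so this is only the standard form, not MGDUN's exact formulation.

```latex
% Generic HQS template for  min_x ||y - A x||_2^2 + \lambda R(x):
% introduce an auxiliary variable z and alternate the two subproblems.
\min_{x,\,z}\ \|y - A x\|_2^2 + \lambda R(z) + \frac{\mu}{2}\|z - x\|_2^2
\qquad\Longrightarrow\qquad
\begin{cases}
x^{k+1} = \arg\min_{x}\ \|y - A x\|_2^2 + \dfrac{\mu}{2}\,\|z^{k} - x\|_2^2,\\[4pt]
z^{k+1} = \arg\min_{z}\ \lambda R(z) + \dfrac{\mu}{2}\,\|z - x^{k+1}\|_2^2.
\end{cases}
```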
Affiliation(s)
- Gang Yang
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China.
- Li Zhang
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China; Institute of Intelligent Machines, and Hefei Institute of Physical Science, Chinese Academy Sciences, Hefei 230031, China
- Aiping Liu
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China.
- Xueyang Fu
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Xun Chen
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Rujing Wang
- Institute of Intelligent Machines, and Hefei Institute of Physical Science, Chinese Academy Sciences, Hefei 230031, China
14
Wu Y, Ridwan AR, Niaz MR, Bennett DA, Arfanakis K. High resolution 0.5 mm isotropic T1-weighted and diffusion tensor templates of the brain of non-demented older adults in a common space for the MIITRA atlas. Neuroimage 2023; 282:120387. [PMID: 37783362 PMCID: PMC10625170 DOI: 10.1016/j.neuroimage.2023.120387]
Abstract
High quality, high resolution T1-weighted (T1w) and diffusion tensor imaging (DTI) brain templates located in a common space can enhance the sensitivity and precision of template-based neuroimaging studies. However, such multimodal templates have not been constructed for the older adult brain. The purpose of this work which is part of the MIITRA atlas project was twofold: (A) to develop 0.5 mm isotropic resolution T1w and DTI templates that are representative of the brain of non-demented older adults and are located in the same space, using advanced multimodal template construction techniques and principles of super resolution on data from a large, diverse, community cohort of 400 non-demented older adults, and (B) to systematically compare the new templates to other standardized templates. It was demonstrated that the new MIITRA-0.5mm T1w and DTI templates are well-matched in space, exhibit good definition of brain structures, including fine structures, exhibit higher image sharpness than other standardized templates, and are free of artifacts. The MIITRA-0.5mm T1w and DTI templates allowed higher intra-modality inter-subject spatial normalization precision as well as higher inter-modality intra-subject spatial matching of older adult T1w and DTI data compared to other available templates. Consequently, MIITRA-0.5mm templates allowed detection of smaller inter-group differences for older adult data compared to other templates. The MIITRA-0.5mm templates were also shown to be most representative of the brain of non-demented older adults compared to other templates with submillimeter resolution. The new templates constructed in this work constitute two of the final products of the MIITRA atlas project and are anticipated to have important implications for the sensitivity and precision of studies on older adults.
Affiliation(s)
- Yingjuan Wu
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States
- Abdur Raquib Ridwan
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States
- Mohammad Rakeen Niaz
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States
- David A Bennett
- Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, IL, United States
- Konstantinos Arfanakis
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States; Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, IL, United States.
15
Grigas O, Maskeliūnas R, Damaševičius R. Improving Structural MRI Preprocessing with Hybrid Transformer GANs. Life (Basel) 2023; 13:1893. [PMID: 37763297 PMCID: PMC10532639 DOI: 10.3390/life13091893]
Abstract
Magnetic resonance imaging (MRI) is a technique that is widely used in practice to evaluate any pathologies in the human body. One of the areas of interest is the human brain. Naturally, MR images are low-resolution and contain noise due to signal interference, the patient's body's radio-frequency emissions and smaller Tesla coil counts in the machinery. There is a need to solve this problem, as MR tomographs that have the capability of capturing high-resolution images are extremely expensive and the length of the procedure to capture such images increases by the order of magnitude. Vision transformers have lately shown state-of-the-art results in super-resolution tasks; therefore, we decided to evaluate whether we can employ them for structural MRI super-resolution tasks. A literature review showed that similar methods do not focus on perceptual image quality because upscaled images are often blurry and are subjectively of poor quality. Knowing this, we propose a methodology called HR-MRI-GAN, which is a hybrid transformer generative adversarial network capable of increasing resolution and removing noise from 2D T1w MRI slice images. Experiments show that our method quantitatively outperforms other SOTA methods in terms of perceptual image quality and is capable of subjectively generalizing to unseen data. During the experiments, we additionally identified that the visual saliency-induced index metric is not applicable to MRI perceptual quality assessment and that general-purpose denoising networks are effective when removing noise from MR images.
Affiliation(s)
- Ovidijus Grigas
- Faculty of Informatics, Kaunas University of Technology, 50254 Kaunas, Lithuania
- Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, 50254 Kaunas, Lithuania
- Robertas Damaševičius
- Faculty of Informatics, Kaunas University of Technology, 50254 Kaunas, Lithuania
- Department of Applied Informatics, Vytautas Magnus University, 44248 Kaunas, Lithuania
16
Lin J, Miao Q, Surawech C, Raman SS, Zhao K, Wu HH, Sung K. High-Resolution 3D MRI With Deep Generative Networks via Novel Slice-Profile Transformation Super-Resolution. IEEE Access 2023; 11:95022-95036. [PMID: 37711392 PMCID: PMC10501177 DOI: 10.1109/access.2023.3307577]
Abstract
High-resolution magnetic resonance imaging (MRI) sequences, such as 3D turbo or fast spin-echo (TSE/FSE) imaging, are clinically desirable but suffer from long scanning time-related blurring when reformatted into preferred orientations. Instead, multi-slice two-dimensional (2D) TSE imaging is commonly used because of its high in-plane resolution but is limited clinically by poor through-plane resolution due to elongated voxels and the inability to generate multi-planar reformations due to staircase artifacts. Therefore, multiple 2D TSE scans are acquired in various orthogonal imaging planes, increasing the overall MRI scan time. In this study, we propose a novel slice-profile transformation super-resolution (SPTSR) framework with deep generative learning for through-plane super-resolution (SR) of multi-slice 2D TSE imaging. The deep generative networks were trained by synthesized low-resolution training input via slice-profile downsampling (SP-DS), and the trained networks inferred on the slice profile convolved (SP-conv) testing input for 5.5x through-plane SR. The network output was further slice-profile deconvolved (SP-deconv) to achieve an isotropic super-resolution. Compared to SMORE SR method and the networks trained by conventional downsampling, our SPTSR framework demonstrated the best overall image quality from 50 testing cases, evaluated by two abdominal radiologists. The quantitative analysis cross-validated the expert reader study results. 3D simulation experiments confirmed the quantitative improvement of the proposed SPTSR and the effectiveness of the SP-deconv step, compared to 3D ground-truths. Ablation studies were conducted on the individual contributions of SP-DS and SP-conv, networks structure, training dataset size, and different slice profiles.
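Note: the training inputs are synthesized by slice-profile down-sampling (SP-DS), i.e., convolving the volume along the slice axis with a slice profile and then decimating. A minimal sketch of that step is below; the triangular profile and the 5x decimation are assumptions standing in for the paper's measured slice profile and 5.5x factor.

```python
# Illustrative SP-DS sketch: convolve along the slice axis with an assumed slice
# profile, then keep every 5th slice to mimic thick-slice 2D acquisition.
import numpy as np
from scipy.ndimage import convolve1d

hr_vol = np.random.default_rng(4).random((64, 64, 60)).astype(np.float32)  # stand-in isotropic volume
profile = np.array([0.1, 0.2, 0.4, 0.2, 0.1])          # assumed (normalized) slice-selection profile
thick = convolve1d(hr_vol, profile, axis=2, mode="nearest")
lr_vol = thick[:, :, ::5]                               # ~5x coarser through-plane sampling
print(hr_vol.shape, "->", lr_vol.shape)
```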
Affiliation(s)
- Jiahao Lin
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Department of Electrical and Computer Engineering, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Qi Miao
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning 110001, China
- Chuthaporn Surawech
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Department of Radiology, Faculty of Medicine, Chulalongkorn University, Bangkok 10330, Thailand
- Division of Diagnostic Radiology, Department of Radiology, King Chulalongkorn Memorial Hospital, Bangkok 10330, Thailand
- Steven S Raman
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Kai Zhao
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Holden H Wu
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Kyunghyun Sung
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
17
Jin D, Zheng H, Yuan H. Exploring the Possibility of Measuring Vertebrae Bone Structure Metrics Using MDCT Images: An Unpaired Image-to-Image Translation Method. Bioengineering (Basel) 2023; 10:716. [PMID: 37370647 DOI: 10.3390/bioengineering10060716]
Abstract
Bone structure metrics are vital for the evaluation of vertebral bone strength. However, the gold standard for measuring bone structure metrics, micro-Computed Tomography (micro-CT), cannot be used in vivo, which hinders the early diagnosis of fragility fractures. This paper used an unpaired image-to-image translation method to capture the mapping between clinical multidetector computed tomography (MDCT) and micro-CT images and then generated micro-CT-like images to measure bone structure metrics. MDCT and micro-CT images were scanned from 75 human lumbar spine specimens and formed training and testing sets. The generator in the model focused on learning both the structure and detailed pattern of bone trabeculae and generating micro-CT-like images, and the discriminator determined whether the generated images were micro-CT images or not. Based on similarity metrics (i.e., SSIM and FID) and bone structure metrics (i.e., bone volume fraction, trabecular separation and trabecular thickness), a set of comparisons were performed. The results show that the proposed method can perform better in terms of both similarity metrics and bone structure metrics and the improvement is statistically significant. In particular, we compared the proposed method with the paired image-to-image method and analyzed the pros and cons of the method used.
Affiliation(s)
- Dan Jin
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
- Han Zheng
- School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China
- Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
18
Liang Z, Zhang J. Mouse brain MR super-resolution using a deep learning network trained with optical imaging data. Front Radiol 2023; 3:1155866. [PMID: 37492378 PMCID: PMC10365285 DOI: 10.3389/fradi.2023.1155866]
Abstract
Introduction The resolution of magnetic resonance imaging is often limited at the millimeter level due to its inherent signal-to-noise disadvantage compared to other imaging modalities. Super-resolution (SR) of MRI data aims to enhance its resolution and diagnostic value. While deep learning-based SR has shown potential, its applications in MRI remain limited, especially for preclinical MRI, where large high-resolution MRI datasets for training are often lacking. Methods In this study, we first used high-resolution mouse brain auto-fluorescence (AF) data acquired using serial two-photon tomography (STPT) to examine the performance of deep learning-based SR for mouse brain images. Results We found that the best SR performance was obtained when the resolutions of training and target data were matched. We then applied the network trained using AF data to MRI data of the mouse brain, and found that the performance of the SR network depended on the tissue contrast presented in the MRI data. Using transfer learning and a limited set of high-resolution mouse brain MRI data, we were able to fine-tune the initial network trained using AF to enhance the resolution of MRI data. Discussion Our results suggest that deep learning SR networks trained using high-resolution data of a different modality can be applied to MRI data after transfer learning.
Affiliation(s)
- Jiangyang Zhang
- Department of Radiology, Center for Biomedical Imaging, New York University, New York, NY, United States
19
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which is capable of simultaneously accomplishing at least two tasks, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits in improving performance, enhancing generalizability, and reducing the overall computational cost. This review focuses on the advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review the representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has been flourishing and has demonstrated outstanding performance in many tasks, performance gaps remain in some tasks; accordingly, we outline the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate that further research efforts are still needed to improve the performance of current models.
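Note: for the segmentation example quoted above, the two figures are the Dice score and recall of a predicted lesion mask. A minimal sketch of both computations on binary masks is below (synthetic masks; not the challenge's evaluation code).

```python
# Sketch of Dice score and recall for binary segmentation masks.
import numpy as np

rng = np.random.default_rng(5)
pred = rng.random((64, 64)) > 0.5    # predicted lesion mask (stand-in)
truth = rng.random((64, 64)) > 0.5   # ground-truth lesion mask (stand-in)

tp = np.logical_and(pred, truth).sum()
dice = 2.0 * tp / (pred.sum() + truth.sum())
recall = tp / truth.sum()
print(f"Dice = {dice:.2f}, recall = {recall:.2f}")
```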
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia.
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
20
|
Yang Y, Cao S, Wan W, Huang S. Multi-modal medical image super-resolution fusion based on detail enhancement and weighted local energy deviation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]

21
Ye X, Wang P, Li S, Zhang J, Lian Y, Zhang Y, Lu J, Guo H. Simultaneous superresolution reconstruction and distortion correction for single-shot EPI DWI using deep learning. Magn Reson Med 2023; 89:2456-2470. [PMID: 36705077 DOI: 10.1002/mrm.29601] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2022] [Revised: 12/07/2022] [Accepted: 01/12/2023] [Indexed: 01/28/2023]
Abstract
PURPOSE Single-shot (SS) EPI is widely used for clinical DWI. This study aims to develop an end-to-end deep learning-based method with a novel loss function in an improved network structure to simultaneously increase the resolution and correct distortions for SS-EPI DWI. THEORY AND METHODS Point-spread-function (PSF)-encoded EPI can provide high-resolution, distortion-free DWI images. A distorted image from SS-EPI can be described as the convolution of a PSF with a distortion-free image. The deconvolution process to recover the distortion-free image can be achieved with a convolutional neural network, which also learns the mapping function between low-resolution SS-EPI and high-resolution reference PSF-EPI to achieve superresolution. To suppress the oversmoothing effect, we proposed a modified generative adversarial network structure, in which a dense net with gradient map guidance and a multilevel fusion block was used as the generator. A fractional anisotropy loss was proposed to utilize the diffusion anisotropy information among diffusion directions. In vivo brain DWI data were used to test the proposed method. RESULTS The results show that distortion-corrected high-resolution DWI images with restored structural details can be obtained from low-resolution SS-EPI images by taking advantage of the high-resolution anatomical images. Additionally, the proposed network can improve the quantitative accuracy of diffusion metrics compared with previously reported networks. CONCLUSION Using high-resolution, distortion-free EPI-DWI images as references, a deep learning-based method to simultaneously increase the perceived resolution and correct distortions for low-resolution SS-EPI was proposed. The results show that DWI image quality and diffusion metrics can be improved.
Affiliation(s)
- Xinyu Ye: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Peipei Wang: Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, Beijing, China
- Sisi Li: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Jieying Zhang: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Yuan Lian: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Yajing Zhang: MR Clinical Science, Philips Healthcare, Suzhou, China
- Jie Lu: Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, Beijing, China
- Hua Guo: Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China

22
Fang Y, Bakian-Dogaheh K, Moghaddam M. Real-Time 3D Microwave Medical Imaging With Enhanced Variational Born Iterative Method. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:268-280. [PMID: 36166569 DOI: 10.1109/tmi.2022.3210494] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
In this paper, we present a new variational Born iterative method (VBIM) for real-time microwave imaging (MWI) applications. The S-parameter volume integral equation and waveport vector Green's function are implemented to utilize the measured signal of the MWI system. Meanwhile, the real and imaginary separation (RIS) approach is used at each iterative step to simultaneously reconstruct the dielectric permittivity and conductivity of unknown objects. Compared with the Born iterative method and distorted Born iterative method, VBIM requires less computational time to reach the convergence threshold. The graphics processing unit based acceleration technique is implemented for real-time imaging. To demonstrate the efficiency and accuracy of this VBIM-RIS method, synthetic analysis of a complex multi-layer spherical phantom is first conducted. Then, the algorithm is tested with measured data using our new MWI system prototype. Finally, a synthetic brain-tumor phantom model under a thermal therapy procedure is monitored to exemplify the real-time imaging with about 5 seconds per reconstruction frame.

23
Devi S, Bakshi S, Sahoo MN. Effect of situational and instrumental distortions on the classification of brain MR images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]

24
Zou B, Ji Z, Zhu C, Dai Y, Zhang W, Kui X. Multi-scale deformable transformer for multi-contrast knee MRI super-resolution. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104154] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]

25
Molina-Maza JM, Galiana-Bordera A, Jimenez M, Malpica N, Torrado-Carvajal A. Development of a Super-Resolution Scheme for Pediatric Magnetic Resonance Brain Imaging Through Convolutional Neural Networks. Front Neurosci 2022; 16:830143. [DOI: 10.3389/fnins.2022.830143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Accepted: 06/01/2022] [Indexed: 11/13/2022] Open
Abstract
Pediatric medical imaging represents a real challenge for physicians, as pediatric patients often move during the examination, which causes various artifacts in the images. Thus, it is often not possible to obtain good-quality images for this target population, limiting the possibility of evaluation and diagnosis in certain pathological conditions. Specifically, magnetic resonance imaging (MRI) is a technique that requires long acquisition times and, therefore, demands the use of sedation or general anesthesia to avoid the movement of the patient, which is particularly harmful in this population. Because ALARA (as low as reasonably achievable) principles should be considered for all imaging studies, one of the most important reasons for establishing novel MRI imaging protocols is to avoid the harmful effects of anesthesia/sedation. In this context, ground-breaking concepts and novel technologies, such as artificial intelligence, can help to find a solution to these challenges while helping in the search for underlying disease mechanisms. The use of new MRI protocols and new image acquisition and/or pre-processing techniques can aid in the development of neuroimaging studies for the evaluation of children, and their translation to pediatric populations. In this paper, a novel super-resolution method based on a convolutional neural network (CNN) in two and three dimensions is proposed to automatically increase the resolution of pediatric brain MRI acquired in a reduced time scheme. Low-resolution images have been generated from an original high-resolution dataset and used as the input of the CNN, while several scaling factors have been assessed separately. Apart from a healthy dataset, we also tested our model with pathological pediatric MRI, and it successfully recovers the original image quality both visually and quantitatively, even for available examples of dysplasia lesions. We hope to thereby establish the basis for developing an innovative sedation-free protocol for pediatric anatomical MRI acquisition.

26

27
Tax CMW, Bastiani M, Veraart J, Garyfallidis E, Okan Irfanoglu M. What's new and what's next in diffusion MRI preprocessing. Neuroimage 2022; 249:118830. [PMID: 34965454 PMCID: PMC9379864 DOI: 10.1016/j.neuroimage.2021.118830] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Revised: 10/26/2021] [Accepted: 12/15/2021] [Indexed: 02/07/2023] Open
Abstract
Diffusion MRI (dMRI) provides invaluable information for the study of tissue microstructure and brain connectivity, but suffers from a range of imaging artifacts that greatly challenge the analysis of results and their interpretability if not appropriately accounted for. This review will cover dMRI artifacts and preprocessing steps, some of which have not typically been considered in existing pipelines or reviews, or have only gained attention in recent years: brain/skull extraction, B-matrix incompatibilities w.r.t. the imaging data, signal drift, Gibbs ringing, noise distribution bias, denoising, between- and within-volumes motion, eddy currents, outliers, susceptibility distortions, EPI Nyquist ghosts, gradient deviations, B1 bias fields, and spatial normalization. The focus will be on "what's new" since the notable advances prior to and brought by the Human Connectome Project (HCP), as presented in the preceding issue on "Mapping the Connectome" in 2013. In addition to the development of novel strategies for dMRI preprocessing, exciting progress has been made in the availability of open source tools and reproducible pipelines, databases and simulation tools for the evaluation of preprocessing steps, and automated quality control frameworks, amongst others. Finally, this review will consider practical considerations and our view on "what's next" in dMRI preprocessing.
Affiliation(s)
- Chantal M W Tax: Image Sciences Institute, University Medical Center Utrecht, The Netherlands; Cardiff University Brain Research Imaging Centre, School of Physics and Astronomy, Cardiff University, UK
- Matteo Bastiani: Sir Peter Mansfield Imaging Centre, School of Medicine, University of Nottingham, UK; Wellcome Centre for Integrative Neuroimaging (WIN), Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB), University of Oxford, UK
- Jelle Veraart: Center for Biomedical Imaging, New York University Grossman School of Medicine, NY, USA
- M Okan Irfanoglu: Quantitative Medical Imaging Section, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA

28
SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography 2022; 8:905-919. [PMID: 35448707 PMCID: PMC9027099 DOI: 10.3390/tomography8020073] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 03/19/2022] [Accepted: 03/21/2022] [Indexed: 11/16/2022] Open
Abstract
There is a growing demand for high-resolution (HR) medical images for both clinical and research applications. Image quality is inevitably traded off with acquisition time, which in turn impacts patient comfort, examination costs, dose, and motion-induced artifacts. For many image-based tasks, increasing the apparent spatial resolution in the perpendicular plane to produce multi-planar reformats or 3D images is a common approach. Single-image super-resolution (SR) is a promising deep learning-based technique for increasing the resolution of a 2D image, but there are few reports on 3D SR. Further, perceptual loss has been proposed in the literature to better capture textural details and edges than pixel-wise loss functions, by comparing semantic distances in the high-dimensional feature space of a pre-trained 2D network (e.g., VGG). However, it is not clear how one should generalize it to 3D medical images, nor what the attendant implications are. In this paper, we propose a framework called SOUP-GAN: Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (GAN), in order to produce thinner slices (e.g., higher resolution in the ‘Z’ plane) with anti-aliasing and deblurring. The proposed method outperforms other conventional resolution-enhancement methods and previous SR work on medical images in both qualitative and quantitative comparisons. Moreover, we examine the model in terms of its generalization for arbitrary user-selected SR ratios and imaging modalities. Our model shows promise as a novel 3D SR interpolation technique, with potential applications in both clinical and research settings.
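For readers unfamiliar with perceptual loss, the sketch below shows the general idea of comparing feature-space distances with a frozen, pre-trained 2D VGG network; it is a generic illustration, not the SOUP-GAN implementation, and the layer cut-off, missing ImageNet normalisation, and L1 distance are simplifying assumptions.

```python
# Generic VGG-based perceptual loss sketch (illustrative only).
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):
        super().__init__()
        vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:layer_index]
        vgg.eval()
        for p in vgg.parameters():        # the feature extractor stays frozen
            p.requires_grad = False
        self.vgg = vgg
        self.l1 = nn.L1Loss()

    def forward(self, pred, target):
        # VGG expects 3-channel inputs; grayscale slices are repeated across channels.
        pred3 = pred.repeat(1, 3, 1, 1)
        target3 = target.repeat(1, 3, 1, 1)
        return self.l1(self.vgg(pred3), self.vgg(target3))

# Usage on stand-in data:
loss_fn = PerceptualLoss()
pred = torch.rand(2, 1, 64, 64)
target = torch.rand(2, 1, 64, 64)
print(loss_fn(pred, target))
```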

29
Niaz MR, Ridwan AR, Wu Y, Bennett DA, Arfanakis K. Development and evaluation of a high resolution 0.5mm isotropic T1-weighted template of the older adult brain. Neuroimage 2022; 248:118869. [PMID: 34986396 PMCID: PMC8855670 DOI: 10.1016/j.neuroimage.2021.118869] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Revised: 12/08/2021] [Accepted: 12/29/2021] [Indexed: 10/28/2022] Open
Abstract
Investigating the structure of the older adult brain at high spatial resolution is of high significance, and a dedicated older adult structural brain template with sub-millimeter resolution is currently lacking. Therefore, the purpose of this work was twofold: (A) to develop a 0.5mm isotropic resolution standardized T1-weighted template of the older adult brain by applying principles of super resolution to high quality MRI data from 222 older adults (65-95 years of age), and (B) to systematically compare the new template to other standardized and study-specific templates in terms of image quality and performance when used as a reference for alignment of older adult data. The new template exhibited higher spatial resolution and improved visualization of fine structural details of the older adult brain compared to a template constructed using a conventional template building approach and the same data. In addition, the new template exhibited higher image sharpness and did not contain image artifacts observed in some of the other templates considered in this work. Due to the above enhancements, the new template provided higher inter-subject spatial normalization precision for older adult data compared to the other templates, and consequently enabled detection of smaller inter-group morphometric differences in older adult data. Finally, the new template was among those that were most representative of older adult brain data. Overall, the new template constructed here is an important resource for studies of aging, and the findings of the present work have important implications in template selection for investigations on older adults.
Affiliation(s)
- Mohammad Rakeen Niaz: Department of Biomedical Engineering, Illinois Institute of Technology, 3440 S Dearborn St, M-100, Chicago, IL 60616, United States
- Abdur Raquib Ridwan: Department of Biomedical Engineering, Illinois Institute of Technology, 3440 S Dearborn St, M-100, Chicago, IL 60616, United States
- Yingjuan Wu: Department of Biomedical Engineering, Illinois Institute of Technology, 3440 S Dearborn St, M-100, Chicago, IL 60616, United States
- David A Bennett: Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, IL, United States
- Konstantinos Arfanakis: Department of Biomedical Engineering, Illinois Institute of Technology, 3440 S Dearborn St, M-100, Chicago, IL 60616, United States; Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, IL, United States; Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL, United States

30
Kang L, Liu G, Huang J, Li J. Super-resolution method for MR images based on multi-resolution CNN. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103372] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]

31
Gourdeau D, Duchesne S, Archambault L. On the proper use of structural similarity for the robust evaluation of medical image synthesis models. Med Phys 2022; 49:2462-2474. [PMID: 35106778 DOI: 10.1002/mp.15514] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Revised: 01/18/2022] [Accepted: 01/19/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE To propose good practices for using the structural similarity metric (SSIM) and reporting its value. SSIM is one of the most popular image quality metrics in use in the medical image synthesis community because of its alleged superiority over voxel-by-voxel measurements like the average error or the peak signal-to-noise ratio (PSNR). It has seen massive adoption since its introduction, but its limitations are often overlooked. Notably, SSIM is designed to work on a strictly positive intensity scale, which is generally not the case in medical imaging. Common intensity scales such as Hounsfield units (HU) contain negative numbers, and they can also be introduced by image normalization techniques such as the z-normalization. METHODS We created a series of experiments to quantify the impact of negative values in the SSIM computation. Specifically, we trained a 3D U-Net to synthesize T2 weighted MRI from T1 weighted MRI using the BRATS 2018 dataset. SSIM was computed on the synthetic images with a shifted dynamic range. Next, to evaluate the suitability of SSIM as a loss function on images with negative values, it was used as a loss function to synthesize z-normalized images. Finally, the difference between 2D SSIM and 3D SSIM was investigated using multiple 2D U-Nets trained on different planes of the images. RESULTS The impact of the misuse of the SSIM was quantified; it was established that it introduces a large downward bias in the computed SSIM. It also introduces a small random error that can change the relative ranking of models. The exact values for this bias and error depend on the quality and the intensity histogram of the synthetic images. Although small, the reported error is significant considering the small SSIM difference between state-of-the-art models. It was therefore shown that SSIM cannot be used as a loss function when images contain negative values due to major errors in the gradient calculation, resulting in under-performing models. 2D SSIM was also found to be overestimated in 2D image synthesis models when computed along the plane of synthesis, due to the discontinuities between slices that are typical of 2D synthesis methods. CONCLUSION Various types of misuse of the SSIM were identified and their impact was quantified. Based on the findings, this paper proposes good practices when using SSIM, such as reporting the average over the volume of the image containing tissue and appropriately defining the dynamic range.
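The dynamic-range issue can be reproduced with a few lines of scikit-image code; the snippet below is a synthetic illustration of the pitfall, not the paper's experiments.

```python
# Toy illustration of the SSIM dynamic-range pitfall using scikit-image.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
ref = rng.random((64, 64)).astype(np.float32)          # "ground truth" in [0, 1]
syn = np.clip(ref + 0.05 * rng.standard_normal(ref.shape).astype(np.float32), 0, 1)

# SSIM with the dynamic range defined explicitly from the data:
print(ssim(ref, syn, data_range=ref.max() - ref.min()))

# The same image pair after z-normalization now contains negative values.
# Because SSIM is not invariant to intensity shifts, the reported value can
# change noticeably if the dynamic range is not defined carefully.
ref_z = (ref - ref.mean()) / ref.std()
syn_z = (syn - syn.mean()) / syn.std()
print(ssim(ref_z, syn_z, data_range=ref_z.max() - ref_z.min()))
```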
Affiliation(s)
- Daniel Gourdeau: Université Laval, Department of physics, engineering physics and optics, Québec, QC, G1R 2J6, Canada; CHUQ Cancer Research Center, Québec, QC, Canada; CERVO Brain Research Center, Québec, QC, Canada
- Simon Duchesne: CERVO Brain Research Center, Québec, QC, Canada; Université Laval, Department of radiology, Québec, QC, G1V 0A6, Canada
- Louis Archambault: Université Laval, Department of physics, engineering physics and optics, Québec, QC, G1R 2J6, Canada; CHUQ Cancer Research Center, Québec, QC, Canada

32
Koyuncu B, Melek A, Yilmaz D, Tuzer M, Unlu MB. Chemotherapy response prediction with diffuser elapser network. Sci Rep 2022; 12:1628. [PMID: 35102179 PMCID: PMC8803972 DOI: 10.1038/s41598-022-05460-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 11/10/2021] [Indexed: 12/31/2022] Open
Abstract
In solid tumors, elevated fluid pressure and inadequate blood perfusion resulting from unbalanced angiogenesis are the prominent reasons for the ineffective drug delivery inside tumors. To normalize the heterogeneous and tortuous tumor vessel structure, antiangiogenic treatment is an effective approach. Additionally, the combined therapy of antiangiogenic agents and chemotherapy drugs has shown promising effects on enhanced drug delivery. However, the need to find the appropriate scheduling and dosages of the combination therapy is one of the main problems in anticancer therapy. Our study aims to generate a realistic response to the treatment schedule, making it possible for future works to use these patient-specific responses to decide on the optimal starting time and dosages of cytotoxic drug treatment. Our dataset is based on our previous in-silico model with a framework for the tumor microenvironment, consisting of a tumor layer, vasculature network, interstitial fluid pressure, and drug diffusion maps. In this regard, the chemotherapy response prediction problem is discussed in the study, putting forth a proof of concept for deep learning models to capture the tumor growth and drug response behaviors simultaneously. The proposed model utilizes multiple convolutional neural network submodels to predict future tumor microenvironment maps considering the effects of ongoing treatment. Since the model has the task of predicting future tumor microenvironment maps, we use two image quality evaluation metrics, structural similarity and peak signal-to-noise ratio, to evaluate model performance. We track tumor cell density values of the ground truth and predicted tumor microenvironments. The model predicts tumor microenvironment maps seven days ahead with an average structural similarity score of 0.973 and an average peak signal-to-noise ratio of 35.41 in the test set. It also predicts tumor cell density at the end of day 7 with a mean absolute percentage error of [Formula: see text].
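As a brief aside on the second metric mentioned above, a minimal PSNR computation (not taken from the paper, and using synthetic values) looks as follows.

```python
# Small numerical aside: PSNR of a predicted map against a reference map.
import numpy as np

def psnr(reference, prediction, max_value=1.0):
    """PSNR = 10 * log10(MAX^2 / MSE), reported in decibels."""
    mse = np.mean((reference - prediction) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.random((128, 128))
pred = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)
print(round(psnr(ref, pred), 2))   # roughly 40 dB for this noise level
```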
Affiliation(s)
- Batuhan Koyuncu: Department of Computer Engineering, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Ahmet Melek: Department of Management, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Defne Yilmaz: Department of Physics, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Mert Tuzer: Department of Physics, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Mehmet Burcin Unlu: Department of Physics, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey; Hokkaido University, Global Station for Quantum Medical Science and Engineering, Global Institution for Collaborative Research and Education (GI-CoRE), Sapporo, 060-8648, Japan

33
Bento M, Fantini I, Park J, Rittner L, Frayne R. Deep Learning in Large and Multi-Site Structural Brain MR Imaging Datasets. Front Neuroinform 2022; 15:805669. [PMID: 35126080 PMCID: PMC8811356 DOI: 10.3389/fninf.2021.805669] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 12/27/2021] [Indexed: 12/22/2022] Open
Abstract
Large, multi-site, heterogeneous brain imaging datasets are increasingly required for the training, validation, and testing of advanced deep learning (DL)-based automated tools, including structural magnetic resonance (MR) image-based diagnostic and treatment monitoring approaches. When assembling a number of smaller datasets to form a larger dataset, understanding the underlying variability between different acquisition and processing protocols across the aggregated dataset (termed “batch effects”) is critical. The presence of variation in the training dataset is important as it more closely reflects the true underlying data distribution and, thus, may enhance the overall generalizability of the tool. However, the impact of batch effects must be carefully evaluated in order to avoid undesirable effects that, for example, may reduce performance measures. Batch effects can result from many sources, including differences in acquisition equipment, imaging technique and parameters, as well as applied processing methodologies. Their impact, both beneficial and adversarial, must be considered when developing tools to ensure that their outputs are related to the proposed clinical or research question (i.e., actual disease-related or pathological changes) and are not simply due to the peculiarities of underlying batch effects in the aggregated dataset. We reviewed applications of DL in structural brain MR imaging that aggregated images from neuroimaging datasets, typically acquired at multiple sites. We examined datasets containing both healthy control participants and patients that were acquired using varying acquisition protocols. First, we discussed issues around Data Access and enumerated the key characteristics of some commonly used publicly available brain datasets. Then we reviewed methods for correcting batch effects by exploring the two main classes of approaches: Data Harmonization that uses data standardization, quality control protocols or other similar algorithms and procedures to explicitly understand and minimize unwanted batch effects; and Domain Adaptation that develops DL tools that implicitly handle the batch effects by using approaches to achieve reliable and robust results. In this narrative review, we highlighted the advantages and disadvantages of both classes of DL approaches, and described key challenges to be addressed in future studies.
Affiliation(s)
- Mariana Bento (Correspondence): Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada; Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada; Calgary Image Processing and Analysis Centre, Foothills Medical Centre, Calgary, AB, Canada
- Irene Fantini: School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
- Justin Park: Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada; Calgary Image Processing and Analysis Centre, Foothills Medical Centre, Calgary, AB, Canada; Radiology and Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Leticia Rittner: School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
- Richard Frayne: Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada; Calgary Image Processing and Analysis Centre, Foothills Medical Centre, Calgary, AB, Canada; Radiology and Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Seaman Family MR Research Centre, Foothills Medical Centre, Calgary, AB, Canada

34
Deep robust residual network for super-resolution of 2D fetal brain MRI. Sci Rep 2022; 12:406. [PMID: 35013383 PMCID: PMC8748749 DOI: 10.1038/s41598-021-03979-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2020] [Accepted: 12/06/2021] [Indexed: 01/22/2023] Open
Abstract
Spatial resolution is a key factor in quantitatively evaluating the quality of magnetic resonance imaging (MRI). Super-resolution (SR) approaches can improve spatial resolution by reconstructing high-resolution (HR) images from low-resolution (LR) ones to meet clinical and scientific requirements. To increase the quality of brain MRI, we study a robust residual-learning SR network (RRLSRN) to generate a sharp HR brain image from an LR input. Because the Charbonnier loss handles outliers well and the Gradient Difference Loss (GDL) sharpens an image, we combined the Charbonnier loss and GDL to improve the robustness of the model and enhance the texture information of the SR results. Two MRI datasets of the adult brain, Kirby 21 and NAMIC, were used to train and verify the effectiveness of our model. To further verify the generalizability and robustness of the proposed model, we collected eight clinical 2D fetal brain MRI datasets for evaluation. The experimental results have shown that the proposed deep residual-learning network achieved superior performance and higher efficiency compared with the other evaluated methods.
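A generic PyTorch sketch of the two loss terms named above is given below; the epsilon value and the weighting between terms are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of a Charbonnier loss and a gradient difference loss.
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    # A differentiable, outlier-robust variant of the L1 loss.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

def gradient_difference_loss(pred, target):
    # Penalises differences between the spatial gradients of the two images,
    # which encourages sharper edges in the super-resolved output.
    def grads(img):
        dy = img[:, :, 1:, :] - img[:, :, :-1, :]
        dx = img[:, :, :, 1:] - img[:, :, :, :-1]
        return dx, dy
    px, py = grads(pred)
    tx, ty = grads(target)
    return torch.mean(torch.abs(px - tx)) + torch.mean(torch.abs(py - ty))

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
loss = charbonnier_loss(pred, target) + 0.1 * gradient_difference_loss(pred, target)
loss.backward()
```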

35
Clinical evaluation of super-resolution for brain MRI images based on generative adversarial networks. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.101030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open

36
Sarasaen C, Chatterjee S, Breitkopf M, Rose G, Nürnberger A, Speck O. Fine-tuning deep learning model parameters for improved super-resolution of dynamic MRI with prior-knowledge. Artif Intell Med 2021; 121:102196. [PMID: 34763811 DOI: 10.1016/j.artmed.2021.102196] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 10/07/2021] [Accepted: 10/12/2021] [Indexed: 10/20/2022]
Abstract
Dynamic imaging is a beneficial tool for interventions to assess physiological changes. Nonetheless, during dynamic MRI, spatial resolution is compromised in order to achieve a high temporal resolution. To overcome this spatio-temporal trade-off, this research presents a super-resolution (SR) MRI reconstruction with prior-knowledge-based fine-tuning to maximise spatial information while reducing the required scan-time for dynamic MRIs. A U-Net based network with perceptual loss is trained on a benchmark dataset and fine-tuned using one subject-specific static high resolution MRI as prior knowledge to obtain high resolution dynamic images during the inference stage. 3D dynamic data for three subjects were acquired with different parameters to test the generalisation capabilities of the network. The method was tested for different levels of in-plane undersampling for dynamic MRI. The reconstructed dynamic SR results after fine-tuning showed higher similarity with the high resolution ground-truth, while quantitatively achieving statistically significant improvement. The average SSIM values for the lowest resolution tested in this research (6.25% of the k-space), before and after fine-tuning, were 0.939 ± 0.008 and 0.957 ± 0.006, respectively. This corresponds theoretically to an acceleration factor of 16, meaning such an image could potentially be acquired in less than half a second. The proposed approach shows that super-resolution MRI reconstruction with prior information can alleviate the spatio-temporal trade-off in dynamic MRI, even for high acceleration factors.
Affiliation(s)
- Chompunuch Sarasaen: Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Institute for Medical Engineering, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
- Soumick Chatterjee: Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany; Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, Germany
- Mario Breitkopf: Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
- Georg Rose: Institute for Medical Engineering, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
- Andreas Nürnberger: Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Oliver Speck: Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany; German Center for Neurodegenerative Disease, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany

37
Doborjeh M, Doborjeh Z, Merkin A, Bahrami H, Sumich A, Krishnamurthi R, Medvedev ON, Crook-Rumsey M, Morgan C, Kirk I, Sachdev PS, Brodaty H, Kang K, Wen W, Feigin V, Kasabov N. Personalised predictive modelling with brain-inspired spiking neural networks of longitudinal MRI neuroimaging data and the case study of dementia. Neural Netw 2021; 144:522-539. [PMID: 34619582 DOI: 10.1016/j.neunet.2021.09.013] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 08/11/2021] [Accepted: 09/12/2021] [Indexed: 11/27/2022]
Abstract
BACKGROUND Longitudinal neuroimaging provides spatiotemporal brain data (STBD) measurement that can be utilised to understand dynamic changes in brain structure and/or function underpinning cognitive activities. Making sense of such highly interactive information is challenging, given that the features manifest intricate temporal, causal relations between the spatially distributed neural sources in the brain. METHODS The current paper argues for the advancement of deep learning algorithms in brain-inspired spiking neural networks (SNN), capable of modelling structural data across time (longitudinal measurement) and space (anatomical components). The paper proposes a methodology and a computational architecture based on SNN for building personalised predictive models from longitudinal brain data to accurately detect, understand, and predict the dynamics of an individual's functional brain state. The methodology includes finding clusters of similar data to each individual, data interpolation, deep learning in a 3-dimensional brain-template structured SNN model, classification and prediction of individual outcome, visualisation of structural brain changes related to the predicted outcomes, interpretation of results, and individual and group predictive marker discovery. RESULTS To demonstrate the functionality of the proposed methodology, the paper presents experimental results on a longitudinal magnetic resonance imaging (MRI) dataset derived from 175 older adults of the internationally recognised community-based cohort Sydney Memory and Ageing Study (MAS) spanning 6 years of follow-up. SIGNIFICANCE The models were able to accurately classify and predict 2 years ahead of cognitive decline, such as mild cognitive impairment (MCI) and dementia with 95% and 91% accuracy, respectively. The proposed methodology also offers a 3-dimensional visualisation of the MRI models reflecting the dynamic patterns of regional changes in white matter hyperintensity (WMH) and brain volume over 6 years. CONCLUSION The method is efficient for personalised predictive modelling on a wide range of neuroimaging longitudinal data, including also demographic, genetic, and clinical data. As a case study, it resulted in finding predictive markers for MCI and dementia as dynamic brain patterns using MRI data.
Affiliation(s)
- Maryam Doborjeh: Computer Science and Software Engineering Department, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand
- Zohreh Doborjeh: Department of Audiology, School of Population Health, Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Alexander Merkin: The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand
- Helena Bahrami: School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand
- Alexander Sumich: NTU Psychology, Nottingham Trent University, Nottingham, United Kingdom
- Rita Krishnamurthi: The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand
- Oleg N Medvedev: University of Waikato, School of Psychology, Hamilton, New Zealand
- Mark Crook-Rumsey: NTU Psychology, Nottingham Trent University, Nottingham, United Kingdom; School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand
- Catherine Morgan: School of Psychology and Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, Centre of Research Excellence, New Zealand
- Ian Kirk: School of Psychology and Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, Centre of Research Excellence, New Zealand
- Perminder S Sachdev: Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia; Neuropsychiatric Institute, the Prince of Wales Hospital, Sydney, Australia
- Henry Brodaty: Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia
- Kristan Kang: Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia
- Wei Wen: Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, University of New South Wales, Sydney, Australia; Neuropsychiatric Institute, the Prince of Wales Hospital, Sydney, Australia
- Valery Feigin: The National Institute for Stroke and Applied Neurosciences, School of Clinical Sciences, Auckland University of Technology, New Zealand; Research Center of Neurology, Moscow, Russia
- Nikola Kasabov: School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, New Zealand; George Moore Chair, Ulster University, Londonderry, United Kingdom

38
Huang B, Xiao H, Liu W, Zhang Y, Wu H, Wang W, Yang Y, Yang Y, Miller GW, Li T, Cai J. MRI super-resolution via realistic downsampling with adversarial learning. Phys Med Biol 2021; 66. [PMID: 34474407 DOI: 10.1088/1361-6560/ac232e] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Accepted: 09/02/2021] [Indexed: 11/12/2022]
Abstract
Many deep learning (DL) frameworks have demonstrated state-of-the-art performance in the super-resolution (SR) task of magnetic resonance imaging, but most results have been achieved with simulated low-resolution (LR) images rather than LR images from real acquisitions. Due to the limited generalizability of the SR network, enhancement is not guaranteed for real LR images because the training LR images are unrealistic. In this study, we proposed a DL-based SR framework with an emphasis on data construction to achieve better performance on real LR MR images. The framework comprised two steps: (a) downsampling training using a generative adversarial network (GAN) to construct more realistic and perfectly matched LR/high-resolution (HR) pairs. The downsampling GAN input was real LR and HR images. The generator translated the HR images to LR images and the discriminator distinguished the patch-level difference between the synthetic and real LR images. (b) SR training was performed using an enhanced deep super-resolution network (EDSR). In the controlled experiments, three EDSRs were trained using our proposed method, Gaussian blur, and k-space zero-filling. As for the data, liver MR images were obtained from 24 patients using breath-hold serial LR and HR scans (only HR images were used in the conventional methods). The k-space zero-filling group delivered almost zero enhancement on the real LR images and the Gaussian group produced a considerable number of artifacts. The proposed method exhibited significantly better resolution enhancement and fewer artifacts compared with the other two networks. Our method outperformed the Gaussian method by an improvement of 0.111 ± 0.016 in the structural similarity index and 2.76 ± 0.98 dB in the peak signal-to-noise ratio. The blind/reference-less image spatial quality evaluator metrics of the conventional Gaussian method and the proposed method were 46.6 ± 4.2 and 34.1 ± 2.4, respectively.
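For context, the conventional k-space zero-filling simulation that this work compares against can be sketched in NumPy as follows; this is a generic illustration with a synthetic image, not the authors' pipeline.

```python
# Simulating a low-resolution MR image from a high-resolution one by keeping only
# the central fraction of k-space and zeroing the rest (k-space zero-filling).
import numpy as np

def simulate_lr_kspace(hr_image, keep_fraction=0.5):
    kspace = np.fft.fftshift(np.fft.fft2(hr_image))
    ny, nx = kspace.shape
    mask = np.zeros_like(kspace)
    cy, cx = ny // 2, nx // 2
    hy, hx = int(ny * keep_fraction) // 2, int(nx * keep_fraction) // 2
    mask[cy - hy:cy + hy, cx - hx:cx + hx] = 1.0   # keep only the low frequencies
    lr_image = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(lr_image)

hr = np.random.default_rng(2).random((128, 128))    # stand-in "high-resolution" image
lr = simulate_lr_kspace(hr, keep_fraction=0.25)     # blurred, ringing-prone LR image
print(hr.shape, lr.shape)
```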
Affiliation(s)
- Bangyan Huang: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- Haonan Xiao: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
- Weiwei Liu: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Yibao Zhang: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Hao Wu: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Weihu Wang: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Yunhuan Yang: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- Yidong Yang: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- G Wilson Miller: Department of Radiology and Medical Imaging, The University of Virginia, Charlottesville, VA, United States of America
- Tian Li: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China

39
Wang L, Du J, Gholipour A, Zhu H, He Z, Jia Y. 3D dense convolutional neural network for fast and accurate single MR image super-resolution. Comput Med Imaging Graph 2021; 93:101973. [PMID: 34543775 DOI: 10.1016/j.compmedimag.2021.101973] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Revised: 07/13/2021] [Accepted: 08/17/2021] [Indexed: 10/20/2022]
Abstract
Super-resolution (SR) MR image reconstruction has shown to be a very promising direction to improve the spatial resolution of low-resolution (LR) MR images. In this paper, we presented a novel MR image SR method based on a dense convolutional neural network (DDSR), and its enhanced version called EDDSR. There are three major innovations: first, we re-designed the dense modules to extract hierarchical features directly from LR images and propagate the extracted feature maps through dense connections. Therefore, unlike other CNN-based SR MR techniques that upsample LR patches in the initial phase, our methods take the original LR images or patches as input. This effectively reduces computational complexity and speeds up SR reconstruction. Second, a final deconvolution layer in our model automatically learns filters to fuse and upscale all hierarchical feature maps to generate HR MR images. Using this, EDDSR can perform SR reconstructions at different upscale factors using a single model with one fixed-stride deconvolution operation. Third, to further improve SR reconstruction accuracy, we exploited a geometric self-ensemble strategy. Experimental results on three benchmark datasets demonstrate that our methods, DDSR and EDDSR, achieved superior performance compared to state-of-the-art MR image SR methods with less computational load and memory usage.
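The geometric self-ensemble strategy mentioned above can be sketched generically as follows: the input is flipped and rotated, each variant is passed through the network, the transforms are inverted, and the results are averaged. The tiny "model" here is only a placeholder, not DDSR or EDDSR.

```python
# Hedged sketch of a geometric self-ensemble over 8 flip/rotation variants.
import torch

def self_ensemble(model, x):
    outputs = []
    for flip in (False, True):
        for k in range(4):                               # 0, 90, 180, 270 degree rotations
            t = torch.flip(x, dims=[-1]) if flip else x
            t = torch.rot90(t, k, dims=(-2, -1))
            y = model(t)
            y = torch.rot90(y, -k, dims=(-2, -1))        # undo the rotation
            y = torch.flip(y, dims=[-1]) if flip else y  # undo the flip
            outputs.append(y)
    return torch.stack(outputs).mean(dim=0)

model = torch.nn.Conv2d(1, 1, 3, padding=1)              # stand-in for an SR network
x = torch.rand(1, 1, 32, 32)
print(self_ensemble(model, x).shape)
```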
Affiliation(s)
- Lulu Wang: College of Computer Science, Chongqing University, Chongqing 400044, China
- Jinglong Du: College of Computer Science, Chongqing University, Chongqing 400044, China
- Ali Gholipour: Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Huazheng Zhu: College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Zhongshi He: College of Computer Science, Chongqing University, Chongqing 400044, China
- Yuanyuan Jia: College of Medical Informatics, Chongqing Medical University, Chongqing 400016, China; Medical Data Science Academy, Chongqing Medical University, Chongqing 400016, China

40
Chandra SS, Bran Lorenzana M, Liu X, Liu S, Bollmann S, Crozier S. Deep learning in magnetic resonance image reconstruction. J Med Imaging Radiat Oncol 2021; 65:564-577. [PMID: 34254448 DOI: 10.1111/1754-9485.13276] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Accepted: 06/10/2021] [Indexed: 11/26/2022]
Abstract
Magnetic resonance (MR) imaging visualises soft tissue contrast in exquisite detail without harmful ionising radiation. In this work, we provide a state-of-the-art review on the use of deep learning in MR image reconstruction from different image acquisition types involving compressed sensing techniques, parallel image acquisition and multi-contrast imaging. Publications with deep learning-based image reconstruction for MR imaging were identified from the literature (PubMed and Google Scholar), and a comprehensive description of each of the works was provided. A detailed comparison that highlights the differences, the data used and the performance of each of these works was also made. A discussion of the potential use cases for each of these methods is provided. The sparse image reconstruction methods were found to be the most popular in using deep learning for improved performance, accelerating acquisitions by around 4-8 times. Multi-contrast image reconstruction methods rely on at least one pre-acquired image, but can achieve 16-fold, and even up to 32- to 50-fold acceleration depending on the set-up. Parallel imaging provides frameworks to be integrated in many of these methods for additional speed-up potential. The successful use of compressed sensing techniques and multi-contrast imaging with deep learning and parallel acquisition methods could yield significant MR acquisition speed-ups within clinical routines in the near future.
Affiliation(s)
- Shekhar S Chandra: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Marlon Bran Lorenzana: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Xinwen Liu: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Siyu Liu: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Steffen Bollmann: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
- Stuart Crozier: School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia

41
Hamwood J, Schmutz B, Collins MJ, Allenby MC, Alonso-Caneiro D. A deep learning method for automatic segmentation of the bony orbit in MRI and CT images. Sci Rep 2021; 11:13693. [PMID: 34211081 PMCID: PMC8249400 DOI: 10.1038/s41598-021-93227-3] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 06/15/2021] [Indexed: 12/23/2022] Open
Abstract
This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
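As a small aside, the Dice coefficient reported above is typically computed as shown in this minimal NumPy sketch (not the authors' code; the masks are synthetic).

```python
# Dice coefficient of a predicted binary mask against a manual reference mask.
import numpy as np

def dice_coefficient(pred_mask, ref_mask, eps=1e-8):
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

rng = np.random.default_rng(3)
reference = rng.random((64, 64)) > 0.7
predicted = reference.copy()
predicted[:4, :] = ~predicted[:4, :]       # perturb a few rows to mimic model error
print(round(float(dice_coefficient(predicted, reference)), 3))
```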
Affiliation(s)
- Jared Hamwood: Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, Qld, 4059, Australia
- Beat Schmutz: Centre in Regenerative Medicine, Institute of Health and Biomedical Innovation, Queensland University of Technology, Kelvin Grove, QLD, 4059, Australia; Metro North Hospital and Health Service, Jamieson Trauma Institute, Herston, QLD, 4029, Australia
- Michael J Collins: Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, Qld, 4059, Australia
- Mark C Allenby: Biofabrication and Tissue Morphology Laboratory, Centre for Biomedical Technologies, School of Mechanical Medical and Process Engineering, Queensland University of Technology (QUT), Herston, Qld, 4000, Australia
- David Alonso-Caneiro: Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, Qld, 4059, Australia

42
Gu Y, Li K. A Transfer Model Based on Supervised Multi-Layer Dictionary Learning for Brain Tumor MRI Image Recognition. Front Neurosci 2021; 15:687496. [PMID: 34122003 PMCID: PMC8193061 DOI: 10.3389/fnins.2021.687496] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 04/19/2021] [Indexed: 11/30/2022] Open
Abstract
Artificial intelligence (AI) is an effective technology for automatic brain tumor MRI image recognition. The training of an AI model requires a large amount of labeled data, but medical data need to be labeled by professional clinicians, which makes data collection complex and expensive. Moreover, a traditional AI model requires that the training data and test data be independent and identically distributed. To solve this problem, we propose a transfer model based on supervised multi-layer dictionary learning (TSMDL) for brain tumor MRI image recognition in this paper. With the help of the knowledge learned from related domains, the goal of this model is to solve the task of transfer learning where the target domain has only a small number of labeled samples. Based on the framework of multi-layer dictionary learning, the proposed model learns the common shared dictionary of the source and target domains in each layer to explore the intrinsic connections and shared information between different domains. At the same time, by making full use of the label information of samples, a Laplacian regularization term is introduced to make the dictionary coding of similar samples as close as possible and the dictionary coding of samples of different classes as different as possible. Recognition experiments on the brain MRI image datasets REMBRANDT and Figshare show that the model performs better than competitive state-of-the-art methods.
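As background only, the basic (single-layer, unsupervised) dictionary-coding building block can be sketched with scikit-learn as below; the paper's supervised, multi-layer, transfer formulation goes well beyond this, and the "image patch" data here are synthetic.

```python
# Minimal dictionary learning / sparse coding sketch on synthetic patch vectors.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(4)
patches = rng.random((200, 64))               # 200 flattened 8x8 patches (synthetic)

dico = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, max_iter=20, random_state=0)
codes = dico.fit_transform(patches)           # sparse codes on the learned dictionary

print(dico.components_.shape)                   # (32, 64): dictionary atoms
print(codes.shape, float((codes != 0).mean()))  # coding matrix and its sparsity
```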
Affiliation(s)
- Yi Gu: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Kang Li: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China

43
Gu X, Shen Z, Xue J, Fan Y, Ni T. Brain Tumor MR Image Classification Using Convolutional Dictionary Learning With Local Constraint. Front Neurosci 2021; 15:679847. [PMID: 34122001 PMCID: PMC8193950 DOI: 10.3389/fnins.2021.679847] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 04/09/2021] [Indexed: 11/30/2022] Open
Abstract
Brain tumor image classification is an important part of medical image processing. It assists doctors in making accurate diagnoses and treatment plans. Magnetic resonance (MR) imaging is one of the main imaging tools used to study brain tissue. In this article, we propose a brain tumor MR image classification method using convolutional dictionary learning with local constraint (CDLLC). Our method integrates multi-layer dictionary learning into a convolutional neural network (CNN) structure to explore the discriminative information. Encoding a vector on a dictionary can be considered as multiple projections into new spaces, and the obtained coding vector is sparse. Meanwhile, in order to preserve the geometric structure of the data and utilize the supervised information, we construct the local constraint of atoms through a supervised k-nearest neighbor graph, so that the obtained dictionary is highly discriminative. To solve the proposed problem, an efficient iterative optimization scheme is designed. In the experiments, two clinically relevant multi-class classification tasks on the Cheng and REMBRANDT datasets are designed. The evaluation results demonstrate that our method is effective for brain tumor MR image classification and that it outperforms the compared methods.
Affiliation(s)
- Xiaoqing Gu: School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Zongxuan Shen: School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Jing Xue: Department of Nephrology, Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
- Yiqing Fan: Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Tongguang Ni: School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China

44
Li Y, Sixou B, Peyrin F. A Review of the Deep Learning Methods for Medical Images Super Resolution Problems. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2020.08.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
|
45
|
Moran M, Faria M, Giraldi G, Bastos L, Conci A. Do Radiographic Assessments of Periodontal Bone Loss Improve with Deep Learning Methods for Enhanced Image Resolution? SENSORS 2021; 21:s21062013. [PMID: 33809165 PMCID: PMC8000288 DOI: 10.3390/s21062013] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 03/05/2021] [Accepted: 03/09/2021] [Indexed: 11/25/2022]
Abstract
Resolution plays an essential role in oral imaging for periodontal disease assessment. Nevertheless, due to limitations in acquisition tools, a considerable number of oral examinations have low resolution, making the evaluation of this kind of lesion difficult. Recently, the use of deep-learning methods for image resolution improvement has seen an increase in the literature. In this work, we performed two studies to evaluate the effects of using different resolution improvement methods (nearest, bilinear, bicubic, Lanczos, SRCNN, and SRGAN). In the first one, specialized dentists visually analyzed the quality of images treated with these techniques. In the second study, we used those methods as different pre-processing steps for inputs of convolutional neural network (CNN) classifiers (Inception and ResNet) and evaluated whether this process leads to better results. The deep-learning methods lead to a substantial improvement in the visual quality of images but do not necessarily promote better classifier performance.
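The classical up-sampling baselines compared in this study (nearest, bilinear, bicubic, Lanczos) can be reproduced with standard imaging libraries; the learned methods (SRCNN, SRGAN) would replace this step with a trained network. The snippet below is a sketch of such a pre-processing step using Pillow (the Image.Resampling enum assumes Pillow 9.1 or newer); the file path and scale factor are placeholders.

from PIL import Image

RESAMPLERS = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
    "lanczos": Image.Resampling.LANCZOS,
}

def upscale(path, factor=4, method="bicubic"):
    # Up-sample a grayscale radiograph before it is fed to a CNN classifier.
    img = Image.open(path).convert("L")
    return img.resize((img.width * factor, img.height * factor), RESAMPLERS[method])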
Collapse
Affiliation(s)
- Maira Moran
- Policlínica Piquet Carneiro, Universidade do Estado do Rio de Janeiro, Rio de Janeiro 20950-003, Brazil; (M.F.); (L.B.)
- Instituto de Computação, Universidade Federal Fluminense, Niterói 24210-310, Brazil
- Correspondence: (M.M.); (A.C.)
| | - Marcelo Faria
- Policlínica Piquet Carneiro, Universidade do Estado do Rio de Janeiro, Rio de Janeiro 20950-003, Brazil; (M.F.); (L.B.)
- Faculdade de Odontologia, Universidade Federal do Rio de Janeiro, Rio de Janeiro 21941-617, Brazil
| | - Gilson Giraldi
- Laboratório Nacional de Computação Científica, Petrópolis 25651-076, Brazil;
| | - Luciana Bastos
- Policlínica Piquet Carneiro, Universidade do Estado do Rio de Janeiro, Rio de Janeiro 20950-003, Brazil; (M.F.); (L.B.)
| | - Aura Conci
- Instituto de Computação, Universidade Federal Fluminense, Niterói 24210-310, Brazil
- Correspondence: (M.M.); (A.C.)
| |
Collapse
|
46
|
Sood RR, Shao W, Kunder C, Teslovich NC, Wang JB, Soerensen SJC, Madhuripan N, Jawahar A, Brooks JD, Ghanouni P, Fan RE, Sonn GA, Rusu M. 3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction. Med Image Anal 2021; 69:101957. [PMID: 33550008 DOI: 10.1016/j.media.2021.101957] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 12/23/2020] [Accepted: 01/04/2021] [Indexed: 12/15/2022]
Abstract
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
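The PSNR figures quoted above (32.15 dB vs 30.16 dB) follow the usual definition 10·log10(MAX²/MSE). The sketch below computes it for two images or volumes with NumPy; it is a generic implementation, not the authors' evaluation code, and the default data-range convention is an assumption.

import numpy as np

def psnr(reference, reconstruction, data_range=None):
    # Peak signal-to-noise ratio in dB; data_range defaults to the reference dynamic range (assumed convention).
    reference = np.asarray(reference, dtype=np.float64)
    reconstruction = np.asarray(reconstruction, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - reconstruction) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)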
Collapse
Affiliation(s)
- Rewa R Sood
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305, USA
| | - Wei Shao
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Christian Kunder
- Department of Pathology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Nikola C Teslovich
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Jeffrey B Wang
- Stanford School of Medicine, 291 Campus Drive, Stanford, CA 94305, USA
| | - Simon J C Soerensen
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
| | - Nikhil Madhuripan
- Department of Radiology, University of Colorado, Aurora, CO 80045, USA
| | | | - James D Brooks
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Pejman Ghanouni
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Richard E Fan
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Geoffrey A Sonn
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Mirabela Rusu
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA.
| |
Collapse
|
47
|
Isselmou AEK, Xu G, Shuai Z, Saminu S, Javaid I, Ahmad IS. Brain Tumor identification by Convolution Neural Network with Fuzzy C-mean Model Using MR Brain Images. INTERNATIONAL JOURNAL OF CIRCUITS, SYSTEMS AND SIGNAL PROCESSING 2021; 14:1096-1102. [DOI: 10.46300/9106.2020.14.137] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
Medical image computing techniques are essential in supporting doctors' diagnostic decisions. Because of the complexity of brain structure, we choose MR brain images for their quality and high resolution. The objective of this article is to detect brain tumors using a convolutional neural network combined with a fuzzy c-means model. The advantage of the proposed model is its ability to achieve excellent performance, with accuracy, sensitivity, specificity, overall Dice, and recall values better than those of previously published models. In addition, the model can identify brain tumors in different types of MR images. The proposed model achieved an accuracy of 98%.
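The fuzzy c-means component named above is a standard clustering algorithm; a compact NumPy sketch is shown below for orientation. It is not the authors' pipeline (which combines clustering with a CNN), and the initialization, number of clusters, and fuzzifier m are illustrative assumptions.

import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    # Standard fuzzy c-means on feature vectors X (n_samples x n_features).
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]      # membership-weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))                    # inverse-distance membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centres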
Collapse
Affiliation(s)
- Abd El Kader Isselmou
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
| | - Guizhi Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
| | - Zhang Shuai
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
| | - Sani Saminu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
| | - Imran Javaid
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
| | - Isah Salim Ahmad
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
| |
Collapse
|
48
|
Andrew J, Mhatesh T, Sebastin RD, Sagayam KM, Eunice J, Pomplun M, Dang H. Super-resolution reconstruction of brain magnetic resonance images via lightweight autoencoder. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100713] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
|
49
|
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030 DOI: 10.1093/bib/bbaa310] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 10/09/2020] [Accepted: 10/13/2020] [Indexed: 12/19/2022] Open
Abstract
In order to reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities, such as demographic, clinical, imaging, genetic, and environmental data, have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become the state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
Collapse
|
50
|
Moran MBH, Faria MDB, Giraldi GA, Bastos LF, Conci A. Using super-resolution generative adversarial network models and transfer learning to obtain high resolution digital periapical radiographs. Comput Biol Med 2020; 129:104139. [PMID: 33271400 DOI: 10.1016/j.compbiomed.2020.104139] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 11/17/2020] [Accepted: 11/19/2020] [Indexed: 10/22/2022]
Abstract
Periapical radiographs are commonly used to detect several anomalies, such as caries and periodontal and periapical diseases. Even though the digital imaging systems used nowadays tend to provide high-quality images, external factors or system limitations can result in a large number of radiographic images with low quality and resolution. Commercial solutions offer tools based on interpolation methods to increase image resolution. However, previous literature shows that these methods may create undesirable effects in the images, affecting diagnostic accuracy. One alternative is to use deep learning-based super-resolution methods to obtain better high-resolution images. Nevertheless, the amount of data available for training such models is limited, demanding transfer-learning approaches. In this work, we propose the use of super-resolution generative adversarial network (SRGAN) models and transfer learning to achieve periapical images with higher quality and resolution. Moreover, we evaluate the influence of the transfer-learning approach, and of the datasets selected for it, on the final generated images. For that, we performed an experiment comparing the performance of the SRGAN models (with and without transfer learning) with other super-resolution methods. Considering the Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Opinion Score (MOS), the results of the SRGAN models using transfer learning were better on average. This superiority was also verified statistically using the Wilcoxon paired test. In the visual analysis, the higher quality achieved by the SRGAN models is generally visible, resulting in better-defined edge details and less blurring.
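The statistical comparison reported above can be illustrated with scikit-image and SciPy: per-image PSNR for two methods followed by a Wilcoxon signed-rank test on the paired scores. This is a generic sketch under the assumption of same-shape grayscale arrays, not the evaluation code used in the paper; MSE, SSIM, and MOS would be handled analogously.

import numpy as np
from scipy.stats import wilcoxon
from skimage.metrics import peak_signal_noise_ratio

def paired_psnr_comparison(references, method_a, method_b):
    # Per-image PSNR for two super-resolution methods plus a paired Wilcoxon signed-rank test.
    psnr_a = [peak_signal_noise_ratio(r, a) for r, a in zip(references, method_a)]
    psnr_b = [peak_signal_noise_ratio(r, b) for r, b in zip(references, method_b)]
    stat, p_value = wilcoxon(psnr_a, psnr_b)
    return np.mean(psnr_a), np.mean(psnr_b), p_value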
Collapse
Affiliation(s)
- Maira B H Moran
- Policlínica Piquet Carneiro, Universidade Do Estado Do Rio de Janeiro, 20950-003, Rio de Janeiro, Brazil; Instituto de Computação, Universidade Federal Fluminense, 24210-310, Niterói, Brazil.
| | - Marcelo D B Faria
- Policlínica Piquet Carneiro, Universidade Do Estado Do Rio de Janeiro, 20950-003, Rio de Janeiro, Brazil; Faculdade de Odontologia, Universidade Federal Do Rio de Janeiro, 21941-617, Rio de Janeiro, Brazil
| | - Gilson A Giraldi
- Laboratório Nacional de Computação Científica, 25651-076, Petrópolis, Brazil
| | - Luciana F Bastos
- Policlínica Piquet Carneiro, Universidade Do Estado Do Rio de Janeiro, 20950-003, Rio de Janeiro, Brazil
| | - Aura Conci
- Instituto de Computação, Universidade Federal Fluminense, 24210-310, Niterói, Brazil
| |
Collapse
|