1. Vasylechko SD, Tsai A, Afacan O, Kurugol S. Self-supervised denoising diffusion probabilistic models for abdominal DW-MRI. Magn Reson Med 2025. PMID: 40312927. DOI: 10.1002/mrm.30536.
Abstract
PURPOSE: To improve the quality of abdominal diffusion-weighted MR images (DW-MRI) acquired with single-repetition (NEX = 1) protocols, and thereby increase apparent diffusion coefficient (ADC) map accuracy and lesion conspicuity at high b-values. We aim to reduce the motion blurring that obscures small lesions when multiple repetition images are averaged at each b-value, which is the current clinical standard.

METHODS: We propose a self-supervised denoising diffusion probabilistic model (ssDDPM) to improve DW-MRI quality from noisy single-repetition acquisitions in pediatric abdominal scans. The ssDDPM is designed for multi-b-value DW-MRI and incorporates diffusion signal decay model (i.e., ADC model) constraints into its loss term. The model is trained to denoise single-repetition images at multiple b-values while ensuring that the output adheres to the signal decay model. Training was performed on a dataset of 120 pediatric subjects with liver tumors. The performance of ssDDPM was compared with non-local means (NLM) filtering and deep image prior (DIP) denoising; unlike other techniques in the literature, which require multiple-direction or multiple-repetition images, these methods can denoise single-repetition images. Evaluation included qualitative radiologist assessment of image quality, receiver operating characteristic (ROC) analysis for lesion detection, and ADC fitting accuracy against motion-free, breath-hold reference data.

RESULTS: The ssDDPM outperformed the comparison methods in image quality, lesion conspicuity, and ADC map accuracy on NEX = 1 images. It received higher scores in radiologist assessments and showed better lesion discrimination in ROC analysis. It also provided more precise and accurate ADC estimates relative to the motion-free, breath-hold reference data.

CONCLUSION: The ssDDPM effectively reduces motion-related blurring and enhances DW-MRI quality by directly denoising single-repetition (NEX = 1) images while respecting signal decay model constraints. This improves the assessment of pediatric liver lesions, offering a more accurate and efficient diagnostic tool with reduced scan times compared with current clinical practice and other denoising techniques.
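The ADC model constraint referenced in this abstract rests on the mono-exponential diffusion signal decay S(b) = S0 * exp(-b * ADC). As a minimal illustration of that model only (not the authors' implementation; the function name and synthetic values are ours), ADC can be estimated from multi-b-value signals with a log-linear least-squares fit:

```python
import numpy as np

def fit_adc(signals, b_values):
    """Log-linear least-squares fit of the mono-exponential decay
    S(b) = S0 * exp(-b * ADC)."""
    b = np.asarray(b_values, dtype=float)
    y = np.log(np.asarray(signals, dtype=float))
    # Fit y = log(S0) - ADC * b with a degree-1 polynomial.
    slope, intercept = np.polyfit(b, y, 1)
    return -slope, np.exp(intercept)  # (ADC, S0)

# Synthetic example: S0 = 100, ADC = 1.5e-3 mm^2/s, b in s/mm^2.
b_vals = [0, 100, 400, 800]
sig = [100 * np.exp(-b * 1.5e-3) for b in b_vals]
adc, s0 = fit_adc(sig, b_vals)
```

In noisy single-repetition data, constraining denoised outputs to follow this decay is what ties the multiple b-value images together.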
Affiliation(s)
- Serge Didenko Vasylechko
- Department of Radiology, Boston Children's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Andy Tsai
- Department of Radiology, Boston Children's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Onur Afacan
- Department of Radiology, Boston Children's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Sila Kurugol
- Department of Radiology, Boston Children's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
2. Jiang M, Wang S, Chan KH, Sun Y, Xu Y, Zhang Z, Gao Q, Gao Z, Tong T, Chang HC, Tan T. Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing. Comput Med Imaging Graph 2025; 121:102497. PMID: 39904265. DOI: 10.1016/j.compmedimag.2025.102497.
Abstract
Magnetic Resonance Imaging (MRI) generates medical images of multiple sequences, i.e., multiple modalities, from different contrasts. However, noise reduces the quality of MR images and thus affects the diagnosis of disease. Existing filtering, transform-domain, statistical, and Convolutional Neural Network (CNN) methods mainly aim to denoise individual sequences without considering the relationships between different sequences. They cannot balance the extraction of high-dimensional and low-dimensional features in MR images, and they struggle to balance preserving image texture details against denoising strength. To overcome these challenges, this work proposes a controllable Multimodal Cross-Global Learnable Attention Network (MMCGLANet) for MR image denoising with arbitrary missing modalities. Specifically, an encoder with a weight-sharing module is employed to extract shallow image features, and Convolutional Long Short-Term Memory (ConvLSTM) is employed to extract the associated features between different frames within the same modality. A Cross Global Learnable Attention Network (CGLANet) is employed to extract and fuse image features across modalities and within the same modality. In addition, a sequence code is employed to label missing modalities, which allows arbitrary missing modalities during model training, validation, and testing. Experimental results demonstrate that our method achieves good denoising results on different public and real MR image datasets.
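The "sequence code" for handling missing modalities can be pictured as a binary presence vector that masks absent input channels so the network always sees a fixed-size input. A schematic numpy sketch of that idea (the modality names, shapes, and function names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

MODALITIES = ["T1", "T2", "FLAIR", "PD"]  # illustrative sequence order

def sequence_code(present):
    """Binary code marking which modalities are available (1) or missing (0)."""
    return np.array([1.0 if m in present else 0.0 for m in MODALITIES])

def mask_inputs(volumes, code):
    """Zero out the channels of missing modalities so the network input
    has a fixed number of channels regardless of what was acquired."""
    return volumes * code[:, None, None]

vols = np.ones((4, 8, 8))              # 4 modality channels of an 8x8 slice
code = sequence_code({"T1", "FLAIR"})  # only T1 and FLAIR were acquired
masked = mask_inputs(vols, code)
```

The same code vector can also be fed to the network as a conditioning label, which is what makes training with arbitrary missing modalities possible.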
Affiliation(s)
- Mingfu Jiang
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China; College of Information Engineering, Xinyang Agriculture and Forestry University, No. 1 North Ring Road, Pingqiao District, Xinyang, 464000, Henan, China
- Shuai Wang
- School of Cyberspace, Hangzhou Dianzi University, No. 65 Wen Yi Road, Hangzhou, 310018, Zhejiang, China
- Ka-Hou Chan
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China
- Yue Sun
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China
- Yi Xu
- Shanghai Key Lab of Digital Media Processing and Transmission, Shanghai Jiao Tong University MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, No. 800 Dongchuan Road, Minhang District, Shanghai, 200030, China
- Zhuoneng Zhang
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China
- Qinquan Gao
- College of Physics and Information Engineering, Fuzhou University, No. 2 Wulongjiang Avenue, Fuzhou, 350108, Fujian, China
- Zhifan Gao
- School of Biomedical Engineering, Sun Yat-sen University, No. 66 Gongchang Road, Guangming District, Shenzhen, 518107, Guangdong, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, No. 2 Wulongjiang Avenue, Fuzhou, 350108, Fujian, China
- Hing-Chiu Chang
- Department of Biomedical Engineering, Chinese University of Hong Kong, Sha Tin District, 999077, Hong Kong, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China.
3. Zhu J, Sun H, Chen W, Zhi S, Liu C, Zhao M, Zhang Y, Zhou T, Lam YL, Peng T, Qin J, Zhao L, Cai J, Ren G. Feature-targeted deep learning framework for pulmonary tumorous Cone-beam CT (CBCT) enhancement with multi-task customized perceptual loss and feature-guided CycleGAN. Comput Med Imaging Graph 2025; 121:102487. PMID: 39891955. DOI: 10.1016/j.compmedimag.2024.102487.
Abstract
Thoracic cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy (IGRT) to provide updated patient anatomy for lung cancer treatment. However, CBCT images often suffer from streaking artifacts and noise caused by undersampled projections and low-dose exposure, resulting in a loss of lung anatomy that contains crucial pulmonary tumor and functional information. While recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts, they perform poorly at preserving the anatomical details that contain crucial tumor information, owing to a lack of targeted guidance. To address this issue, we propose a novel feature-targeted deep learning framework that generates ultra-quality pulmonary imaging from CBCT of lung cancer patients via a multi-task customized feature-to-feature perceptual loss function and a feature-guided CycleGAN. The framework comprises two main components: a multi-task learning feature-selection network (MTFS-Net) for building a customized feature-to-feature perceptual loss function (CFP-loss), and a feature-guided CycleGAN network. Our experiments show that the proposed framework generates synthesized CT (sCT) images of the lung that are highly similar to CT images, with an average SSIM of 0.9747 and an average PSNR of 38.5995 globally, and an average Spearman's coefficient of 0.8929 within the tumor region on multi-institutional datasets. The sCT images also achieve visually pleasing results, with effective artifact suppression, noise reduction, and preservation of distinctive anatomical details. Functional imaging tests further demonstrate the pulmonary texture correction of the sCT images; the similarity between functional imaging generated from sCT and CT images reaches an average DSC of 0.9147, SCC of 0.9615, and R of 0.9661. Comparison experiments with a pixel-to-pixel loss also show that the proposed perceptual loss significantly enhances the performance of the generative models involved. Our results indicate that the proposed framework outperforms state-of-the-art models for pulmonary CBCT enhancement and holds great promise for generating high-quality pulmonary imaging from CBCT to support further analysis of lung cancer treatment.
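The DSC figures quoted above measure overlap between functional maps; the Dice similarity coefficient itself is a simple set-overlap statistic. A minimal sketch (the masks below are synthetic, and the paper's exact evaluation pipeline may differ):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

m1 = np.zeros((4, 4)); m1[:2] = 1    # top half of a 4x4 grid
m2 = np.zeros((4, 4)); m2[1:3] = 1   # middle two rows
```

Here the two masks share one of their two rows, so their Dice overlap is 0.5; identical masks score 1.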
Affiliation(s)
- Jiarui Zhu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR
- Hongfei Sun
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xian 710032, China
- Weixing Chen
- School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China
- Shaohua Zhi
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR
- Mayang Zhao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR
- Yuanpeng Zhang
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR
- Ta Zhou
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR
- Yu Lap Lam
- Department of Clinical Oncology, Queen Mary Hospital, 999077, Hong Kong SAR
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou 215299, China
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, 999077, Hong Kong SAR
- Lina Zhao
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xian 710032, China.
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR.
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, 999077, Hong Kong SAR; Research Institute for Intelligent Wearable Systems, The Hong Kong Polytechnic University, 999077, Hong Kong SAR; The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China.
4. Shi S, Wang C, Xiao S, Li H, Zhao X, Guo F, Shi L, Zhou X. Magnetic resonance image denoising for Rician noise using a novel hybrid transformer-CNN network (HTC-net) and self-supervised pretraining. Med Phys 2025; 52:1643-1660. PMID: 39641989. DOI: 10.1002/mp.17562.
Abstract
BACKGROUND: Magnetic resonance imaging (MRI) is a crucial technique for both scientific research and clinical diagnosis. However, noise generated during MR data acquisition degrades image quality, particularly in hyperpolarized (HP) gas MRI. While deep learning (DL) methods have shown promise for MR image denoising, most fail to adequately utilize the long-range information that is important for improving denoising performance. Furthermore, the limited number of paired noisy and noise-free MR images also constrains denoising performance.

PURPOSE: To develop an effective DL method that enhances denoising performance and reduces the requirement for paired MR images by utilizing long-range information and pretraining.

METHODS: In this work, a hybrid Transformer-convolutional neural network (CNN) architecture (HTC-net) and a self-supervised pretraining strategy are proposed, which effectively enhance denoising performance. In HTC-net, a CNN branch is used to extract local features. A Transformer-CNN branch with two parallel encoders is then designed to capture long-range information. Within this branch, a residual fusion block (RFB), comprising a residual feature processing module and a feature fusion module, aggregates features at different resolutions extracted by the two parallel encoders. HTC-net then exploits the comprehensive features from the CNN branch and the Transformer-CNN branch to accurately predict noise-free MR images through a reconstruction module. To further enhance performance on limited MRI datasets, a self-supervised pretraining strategy is proposed: self-supervised denoising equips HTC-net with denoising capabilities during pretraining, and the pretrained parameters are then transferred to facilitate subsequent supervised training.

RESULTS: Experimental results on the pulmonary HP 129Xe MRI dataset (1059 images) and the IXI dataset (5000 images) demonstrate that the proposed method outperforms state-of-the-art methods, with superior preservation of edges and structures. Quantitatively, on the pulmonary HP 129Xe MRI dataset, the proposed method outperforms state-of-the-art methods by 0.254-0.597 dB in PSNR and 0.007-0.013 in SSIM. On the IXI dataset, it outperforms them by 0.3-0.927 dB in PSNR and 0.003-0.016 in SSIM.

CONCLUSIONS: The proposed method can effectively enhance the quality of MR images, which helps improve diagnostic accuracy in clinical practice.
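The PSNR margins quoted above are differences of a standard metric; PSNR itself is a one-line computation over the mean squared error. A minimal sketch (the images and data range here are synthetic, not from the paper's evaluation):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1        # uniform error of 0.1 -> MSE = 0.01
value = psnr(clean, noisy)
```

With a data range of 1.0 and MSE of 0.01, the result is 20 dB, so a reported gain of ~0.3-0.9 dB corresponds to a small but consistent reduction in squared error.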
Affiliation(s)
- Shengjie Shi
- Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Cheng Wang
- Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- School of Physics and Optoelectronic Engineering, Yangtze University, Jingzhou, China
- Sa Xiao
- Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Haidong Li
- Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Xiuchao Zhao
- Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Fumin Guo
- Wuhan National Laboratory for Optoelectronics, Department of Biomedical Engineering, Huazhong University of Science and Technology, Wuhan, China
- Lei Shi
- Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Xin Zhou
- Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China
5. Soltanpour S, Chang A, Madularu D, Kulkarni P, Ferris C, Joslin C. 3D Wasserstein Generative Adversarial Network with Dense U-Net-Based Discriminator for Preclinical fMRI Denoising. J Imaging Inform Med 2025. PMID: 39939477. DOI: 10.1007/s10278-025-01434-5.
Abstract
Functional magnetic resonance imaging (fMRI) is extensively used in clinical and preclinical settings to study brain function; however, fMRI data are inherently noisy due to physiological processes, hardware, and external noise. Denoising is one of the main preprocessing steps in any fMRI analysis pipeline. It is more challenging for preclinical data than for clinical data because of variations in brain geometry, image resolution, and low signal-to-noise ratios. In this paper, we propose a structure-preserving algorithm based on a 3D Wasserstein generative adversarial network with a 3D dense U-Net-based discriminator, called 3D U-WGAN. We apply a 4D data configuration to effectively denoise temporal and spatial information in preclinical fMRI data. GAN-based denoising methods often use a discriminator to identify significant differences between denoised and noise-free images, focusing on global or local features. To refine the denoising model, our method employs a 3D dense U-Net discriminator to learn both global and local distinctions. To tackle potential oversmoothing, we introduce an adversarial loss and enhance perceptual similarity by measuring feature-space distances. Experiments show that 3D U-WGAN significantly improves image quality in resting-state and task-based preclinical fMRI data, enhancing the signal-to-noise ratio without the excessive structural changes introduced by existing methods. The proposed method outperforms state-of-the-art methods when applied to simulated and real data in an fMRI analysis pipeline.
Affiliation(s)
- Sima Soltanpour
- School of Information Technology, Carleton University, 1125 Colonel By Dr, Ottawa, Ontario, K1S 5B6, Canada.
- Arnold Chang
- Center for Translational NeuroImaging (CTNI), Northeastern University, 360 Huntington Ave, Boston, MA, 02115, USA
- Dan Madularu
- Department of Psychology, Carleton University, 1125 Colonel By Dr, Ottawa, Ontario, K1S 5B6, Canada
- Tessellis Ltd., 350 Legget Drive, Ottawa, Ontario, K2K 0G7, Canada
- Praveen Kulkarni
- Center for Translational NeuroImaging (CTNI), Northeastern University, 360 Huntington Ave, Boston, MA, 02115, USA
- Craig Ferris
- Center for Translational NeuroImaging (CTNI), Northeastern University, 360 Huntington Ave, Boston, MA, 02115, USA
- Chris Joslin
- School of Information Technology, Carleton University, 1125 Colonel By Dr, Ottawa, Ontario, K1S 5B6, Canada
6. Kharaji M, Canton G, Guo Y, Mosi MH, Zhou Z, Balu N, Mossa-Basha M. DANTE-CAIPI Accelerated Contrast-Enhanced 3D T1: Deep Learning-Based Image Quality Improvement for Vessel Wall MRI. AJNR Am J Neuroradiol 2025; 46:49-56. PMID: 39038956. PMCID: PMC11735441. DOI: 10.3174/ajnr.a8424.
Abstract
BACKGROUND AND PURPOSE: Accelerated, blood-suppressed postcontrast 3D intracranial vessel wall MRI (IVW) enables high-resolution rapid scanning but is associated with low SNR. We hypothesized that a deep learning (DL) denoising algorithm applied to accelerated, blood-suppressed postcontrast IVW can yield high-quality images with reduced artifacts and higher SNR in shorter scan times.

MATERIALS AND METHODS: Sixty-four consecutive patients underwent IVW, including conventional postcontrast 3D T1-sampling perfection with application-optimized contrasts by using different flip angle evolution (SPACE) and delay alternating with nutation for tailored excitation (DANTE) blood-suppressed, CAIPIRINHA-accelerated (CAIPI) 3D T1-weighted TSE postcontrast sequences (DANTE-CAIPI-SPACE). DANTE-CAIPI-SPACE acquisitions were then denoised using an unrolled deep convolutional network (DANTE-CAIPI-SPACE+DL). SPACE, DANTE-CAIPI-SPACE, and DANTE-CAIPI-SPACE+DL images were compared for overall image quality, SNR, severity of artifacts, arterial and venous suppression, and lesion assessment using 4-point or 5-point Likert scales. Quantitative evaluation of SNR and contrast-to-noise ratio (CNR) was performed.

RESULTS: DANTE-CAIPI-SPACE+DL showed significantly reduced arterial (1 [1-1.75] versus 3 [3-4], P < .001) and venous flow artifacts (1 [1-2] versus 3 [3-4], P < .001) compared with SPACE, with no significant difference in image quality, SNR, artifact ratings, or lesion assessment. For SNR ratings, DANTE-CAIPI-SPACE+DL was significantly better than DANTE-CAIPI-SPACE (2 [1-2] versus 3 [2-3], P < .001); no statistically significant differences were found between the two for image quality, arterial and venous flow artifacts, or lesion assessment. Quantitative vessel-wall SNR and CNR median values were significantly higher for DANTE-CAIPI-SPACE+DL (SNR: 9.71, CNR: 4.24) than for DANTE-CAIPI-SPACE (SNR: 5.50, CNR: 2.64) (P < .001 for each), with no significant difference from SPACE (SNR: 10.82, CNR: 5.21).

CONCLUSIONS: DL-denoised postcontrast T1-weighted DANTE-CAIPI-SPACE accelerated, blood-suppressed IVW showed improved flow suppression with a shorter scan time and equivalent qualitative and quantitative SNR measures relative to conventional postcontrast IVW, and improved SNR metrics relative to postcontrast DANTE-CAIPI-SPACE IVW. Implementing DL-denoised DANTE-CAIPI-SPACE IVW has the potential to shorten protocol time while maintaining or improving image quality.
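The quantitative SNR and CNR figures above follow the usual region-of-interest definitions: mean signal over the noise standard deviation, and the mean tissue difference over the noise standard deviation. A minimal sketch (the ROI values are synthetic, and the study's exact measurement protocol may differ):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """SNR: mean signal intensity over the noise standard deviation."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """CNR: absolute mean difference between two ROIs over the noise SD."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

wall = np.full(4, 9.0)                    # vessel-wall ROI intensities
lumen = np.full(4, 5.0)                   # lumen ROI intensities
noise = np.array([1.0, -1.0, 1.0, -1.0])  # background ROI (std = 1)
```

With these toy ROIs the wall SNR is 9 and the wall-lumen CNR is 4, on the same scale as the median values reported in the abstract.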
Affiliation(s)
- Mona Kharaji
- From the Department of Radiology (M.K., G.C., M.H.M., N.B., M.M.-B.), University of Washington School of Medicine, Seattle, Washington
- Gador Canton
- From the Department of Radiology (M.K., G.C., M.H.M., N.B., M.M.-B.), University of Washington School of Medicine, Seattle, Washington
- Yin Guo
- Department of Bioengineering (Y.G.), University of Washington, Seattle, Washington
- Mohamad Hosaam Mosi
- From the Department of Radiology (M.K., G.C., M.H.M., N.B., M.M.-B.), University of Washington School of Medicine, Seattle, Washington
- Zechen Zhou
- Subtle Medical Inc (Z.Z.), Menlo Park, California
- Niranjan Balu
- From the Department of Radiology (M.K., G.C., M.H.M., N.B., M.M.-B.), University of Washington School of Medicine, Seattle, Washington
- Mahmud Mossa-Basha
- From the Department of Radiology (M.K., G.C., M.H.M., N.B., M.M.-B.), University of Washington School of Medicine, Seattle, Washington
7. Chen X, Xia W, Yang Z, Chen H, Liu Y, Zhou J, Wang Z, Chen Y, Wen B, Zhang Y. SOUL-Net: A Sparse and Low-Rank Unrolling Network for Spectral CT Image Reconstruction. IEEE Trans Neural Netw Learn Syst 2024; 35:18620-18634. PMID: 37792650. DOI: 10.1109/tnnls.2023.3319408.
Abstract
Spectral computed tomography (CT) is an emerging technology that generates a multienergy attenuation map of the interior of an object, extending the traditional image volume into a 4-D form. Compared with traditional CT based on energy-integrating detectors, spectral CT can make full use of spectral information, yielding high resolution and accurate material quantification. Numerous model-based iterative reconstruction methods have been proposed for spectral CT, but they usually suffer from laborious parameter selection and expensive computational costs. In addition, because the images of different energy bins are similar, spectral CT implies a strong low-rank prior, which has been widely adopted in current iterative reconstruction models. Singular value thresholding (SVT) is an effective algorithm for solving low-rank constrained models, but it requires manual selection of thresholds, which may lead to suboptimal results. To address these problems, we propose a sparse and low-rank unrolling network (SOUL-Net) for spectral CT image reconstruction that learns the parameters and thresholds in a data-driven manner. Furthermore, a Taylor expansion-based neural network backpropagation method is introduced to improve numerical stability. Qualitative and quantitative results demonstrate that the proposed method outperforms several representative state-of-the-art algorithms in detail preservation and artifact reduction.
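Singular value thresholding, whose threshold SOUL-Net learns instead of hand-tuning, is the proximal operator of the nuclear norm: soft-threshold the singular values and reconstruct. A minimal numpy sketch (the stacked "energy bin" data below are synthetic, and this is the classical SVT step, not the unrolled network itself):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink each singular value by tau
    (clipping at zero), the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Rank-1 stack: "energy bins" that are scaled copies of one image row,
# mimicking the inter-bin similarity that motivates the low-rank prior.
img = np.arange(6, dtype=float).reshape(1, 6)
X = np.vstack([1.0 * img, 2.0 * img, 3.0 * img])  # 3 bins x 6 pixels
Y = svt(X, tau=0.5)
```

Because X is rank 1, only its single nonzero singular value is shrunk; in an unrolling network this fixed tau is replaced by a learned, layer-wise parameter.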
8. Singh R, Singh N, Kaur L. Deep learning methods for 3D magnetic resonance image denoising, bias field and motion artifact correction: a comprehensive review. Phys Med Biol 2024; 69:23TR01. PMID: 39569887. DOI: 10.1088/1361-6560/ad94c7.
Abstract
Magnetic resonance imaging (MRI) provides detailed structural information about internal organs and soft-tissue regions for disease detection, localization, and progress monitoring in clinical diagnosis. MRI scanner manufacturers incorporate various post-acquisition image-processing techniques into the scanner's software, providing a final image of adequate quality with the features needed for accurate clinical reporting, predictive interpretation, and treatment planning. Post-acquisition tasks for MRI quality enhancement include noise removal, motion artifact reduction, magnetic bias field correction, and eddy-current effect removal. Recently, deep learning (DL) methods have shown great success in many research fields, including image and video applications, and DL-based data-driven feature learning has great potential for MR image denoising and the correction of quality-degrading artifacts. Recent studies have demonstrated significant improvements in image-analysis tasks using DL-based convolutional neural networks, and this performance has motivated researchers to adapt DL methods to medical image analysis and quality enhancement. This paper presents a comprehensive review of DL-based state-of-the-art MRI quality enhancement and artifact removal methods that regenerate high-quality images while preserving essential anatomical and physiological features. Existing research gaps are identified, and promising directions for future work in medical imaging are highlighted.
Affiliation(s)
- Ram Singh
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
- Navdeep Singh
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
- Lakhwinder Kaur
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
9. Kang B, Lee W, Seo H, Heo HY, Park H. Self-supervised learning for denoising of multidimensional MRI data. Magn Reson Med 2024; 92:1980-1994. PMID: 38934408. PMCID: PMC11341249. DOI: 10.1002/mrm.30197.
Abstract
PURPOSE To develop a fast denoising framework for high-dimensional MRI data based on a self-supervised learning scheme that does not require ground-truth clean images. THEORY AND METHODS Quantitative MRI faces limitations in SNR because the variation of signal amplitude across a large set of images is the key mechanism for quantification. In addition, the complex non-linear signal models make the fitting process vulnerable to noise. To address these issues, we propose a fast deep-learning framework for denoising that efficiently exploits the redundancy in multidimensional MRI data. A self-supervised model was designed to use only noisy images for training, bypassing the challenge of clean-data paucity in clinical practice. For validation, we used two different datasets, a simulated magnetization transfer contrast MR fingerprinting (MTC-MRF) dataset and an in vivo DWI dataset, to show generalizability. RESULTS The proposed method drastically improved denoising performance in the presence of mild-to-severe noise, regardless of noise distribution, compared to the previous methods BM3D, tMPPCA, and Patch2self. The improvements were even more pronounced in the subsequent quantification results derived from the denoised images. CONCLUSION The proposed MD-S2S (Multidimensional-Self2Self) denoising technique could be further applied to various multidimensional MRI data and improve the quantification accuracy of tissue parameter maps.
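The self-supervised scheme above trains on noisy images alone. As a rough illustration of the blind-spot idea that such methods build on (this is not the authors' MD-S2S code; the box-filter "denoiser", masking fraction, and mean-fill are stand-ins chosen for this sketch):

```python
import numpy as np

def masked_self_supervised_loss(noisy, denoise_fn, mask_frac=0.05, seed=0):
    """Blind-spot training step in miniature: hide a random subset of
    pixels, let the denoiser predict them from their surroundings, and
    score the prediction only on the hidden pixels, so the model cannot
    learn the identity mapping."""
    rng = np.random.default_rng(seed)
    mask = rng.random(noisy.shape) < mask_frac  # pixels to hide
    corrupted = noisy.copy()
    # Crude stand-in for neighborhood interpolation of hidden pixels.
    corrupted[mask] = noisy.mean()
    pred = denoise_fn(corrupted)
    # Loss is evaluated only where the true value was never shown.
    return float(np.mean((pred[mask] - noisy[mask]) ** 2))

def box_filter(img, k=3):
    """Tiny stand-in denoiser: k x k box filter via edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```

In a real framework the box filter would be a trainable network and the loss would drive gradient updates; the masking-and-score structure is the part this sketch illustrates.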
Affiliation(s)
- Beomgu Kang
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Wonil Lee
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, USA
- Hyunseok Seo
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Hye-Young Heo
- Division of MR Research, Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea
10
Li M, Yun J, Liu D, Jiang D, Xiong H, Jiang D, Hu S, Liu R, Li G. Global and local feature extraction based on convolutional neural network residual learning for MR image denoising. Phys Med Biol 2024; 69:205007. [PMID: 39312945] [DOI: 10.1088/1361-6560/ad7e78]
Abstract
Objective. Given the different noise distribution information of global and local magnetic resonance (MR) images, this study aims to extend current work on convolutional neural networks that preserve global structure and local details in MR image denoising tasks. Approach. This study proposes a parallel and serial network for denoising 3D MR images, called 3D-PSNet. We use a residual depthwise separable convolution block to learn local information in the feature map, reducing the network parameters and thus improving training speed and parameter efficiency. In addition, we consider feature extraction from the global image and utilize residual dilated convolution to process the feature map, expanding the receptive field of the network and avoiding the loss of global information. We then combine the two branches to form a parallel network. Moreover, we integrate reinforced residual convolution blocks with dense connections to form serial network branches, which remove redundant information and refine features to obtain more accurate noise information. Main results. The peak signal-to-noise ratio, structural similarity index measure, and root mean square error metrics of 3D-PSNet reach 47.79%, 99.81%, and 0.40%, respectively, achieving competitive denoising performance on three public datasets. Ablation experiments demonstrated the effectiveness of all the designed modules across all evaluated metrics. Significance. The proposed 3D-PSNet takes advantage of multi-scale receptive fields, local feature extraction, and residual dense connections to more effectively restore the global structure and local fine features of MR images, and is expected to help doctors diagnose patients' conditions quickly and accurately.
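The PSNR, SSIM, and RMSE figures quoted in this entry are standard full-reference image-quality metrics. A minimal sketch of how RMSE and PSNR relate (the `data_range` parameter is an assumption of this sketch, not a detail taken from the paper):

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference and a test image."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 20 * log10(data_range / RMSE).
    Higher is better; identical images give infinity."""
    err = rmse(ref, test)
    if err == 0.0:
        return float("inf")
    return float(20.0 * np.log10(data_range / err))
```

For example, a uniform error of 0.1 on a unit-range image gives RMSE 0.1 and PSNR 20 dB, which makes the inverse-logarithmic relationship between the two metrics concrete.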
Affiliation(s)
- Meng Li
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, 430081, People's Republic of China
- Juntong Yun
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Precision Manufacturing Research Institute, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Dingxi Liu
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Daixiang Jiang
- School of Medicine, Wuhan University of Science and Technology, No.1, Huangjia Lake University Town, Wuhan 430065, People's Republic of China
- Institute of Medical Innovation and Transformation, Puren Hospital affiliated to Wuhan University of Science and Technology, 1 Benxi Road, Wuhan 430081, People's Republic of China
- Department of Orthopaedics, Puren Hospital affiliated to Wuhan University of Science and Technology, 1 Benxi Road, Wuhan 430081, People's Republic of China
- Hanlin Xiong
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Du Jiang
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Shunbo Hu
- School of Information Science and Engineering, Linyi University, Linyi, Shandong 276000, People's Republic of China
- Rong Liu
- School of Medicine, Wuhan University of Science and Technology, No.1, Huangjia Lake University Town, Wuhan 430065, People's Republic of China
- Institute of Medical Innovation and Transformation, Puren Hospital affiliated to Wuhan University of Science and Technology, 1 Benxi Road, Wuhan 430081, People's Republic of China
- Department of Orthopaedics, Puren Hospital affiliated to Wuhan University of Science and Technology, 1 Benxi Road, Wuhan 430081, People's Republic of China
- Gongfa Li
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
11
Zeng X, Guo Y, Li L, Liu Y. Continual medical image denoising based on triplet neural networks collaboration. Comput Biol Med 2024; 179:108914. [PMID: 39053331] [DOI: 10.1016/j.compbiomed.2024.108914]
Abstract
BACKGROUND When multiple tasks are learned consecutively, old model parameters may be overwritten by new data, so that a new task is learned while old tasks are forgotten, leading to catastrophic forgetting. Moreover, continual learning has no mature solution for image denoising tasks. METHODS To address catastrophic forgetting when learning multiple denoising tasks, we propose a Triplet Neural-networks Collaboration-continuity DeNoising (TNCDN) model in which three networks update each other cooperatively. Knowledge from two denoising networks that maintain continual-learning capability is transferred to the main denoising network, which acquires new knowledge while consolidating old knowledge. A co-training mechanism is designed: the main denoising network updates the other two networks with different thresholds to maintain memory-reinforcement and knowledge-extension capabilities. RESULTS The experimental results show that our method effectively alleviates catastrophic forgetting. On the GS, CT, and ADNI datasets, compared with ANCL, the TNCDN(PromptIR) method reduced the average degree of forgetting by 2.38 (39%) in PSNR and 1.63 (55%) in RMSE. CONCLUSION This study addresses the catastrophic forgetting caused by learning multiple denoising tasks. Although the experimental results are promising, extending the basic denoising model to more datasets and tasks would broaden its application. Nevertheless, this study is a starting point that can provide a reference and support for further development of continual-learning image denoising.
Affiliation(s)
- Xianhua Zeng
- School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Yongli Guo
- School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Laquan Li
- School of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Yuhang Liu
- School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
12
Kamran SA, Hossain KF, Ong J, Waisberg E, Zaman N, Baker SA, Lee AG, Tavakkoli A. FA4SANS-GAN: A Novel Machine Learning Generative Adversarial Network to Further Understand Ophthalmic Changes in Spaceflight Associated Neuro-Ocular Syndrome (SANS). Ophthalmology Science 2024; 4:100493. [PMID: 38682031] [PMCID: PMC11046204] [DOI: 10.1016/j.xops.2024.100493]
Abstract
Purpose To provide an automated system for synthesizing fluorescein angiography (FA) images from color fundus photographs, averting the risks associated with fluorescein dye, and to extend its future application to detection of spaceflight-associated neuro-ocular syndrome (SANS) in spaceflight, where resources are limited. Design Development and validation of a novel conditional generative adversarial network (GAN) trained on a limited amount of paired FA and color fundus images from diabetic retinopathy and control cases. Participants Paired color fundus and FA images for unique patients were collected from a publicly available study. Methods FA4SANS-GAN was trained to generate FA images from color fundus photographs using two multiscale generators coupled with two patch-GAN discriminators. Eight hundred fifty color fundus and FA images, augmented from 17 unique patients, were utilized for training. The model was evaluated on 56 fluorescein images collected from 14 unique patients and compared with three other GAN architectures trained on the same dataset. Furthermore, we tested the models' robustness against acquisition noise and their retention of structural information when artificially created biological markers were introduced. Main Outcome Measures For GAN synthesis, the Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) metrics; for statistical significance, two one-sided tests (TOST) based on Welch's t test. Results On test FA images, the mean FID for FA4SANS-GAN was 39.8 (standard deviation, 9.9), better than the GANgio model's mean of 43.2 (standard deviation, 13.7), Pix2PixHD's mean of 57.3 (standard deviation, 11.5), and Pix2Pix's mean of 67.5 (standard deviation, 11.7). Similarly for KID, FA4SANS-GAN achieved a mean of 0.00278 (standard deviation, 0.00167), better than the other three models' means of 0.00303 (standard deviation, 0.00216), 0.00609 (standard deviation, 0.00238), and 0.00784 (standard deviation, 0.00218). In the TOST analysis, FA4SANS-GAN was statistically significantly better than GANgio (P = 0.006), Pix2PixHD (P < 0.00001), and Pix2Pix (P < 0.00001). Conclusions FA4SANS-GAN showed statistically significant improvement on two GAN synthesis metrics. Moreover, it is robust against acquisition noise and retains clear biological markers compared with the other three GAN architectures. Deployment of this model could be crucial on the International Space Station for detecting SANS. Financial Disclosures The authors have no proprietary or commercial interest in any materials discussed in this article.
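The Fréchet Inception Distance used above measures the distance between Gaussian fits of real and synthetic feature distributions, FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C1 C2)^(1/2)). A sketch of the one-dimensional special case (the real metric is computed on multivariate Inception-network features; this scalar reduction is for illustration only):

```python
import numpy as np

def frechet_distance_1d(a, b):
    """Frechet distance between 1-D Gaussian fits of two samples:
    (mu1 - mu2)^2 + v1 + v2 - 2*sqrt(v1*v2), the scalar special case
    of the FID formula. Zero iff the fitted Gaussians coincide."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(), b.var()
    return float((m1 - m2) ** 2 + v1 + v2 - 2.0 * np.sqrt(v1 * v2))
```

With equal variances the distance reduces to the squared mean gap, which is why lower FID indicates synthetic feature statistics closer to the real ones.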
Affiliation(s)
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada
- Khondker Fariha Hossain
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada
- Joshua Ong
- Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, Michigan
- Ethan Waisberg
- Department of Ophthalmology, University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada
- Salah A. Baker
- Department of Physiology and Cell Biology, University of Nevada School of Medicine, Reno, Nevada
- Andrew G. Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, Texas
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, Texas
- Houston Methodist Research Institute, Houston Methodist Hospital, Houston, Texas
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, New York
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, Texas
- Department of Ophthalmology, University of Texas MD Anderson Cancer Center, Houston, Texas
- Department of Ophthalmology, Texas A&M College of Medicine, Texas
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, Iowa
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada
13
Li S, Wang F, Gao S. New non-local mean methods for MRI denoising based on global self-similarity between values. Comput Biol Med 2024; 174:108450. [PMID: 38608325] [DOI: 10.1016/j.compbiomed.2024.108450]
Abstract
Magnetic resonance imaging (MRI) is a non-invasive medical imaging technique that provides high-resolution 3D images and valuable insights into human tissue conditions. Even at present, the refinement of denoising methods for MRI remains a crucial concern for improving the quality of the images. This study aims to improve the prefiltered rotationally invariant non-local principal component analysis (PRI-NL-PCA) algorithm. We relaxed the original restrictions using particle swarm optimization to determine optimal parameters for the PCA part of the original algorithm. In addition, we adjusted the prefiltered rotationally invariant non-local mean (PRI-NLM) part by traversing the signal intensities of voxels instead of their spatial positions to reduce duplicate calculations and expand the search volume to the whole image when estimating voxels' signal intensities. The new method demonstrated superior denoising performance compared to the original approach. Moreover, in most cases, the new algorithm ran faster. Furthermore, our proposed method can also be applied to process Gaussian noise in natural images and has the potential to enhance other NLM-based denoising algorithms.
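For context, the baseline non-local means idea that PRI-NL-PCA and PRI-NLM build on can be sketched as follows. This is a brute-force 2D version with illustrative patch, search-window, and smoothing parameters; the paper's prefiltered rotationally invariant variants and the proposed intensity-traversal speed-up add substantially more machinery:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Minimal non-local means: each pixel becomes a weighted average of
    pixels in a search window, weighting each candidate by patch
    similarity, w = exp(-mean((ref_patch - cand_patch)^2) / h^2).
    Complexity O(N * search^2 * patch^2); illustration only."""
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + pr + sr, x + pr + sr
            ref = padded[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]
            weights, values = [], []
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = padded[ny - pr:ny + pr + 1, nx - pr:nx + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ny, nx])
            w = np.array(weights)
            out[y, x] = np.sum(w * np.array(values)) / np.sum(w)
    return out
```

The quadratic cost of this loop over spatial positions is precisely what motivates the paper's traversal over signal intensities instead.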
Affiliation(s)
- Shiao Li
- Institute of Medical Technology, Peking University Health Science Center, Haidian District College Road No. 38, 100191, Beijing, China.
- Fei Wang
- Key Laboratory of Carcinogenesis and Translational Research, Department of Radiation Oncology, Beijing Cancer Hospital, Haidian District Fucheng Road No. 52, 100142, Beijing, China.
- Song Gao
- Institute of Medical Technology, Peking University Health Science Center, Haidian District College Road No. 38, 100191, Beijing, China.
14
Nazir N, Sarwar A, Saini BS. Recent developments in denoising medical images using deep learning: An overview of models, techniques, and challenges. Micron 2024; 180:103615. [PMID: 38471391] [DOI: 10.1016/j.micron.2024.103615]
Abstract
Medical imaging plays a critical role in diagnosing and treating various medical conditions. However, interpreting medical images can be challenging even for expert clinicians, as the images are often degraded by noise and artifacts that hinder the accurate identification and analysis of disease, potentially leading to severe consequences such as misdiagnosis or mortality. Various types of noise, including Gaussian, Rician, and salt-and-pepper noise, can corrupt the area of interest, limiting the precision and accuracy of algorithms. Denoising algorithms have shown potential to improve the quality of medical images by removing noise and other artifacts that obscure essential information. Deep learning has emerged as a powerful tool for image analysis and has demonstrated promising results in denoising different medical images such as MRI, CT, and PET scans. This review paper provides a comprehensive overview of state-of-the-art deep learning algorithms used for denoising medical images. A total of 120 relevant papers were reviewed; after screening with specific inclusion and exclusion criteria, 104 papers were selected for analysis. This study aims to provide a thorough understanding for researchers in the field of intelligent denoising by presenting an extensive survey of current techniques and highlighting significant challenges that remain to be addressed. The findings of this review are expected to contribute to the development of intelligent models that enable timely and accurate diagnosis of medical disorders. It was found that 40% of the researchers used models based on deep convolutional neural networks to denoise images, followed by encoder-decoder models (18%) and other artificial-intelligence-based techniques such as DIP (15%); generative adversarial networks were used by 12%, transformer-based approaches by 13%, and multilayer perceptrons by 2%. Moreover, Gaussian noise was present in 35% of the images, followed by speckle noise (16%), Poisson noise (14%), artifacts (10%), Rician noise (7%), salt-and-pepper noise (6%), impulse noise (3%), and other types of noise (9%). While progress in developing novel models for denoising medical images is evident, significant work remains in creating standardized denoising models that perform well across a wide spectrum of medical images. Overall, this review highlights the importance of denoising medical images and provides a comprehensive understanding of the current state-of-the-art deep learning algorithms in this field.
15
Ahmed HS. Uncover This Tech Term: Generative Adversarial Networks. Korean J Radiol 2024; 25:493-498. [PMID: 38627875] [PMCID: PMC11058428] [DOI: 10.3348/kjr.2023.1306]
Affiliation(s)
- H Shafeeq Ahmed
- Bangalore Medical College and Research Institute, Bangalore, India.
16
Huynh N, Yan D, Ma Y, Wu S, Long C, Sami MT, Almudaifer A, Jiang Z, Chen H, Dretsch MN, Denney TS, Deshpande R, Deshpande G. The Use of Generative Adversarial Network and Graph Convolution Network for Neuroimaging-Based Diagnostic Classification. Brain Sci 2024; 14:456. [PMID: 38790434] [PMCID: PMC11119064] [DOI: 10.3390/brainsci14050456]
Abstract
Functional connectivity (FC) obtained from resting-state functional magnetic resonance imaging has been integrated with machine learning algorithms to deliver consistent and reliable brain disease classification outcomes. However, in classical learning procedures, custom-built specialized feature selection techniques are typically used to filter out uninformative features from FC patterns so that models generalize efficiently on the datasets. The ability of convolutional neural networks (CNN) and other deep learning models to extract informative features from data with grid structure (such as images) has led to the surge in popularity of these techniques. However, the designs of many existing CNN models still fail to exploit the relationships between entities of graph-structured data (such as networks). Therefore, the graph convolution network (GCN) has been suggested as a means of uncovering the intricate structure of brain network data, with the potential to substantially improve classification accuracy. Furthermore, overfitting in classifiers can be largely attributed to the limited number of available training samples. Recently, the generative adversarial network (GAN) has been widely used in the medical field for its ability to generate synthetic images that address the problems of data scarcity and patient privacy. In our previous work, GCN and GAN models were designed to investigate FC patterns for diagnostic tasks, and their effectiveness was tested on the ABIDE-I dataset. In this paper, the models are further applied to FC data derived from additional public datasets (ADHD, ABIDE-II, and ADNI) and our in-house dataset (PTSD) to assess their generalization across data types. A number of experiments demonstrate the powerful ability of GAN to mimic FC data and achieve high performance in disease prediction. When employing GAN for data augmentation, the diagnostic accuracy across the ADHD-200, ABIDE-II, and ADNI datasets surpasses that of other machine learning models, including results achieved with BrainNetCNN. Specifically, with GAN the accuracy increased from 67.74% to 73.96% in ADHD and from 70.36% to 77.40% in ABIDE-II, and reached 52.84% and 88.56% in ADNI for multiclass and binary classification, respectively. GCN also obtains decent results, with the best accuracy on the ADHD dataset (71.38% for multinomial and 75% for binary classification) and the second-best accuracy on the ABIDE-II dataset (72.28% and 75.16%, respectively). Both GAN and GCN achieved their highest accuracy on the PTSD dataset, reaching 97.76%. However, some limitations remain to be addressed, and both methods offer many opportunities for disease prediction and diagnosis.
Affiliation(s)
- Nguyen Huynh
- Auburn University Neuroimaging Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL 36849, USA; (N.H.); (T.S.D.)
- Da Yan
- Department of Computer Sciences, Indiana University Bloomington, Bloomington, IN 47405, USA
- Yueen Ma
- Department of Computer Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
- Shengbin Wu
- Department of Mechanical Engineering, University of California, Berkeley, CA 94720, USA
- Cheng Long
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Mirza Tanzim Sami
- Department of Computer Sciences, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Abdullateef Almudaifer
- Department of Computer Sciences, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- College of Computer Science and Engineering, Taibah University, Yanbu 41477, Saudi Arabia
- Zhe Jiang
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL 32611, USA
- Haiquan Chen
- Department of Computer Sciences, California State University, Sacramento, CA 95819, USA
- Michael N. Dretsch
- Walter Reed Army Institute of Research-West, Joint Base Lewis-McChord, WA 98433, USA
- Thomas S. Denney
- Auburn University Neuroimaging Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL 36849, USA
- Department of Psychological Sciences, Auburn University, Auburn, AL 36849, USA
- Alabama Advanced Imaging Consortium, Birmingham, AL 36849, USA
- Center for Neuroscience, Auburn University, Auburn, AL 36849, USA
- Rangaprakash Deshpande
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Gopikrishna Deshpande
- Auburn University Neuroimaging Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL 36849, USA
- Department of Psychological Sciences, Auburn University, Auburn, AL 36849, USA
- Alabama Advanced Imaging Consortium, Birmingham, AL 36849, USA
- Center for Neuroscience, Auburn University, Auburn, AL 36849, USA
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore 560030, India
- Department of Heritage Science and Technology, Indian Institute of Technology, Hyderabad 502285, India
17
Bottani S, Thibeau-Sutre E, Maire A, Ströer S, Dormont D, Colliot O, Burgos N. Contrast-enhanced to non-contrast-enhanced image translation to exploit a clinical data warehouse of T1-weighted brain MRI. BMC Med Imaging 2024; 24:67. [PMID: 38504179] [PMCID: PMC10953143] [DOI: 10.1186/s12880-024-01242-3]
Abstract
BACKGROUND Clinical data warehouses provide access to massive amounts of medical images, but these images are often heterogeneous. They can for instance include images acquired both with or without the injection of a gadolinium-based contrast agent. Harmonizing such data sets is thus fundamental to guarantee unbiased results, for example when performing differential diagnosis. Furthermore, classical neuroimaging software tools for feature extraction are typically applied only to images without gadolinium. The objective of this work is to evaluate how image translation can be useful to exploit a highly heterogeneous data set containing both contrast-enhanced and non-contrast-enhanced images from a clinical data warehouse. METHODS We propose and compare different 3D U-Net and conditional GAN models to convert contrast-enhanced T1-weighted (T1ce) into non-contrast-enhanced (T1nce) brain MRI. These models were trained using 230 image pairs and tested on 77 image pairs from the clinical data warehouse of the Greater Paris area. RESULTS Validation using standard image similarity measures demonstrated that the similarity between real and synthetic T1nce images was higher than between real T1nce and T1ce images for all the models compared. The best performing models were further validated on a segmentation task. We showed that tissue volumes extracted from synthetic T1nce images were closer to those of real T1nce images than volumes extracted from T1ce images. CONCLUSION We showed that deep learning models initially developed with research quality data could synthesize T1nce from T1ce images of clinical quality and that reliable features could be extracted from the synthetic images, thus demonstrating the ability of such methods to help exploit a data set coming from a clinical data warehouse.
Affiliation(s)
- Simona Bottani
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Elina Thibeau-Sutre
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Aurélien Maire
- Innovation & Données - Département des Services Numériques, AP-HP, Paris, 75013, France
- Sebastian Ströer
- Hôpital Pitié Salpêtrière, Department of Neuroradiology, AP-HP, Paris, 75012, France
- Didier Dormont
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, DMU DIAMENT, Paris, 75013, France
- Olivier Colliot
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Ninon Burgos
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France.
18
Vega F, Addeh A, Ganesh A, Smith EE, MacDonald ME. Image Translation for Estimating Two-Dimensional Axial Amyloid-Beta PET From Structural MRI. J Magn Reson Imaging 2024; 59:1021-1031. [PMID: 37921361] [DOI: 10.1002/jmri.29070]
Abstract
BACKGROUND Amyloid-beta and brain atrophy are hallmarks for Alzheimer's Disease that can be targeted with positron emission tomography (PET) and MRI, respectively. MRI is cheaper, less-invasive, and more available than PET. There is a known relationship between amyloid-beta and brain atrophy, meaning PET images could be inferred from MRI. PURPOSE To build an image translation model using a Conditional Generative Adversarial Network able to synthesize Amyloid-beta PET images from structural MRI. STUDY TYPE Retrospective. POPULATION Eight hundred eighty-two adults (348 males/534 females) with different stages of cognitive decline (control, mild cognitive impairment, moderate cognitive impairment, and severe cognitive impairment). Five hundred fifty-two subjects for model training and 331 for testing (80%:20%). FIELD STRENGTH/SEQUENCE 3 T, T1-weighted structural (T1w). ASSESSMENT The testing cohort was used to evaluate the performance of the model using the Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR), comparing the likeness of the overall synthetic PET images created from structural MRI with the overall true PET images. SSIM was computed in the overall image to include the luminance, contrast, and structural similarity components. Experienced observers reviewed the images for quality, performance and tried to determine if they could tell the difference between real and synthetic images. STATISTICAL TESTS Pixel wise Pearson correlation was significant, and had an R2 greater than 0.96 in example images. From blinded readings, a Pearson Chi-squared test showed that there was no significant difference between the real and synthetic images by the observers (P = 0.68). RESULTS A high degree of likeness across the evaluation set, which had a mean SSIM = 0.905 and PSNR = 2.685. The two observers were not able to determine the difference between the real and synthetic images, with accuracies of 54% and 46%, respectively. 
CONCLUSION Amyloid-beta PET images can be synthesized from structural MRI with a high degree of similarity to the real PET images. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 1.
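The two metrics reported above (SSIM and PSNR) can be computed in a few lines of NumPy. This is a generic sketch of the metrics, using a single global SSIM window as the abstract describes, not the authors' evaluation code; the image size, noise level, and data range are illustrative assumptions.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, img, data_range=1.0):
    """Single global SSIM value combining luminance, contrast, and structure terms."""
    x, y = ref.astype(np.float64), img.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
truth = rng.random((64, 64))                                       # stand-in "real PET"
noisy = np.clip(truth + 0.05 * rng.standard_normal(truth.shape), 0, 1)
print(round(global_ssim(truth, truth), 3))  # identical images give SSIM exactly 1.0
```

Windowed SSIM (as in scikit-image) averages this quantity over local patches; the global form above matches the whole-image computation described in the abstract.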
Affiliation(s)
- Fernando Vega
- Department of Biomedical, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Abdoljalil Addeh
- Department of Biomedical, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Aravind Ganesh
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Eric E Smith
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- M Ethan MacDonald
- Department of Biomedical, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
- Department of Radiology, University of Calgary, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
19
Kim J, Li Y, Shin BS. Volumetric Imitation Generative Adversarial Networks for Anatomical Human Body Modeling. Bioengineering (Basel) 2024; 11:163. [PMID: 38391649 PMCID: PMC10886047 DOI: 10.3390/bioengineering11020163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2024] [Revised: 02/02/2024] [Accepted: 02/06/2024] [Indexed: 02/24/2024] Open
Abstract
Volumetric representation is a technique used to express 3D objects in various fields, such as medical applications. However, tomography images for reconstructing volumetric data have limited utility because they contain personal information. Existing GAN-based medical image generation techniques can produce virtual tomographic images for volume reconstruction while preserving the patient's privacy. Nevertheless, these images often do not consider vertical correlations between adjacent slices, leading to erroneous results in 3D reconstruction. Furthermore, while volume generation techniques have been introduced, they often focus on surface modeling, making it challenging to represent internal anatomical features accurately. This paper proposes the volumetric imitation GAN (VI-GAN), which imitates a human anatomical model to generate volumetric data. The primary goal of this model is to capture the attributes and 3D structure, including the external shape, internal slices, and the relationship between the vertical slices of the human anatomical model. The proposed network consists of a generator for feature extraction and up-sampling, based on a 3D U-Net and ResNet structure, and a 3D-convolution-based local feature fusion block (LFFB). In addition, a discriminator utilizes 3D convolution to evaluate the authenticity of the generated volume compared to the ground truth. VI-GAN also devises a reconstruction loss, including feature and similarity losses, to make the generated volumetric data converge to a human anatomical model. In this experiment, the CT data of 234 people were used to assess the reliability of the results. When using volume evaluation metrics to measure similarity, VI-GAN generated a volume that realistically represented the human anatomical model compared with existing volume generation methods.
Affiliation(s)
- Jion Kim
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Republic of Korea
- Yan Li
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Republic of Korea
- Byeong-Seok Shin
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Republic of Korea
20
Zhang J, Huang X, Liu Y, Han Y, Xiang Z. GAN-based medical image small region forgery detection via a two-stage cascade framework. PLoS One 2024; 19:e0290303. [PMID: 38166011 PMCID: PMC10760893 DOI: 10.1371/journal.pone.0290303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Accepted: 08/06/2023] [Indexed: 01/04/2024] Open
Abstract
Using generative adversarial networks (GANs) Goodfellow et al. (2014) for data enhancement of medical images is significantly helpful for many computer-aided diagnosis (CAD) tasks. A new class of GAN-based automated tampering attacks, such as CT-GAN Mirsky et al. (2019), has emerged; these attacks can inject lung cancer lesions into, or remove them from, CT scans. Because the tampered region may account for less than 1% of the original image, it is challenging even for state-of-the-art methods to detect the traces of such tampering. This paper proposes a two-stage cascade framework to detect GAN-based small-region forgery in medical images, such as that produced by CT-GAN. In the local detection stage, we train the detector network with small sub-images so that interference information in authentic regions does not affect the detector. We use depthwise separable convolutions and residual networks to prevent the detector from over-fitting, and enhance its ability to find forged regions through an attention mechanism. The detection results of all sub-images in the same image are combined into a heatmap. In the global classification stage, a gray-level co-occurrence matrix (GLCM) better extracts features from the heatmap. Because the shape and size of the tampered region are uncertain, we use hyperplanes in an infinite-dimensional space for classification. Our method can classify whether a CT image has been tampered with and locate the tampered position. Extensive experiments show that our method achieves better performance than state-of-the-art detection methods.
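The GLCM feature-extraction step in the global classification stage reduces to counting co-occurring gray-level pairs at a fixed pixel offset. A minimal NumPy sketch of a normalized co-occurrence matrix and one Haralick-style feature (contrast) follows; this is an illustration of the general technique, not the authors' implementation, and the offset and gray-level count are arbitrary choices.

```python
import numpy as np

def glcm(img, levels, dr=0, dc=1):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dr, dc)."""
    a = img[: img.shape[0] - dr, : img.shape[1] - dc]  # reference pixels
    b = img[dr:, dc:]                                  # offset neighbors
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)          # accumulate pair counts
    return m / m.sum()

def glcm_contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p[i, j]."""
    i, j = np.indices(p.shape)
    return float(np.sum((i - j) ** 2 * p))

# Toy 4x4 "heatmap" quantized to 4 gray levels.
heatmap = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 3, 3],
                    [2, 2, 3, 3]])
p = glcm(heatmap, levels=4)
print(glcm_contrast(p))  # 4 of the 12 horizontal pairs differ by 1 level -> 1/3
```

Libraries such as scikit-image (`skimage.feature.graycomatrix`) provide the same computation with multiple offsets, angles, and symmetry options.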
Affiliation(s)
- Jianyi Zhang
- Beijing Electronic Science and Technology Institute, Beijing, China
- University of Louisiana at Lafayette, Lafayette, Louisiana, United States of America
- Xuanxi Huang
- Beijing Electronic Science and Technology Institute, Beijing, China
- Yaqi Liu
- Beijing Electronic Science and Technology Institute, Beijing, China
- Yuyang Han
- Beijing Electronic Science and Technology Institute, Beijing, China
- Zixiao Xiang
- Beijing Electronic Science and Technology Institute, Beijing, China
21
Morales MA, Manning WJ, Nezafat R. Present and Future Innovations in AI and Cardiac MRI. Radiology 2024; 310:e231269. [PMID: 38193835 PMCID: PMC10831479 DOI: 10.1148/radiol.231269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2023] [Revised: 10/21/2023] [Accepted: 10/26/2023] [Indexed: 01/10/2024]
Abstract
Cardiac MRI is used to diagnose and treat patients with a multitude of cardiovascular diseases. Despite the growth of clinical cardiac MRI, complicated image prescriptions and long acquisition protocols limit the specialty and restrain its impact on the practice of medicine. Artificial intelligence (AI)-the ability to mimic human intelligence in learning and performing tasks-will impact nearly all aspects of MRI. Deep learning (DL) primarily uses an artificial neural network to learn a specific task from example data sets. Self-driving scanners are increasingly available, where AI automatically controls cardiac image prescriptions. These scanners offer faster image collection with higher spatial and temporal resolution, eliminating the need for cardiac triggering or breath holding. In the future, fully automated inline image analysis will most likely provide all contour drawings and initial measurements to the reader. Advanced analysis using radiomic or DL features may provide new insights and information not typically extracted in the current analysis workflow. AI may further help integrate these features with clinical, genetic, wearable-device, and "omics" data to improve patient outcomes. This article presents an overview of AI and its application in cardiac MRI, including in image acquisition, reconstruction, and processing, and opportunities for more personalized cardiovascular care through extraction of novel imaging markers.
Affiliation(s)
- Manuel A. Morales
- From the Department of Medicine, Cardiovascular Division (M.A.M., W.J.M., R.N.), and Department of Radiology (W.J.M.), Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215
- Warren J. Manning
- From the Department of Medicine, Cardiovascular Division (M.A.M., W.J.M., R.N.), and Department of Radiology (W.J.M.), Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215
- Reza Nezafat
- From the Department of Medicine, Cardiovascular Division (M.A.M., W.J.M., R.N.), and Department of Radiology (W.J.M.), Beth Israel Deaconess Medical Center and Harvard Medical School, 330 Brookline Ave, Boston, MA 02215
22
Vasylechko S, Afacan O, Kurugol S. Self Supervised Denoising Diffusion Probabilistic Models for Abdominal DW-MRI. COMPUTATIONAL DIFFUSION MRI : MICCAI WORKSHOP 2023; 14328:80-91. [PMID: 38736559 PMCID: PMC11086684 DOI: 10.1007/978-3-031-47292-3_8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2024]
Abstract
Quantitative diffusion-weighted MRI in the abdomen provides important markers of disease; however, significant limitations exist for its accurate computation. One such limitation is the low signal-to-noise ratio, particularly at high diffusion b-values. To address this, multiple diffusion directional images can be collected at each b-value and geometrically averaged, which invariably leads to longer scan times, blurring due to motion, and other artifacts. We propose a novel parameter estimation technique based on a self-supervised denoising diffusion probabilistic model that can effectively denoise diffusion-weighted images and works on single diffusion gradient direction images. Our source code is made available at https://github.com/quin-med-harvard-edu/ssDDPM.
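The signal decay model underlying the parameter estimation here is the mono-exponential ADC model, S(b) = S0 · exp(-b · ADC). A conventional log-linear least-squares fit of that model looks like the sketch below; the b-values and tissue parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fit_adc(b_values, signals):
    """Least-squares fit of ln S(b) = ln S0 - b * ADC; returns (S0, ADC)."""
    slope, intercept = np.polyfit(np.asarray(b_values, float), np.log(signals), 1)
    return np.exp(intercept), -slope

b = np.array([0.0, 200.0, 400.0, 800.0])   # b-values in s/mm^2 (illustrative)
s0_true, adc_true = 1000.0, 1.2e-3         # liver-like ADC in mm^2/s (illustrative)
signal = s0_true * np.exp(-b * adc_true)   # noise-free synthetic decay curve

s0_hat, adc_hat = fit_adc(b, signal)       # recovers ADC ~ 1.2e-3 on noise-free data
```

At low SNR the log transform biases this fit, which is precisely why denoising the single-repetition images before (or jointly with) the fit, as the paper proposes, matters at high b-values.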
Affiliation(s)
- Serge Vasylechko
- QUIN Lab, Department of Radiology, Boston Children's Hospital, Harvard Medical School
- Onur Afacan
- QUIN Lab, Department of Radiology, Boston Children's Hospital, Harvard Medical School
- Sila Kurugol
- QUIN Lab, Department of Radiology, Boston Children's Hospital, Harvard Medical School
23
Huang Z, Li W, Wang Y, Liu Z, Zhang Q, Jin Y, Wu R, Quan G, Liang D, Hu Z, Zhang N. MLNAN: Multi-level noise-aware network for low-dose CT imaging implemented with constrained cycle Wasserstein generative adversarial networks. Artif Intell Med 2023; 143:102609. [PMID: 37673577 DOI: 10.1016/j.artmed.2023.102609] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Revised: 05/17/2023] [Accepted: 06/06/2023] [Indexed: 09/08/2023]
Abstract
Low-dose CT techniques attempt to minimize the radiation exposure of patients by estimating high-resolution normal-dose CT images, reducing the risk of radiation-induced cancer. In recent years, many deep learning methods have been proposed to solve this problem by building a mapping function between low-dose CT images and their high-dose counterparts. However, most of these methods ignore the effect of different radiation doses on the final CT images, which results in large differences in the intensity of the noise observable in CT images. What's more, the noise intensity of low-dose CT images differs significantly across medical device manufacturers. In this paper, we propose a multi-level noise-aware network (MLNAN), implemented with constrained cycle Wasserstein generative adversarial networks, to recover low-dose CT images under uncertain noise levels. In particular, the noise-level classification is predicted and reused as a prior pattern in the generator networks, and the discriminator network incorporates noise-level determination. Under two dose-reduction strategies, experiments to evaluate the performance of the proposed method are conducted on two datasets, including the simulated clinical AAPM challenge dataset and commercial CT datasets from United Imaging Healthcare (UIH). The experimental results illustrate the effectiveness of our proposed method in terms of noise suppression and structural detail preservation compared with several other deep-learning-based methods. Ablation studies validate the effectiveness of the individual components regarding the afforded performance improvement. Further research for practical clinical applications and other medical modalities is required in future work.
Affiliation(s)
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yunling Wang
- Department of Radiology, First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830011, China
- Zhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yuxi Jin
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ruodai Wu
- Department of Radiology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen 518055, China
- Guotao Quan
- Shanghai United Imaging Healthcare, Shanghai 201807, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
24
Gerard SE, Chaudhary MFA, Herrmann J, Christensen GE, Estépar RSJ, Reinhardt JM, Hoffman EA. Direct estimation of regional lung volume change from paired and single CT images using residual regression neural network. Med Phys 2023; 50:5698-5714. [PMID: 36929883 PMCID: PMC10743098 DOI: 10.1002/mp.16365] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 02/11/2023] [Accepted: 03/01/2023] [Indexed: 03/18/2023] Open
Abstract
BACKGROUND Chest computed tomography (CT) enables characterization of pulmonary diseases by producing high-resolution and high-contrast images of the intricate lung structures. Deformable image registration is used to align chest CT scans at different lung volumes, yielding estimates of local tissue expansion and contraction. PURPOSE We investigated the utility of deep generative models for directly predicting local tissue volume change from lung CT images, bypassing computationally expensive iterative image registration and providing a method that can be utilized in scenarios where either one or two CT scans are available. METHODS A residual regression convolutional neural network, called Reg3DNet+, is proposed for directly regressing high-resolution images of local tissue volume change (i.e., Jacobian) from CT images. Image registration was performed between lung volumes at total lung capacity (TLC) and functional residual capacity (FRC) using a tissue mass- and structure-preserving registration algorithm. The Jacobian image was calculated from the registration-derived displacement field and used as the ground truth for local tissue volume change. Four separate Reg3DNet+ models were trained to predict Jacobian images using a multifactorial study design to compare the effects of network input (i.e., single image vs. paired images) and output space (i.e., FRC vs. TLC). The models were trained and evaluated on image datasets from the COPDGene study. Models were evaluated against the registration-derived Jacobian images using local, regional, and global evaluation metrics. RESULTS Statistical analysis revealed that both factors - network input and output space - were significant determinants for change in evaluation metrics. Paired-input models performed better than single-input models, and model performance was better in the output space of FRC rather than TLC. 
Mean structural similarity index for paired-input models was 0.959 and 0.956 for FRC and TLC output spaces, respectively, and for single-input models was 0.951 and 0.937. Global evaluation metrics demonstrated correlation between registration-derived Jacobian mean and predicted Jacobian mean: coefficient of determination (r2) for paired-input models was 0.974 and 0.938 for FRC and TLC output spaces, respectively, and for single-input models was 0.598 and 0.346. After correcting for effort, registration-derived lobar volume change was strongly correlated with the predicted lobar volume change: for paired-input models r2 was 0.899 for both FRC and TLC output spaces, and for single-input models r2 was 0.803 and 0.862, respectively. CONCLUSIONS Convolutional neural networks can be used to directly predict local tissue mechanics, eliminating the need for computationally expensive image registration. Networks that use paired CT images acquired at TLC and FRC allow for more accurate prediction of local tissue expansion compared to networks that use a single image. Networks that only require a single input image still show promising results, particularly after correcting for effort, and allow for local tissue expansion estimation in cases where multiple CT scans are not available. For single-input networks, the FRC image is more predictive of local tissue volume change compared to the TLC image.
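The registration-derived ground truth here is the Jacobian determinant of the deformation x -> x + u(x), which measures local volume change (values > 1 indicate expansion, < 1 contraction). A minimal 2D finite-difference version, with a synthetic uniform-expansion field for illustration (the paper works with 3D displacement fields from a mass-preserving registration), might look like:

```python
import numpy as np

def jacobian_det_2d(disp):
    """Jacobian determinant of x -> x + u(x) for a 2D displacement field
    disp of shape (2, H, W), using central finite differences."""
    duy_dy, duy_dx = np.gradient(disp[0])  # derivatives of the axis-0 component
    dux_dy, dux_dx = np.gradient(disp[1])  # derivatives of the axis-1 component
    # J = det(I + grad u)
    return (1 + duy_dy) * (1 + dux_dx) - duy_dx * dux_dy

# A uniform 10% expansion along both axes: det should be 1.1 * 1.1 = 1.21 everywhere.
H = W = 32
yy, xx = np.meshgrid(np.arange(H, dtype=float), np.arange(W, dtype=float), indexing="ij")
disp = np.stack([0.1 * yy, 0.1 * xx])
det = jacobian_det_2d(disp)
```

The network in the paper regresses exactly this kind of Jacobian image directly from the CT intensities, skipping the registration that normally produces it.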
Affiliation(s)
- Sarah E. Gerard
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
- Jacob Herrmann
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Gary E. Christensen
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiation Oncology, University of Iowa, Iowa City, Iowa, USA
- Joseph M. Reinhardt
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
- Eric A. Hoffman
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
25
Hooshangnejad H, Chen Q, Feng X, Zhang R, Ding K. deepPERFECT: Novel Deep Learning CT Synthesis Method for Expeditious Pancreatic Cancer Radiotherapy. Cancers (Basel) 2023; 15:3061. [PMID: 37297023 PMCID: PMC10252954 DOI: 10.3390/cancers15113061] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Revised: 05/22/2023] [Accepted: 05/25/2023] [Indexed: 06/12/2023] Open
Abstract
Major sources of delay in the standard-of-care RT workflow are the need for multiple appointments and separate image acquisition. In this work, we addressed the question of how to expedite the workflow by synthesizing planning CT from diagnostic CT. This idea is based on the theory that diagnostic CT can be used for RT planning, but in practice, due to differences in patient setup and acquisition techniques, a separate planning CT is required. We developed a generative deep learning model, deepPERFECT, that is trained to capture these differences and generate deformation vector fields to transform diagnostic CT into preliminary planning CT. We performed detailed analysis from both an image quality and a dosimetric point of view, and showed that deepPERFECT enabled the generated preliminary planning CT to be used for early plan dosimetric assessment and evaluation.
Affiliation(s)
- Hamed Hooshangnejad
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Carnegie Center of Surgical Innovation, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA
- Xue Feng
- Carina Medical LLC, Lexington, KY 40513, USA
- Rui Zhang
- Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN 55455, USA
- Kai Ding
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Carnegie Center of Surgical Innovation, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
26
Miao T, Zhou B, Liu J, Guo X, Liu Q, Xie H, Chen X, Chen MK, Wu J, Carson RE, Liu C. Generation of Whole-Body FDG Parametric Ki Images from Static PET Images Using Deep Learning. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2023; 7:465-472. [PMID: 37997577 PMCID: PMC10665031 DOI: 10.1109/trpms.2023.3243576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2023]
Abstract
FDG parametric Ki images show great advantages over static SUV images, due to their higher contrast and better accuracy in tracer uptake rate estimation. In this study, we explored the feasibility of generating synthetic Ki images from static SUV ratio (SUVR) images using three configurations of U-Nets with different sets of input and output image patches: U-Nets with single input and single output (SISO), multiple inputs and single output (MISO), and single input and multiple outputs (SIMO). SUVR images were generated by averaging three 5-min dynamic SUV frames starting at 60 minutes post-injection and then normalizing by the mean SUV value in the blood pool. The corresponding ground-truth Ki images were derived using Patlak graphical analysis with input functions from measurements of arterial blood samples. Even though the synthetic Ki values were not quantitatively accurate compared with ground truth, linear regression analysis of joint histograms over voxels in body regions showed that the mean R2 values were higher between U-Net prediction and ground truth (0.596, 0.580, 0.576 for SISO, MISO and SIMO) than between SUVR and ground-truth Ki (0.571). In terms of similarity metrics, the synthetic Ki images were closer to the ground-truth Ki images (mean SSIM = 0.729, 0.704, 0.704 for SISO, MISO and SIMO) than the input SUVR images (mean SSIM = 0.691). Therefore, it is feasible to use deep learning networks to estimate a surrogate map of parametric Ki images from static SUVR images.
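Patlak graphical analysis, used here to derive the ground-truth Ki images, reduces to a linear regression once the plasma input function Cp(t) is known: Ct(t)/Cp(t) = Ki · (integral of Cp)/Cp(t) + V over the late, linear portion of the curve. A NumPy sketch with a fully synthetic input function follows; the curves, constants, and the t* cutoff are illustrative assumptions, not the study's data.

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=20.0):
    """Patlak plot: slope Ki and intercept V fitted on the linear portion
    (t >= t_star) of Ct/Cp versus cumulative-integral(Cp)/Cp."""
    # Trapezoidal cumulative integral of the plasma curve.
    icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    mask = t >= t_star
    x, y = icp[mask] / cp[mask], ct[mask] / cp[mask]
    ki, v = np.polyfit(x, y, 1)
    return ki, v

t = np.linspace(1.0, 60.0, 60)           # minutes (illustrative sampling)
cp = 10.0 * np.exp(-0.05 * t) + 1.0      # synthetic plasma input function
icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = 0.02 * icp + 0.3 * cp               # tissue curve built with Ki = 0.02, V = 0.3

ki, v = patlak_ki(t, cp, ct)             # recovers Ki ~ 0.02, V ~ 0.3 exactly here
```

The deep learning contribution of the paper is to bypass this dynamic acquisition and arterial sampling, predicting the Ki map from a single static SUVR image instead.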
Affiliation(s)
- Tianshun Miao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Qiong Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Jing Wu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Department of Physics, Beijing Normal University, Beijing 100875, China
- Richard E. Carson
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
27
Aetesam H, Maji SK. Perceptually Motivated Generative Model for Magnetic Resonance Image Denoising. J Digit Imaging 2023; 36:725-738. [PMID: 36474088 PMCID: PMC10039195 DOI: 10.1007/s10278-022-00744-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 11/01/2022] [Accepted: 11/17/2022] [Indexed: 12/12/2022] Open
Abstract
Image denoising is an important preprocessing step in low-level vision problems involving biomedical images. Noise removal techniques can greatly benefit raw corrupted magnetic resonance (MR) images. It has been discovered that MR data are corrupted by mixed Gaussian-impulse noise caused by detector flaws and transmission errors. This paper proposes a deep generative model (GenMRIDenoiser) for dealing with this mixed-noise scenario. This work makes four contributions. First, a Wasserstein generative adversarial network (WGAN) is used in model training to mitigate the problems of vanishing gradients, mode collapse, and convergence encountered while training a vanilla GAN. Second, a perceptually motivated loss function is used to guide the training process in order to preserve the low-level details in the form of high-frequency components in the image. Third, batch renormalization is used between the convolutional and activation layers to prevent performance degradation under the assumption of non-independent and identically distributed (non-iid) data. Fourth, a global feature attention module (GFAM) is appended at the beginning and end of the parallel ensemble blocks to capture the long-range dependencies that are often lost due to the small receptive field of convolutional filters. The experimental results on synthetic data and MRI stacks obtained from real MR scanners indicate the potential utility of the proposed technique across a wide range of degradation scenarios.
Affiliation(s)
- Hazique Aetesam
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Patna, 801106 India
- Suman Kumar Maji
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Patna, 801106 India
28
Manso Jimeno M, Vaughan JT, Geethanath S. Superconducting magnet designs and MRI accessibility: A review. NMR IN BIOMEDICINE 2023:e4921. [PMID: 36914280 DOI: 10.1002/nbm.4921] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Revised: 02/13/2023] [Accepted: 02/23/2023] [Indexed: 06/18/2023]
Abstract
Presently, magnetic resonance imaging (MRI) magnets must deliver excellent magnetic field (B0) uniformity to achieve optimum image quality. Long magnets can satisfy the homogeneity requirements but require considerable superconducting material. These designs result in large, heavy, and costly systems whose drawbacks grow as field strength increases. Furthermore, the tight temperature tolerance of niobium-titanium magnets adds instability to the system and requires operation at liquid helium temperature. These issues are crucial factors in the disparity in MRI density and field-strength use across the globe. Low-income settings show reduced access to MRI, especially at high field strengths. This article summarizes the proposed modifications to MRI superconducting magnet design and their impact on accessibility, including compact, reduced-liquid-helium, and specialty systems. Reducing the amount of superconductor inevitably entails shrinking the magnet size, resulting in higher field inhomogeneity. This work also reviews the state-of-the-art imaging and reconstruction methods to overcome this issue. Finally, we summarize the current and future challenges and opportunities in the design of accessible MRI.
Affiliation(s)
- Marina Manso Jimeno
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- John Thomas Vaughan
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Sairam Geethanath
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, The Biomedical Engineering and Imaging Institute, New York, New York, USA
29
Motion artifact correction in fetal MRI based on a Generative Adversarial network method. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
30
Cascade of Denoising and Mapping Neural Networks for MRI R2* Relaxometry of Iron-Loaded Liver. Bioengineering (Basel) 2023; 10:bioengineering10020209. [PMID: 36829703 PMCID: PMC9952355 DOI: 10.3390/bioengineering10020209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 02/01/2023] [Accepted: 02/02/2023] [Indexed: 02/09/2023] Open
Abstract
MRI measurement of the effective transverse relaxation rate (R2*) is a reliable method for quantifying liver iron concentration. However, R2* mapping can be degraded by noise, especially in the case of iron overload. This study aimed to develop a deep learning method for MRI R2* relaxometry of the iron-loaded liver using a two-stage cascaded neural network. The proposed method, named CadamNet, combines two convolutional neural networks separately designed for image denoising and parameter mapping into a cascade framework, and the physics-based R2* decay model was incorporated in training the mapping network to further enforce data consistency. CadamNet was trained using simulated liver data with Rician noise, constructed from clinical liver data. Its performance was quantitatively evaluated on simulated data with varying noise levels as well as on clinical liver data, and compared with a single-stage parameter mapping network (MappingNet) and two conventional model-based R2* mapping methods. CadamNet consistently achieved high-quality R2* maps and outperformed MappingNet at varying noise levels. Compared with conventional R2* mapping methods, CadamNet yielded R2* maps with lower errors, higher quality, and substantially increased efficiency. In conclusion, the proposed CadamNet enables accurate and efficient R2* mapping of the iron-loaded liver, especially in the presence of severe noise.
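The physics-based R2* decay model referenced above is the mono-exponential signal decay S(TE) = S0 * exp(-R2* * TE). As an illustrative baseline (not the paper's network), a conventional model-based estimate can be sketched as a log-linear least-squares fit; the function name and echo times below are hypothetical:

```python
import numpy as np

def fit_r2star(signals, tes):
    """Estimate (S0, R2*) from multi-echo magnitude signals.

    Assumes the mono-exponential decay model S(TE) = S0 * exp(-R2* * TE)
    and fits it by linear least squares in log-space. Illustrative only;
    real data would also need noise-floor handling (e.g., Rician noise).
    """
    # log S = log S0 - R2* * TE is linear in TE
    slope, intercept = np.polyfit(tes, np.log(signals), 1)
    return np.exp(intercept), -slope  # S0, R2* in 1/s

# Synthetic iron-loaded liver voxel: S0 = 100, R2* = 200 s^-1
tes = np.array([0.001, 0.002, 0.004, 0.008])  # echo times in seconds
signals = 100.0 * np.exp(-200.0 * tes)
s0, r2star = fit_r2star(signals, tes)
```

On noise-free synthetic data the fit recovers the simulated parameters exactly, which is a useful sanity check before applying any such estimator to noisy images.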
|
31
|
Chen Z, Pawar K, Ekanayake M, Pain C, Zhong S, Egan GF. Deep Learning for Image Enhancement and Correction in Magnetic Resonance Imaging-State-of-the-Art and Challenges. J Digit Imaging 2023; 36:204-230. [PMID: 36323914 PMCID: PMC9984670 DOI: 10.1007/s10278-022-00721-9] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Revised: 09/09/2022] [Accepted: 10/17/2022] [Indexed: 11/06/2022] Open
Abstract
Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast for clinical diagnosis and research, underpinning many recent breakthroughs in medicine and biology. The post-processing of reconstructed MR images is often automated for incorporation into MRI scanners by the manufacturers and increasingly plays a critical role in the final image quality for clinical reporting and interpretation. For image enhancement and correction, the post-processing steps include noise reduction, image artefact correction, and image resolution improvement. With the recent success of deep learning in many research fields, there is great potential to apply deep learning to MR image enhancement, and recent publications have demonstrated promising results. Motivated by the rapidly growing literature in this area, this review provides a comprehensive overview of deep learning-based methods for post-processing MR images to enhance image quality and correct image artefacts. We aim to provide researchers in MRI and other fields, including computer vision and image processing, with a literature survey of deep learning approaches for MR image enhancement. We discuss the current limitations of the application of artificial intelligence in MRI and highlight possible directions for future developments. In the era of deep learning, we highlight the importance of a critical appraisal of the explanatory information provided and of the generalizability of deep learning algorithms in medical imaging.
Affiliation(s)
- Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia.
- Department of Data Science and AI, Monash University, Melbourne, VIC, Australia.
- Kamlesh Pawar
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Mevan Ekanayake
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Cameron Pain
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Shenjun Zhong
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- National Imaging Facility, Brisbane, QLD, Australia
- Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia
|
32
|
Gao C, Ghodrati V, Shih SF, Wu HH, Liu Y, Nickel MD, Vahle T, Dale B, Sai V, Felker E, Surawech C, Miao Q, Finn JP, Zhong X, Hu P. Undersampling artifact reduction for free-breathing 3D stack-of-radial MRI based on a deep adversarial learning network. Magn Reson Imaging 2023; 95:70-79. [PMID: 36270417 PMCID: PMC10163826 DOI: 10.1016/j.mri.2022.10.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 10/06/2022] [Accepted: 10/14/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE Stack-of-radial MRI allows free-breathing abdominal scans; however, it requires relatively long acquisition times. Undersampling reduces scan time but can cause streaking artifacts and degrade image quality. This study developed deep learning networks with adversarial loss and evaluated their performance in reducing streaking artifacts while preserving perceptual image sharpness. METHODS A 3D generative adversarial network (GAN) was developed to reduce streaking artifacts in stack-of-radial abdominal scans. Training and validation datasets were self-gated to 5 respiratory states to reduce motion artifacts and to effectively augment the data. The network used a combination of three loss functions to constrain the anatomy and preserve image quality: adversarial loss, mean-squared-error loss, and structural similarity index loss. The performance of the network was investigated for 3-5 times undersampled data from 2 institutions. For 5 times accelerated images, the GAN was compared with a 3D U-Net and evaluated using quantitative NMSE, SSIM, and region of interest (ROI) measurements as well as qualitative radiologist scores. RESULTS The 3D GAN showed similar NMSE (0.0657 vs. 0.0559, p = 0.5217) and significantly higher SSIM (0.841 vs. 0.798, p < 0.0001) compared to the U-Net. ROI analysis showed that the GAN removed streaks in both the background air and the tissue, and its measurements were not significantly different from the reference mean and variation. Radiologists' scores showed the GAN achieved a significant improvement of 1.6 points (p = 0.004) on a 4-point streaking scale, with no significant difference in sharpness score compared to the input. CONCLUSION The 3D GAN removes streaking artifacts and preserves perceptual image details.
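The three-term generator objective described above (adversarial, mean-squared-error, and SSIM losses) can be sketched as a weighted sum. The weights and the simplified whole-image SSIM below are assumptions for illustration, not the paper's actual values or windowed SSIM implementation:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Simplified SSIM computed over the whole image (no sliding window).

    c1 and c2 are the usual small stabilizing constants; real SSIM uses
    local windows, so this is only a stand-in for illustration.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

def generator_loss(pred, target, disc_score, w_adv=0.01, w_mse=1.0, w_ssim=0.1):
    """Weighted sum of adversarial, MSE, and (1 - SSIM) terms.

    disc_score is the discriminator's probability that `pred` is real;
    the adversarial term rewards the generator when it is high.
    The weights w_* are hypothetical, chosen only for the sketch.
    """
    adv = -np.log(disc_score + 1e-12)        # non-saturating adversarial term
    mse = np.mean((pred - target) ** 2)       # anatomy-constraining term
    ssim_term = 1.0 - ssim_global(pred, target)
    return w_adv * adv + w_mse * mse + w_ssim * ssim_term

# A perfect prediction with a fooled discriminator yields near-zero loss.
x = np.linspace(0.0, 1.0, 16).reshape(4, 4)
perfect = generator_loss(x, x, disc_score=1.0)
worse = generator_loss(x, x + 0.5, disc_score=0.5)
```

The design point is that MSE alone tends to over-smooth, while the SSIM and adversarial terms push the output toward perceptually sharper images.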
Affiliation(s)
- Chang Gao
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Vahid Ghodrati
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Shu-Fu Shih
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Holden H Wu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Yongkai Liu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Thomas Vahle
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- Brian Dale
- MR R&D Collaborations, Siemens Medical Solutions USA, Inc., Cary, NC, United States
- Victor Sai
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
- Ely Felker
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
- Chuthaporn Surawech
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Radiology, Division of Diagnostic Radiology, Faculty of Medicine, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Qi Miao
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning Province, China
- J Paul Finn
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Xiaodong Zhong
- MR R&D Collaborations, Siemens Medical Solutions USA, Inc., Los Angeles, CA, United States
- Peng Hu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States.
|
33
|
RED-MAM: A residual encoder-decoder network based on multi-attention fusion for ultrasound image denoising. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104062] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
34
|
Generative Adversarial Networks based on optimal transport: a survey. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10342-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
|
35
|
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. FUTURE INTERNET 2022. [DOI: 10.3390/fi14120351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
With advances in brain imaging, magnetic resonance imaging (MRI) is evolving into a popular radiological tool for clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were extensively searched for relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results, and data extraction was based on the related research questions (RQs). This SLR identifies the various loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
|
36
|
Image denoising in the deep learning era. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10305-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
37
|
Pasquini L, Napolitano A, Pignatelli M, Tagliente E, Parrillo C, Nasta F, Romano A, Bozzao A, Di Napoli A. Synthetic Post-Contrast Imaging through Artificial Intelligence: Clinical Applications of Virtual and Augmented Contrast Media. Pharmaceutics 2022; 14:pharmaceutics14112378. [PMID: 36365197 PMCID: PMC9695136 DOI: 10.3390/pharmaceutics14112378] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 10/25/2022] [Accepted: 10/26/2022] [Indexed: 11/06/2022] Open
Abstract
Contrast media are widely used in biomedical imaging, owing to their relevance in the diagnosis of numerous disorders. However, the risk of adverse reactions, the concern of potential damage to sensitive organs, and the recently described brain deposition of gadolinium salts limit the use of contrast media in clinical practice. In recent years, the application of artificial intelligence (AI) techniques to biomedical imaging has led to the development of 'virtual' and 'augmented' contrasts. The idea behind these applications is to generate synthetic post-contrast images through AI computational modeling, starting from the information available in other images acquired during the same scan. In these AI models, non-contrast images (virtual contrast) or low-dose post-contrast images (augmented contrast) are used as input data to generate synthetic post-contrast images, which are often indistinguishable from the native ones. In this review, we discuss the most recent advances in AI applications to biomedical imaging relating to synthetic contrast media.
Affiliation(s)
- Luca Pasquini
- Neuroradiology Unit, Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Antonio Napolitano
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Matteo Pignatelli
- Radiology Department, Castelli Hospital, Via Nettunense Km 11.5, 00040 Ariccia, Italy
- Emanuela Tagliente
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Chiara Parrillo
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Francesco Nasta
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Andrea Romano
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Alessandro Bozzao
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Alberto Di Napoli
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Neuroimaging Lab, IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
|
38
|
Farea Shaaf Z, Mahadi Abdul Jamil M, Ambar R, Abd Wahab MH. Convolutional Neural Network for Denoising Left Ventricle Magnetic Resonance Images. COMPUTATIONAL INTELLIGENCE AND MACHINE LEARNING APPROACHES IN BIOMEDICAL ENGINEERING AND HEALTH CARE SYSTEMS 2022:1-14. [DOI: 10.2174/9781681089553122010004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
Medical image processing is critical in disease detection and prediction; for example, it is used to locate lesions and measure an organ's morphological structures. Currently, cardiac magnetic resonance imaging (CMRI) plays an essential role in cardiac motion tracking and in analyzing regional and global heart function with high accuracy and reproducibility. Cardiac MRI datasets are images taken during the heart's cardiac cycles and require expert labeling to accurately recognize features and train neural networks to predict cardiac disease. Any erroneous prediction caused by image impairment will impact patients' diagnostic decisions; as a result, image preprocessing is used, including enhancement tools such as filtering and denoising. This paper introduces a denoising algorithm that uses a convolutional neural network (CNN) to delineate left ventricle (LV) contours (endocardium and epicardium borders) from MRI images. With only a small amount of training data from the EMIDEC database, this network performs well for MRI image denoising.
Affiliation(s)
- Zakarya Farea Shaaf
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
- Muhammad Mahadi Abdul Jamil
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
- Radzi Ambar
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
- Mohd Helmy Abd Wahab
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Johor, Malaysia
|
39
|
Cheng H, Vinci-Booher S, Wang J, Caron B, Wen Q, Newman S, Pestilli F. Denoising diffusion weighted imaging data using convolutional neural networks. PLoS One 2022; 17:e0274396. [PMID: 36108272 PMCID: PMC9477507 DOI: 10.1371/journal.pone.0274396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 08/26/2022] [Indexed: 11/17/2022] Open
Abstract
Diffusion weighted imaging (DWI) with multiple, high b-values is critical for extracting tissue microstructure measurements; however, high b-value DWI images contain high noise levels that can overwhelm the signal of interest and bias microstructural measurements. Here, we propose a simple denoising method that can be applied to any dataset, provided a low-noise, single-subject dataset is acquired using the same DWI sequence. The denoising method uses a one-dimensional convolutional neural network (1D-CNN) and deep learning to learn from a low-noise dataset, voxel-by-voxel. The trained model can then be applied to high-noise datasets from other subjects. We validated the 1D-CNN denoising method by first demonstrating, on simulated DWI data, that it produced DWI images more similar to the noise-free ground truth than comparable denoising methods such as MP-PCA. Using the same DWI acquisition reconstructed with two common methods, i.e., SENSE1 and sum-of-squares, to generate a pair of low-noise and high-noise datasets, we then demonstrated that 1D-CNN denoising of high-noise DWI data collected from human subjects showed promising results in three domains: DWI images, diffusion metrics, and tractography. In particular, the denoised images were more similar to a low-noise reference image of the same subject than repeated low-noise images were to each other (i.e., computational reproducibility). Finally, we demonstrated the use of the 1D-CNN method in two practical examples to reduce noise from parallel imaging and simultaneous multi-slice acquisition. We conclude that the 1D-CNN denoising method is a simple, effective denoising method for DWI images that overcomes some of the limitations of current state-of-the-art denoising methods, such as the need for a large number of training subjects and the need to account for the rectified noise floor.
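The core idea of the 1D-CNN, treating each voxel's vector of diffusion-weighted measurements as a one-dimensional signal to be filtered, can be illustrated with a fixed smoothing kernel standing in for the learned convolution weights. This is a sketch of the data layout, not the trained model:

```python
import numpy as np

def denoise_voxels_1d(dwi, kernel):
    """Apply a 1D filter along each voxel's measurement axis.

    dwi: array of shape (n_voxels, n_measurements); each row holds the
    diffusion-weighted signals for one voxel. In the actual 1D-CNN the
    kernel weights are learned from a low-noise training subject; here
    a fixed moving-average kernel stands in for illustration.
    """
    return np.stack([np.convolve(v, kernel, mode="same") for v in dwi])

# Toy data: constant clean signal per voxel plus Gaussian noise
rng = np.random.default_rng(0)
clean = np.ones((10, 32))
noisy = clean + 0.3 * rng.standard_normal((10, 32))
kernel = np.ones(5) / 5.0  # stand-in for learned weights
denoised = denoise_voxels_1d(noisy, kernel)
```

Even this crude filter reduces the mean squared error against the clean signal; the learned 1D-CNN plays the same role with far more expressive, data-driven weights.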
Affiliation(s)
- Hu Cheng
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States of America
- Program of Neuroscience, Indiana University, Bloomington, IN, United States of America
- Sophia Vinci-Booher
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States of America
- Department of Psychology and Human Development, Vanderbilt University, Nashville, TN, United States of America
- Jian Wang
- School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Bradley Caron
- Department of Psychology, Center for Perceptual Systems and Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, TX, United States of America
- Qiuting Wen
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, United States of America
- Sharlene Newman
- Alabama Life Research Institute, The University of Alabama, Tuscaloosa, AL, United States of America
- Franco Pestilli
- Department of Psychology, Center for Perceptual Systems and Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, TX, United States of America
|
40
|
Bressem KK, Adams LC, Proft F, Hermann KGA, Diekhoff T, Spiller L, Niehues SM, Makowski MR, Hamm B, Protopopov M, Rios Rodriguez V, Haibel H, Rademacher J, Torgutalp M, Lambert RG, Baraliakos X, Maksymowych WP, Vahldiek JL, Poddubnyy D. Deep Learning Detects Changes Indicative of Axial Spondyloarthritis at MRI of Sacroiliac Joints. Radiology 2022; 305:655-665. [PMID: 35943339 DOI: 10.1148/radiol.212526] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Background MRI is frequently used for early diagnosis of axial spondyloarthritis (axSpA). However, evaluation is time-consuming and requires profound expertise because noninflammatory degenerative changes can mimic axSpA, and early signs may therefore be missed. Deep neural networks could function as assistance for axSpA detection. Purpose To create a deep neural network to detect MRI changes in sacroiliac joints indicative of axSpA. Materials and Methods This retrospective multicenter study included MRI examinations of five cohorts of patients with clinical suspicion of axSpA collected at university and community hospitals between January 2006 and September 2020. Data from four cohorts were used as the training set and data from one cohort as the external test set. Each MRI examination in the training and test sets was scored by six and seven raters, respectively, for inflammatory changes (bone marrow edema, enthesitis) and structural changes (erosions, sclerosis). A deep learning tool to detect changes indicative of axSpA was developed: first a neural network to homogenize the images and then a classification network were trained. Performance was evaluated with use of area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. P < .05 was considered indicative of a statistically significant difference. Results Overall, 593 patients (mean age, 37 years ± 11 [SD]; 302 women) were studied. Inflammatory and structural changes were found in 197 of 477 patients (41%) and 244 of 477 (51%), respectively, in the training set, and in 25 of 116 patients (22%) and 26 of 116 (22%) in the test set. The AUCs were 0.94 (95% CI: 0.84, 0.97) for all inflammatory changes, 0.88 (95% CI: 0.80, 0.95) for inflammatory changes fulfilling the Assessment of SpondyloArthritis international Society definition, and 0.89 (95% CI: 0.81, 0.96) for structural changes indicative of axSpA.
Sensitivity and specificity on the external test set were 22 of 25 patients (88%) and 65 of 91 patients (71%), respectively, for inflammatory changes and 22 of 26 patients (85%) and 70 of 90 patients (78%) for structural changes. Conclusion Deep neural networks can detect inflammatory or structural changes to the sacroiliac joint indicative of axial spondyloarthritis at MRI. © RSNA, 2022 Online supplemental material is available for this article.
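The reported operating-point metrics follow directly from the patient counts given above; as a quick arithmetic check for the inflammatory changes in the external test set:

```python
def sensitivity(tp, fn):
    """Fraction of positive patients correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of negative patients correctly ruled out."""
    return tn / (tn + fp)

# Inflammatory changes, external test set: 22 of 25 positive patients
# detected, 65 of 91 negative patients correctly classified
# (counts taken from the abstract above).
sens = sensitivity(22, 25 - 22)  # 22/25 = 0.88
spec = specificity(65, 91 - 65)  # 65/91, about 0.71
```

These reproduce the 88% sensitivity and 71% specificity reported for inflammatory changes.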
Affiliation(s)
- Keno K Bressem
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
- Lisa C Adams
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
- Fabian Proft
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
- Kay Geert A Hermann
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
- Torsten Diekhoff
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
- Laura Spiller
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
- Stefan M Niehues
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
- Marcus R Makowski
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Bernd Hamm
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Mikhail Protopopov
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Valeria Rios Rodriguez
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Hildurn Haibel
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Judith Rademacher
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Murat Torgutalp
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Robert G Lambert
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Xenofon Baraliakos
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Walter P Maksymowych
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Janis L Vahldiek
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| | - Denis Poddubnyy
- From the Institute for Radiology (K.K.B., L.C.A., K.G.A.H., T.D., S.M.N., B.H., J.L.V.) and Department of Gastroenterology, Infectious Diseases and Rheumatology (including Nutrition Medicine) (F.P., L.S., M.P., V.R.R., H.H., J.R., M.T., D.P.), Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Berlin, Germany (K.K.B., L.C.A., J.R.); Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Technical University of Munich, Munich, Germany (M.R.M.); Department of Medicine, University of Alberta, Edmonton, Alberta, Canada (R.G.L., W.P.M.); Rheumazentrum Ruhrgebiet Herne, Ruhr University Bochum, Germany (X.B.); and Epidemiology Unit, German Rheumatism Research Centre, Berlin, Germany (D.P.)
| |
|
41
|
An Improved Deep Persistent Memory Network for Rician Noise Reduction in MR Images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
42
|
Research on Optimization Scheme for Blocking Artifacts after Patch-Based Medical Image Reconstruction. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:2177159. [PMID: 35959350 PMCID: PMC9357777 DOI: 10.1155/2022/2177159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Accepted: 07/12/2022] [Indexed: 11/18/2022]
Abstract
Due to limited computer resources, the typical approach when processing a high-resolution image with a neural network is to slice the original image into patches. However, because of the influence of zero-padding on the edge region during convolution, the central part of a patch often carries more accurate feature information than the edge part, resulting in blocking artifacts after patch stitching. We studied this problem and proposed a fusion method that assigns a weight to each pixel in a patch, using a truncated Gaussian function as the weighting function. In this method, the weighting function transforms the Euclidean distance between a point in the overlapping region and the central point of the patch containing it into a weight coefficient; the weight coefficient decreases as the distance increases. Finally, the reconstructed image was obtained by weighted averaging. We employed a bias correction model to evaluate our method on the simulated BrainWeb database and the real HCP (Human Connectome Project) dataset. The results show that the proposed method effectively removes blocking artifacts and yields a smoother bias field. To further verify the effectiveness of our algorithm, we tested it with a denoising model on the IXI-Guys human dataset. Qualitative and quantitative evaluations of both models show that the proposed fusion method effectively removes blocking artifacts and outperforms five commonly available, state-of-the-art fusion methods.
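The fusion rule described in this abstract — a weight per pixel that decays with Euclidean distance from the patch centre, followed by weighted averaging over the overlaps — can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the patch coordinates, the σ of the Gaussian, and the distance normalisation are illustrative assumptions (a smooth Gaussian is used in place of the paper's truncated variant).

```python
import numpy as np

def gaussian_weight(patch_shape, sigma=0.5):
    """Per-pixel weight map: largest at the patch centre, decaying
    with normalised Euclidean distance toward the edges."""
    h, w = patch_shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # normalised Euclidean distance to the patch centre
    d = np.sqrt(((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def stitch(patches, coords, image_shape, sigma=0.5):
    """Blend overlapping patches by Gaussian-weighted averaging,
    suppressing blocking artifacts at patch borders."""
    acc = np.zeros(image_shape)
    wsum = np.zeros(image_shape)
    for p, (y, x) in zip(patches, coords):
        w = gaussian_weight(p.shape, sigma)
        acc[y:y + p.shape[0], x:x + p.shape[1]] += w * p
        wsum[y:y + p.shape[0], x:x + p.shape[1]] += w
    # divide accumulated values by accumulated weights
    return acc / np.maximum(wsum, 1e-12)
```

Because the weights are normalised out in the final division, a constant image sliced into overlapping patches is reconstructed exactly, while discontinuities at patch borders are smoothly cross-faded.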
|
43
|
SM-SegNet: A Lightweight Squeeze M-SegNet for Tissue Segmentation in Brain MRI Scans. SENSORS 2022; 22:s22145148. [PMID: 35890829 PMCID: PMC9319649 DOI: 10.3390/s22145148] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 07/02/2022] [Accepted: 07/05/2022] [Indexed: 11/17/2022]
Abstract
In this paper, we propose a novel squeeze M-SegNet (SM-SegNet) architecture featuring a fire module to perform accurate and fast segmentation of the brain on magnetic resonance imaging (MRI) scans. The proposed model utilizes uniform input patches, combined connections, long skip connections, and squeeze-expand convolutional layers from the fire module to segment brain MRI data. The proposed SM-SegNet architecture involves a multi-scale deep network on the encoder side and deep supervision on the decoder side, which uses combined connections (skip connections and pooling indices) from the encoder to the decoder layer. The multi-scale side input layers support the deep network layers' extraction of discriminative feature information, and the decoder side provides deep supervision to reduce the gradient problem. By using combined connections, extracted features can be transferred from the encoder to the decoder, recovering spatial information and making the model converge faster. Long skip connections were used to stabilize the gradient updates in the network. Owing to the adoption of the fire module, the proposed model was significantly faster to train and offered more efficient memory usage, with 83% fewer parameters than previously developed methods. The proposed method was evaluated using the open-access series of imaging studies (OASIS) and the internet brain segmentation registry (IBSR) datasets. The experimental results demonstrate that the proposed SM-SegNet architecture achieves segmentation accuracies of 95% for cerebrospinal fluid, 95% for gray matter, and 96% for white matter, outperforming existing methods in both subjective and objective metrics for brain MRI segmentation.
|
44
|
Ali H, Biswas MR, Mohsen F, Shah U, Alamgir A, Mousa O, Shah Z. The role of generative adversarial networks in brain MRI: a scoping review. Insights Imaging 2022; 13:98. [PMID: 35662369 PMCID: PMC9167371 DOI: 10.1186/s13244-022-01237-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Accepted: 05/11/2022] [Indexed: 11/23/2022] Open
Abstract
The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown considerable potential to generate synthetic MRI data that capture the distribution of real MRI. GANs are also popular for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review aims to explore how GAN methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing the research and development of GAN-based approaches. This review followed the PRISMA-ScR guidelines for the study search and selection. The search was conducted on five popular scientific databases. The screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. This review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and to translate healthy images to diseased images, or CT to MRI and vice versa. The included studies showed that GANs could enhance the performance of AI methods used on brain MRI imaging data. However, more effort is needed to translate GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar.
| | - Md Rafiul Biswas
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
| | - Farida Mohsen
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
| | - Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
| | - Asma Alamgir
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
| | - Osama Mousa
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
| | - Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar.
| |
|
45
|
Liu Y, Niu H, Ren P, Ren J, Wei X, Liu W, Ding H, Li J, Xia J, Zhang T, Lv H, Yin H, Wang Z. Generation of quantification maps and weighted images from synthetic magnetic resonance imaging using deep learning network. Phys Med Biol 2021; 67. [PMID: 34965516 DOI: 10.1088/1361-6560/ac46dd] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 12/29/2021] [Indexed: 11/12/2022]
Abstract
OBJECTIVE The generation of quantification maps and weighted images in synthetic MRI techniques is based on complex fitting equations, a process that requires long image generation times. The objective of this study is to evaluate the feasibility of a deep learning method for fast reconstruction of synthetic MRI. APPROACH A total of 44 healthy subjects were recruited and randomly divided into a training set (30 subjects) and a testing set (14 subjects). A multiple-dynamic, multiple-echo (MDME) sequence was used to acquire synthetic MRI images. Quantification maps (T1, T2, and proton density (PD) maps) and weighted (T1W, T2W, and T2W FLAIR) images were created with MAGiC software and then used as the ground truth images in the deep learning (DL) model. An improved multichannel U-Net was trained to generate quantification maps and weighted images from raw synthetic MRI imaging data (8 module images). Quantitative evaluation was performed on the quantification maps; both quantitative metrics and qualitative evaluation were used for the weighted images. Nonparametric Wilcoxon signed-rank tests were performed in this study. MAIN RESULTS The quantitative evaluation shows that the error between the generated quantification images and the reference images is small. For weighted images, no significant difference in overall image quality or SNR was identified between DL images and synthetic images. Notably, the DL images achieved improved image contrast on T2W images, and fewer artifacts were present on DL images than on synthetic T2W FLAIR images. SIGNIFICANCE The DL algorithm provides a promising method for image generation in synthetic MRI techniques, in which every step of the calculation can be optimized and accelerated, thereby simplifying the workflow of synthetic MRI techniques.
Affiliation(s)
- Yawen Liu
- School of Biological Science and Medical Engineering, Beihang University, No. 100 Xueyuan Road, Beijing, 100191, China
| | - Haijun Niu
- School of Biological Science and Medical Engineering, Beihang University, No. 100 Xueyuan Road, Beijing, 100191, China
| | - Pengling Ren
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| | - Jialiang Ren
- GE Healthcare, Beijing, 100176, China
| | - Xuan Wei
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| | - Wenjuan Liu
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| | - Heyu Ding
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| | - Jing Li
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| | | | - Tingting Zhang
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| | - Han Lv
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| | - Hongxia Yin
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| | - Zhenchang Wang
- Department of Radiology, Capital Medical University Affiliated Beijing Friendship Hospital, Yong'an Road 36, Beijing, 100050, China
| |
|
46
|
Li Z, Tian Q, Ngamsombat C, Cartmell S, Conklin J, Filho ALMG, Lo WC, Wang G, Ying K, Setsompop K, Fan Q, Bilgic B, Cauley S, Huang SY. High-fidelity fast volumetric brain MRI using synergistic wave-controlled aliasing in parallel imaging and a hybrid denoising generative adversarial network (HDnGAN). Med Phys 2021; 49:1000-1014. [PMID: 34961944 DOI: 10.1002/mp.15427] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 11/22/2021] [Accepted: 12/12/2021] [Indexed: 11/08/2022] Open
Abstract
PURPOSE The goal of this study is to leverage an advanced fast imaging technique, wave-controlled aliasing in parallel imaging (Wave-CAIPI), and a generative adversarial network (GAN) for denoising to achieve accelerated high-quality, high-signal-to-noise-ratio (SNR) volumetric MRI. METHODS Three-dimensional (3D) T2-weighted fluid-attenuated inversion recovery (FLAIR) image data were acquired on 33 multiple sclerosis (MS) patients using a prototype Wave-CAIPI sequence (acceleration factor R = 3×2, 2.75 minutes) and a standard T2-SPACE FLAIR sequence (R = 2, 7.25 minutes). A hybrid denoising GAN entitled "HDnGAN", consisting of a 3D generator and a 2D discriminator, was proposed to denoise highly accelerated Wave-CAIPI images. HDnGAN benefits from the improved image synthesis performance provided by the 3D generator and from the increased number of training samples obtainable from a limited number of patients for training the 2D discriminator. HDnGAN was trained and validated on data from 25 MS patients with the standard FLAIR images as the target and evaluated on data from 8 MS patients not seen during training. HDnGAN was compared to other denoising methods including AONLM, BM4D, MU-Net, and 3D GAN in qualitative and quantitative analysis of output images, using the mean squared error (MSE) and VGG perceptual loss relative to standard FLAIR images, and a reader assessment by two neuroradiologists regarding sharpness, SNR, lesion conspicuity, and overall quality. Finally, the performance of these denoising methods was compared at higher noise levels using simulated data with added Rician noise. RESULTS HDnGAN effectively denoised low-SNR Wave-CAIPI images with sharpness and rich textural details, which could be adjusted by controlling the contribution of the adversarial loss to the total loss when training the generator. Quantitatively, HDnGAN (λ = 10⁻³) achieved low MSE and the lowest VGG perceptual loss. The reader study showed that HDnGAN (λ = 10⁻³) significantly improved the SNR of Wave-CAIPI images (P < 0.001), outperformed AONLM (P = 0.015), BM4D (P < 0.001), MU-Net (P < 0.001), and 3D GAN (λ = 10⁻³) (P < 0.001) regarding image sharpness, and outperformed MU-Net (P < 0.001) and 3D GAN (λ = 10⁻³) (P = 0.001) regarding lesion conspicuity. The overall quality score of HDnGAN (λ = 10⁻³) (4.25 ± 0.43) was significantly higher than those of Wave-CAIPI (3.69 ± 0.46, P = 0.003), BM4D (3.50 ± 0.71, P = 0.001), MU-Net (3.25 ± 0.75, P < 0.001), and 3D GAN (λ = 10⁻³) (3.50 ± 0.50, P < 0.001), with no significant difference compared to standard FLAIR images (4.38 ± 0.48, P = 0.333). The advantages of HDnGAN over other methods were more obvious at higher noise levels. CONCLUSION HDnGAN provides robust and feasible denoising while preserving rich textural detail in empirical volumetric MRI data. Our study using empirical patient data and systematic evaluation supports the use of HDnGAN in combination with modern fast imaging techniques such as Wave-CAIPI to achieve high-fidelity fast volumetric MRI and represents an important step toward the clinical translation of GANs.
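The higher-noise comparison at the end of this abstract relies on simulated Rician noise. A magnitude MR image corrupted by independent Gaussian noise in the real and imaginary channels follows a Rician distribution, which can be simulated generically as below (a standard sketch, not the study's code; σ is a free noise-level parameter, not a value from the paper):

```python
import numpy as np

def add_rician_noise(image, sigma, rng=None):
    """Corrupt a magnitude image with Rician noise: add i.i.d.
    Gaussian noise of standard deviation sigma to the real and
    imaginary channels, then take the magnitude."""
    rng = np.random.default_rng(rng)
    real = image + rng.normal(0.0, sigma, image.shape)
    imag = rng.normal(0.0, sigma, image.shape)
    return np.sqrt(real ** 2 + imag ** 2)
```

Rician noise is signal-dependent: in zero-signal background it reduces to a Rayleigh distribution with mean σ√(π/2), which is why magnitude-image backgrounds have a nonzero noise floor.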
Affiliation(s)
- Ziyu Li
- Department of Biomedical Engineering, Tsinghua University, Beijing, P.R. China
| | - Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
| | - Chanon Ngamsombat
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Mahidol, Thailand
| | - Samuel Cartmell
- Department of Radiology, Massachusetts General Hospital, Boston, USA
| | - John Conklin
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
| | - Augusto Lio M Gonçalves Filho
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
| | | | - Guangzhi Wang
- Department of Biomedical Engineering, Tsinghua University, Beijing, P.R. China
| | - Kui Ying
- Department of Engineering Physics, Tsinghua University, Beijing, P.R. China
| | - Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Qiuyun Fan
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
| | - Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Stephen Cauley
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
| | - Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
|
47
|
Dong S, Hangel G, Bogner W, Trattnig S, Rossler K, Widhalm G, De Feyter HM, De Graaf RA, Duncan JS. High-Resolution Magnetic Resonance Spectroscopic Imaging using a Multi-Encoder Attention U-Net with Structural and Adversarial Loss. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2891-2895. [PMID: 34891851 DOI: 10.1109/embc46164.2021.9630146] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Common to most medical imaging techniques, the spatial resolution of Magnetic Resonance Spectroscopic Imaging (MRSI) is ultimately limited by the achievable SNR. This work presents a deep learning method for ¹H-MRSI spatial resolution enhancement, based on the observation that multi-parametric MRI images provide relevant spatial priors for MRSI enhancement. A Multi-encoder Attention U-Net (MAU-Net) architecture was constructed to process an MRSI metabolic map and three different MRI modalities through separate encoding paths. Spatial attention modules were incorporated to automatically learn spatial weights that highlight salient features for each MRI modality. MAU-Net was trained on in vivo brain imaging data from patients with high-grade gliomas, using a combined loss function consisting of pixel, structural, and adversarial losses. Experimental results showed that the proposed method is able to reconstruct high-quality metabolic maps at a resolution of 64 × 64 from a low resolution of 16 × 16, with better performance than several baseline methods.
|
48
|
Liu YW, Niu HJ, Yin HX, Xia JJ, Ren PL, Zhang TT, Li J, Lv H, Ding HY, Ren JL, Wang ZC. Feasibility of Brain Imaging Using a Digital Surround Technology Body Coil: A Study Based on SRGAN-VGG Convolutional Neural Networks . ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3734-3737. [PMID: 34892048 DOI: 10.1109/embc46164.2021.9630816] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Brain imaging using conventional head coils presents several problems in routine magnetic resonance (MR) examination, such as anxiety and claustrophobic reactions during scanning with a head coil, photon attenuation caused by the MRI head coil in positron emission tomography (PET)/MRI, and coil constraints in intraoperative MRI or MRI-guided radiotherapy. In this paper, we propose a super-resolution generative adversarial (SRGAN-VGG) network-based approach to enhance low-quality brain images scanned with body coils. Two types of T1 fluid-attenuated inversion recovery (FLAIR) images scanned with different coils were obtained in this study: joint images of the head-neck coil and digital surround technology body coil (H+B images) and body coil images (B images). The deep learning (DL) model was trained using images acquired from 36 subjects and tested in 4 subjects. Both quantitative and qualitative image quality assessment methods were performed during evaluation. Wilcoxon signed-rank tests were used for statistical analysis. Quantitative image quality assessment showed an improved structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) in gray matter and cerebrospinal fluid (CSF) tissues for DL images compared with B images (P < .01), while the mean square error (MSE) was significantly decreased (P < .05). The analysis also showed that the natural image quality evaluator (NIQE) and blind image quality index (BIQI) were significantly lower for DL images than for B images (P < .0001). Qualitative scoring results indicated that DL images showed an improved SNR, image contrast, and sharpness (P < .0001). The outcomes of this study preliminarily indicate that body coils can be used in brain imaging, making it possible to expand the application of MR-based brain imaging.
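PSNR, one of the quantitative metrics reported in this abstract (and in entry 45), follows directly from the mean square error between a reference and a test image. A generic NumPy sketch, not the study's evaluation code; inferring `data_range` from the reference image when it is not given is an assumed convention:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB: 10*log10(data_range^2 / MSE)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    if data_range is None:
        # assumed convention: take the dynamic range of the reference
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```

For a unit-range image, a uniform error of 0.1 gives an MSE of 0.01 and hence a PSNR of 20 dB; higher PSNR means the test image is closer to the reference.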
|
49
|
Zormpas-Petridis K, Tunariu N, Curcean A, Messiou C, Curcean S, Collins DJ, Hughes JC, Jamin Y, Koh DM, Blackledge MD. Accelerating Whole-Body Diffusion-weighted MRI with Deep Learning-based Denoising Image Filters. Radiol Artif Intell 2021; 3:e200279. [PMID: 34617028 PMCID: PMC8489468 DOI: 10.1148/ryai.2021200279] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 05/11/2021] [Accepted: 06/04/2021] [Indexed: 11/23/2022]
Abstract
Purpose To use deep learning to improve the image quality of subsampled images (number of acquisitions = 1 [NOA1]) to reduce whole-body diffusion-weighted MRI (WBDWI) acquisition times. Materials and Methods Both retrospective and prospective patient groups were used to develop a deep learning–based denoising image filter (DNIF) model. For initial model training and validation, 17 patients with metastatic prostate cancer with acquired WBDWI NOA1 and NOA9 images (acquisition period, 2015–2017) were retrospectively included. An additional 22 prospective patients with advanced prostate cancer, myeloma, and advanced breast cancer were used for model testing (2019), and the radiologic quality of DNIF-processed NOA1 (NOA1-DNIF) images was compared with that of NOA1 images and clinical NOA16 images by using a three-point Likert scale (good, average, or poor; statistical significance was calculated by using a Wilcoxon signed-rank test). The model was also retrained and tested in 28 patients with malignant pleural mesothelioma (MPM) who underwent lung MRI (2015–2017) to demonstrate feasibility in other body regions. Results The model visually improved the quality of NOA1 images in all test patients, with the majority of NOA1-DNIF and NOA16 images being graded as either "average" or "good" across all image-quality criteria. From validation data, the mean apparent diffusion coefficient (ADC) values within NOA1-DNIF images of bone disease deviated from those within NOA9 images by an average of 1.9% (range, 1.1%–2.6%). The model was also successfully applied in the context of MPM; the mean ADCs from NOA1-DNIF images of MPM deviated from those measured by using clinical-standard images (NOA12) by 3.7% (range, 0.2%–10.6%). Conclusion Clinical-standard images were generated from subsampled images by using a DNIF.
Keywords: Image Postprocessing, MR-Diffusion-weighted Imaging, Neural Networks, Oncology, Whole-Body Imaging, Supervised Learning, MR-Functional Imaging, Metastases, Prostate, Lung Supplemental material is available for this article. Published under a CC BY 4.0 license.
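The ADC deviations reported above come from fitting the monoexponential diffusion decay model S(b) = S(0)·exp(−b·ADC) to images at different b-values. A minimal two-point sketch (illustrative only; the b-values and function names are assumptions, not the authors' fitting code) is:

```python
import numpy as np

def adc_two_point(s_low, s_high, b_low=0.0, b_high=900.0):
    """Two-point ADC estimate (mm^2/s) from signals at two b-values,
    assuming monoexponential decay S(b) = S(0) * exp(-b * ADC)."""
    s_low = np.clip(np.asarray(s_low, dtype=np.float64), 1e-6, None)
    s_high = np.clip(np.asarray(s_high, dtype=np.float64), 1e-6, None)
    return np.log(s_low / s_high) / (b_high - b_low)

def mean_adc_deviation(adc_test, adc_ref):
    """Mean absolute percent deviation of a test ADC map from a reference,
    as used to compare NOA1-DNIF against NOA9/NOA12 maps."""
    return 100.0 * np.mean(np.abs(adc_test - adc_ref) / np.abs(adc_ref))
```

In practice a least-squares fit over several b-values is more robust than the two-point formula, but the deviation statistic is computed the same way.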
Affiliation(s)
- Konstantinos Zormpas-Petridis, Nina Tunariu, Andra Curcean, Christina Messiou, Sebastian Curcean, David J Collins, Julie C Hughes, Yann Jamin, Dow-Mu Koh, Matthew D Blackledge
- Division of Radiation Therapy and Imaging, The Institute of Cancer Research, 123 Old Brompton Rd, London SW7 3RP, England (K.Z.P., N.T., A.C., C.M., S.C., D.J.C., J.C.H., Y.J., D.M.K., M.D.B.); and Department of Radiology, The Royal Marsden National Health Service Foundation Trust, Surrey, England (N.T., A.C., C.M., S.C., J.C.H., D.M.K.)
|
50
|
Xu Y, Han K, Zhou Y, Wu J, Xie X, Xiang W. Deep Adaptive Blending Network for 3D Magnetic Resonance Image Denoising. IEEE J Biomed Health Inform 2021; 25:3321-3331. [PMID: 34101607 DOI: 10.1109/jbhi.2021.3087407] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The visual quality of magnetic resonance images (MRIs) is crucial for clinical diagnosis and scientific research. The main source of quality degradation is the noise generated during MRI acquisition. Although denoising MRI with deep learning shows great superiority over traditional methods, the deep learning methods reported to date in the literature cannot simultaneously leverage long-range and hierarchical information, and cannot adequately utilize the similarity in 3D MRI. In this paper, we address these two issues by proposing a deep adaptive blending network (DABN) characterized by a large receptive field residual dense block and an adaptive blending method. We first propose the large receptive field residual dense block, which can capture long-range information and fuse hierarchical features simultaneously. We then propose the adaptive blending method, which produces denoised pixels by adaptively filtering the 3D MRI, explicitly exploiting its self-similarity. A residual term is also included as compensation after adaptive filtering. The blending adaptive filter and residual are predicted by a network consisting of several large receptive field residual dense blocks. Experimental results show that the proposed DABN outperforms state-of-the-art denoising methods on both clinical and simulated MRI data.
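As a rough sketch of the adaptive blending idea (a 2D toy version with placeholder predicted quantities, not the DABN architecture): each output pixel is a weighted sum of its own neighborhood using a kernel predicted for that pixel, plus a predicted residual.

```python
import numpy as np

def adaptive_blend(image, weights, residual, k=3):
    """Per-pixel adaptive filtering: output[i, j] is the weighted sum of the
    k x k neighborhood of image[i, j] with the kernel weights[i, j] predicted
    for that pixel, plus a residual compensating term."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")  # replicate borders
    h, w = image.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(weights[i, j] * patch)  # weights[i, j] is k x k
    return out + residual
```

In DABN, both `weights` and `residual` would be predicted per voxel (in 3D) by the network of large receptive field residual dense blocks; here they are free inputs for illustration.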
|