201
Park JE, Vollmuth P, Kim N, Kim HS. Research Highlight: Use of Generative Images Created with Artificial Intelligence for Brain Tumor Imaging. Korean J Radiol 2022; 23:500-504. [PMID: 35434978 PMCID: PMC9081688 DOI: 10.3348/kjr.2022.0033]
Affiliation(s)
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Philipp Vollmuth
- Department of Neuroradiology, University of Heidelberg, Heidelberg, Germany
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
202
Iterative Reconstruction for Low-Dose CT using Deep Gradient Priors of Generative Model. IEEE Transactions on Radiation and Plasma Medical Sciences 2022. [DOI: 10.1109/trpms.2022.3148373]
203
Zhao M, Wang S, Shi F, Jia C, Sun X, Chen S. OUP accepted manuscript. Bioinformatics 2022; 38:i53-i59. [PMID: 35758798 PMCID: PMC9235483 DOI: 10.1093/bioinformatics/btac219]
Abstract
Motivation: The presence of tumor cell clusters in pleural effusion may be a signal of cancer metastasis. Instance segmentation of single cells from cell clusters plays a pivotal role in cluster cell analysis. However, current cell segmentation methods perform poorly on cluster cells because of the overlapping and touching characteristics of clusters, the multiple-instance properties of cells, and the poor generalization ability of the models. Results: In this article, we propose a contour constraint instance segmentation framework (CC framework) for cluster cells based on a cluster cell combination enhancement module. The framework can accurately locate each instance within cluster cells and achieve high-precision contour segmentation from only a few samples. Specifically, we propose a contour attention constraint module to alleviate over- and under-segmentation along individual cell-instance boundaries. In addition, to evaluate the framework, we construct a pleural effusion cluster cell dataset comprising 197 high-quality samples. Quantitatively, the mask average precision (AP) exceeds 90%, an improvement of more than 10% over state-of-the-art semantic segmentation algorithms. Qualitatively, our method rarely produces segmentation errors.
Affiliation(s)
- Meng Zhao
- To whom correspondence should be addressed.
- Siyu Wang
- Engineering Research Center of Learning-Based Intelligent System (Ministry of Education), The Key Laboratory of Computer Vision and System (Ministry of Education), and the School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
- Fan Shi
- Engineering Research Center of Learning-Based Intelligent System (Ministry of Education), The Key Laboratory of Computer Vision and System (Ministry of Education), and the School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
- Chen Jia
- Engineering Research Center of Learning-Based Intelligent System (Ministry of Education), The Key Laboratory of Computer Vision and System (Ministry of Education), and the School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
- Xuguo Sun
- School of Medical Laboratory, Tianjin Medical University, Tianjin 300204, China
- Shengyong Chen
- Engineering Research Center of Learning-Based Intelligent System (Ministry of Education), The Key Laboratory of Computer Vision and System (Ministry of Education), and the School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China
204
Zeng D, Wang L, Geng M, Li S, Deng Y, Xie Q, Li D, Zhang H, Li Y, Xu Z, Meng D, Ma J. Noise-Generating-Mechanism-Driven Unsupervised Learning for Low-Dose CT Sinogram Recovery. IEEE Transactions on Radiation and Plasma Medical Sciences 2022. [DOI: 10.1109/trpms.2021.3083361]
205
Ben Ali W, Pesaranghader A, Avram R, Overtchouk P, Perrin N, Laffite S, Cartier R, Ibrahim R, Modine T, Hussin JG. Implementing Machine Learning in Interventional Cardiology: The Benefits Are Worth the Trouble. Front Cardiovasc Med 2021; 8:711401. [PMID: 34957230 PMCID: PMC8692711 DOI: 10.3389/fcvm.2021.711401]
Abstract
Driven by recent innovations and technological progress, the increasing quality and amount of biomedical data coupled with advances in computing power have enabled much progress in artificial intelligence (AI) approaches for health and biomedical research. In interventional cardiology, the hope is for AI to provide automated analysis and deeper interpretation of data from electrocardiography, computed tomography, magnetic resonance imaging, and electronic health records, among others. Furthermore, high-performance predictive models supporting decision-making hold the potential to improve safety and diagnostic and prognostic prediction in patients undergoing interventional cardiology procedures. These applications include robotic-assisted percutaneous coronary intervention procedures and automatic assessment of coronary stenosis during diagnostic coronary angiograms. Machine learning (ML) has been used in the innovations that have improved the field of interventional cardiology, and more recently, deep learning (DL) has emerged as one of the most successful branches of ML in many applications. It remains to be seen whether DL approaches will have a major impact on current and future practice. DL-based predictive systems also have several limitations, including lack of interpretability and lack of generalizability due to cohort heterogeneity and low sample sizes. There are also challenges for the clinical implementation of these systems, such as ethical limits and data privacy. This review is intended to draw the attention of health practitioners and interventional cardiologists to the broad and helpful applications of ML and DL algorithms to date in the field. Their implementation challenges in daily practice and future applications in the field of interventional cardiology are also discussed.
Affiliation(s)
- Walid Ben Ali
- Service Médico-Chirurgical, Valvulopathies-Chirurgie Cardiaque-Cardiologie Interventionelle Structurelle, Hôpital Cardiologique de Haut Lévèque, Bordeaux, France; Structural Heart Program and Interventional Cardiology, Université de Montréal, Montreal Heart Institute, Montréal, QC, Canada
- Ahmad Pesaranghader
- Faculty of Medicine, Research Center, Montreal Heart Institute, Université de Montréal, Montréal, QC, Canada; Computer Science and Operations Research Department, Mila (Quebec Artificial Intelligence Institute), Montreal, QC, Canada
- Robert Avram
- Faculty of Medicine, Research Center, Montreal Heart Institute, Université de Montréal, Montréal, QC, Canada
- Pavel Overtchouk
- Interventional Cardiology and Cardiovascular Surgery, Centre Hospitalier Regional Universitaire de Lille (CHRU de Lille), Lille, France
- Nils Perrin
- Structural Heart Program and Interventional Cardiology, Université de Montréal, Montreal Heart Institute, Montréal, QC, Canada
- Stéphane Laffite
- Service Médico-Chirurgical, Valvulopathies-Chirurgie Cardiaque-Cardiologie Interventionelle Structurelle, Hôpital Cardiologique de Haut Lévèque, Bordeaux, France
- Raymond Cartier
- Structural Heart Program and Interventional Cardiology, Université de Montréal, Montreal Heart Institute, Montréal, QC, Canada
- Reda Ibrahim
- Structural Heart Program and Interventional Cardiology, Université de Montréal, Montreal Heart Institute, Montréal, QC, Canada
- Thomas Modine
- Service Médico-Chirurgical, Valvulopathies-Chirurgie Cardiaque-Cardiologie Interventionelle Structurelle, Hôpital Cardiologique de Haut Lévèque, Bordeaux, France
- Julie G Hussin
- Faculty of Medicine, Research Center, Montreal Heart Institute, Université de Montréal, Montréal, QC, Canada
206
Li Z, Tian Q, Ngamsombat C, Cartmell S, Conklin J, Filho ALMG, Lo WC, Wang G, Ying K, Setsompop K, Fan Q, Bilgic B, Cauley S, Huang SY. High-fidelity fast volumetric brain MRI using synergistic wave-controlled aliasing in parallel imaging and a hybrid denoising generative adversarial network (HDnGAN). Med Phys 2021; 49:1000-1014. [PMID: 34961944 DOI: 10.1002/mp.15427]
Abstract
PURPOSE The goal of this study is to leverage an advanced fast imaging technique, wave-controlled aliasing in parallel imaging (Wave-CAIPI), and a generative adversarial network (GAN) for denoising to achieve accelerated high-quality, high-signal-to-noise-ratio (SNR) volumetric MRI. METHODS Three-dimensional (3D) T2-weighted fluid-attenuated inversion recovery (FLAIR) image data were acquired on 33 multiple sclerosis (MS) patients using a prototype Wave-CAIPI sequence (acceleration factor R = 3×2, 2.75 minutes) and a standard T2-SPACE FLAIR sequence (R = 2, 7.25 minutes). A hybrid denoising GAN entitled "HDnGAN," consisting of a 3D generator and a 2D discriminator, was proposed to denoise highly accelerated Wave-CAIPI images. HDnGAN benefits from the improved image synthesis performance provided by the 3D generator and from the increased number of training samples available from a limited number of patients for training the 2D discriminator. HDnGAN was trained and validated on data from 25 MS patients with the standard FLAIR images as the target and evaluated on data from 8 MS patients not seen during training. HDnGAN was compared to other denoising methods, including AONLM, BM4D, MU-Net, and 3D GAN, in qualitative and quantitative analysis of output images using the mean squared error (MSE) and VGG perceptual loss relative to standard FLAIR images, and in a reader assessment by two neuroradiologists regarding sharpness, SNR, lesion conspicuity, and overall quality. Finally, the performance of these denoising methods was compared at higher noise levels using simulated data with added Rician noise. RESULTS HDnGAN effectively denoised low-SNR Wave-CAIPI images with sharpness and rich textural detail, which could be adjusted by controlling the contribution of the adversarial loss to the total loss when training the generator. Quantitatively, HDnGAN (λ = 10⁻³) achieved low MSE and the lowest VGG perceptual loss. The reader study showed that HDnGAN (λ = 10⁻³) significantly improved the SNR of Wave-CAIPI images (P < 0.001), outperformed AONLM (P = 0.015), BM4D (P < 0.001), MU-Net (P < 0.001), and 3D GAN (λ = 10⁻³) (P < 0.001) regarding image sharpness, and outperformed MU-Net (P < 0.001) and 3D GAN (λ = 10⁻³) (P = 0.001) regarding lesion conspicuity. The overall quality score of HDnGAN (λ = 10⁻³) (4.25 ± 0.43) was significantly higher than those of Wave-CAIPI (3.69 ± 0.46, P = 0.003), BM4D (3.50 ± 0.71, P = 0.001), MU-Net (3.25 ± 0.75, P < 0.001), and 3D GAN (λ = 10⁻³) (3.50 ± 0.50, P < 0.001), with no significant difference from standard FLAIR images (4.38 ± 0.48, P = 0.333). The advantages of HDnGAN over other methods were more obvious at higher noise levels. CONCLUSION HDnGAN provides robust and feasible denoising while preserving rich textural detail in empirical volumetric MRI data. Our study using empirical patient data and systematic evaluation supports the use of HDnGAN in combination with modern fast imaging techniques such as Wave-CAIPI to achieve high-fidelity fast volumetric MRI and represents an important step toward the clinical translation of GANs.
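The adjustable adversarial weight λ described above can be illustrated with a short sketch. This is a minimal toy version with stand-in modules, not the authors' HDnGAN code: a 3D generator denoises a volume, a 2D discriminator scores individual slices, and the generator loss mixes a content term with a λ-weighted adversarial term.

```python
import torch
import torch.nn as nn

g = nn.Conv3d(1, 1, 3, padding=1)            # stand-in 3D generator
d = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.Flatten(),
                  nn.LazyLinear(1))           # stand-in 2D slice discriminator
mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()
lam = 1e-3                                    # adversarial weight (lambda)

noisy = torch.randn(2, 1, 16, 64, 64)         # B x C x D x H x W toy volumes
clean = torch.randn(2, 1, 16, 64, 64)

denoised = g(noisy)
# Treat each axial slice as a separate 2D sample for the discriminator.
slices = denoised.permute(0, 2, 1, 3, 4).reshape(-1, 1, 64, 64)
adv = bce(d(slices), torch.ones(slices.shape[0], 1))  # fool the discriminator
loss_g = mse(denoised, clean) + lam * adv     # total generator loss
loss_g.backward()
# In a real training loop the discriminator is updated in a separate step.
```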
Affiliation(s)
- Ziyu Li
- Department of Biomedical Engineering, Tsinghua University, Beijing, P.R. China
- Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- Chanon Ngamsombat
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Mahidol, Thailand
- Samuel Cartmell
- Department of Radiology, Massachusetts General Hospital, Boston, USA
- John Conklin
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- Augusto Lio M Gonçalves Filho
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- Guangzhi Wang
- Department of Biomedical Engineering, Tsinghua University, Beijing, P.R. China
- Kui Ying
- Department of Engineering Physics, Tsinghua University, Beijing, P.R. China
- Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Qiuyun Fan
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Stephen Cauley
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
207
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Frontiers in Radiology 2021; 1:781868. [PMID: 37492170 PMCID: PMC10365109 DOI: 10.3389/fradi.2021.781868]
Abstract
Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, with potential applications ranging from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based methods for image reconstruction are emphasized, organized according to their methodological designs and their performance in handling volumetric imaging data. We expect this review to help relevant researchers understand how to adapt AI for medical imaging and what advantages can be achieved with the assistance of AI.
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Peng Cheng Laboratory, Shenzhen, China
- Guohua Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Shu Liao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
208
Li X, Jiang Y, Rodriguez-Andina JJ, Luo H, Yin S, Kaynak O. When medical images meet generative adversarial network: recent development and research opportunities. Discover Artificial Intelligence 2021; 1:5. [DOI: 10.1007/s44163-021-00006-0]
Abstract
Deep learning techniques have promoted the rise of artificial intelligence (AI) and performed well in computer vision. Medical image analysis is an important application of deep learning, which is expected to greatly reduce the workload of doctors, contributing to more sustainable health systems. However, most current AI methods for medical image analysis are based on supervised learning, which requires a large amount of annotated data. The number of medical images available is usually small, and the acquisition of medical image annotations is an expensive process. The generative adversarial network (GAN), an unsupervised method that has become very popular in recent years, can simulate the distribution of real data and generate approximations of real data. GANs open exciting new ways for medical image generation, expanding the number of medical images available for deep learning methods. Generated data can solve the problem of insufficient data or imbalanced data categories. Adversarial training is another contribution of GANs to medical imaging and has been applied to many tasks, such as classification, segmentation, and detection. This paper investigates the research status of GANs in medical imaging and analyzes several GAN methods commonly applied in this area. The study addresses GAN applications both for medical image synthesis and for adversarial learning in other medical image tasks. Open challenges and future research directions are also discussed.
209
Zhang X, Han Z, Shangguan H, Han X, Cui X, Wang A. Artifact and Detail Attention Generative Adversarial Networks for Low-Dose CT Denoising. IEEE Transactions on Medical Imaging 2021; 40:3901-3918. [PMID: 34329159 DOI: 10.1109/tmi.2021.3101616]
Abstract
Generative adversarial networks are being extensively studied for low-dose computed tomography denoising. However, due to the similar distributions of noise, artifacts, and the high-frequency components of useful tissue images, it is difficult for existing generative adversarial network-based denoising networks to effectively separate the artifacts and noise in low-dose computed tomography images. In addition, aggressive denoising may damage the edge and structural information of the computed tomography image and make the denoised image too smooth. To solve these problems, we propose a novel denoising network called the artifact and detail attention generative adversarial network. First, a multi-channel generator is proposed: based on the main feature extraction channel, an artifact and noise attention channel and an edge feature attention channel are added to improve the denoising network's attention to the noise, artifact, and edge features of the image. Additionally, a new structure called the multi-scale Res2Net discriminator is proposed, in which the receptive field is expanded by extracting multi-scale features at the same image scale to improve the discriminative ability of the discriminator. The loss functions are specially designed for each sub-channel of the denoising network according to its function. Through the cooperation of the multiple loss functions, the network's convergence speed, stability, and denoising effect are respectively accelerated, improved, and guaranteed. Experimental results show that the proposed denoising network can preserve the important information of the low-dose computed tomography image and achieve a better denoising effect than state-of-the-art algorithms.
210
Xia W, Lu Z, Huang Y, Shi Z, Liu Y, Chen H, Chen Y, Zhou J, Zhang Y. MAGIC: Manifold and Graph Integrative Convolutional Network for Low-Dose CT Reconstruction. IEEE Transactions on Medical Imaging 2021; 40:3459-3472. [PMID: 34110990 DOI: 10.1109/tmi.2021.3088344]
Abstract
Low-dose computed tomography (LDCT) scans can effectively alleviate the radiation problem but degrade imaging quality. In this paper, we propose a novel LDCT reconstruction network that unrolls the iterative scheme and operates in both the image and manifold spaces. Because patch manifolds of medical images have low-dimensional structures, we can build graphs from the manifolds. We then simultaneously leverage spatial convolution to extract local pixel-level features from the images and incorporate graph convolution to analyze nonlocal topological features in manifold space. Experiments show that our proposed method outperforms state-of-the-art methods both quantitatively and qualitatively. In addition, aided by a projection loss component, the proposed method also demonstrates superior performance for semi-supervised learning: with only 10% of the training data labeled (40 slices), the network can remove most noise while preserving details.
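The combination of image-space and manifold-space processing described above can be sketched as follows. This is a minimal toy illustration under my own assumptions (a random adjacency graph and stand-in layers), not the published MAGIC network: a GCN-style layer supplies nonlocal patch features while a spatial convolution supplies local ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def graph_conv(x, adj, weight):
    """One GCN-style layer: relu(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + torch.eye(adj.shape[0])
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return F.relu(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ weight)

n_patches, feat_dim = 256, 64
patch_feats = torch.randn(n_patches, feat_dim)            # features of patches
adj = (torch.rand(n_patches, n_patches) > 0.95).float()   # toy neighbor graph
adj = ((adj + adj.T) > 0).float()                         # symmetrize
w = torch.randn(feat_dim, feat_dim) * 0.01

nonlocal_feats = graph_conv(patch_feats, adj, w)          # manifold-space branch
image = torch.randn(1, 1, 64, 64)
local_feats = nn.Conv2d(1, 8, 3, padding=1)(image)        # image-space branch
# In MAGIC, the two branches are fused inside an unrolled iterative scheme.
```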
211
Direct pixel to pixel principal strain mapping from tagging MRI using end to end deep convolutional neural network (DeepStrain). Sci Rep 2021; 11:23021. [PMID: 34836988 PMCID: PMC8626490 DOI: 10.1038/s41598-021-02279-y]
Abstract
Regional soft tissue mechanical strain offers crucial insights into tissue's mechanical function and provides vital indicators for related disorders. Tagging magnetic resonance imaging (tMRI) has been the standard method for assessing the mechanical characteristics of organs such as the heart, the liver, and the brain. However, constructing accurate, artifact-free pixelwise strain maps at the native resolution of the tagged images has for decades been a challenging unsolved task. In this work, we developed an end-to-end deep learning framework for pixel-to-pixel mapping of the two two-dimensional Eulerian principal strains directly from 1-1 spatial modulation of magnetization (SPAMM) tMRI at native image resolution using a convolutional neural network (CNN). Four different deep learning conditional generative adversarial network (cGAN) approaches were examined. Validations were performed using Monte Carlo computational model simulations and in-vivo datasets and compared to the harmonic phase (HARP) method, a conventional and validated method for tMRI analysis, with six different filter settings. Principal strain maps of Monte Carlo tMRI simulations with various anatomical, functional, and imaging parameters demonstrate artifact-free, solid agreement with the corresponding ground-truth maps. Correlations with the ground-truth strain maps were R = 0.90 and 0.92 for the best-proposed cGAN approach, compared to R = 0.12 and 0.73 for the best HARP method, for the first and second principal strains, respectively. The proposed cGAN approach's error was substantially lower than the error of the best HARP method at all strain ranges. In-vivo results are presented for both healthy subjects and patients with cardiac conditions (pulmonary hypertension). Strain maps, obtained directly from their corresponding tagged MR images, depict anatomical, functional, and temporal details at pixelwise native high resolution with unprecedented clarity. This work demonstrates the feasibility of using the deep learning cGAN for direct myocardial and liver Eulerian strain mapping from tMRI at native image resolution with minimal artifacts.
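As a side illustration of the quantity being mapped, the sketch below computes pixelwise 2D principal strains from a displacement field under a small-strain approximation; it is my own worked example, not the paper's cGAN pipeline.

```python
import numpy as np

def principal_strains(ux, uy, spacing=1.0):
    """Eigenvalues of the symmetric 2D strain tensor at every pixel."""
    duxdy, duxdx = np.gradient(ux, spacing)   # axis 0 = rows (y), axis 1 = cols (x)
    duydy, duydx = np.gradient(uy, spacing)
    exx, eyy = duxdx, duydy
    exy = 0.5 * (duxdy + duydx)
    # Closed-form eigenvalues of [[exx, exy], [exy, eyy]].
    mean = 0.5 * (exx + eyy)
    radius = np.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    return mean + radius, mean - radius       # first and second principal strain

ux = np.random.randn(128, 128) * 0.1          # toy displacement field components
uy = np.random.randn(128, 128) * 0.1
e1, e2 = principal_strains(ux, uy)
```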
212
Image quality in liver CT: low-dose deep learning vs standard-dose model-based iterative reconstructions. Eur Radiol 2021; 32:2865-2874. [PMID: 34821967 DOI: 10.1007/s00330-021-08380-0]
Abstract
OBJECTIVES To compare the overall image quality and the detectability of significant (malignant and pre-malignant) liver lesions between low-dose liver CT (LDCT, 33.3% dose) using deep learning denoising (DLD) and standard-dose CT (SDCT, 100% dose) using model-based iterative reconstruction (MBIR). METHODS In this retrospective study, CT images of 80 patients with hepatic focal lesions were included. For the noninferiority analysis of overall image quality, a margin of -0.5 points (on a 5-point scale) for the difference between scan protocols was pre-defined. Other quantitative and qualitative image quality assessments were performed. Additionally, the detectability of significant liver lesions was compared across 64 pairs of CT scans using jackknife alternative free-response ROC analysis, with noninferiority declared if the lower limit of the 95% confidence interval (CI) of the figure-of-merit difference did not fall below -0.1. RESULTS The mean overall image quality scores with LDCT and SDCT were 3.77 ± 0.38 and 3.94 ± 0.34, respectively, a difference of -0.17 (95% CI: -0.21 to -0.12), which did not cross the predefined noninferiority margin of -0.5. Furthermore, LDCT showed a significantly superior lesion-to-liver contrast-to-noise ratio (p < 0.05). However, although LDCT scored above average in the qualitative image quality assessments, its scores were significantly lower than those of SDCT (p < 0.05). The figure-of-merit for lesion detection was 0.859 for LDCT and 0.878 for SDCT, demonstrating noninferiority (difference: -0.019, 95% CI: -0.058 to 0.021). CONCLUSION LDCT using DLD, at a 67% radiation dose reduction, showed noninferior overall image quality and lesion detectability compared to SDCT. KEY POINTS • Low-dose liver CT using deep learning denoising (DLD), at 67% dose reduction, provided noninferior overall image quality compared to standard-dose CT using model-based iterative reconstruction (MBIR). • Low-dose CT using DLD showed significantly less noise and a higher lesion-to-liver CNR than standard-dose CT using MBIR, and achieved at least an average image quality score among all readers, albeit with lower scores than standard-dose CT using MBIR. • Low-dose liver CT showed noninferior detectability for malignant and pre-malignant liver lesions compared to standard-dose CT.
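The noninferiority logic reported above reduces to a one-line check: noninferiority holds when the lower bound of the confidence interval stays above the predefined margin. A worked sketch with the abstract's numbers (my own illustration, not the authors' statistical code):

```python
def noninferior(ci_lower: float, margin: float) -> bool:
    """Noninferiority holds if the CI lower bound does not cross the margin."""
    return ci_lower > margin

# Overall image quality: difference -0.17, 95% CI (-0.21, -0.12), margin -0.5.
print(noninferior(ci_lower=-0.21, margin=-0.5))    # True
# Lesion detection: difference -0.019, 95% CI (-0.058, 0.021), margin -0.1.
print(noninferior(ci_lower=-0.058, margin=-0.1))   # True
```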
213
Chest Computed Tomography Images in Neonatal Bronchial Pneumonia under the Adaptive Statistical Iterative Reconstruction Algorithm. Journal of Healthcare Engineering 2021; 2021:6183946. [PMID: 34745505 PMCID: PMC8566055 DOI: 10.1155/2021/6183946]
Abstract
This study explored the application value of chest computed tomography (CT) images processed by artificial intelligence (AI) algorithms in the diagnosis of neonatal bronchial pneumonia (NBP). The AI adaptive statistical iterative reconstruction (ASiR) algorithm was adopted to reconstruct chest CT images, and the reconstruction quality under different preweight and postweight values of the ASiR algorithm was compared based on objective measurement and subjective evaluation. Eighty-five neonates with pneumonia treated in the hospital from September 1, 2015, to July 1, 2020, were selected as subjects, and their CT imaging characteristics were analyzed. Subsequently, peripheral blood of healthy neonates from the same period was collected, and the levels of C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) were measured. The efficiency of CT examination, CRP, ESR, and combined examination in the diagnosis of NBP was analyzed. The results showed that the subjective quality score, lung window subjective score, and mediastinal window subjective score were highest after CT image reconstruction when the preweight value of the ASiR algorithm was 50%. After treatment, 79 NBP cases (92.9%) showed ground-glass features in CT images. Compared with the healthy neonates, the levels of CRP and ESR in the peripheral blood of neonates with bronchial pneumonia were much lower (P < 0.05). The accuracy rates of CT examination, CRP examination, ESR examination, CRP + ESR examination, and CRP + ESR + CT examination for the diagnosis of NBP were 80.7%, 75.3%, 75.1%, 80.3%, and 98.6%, respectively. CT technology based on AI algorithms showed high clinical application value in the feature analysis of NBP.
214
Singh S, Sukkala R. Evaluation and comparison of performance of low-dose 128-slice CT scanner with different mAs values: A phantom study. J Carcinog 2021; 20:13. [PMID: 34729045 PMCID: PMC8511832 DOI: 10.4103/jcar.jcar_25_20]
Abstract
OBJECTIVE: Radiation dose in computed tomography (CT) has been a concern of physicists ever since the introduction of the CT scan. The objective of this study was to evaluate the performance of a low-dose 128-slice CT scanner at different mAs values. MATERIALS AND METHODS: A quantitative study was carried out at different mAs values. A Philips Brilliance CT phantom and a Philips Ingenuity 128-slice low-dose CT scanner were used for this study. CT number linearity, CT number accuracy, slice thickness accuracy, high-contrast resolution, and low-contrast resolution were calculated, and the estimated volume computed tomography dose index (CTDIvol) was recorded for all mAs values. Noise was calculated for all mAs values for comparison. RESULTS: Data analysis showed that image quality was acceptable for all protocols. High-contrast resolution for all protocols was 20 line pairs per centimeter. Low-contrast resolution was 4 mm for 50 mAs images and 3 mm for the other mAs protocols. Images acquired using 100 mAs revealed ring artifacts. CTDIvol using 50 mAs was 33% of the CTDIvol using 150 mAs. The dose–length product at 100 mAs was reduced to 66% of the dose–length product at 150 mAs, and that at 50 mAs was reduced to 33%. CONCLUSION: It is evident that mAs has a direct impact on the radiation dose to the patient. With iDose4, tube current can be reduced to 50 mAs in multislice low-dose CT to reduce the radiation dose with minimal effect on image quality for a 4 mm slice thickness. However, noise would dominate at tube currents lower than 50 mAs for 120 kVp.
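The dose figures above follow from the approximately linear scaling of CTDIvol with mAs at fixed kVp, as the quick check below shows (my own arithmetic; the abstract's 66% figure matches within rounding):

```python
# CTDIvol scales linearly with mAs at fixed kVp, so each protocol's dose can
# be expressed as a fraction of the 150 mAs reference.
for mas in (50, 100, 150):
    print(mas, "mAs ->", round(100 * mas / 150), "% of the 150 mAs CTDIvol")
# 50 mAs -> 33 %, 100 mAs -> 67 %, 150 mAs -> 100 %
```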
Affiliation(s)
- Shilpa Singh
- Department of Radiology, Maharishi Markandeshwar (Deemed to be University), Ambala, Haryana, India
- Rajesh Sukkala
- Department of Radiology, Centurion University, Vizianagaram, Andhra Pradesh, India
215
Jose L, Liu S, Russo C, Nadort A, Di Ieva A. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021; 12:43. [PMID: 34881098 PMCID: PMC8609288 DOI: 10.4103/jpi.jpi_103_20]
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have generated considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle many challenging histopathological image processing problems, such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation, and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives for the technique. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting papers on the subject of H&E-stained digital pathology images for histopathological image processing. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks, such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology, with the objective of triggering new research on the application of generative models in future digital pathology informatics.
Affiliation(s)
- Laya Jose
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Sidong Liu
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
- Carlo Russo
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Annemarie Nadort
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
- Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
216
Pan M, Zhang H, Tang Z, Zhao Y, Tian J. Attention-Based Multi-Scale Generative Adversarial Network for synthesizing contrast-enhanced MRI. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3650-3653. [PMID: 34892028 DOI: 10.1109/embc46164.2021.9630887]
Abstract
In clinical practice, about 35% of MRI scans worldwide are currently enhanced with gadolinium-based contrast agents (GBCAs). Injecting GBCAs can make lesions much more visible on contrast-enhanced scans. However, the injection of GBCAs is high-risk, time-consuming, and expensive. Using a generative model such as a generative adversarial network (GAN) to synthesize contrast-enhanced MRI without injection of GBCAs is therefore a very promising alternative. Because lesions in contrast-enhanced images exhibit features at different scales, whereas traditional GANs extract features at a single scale, we propose a new generative model in which a multi-scale strategy is used to extract lesion features at different scales. Moreover, an attention mechanism is added to our model to automatically learn important features across all scales for better feature aggregation. We name the proposed network the attention-based multi-scale contrast-enhanced-image generative adversarial network (AMCGAN). We examine the proposed AMCGAN on a private dataset of 382 subjects with ankylosing spondylitis. The results show that the proposed network achieves state-of-the-art performance in both visual and quantitative evaluations compared with traditional adversarial training. Clinical relevance: This study provides a safe, convenient, and inexpensive tool for clinical practice to obtain contrast-enhanced MRI without injection of GBCAs.
217
Lin A, Kolossváry M, Motwani M, Išgum I, Maurovich-Horvat P, Slomka PJ, Dey D. Artificial intelligence in cardiovascular CT: Current status and future implications. J Cardiovasc Comput Tomogr 2021; 15:462-469. [PMID: 33812855 PMCID: PMC8455701 DOI: 10.1016/j.jcct.2021.03.006]
Abstract
Artificial intelligence (AI) refers to the use of computational techniques to mimic human thought processes and learning capacity. The past decade has seen a rapid proliferation of AI developments for cardiovascular computed tomography (CT). These algorithms aim to increase efficiency, objectivity, and performance in clinical tasks such as image quality improvement, structure segmentation, quantitative measurements, and outcome prediction. By doing so, AI has the potential to streamline clinical workflow, increase interpretative speed and accuracy, and inform subsequent clinical pathways. This review covers state-of-the-art AI techniques in cardiovascular CT and the future role of AI as a clinical support tool.
Affiliation(s)
- Andrew Lin
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Márton Kolossváry
- Cardiovascular Imaging Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Manish Motwani
- Manchester Heart Centre, Manchester University Hospitals NHS Foundation Trust, Manchester, United Kingdom
- Ivana Išgum
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam, Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam, Netherlands; Amsterdam Cardiovascular Sciences, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Piotr J Slomka
- Artificial Intelligence in Medicine Program, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Damini Dey
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
218
Zhang Y, Hu D, Zhao Q, Quan G, Liu J, Liu Q, Zhang Y, Coatrieux G, Chen Y, Yu H. CLEAR: Comprehensive Learning Enabled Adversarial Reconstruction for Subtle Structure Enhanced Low-Dose CT Imaging. IEEE Transactions on Medical Imaging 2021; 40:3089-3101. [PMID: 34270418 DOI: 10.1109/tmi.2021.3097808]
Abstract
X-ray computed tomography (CT) is of great clinical significance in medical practice because it can provide anatomical information about the human body without invasion, while its radiation risk continues to attract public concern. Reducing the radiation dose may introduce noise and artifacts into the reconstructed images, which will interfere with the judgments of radiologists. Previous studies have confirmed that deep learning (DL) is promising for improving low-dose CT imaging. However, almost all DL-based methods suffer from subtle structure degeneration and blurring after aggressive denoising, which has become a general challenge. This paper develops the Comprehensive Learning Enabled Adversarial Reconstruction (CLEAR) method to tackle these problems. CLEAR achieves subtle-structure-enhanced low-dose CT imaging through a progressive improvement strategy. First, the generator, established on the comprehensive domain, can extract more features than one built on degraded CT images and directly maps raw projections to high-quality CT images, which differs significantly from routine GAN practice. Second, a multi-level loss is assigned to the generator to push all network components to be updated towards high-quality reconstruction, preserving consistency between generated images and gold-standard images. Finally, following the WGAN-GP framework, CLEAR can migrate the real statistical properties to the generated images to alleviate over-smoothing. Qualitative and quantitative analyses have demonstrated the competitive performance of CLEAR in terms of noise suppression, structural fidelity, and visual perception improvement.
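The WGAN-GP component mentioned above relies on a gradient penalty that pushes the critic's gradient norm toward 1 on interpolates between real and generated images. A minimal sketch of that standard penalty (the textbook formulation with a stand-in critic, not code from the CLEAR paper):

```python
import torch

def gradient_penalty(critic, real, fake):
    eps = torch.rand(real.shape[0], 1, 1, 1)           # per-sample mix weight
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(mixed).sum()
    grad, = torch.autograd.grad(score, mixed, create_graph=True)
    # Penalize deviation of the gradient norm from 1.
    return ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

critic = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1),
                             torch.nn.Flatten(), torch.nn.LazyLinear(1))
real = torch.randn(4, 1, 64, 64)
fake = torch.randn(4, 1, 64, 64)
loss_critic_gp = 10.0 * gradient_penalty(critic, real, fake)  # typical weight
```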
219
Xia W, Lu Z, Huang Y, Liu Y, Chen H, Zhou J, Zhang Y. CT Reconstruction With PDF: Parameter-Dependent Framework for Data From Multiple Geometries and Dose Levels. IEEE Transactions on Medical Imaging 2021; 40:3065-3076. [PMID: 34086564 DOI: 10.1109/tmi.2021.3085839]
Abstract
Current mainstream deep learning-based computed tomography (CT) reconstruction methods usually need to fix the scanning geometry and dose level, which significantly increases training costs and data requirements for real clinical applications. In this paper, we propose a parameter-dependent framework (PDF) that trains a reconstruction network with data originating from multiple alternative geometries and dose levels simultaneously. In the proposed PDF, the geometry and dose level are parameterized and fed into two multilayer perceptrons (MLPs). The outputs of the MLPs are used to modulate the feature maps of the CT reconstruction network, conditioning the network outputs on different geometries and dose levels. The experiments show that our proposed method obtains performance competitive with the original network trained on either a specific or a mixed geometry and dose level, efficiently saving the extra training costs of covering multiple geometries and dose levels.
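The MLP-based modulation described above can be sketched as a FiLM-style conditioning layer. This is a minimal toy version under my own assumptions (hypothetical layer name and illustrative parameter values), not the paper's implementation:

```python
import torch
import torch.nn as nn

class ParamModulatedConv(nn.Module):
    """Conv layer whose feature maps are scaled/shifted by an MLP embedding
    of the scan parameters (geometry, dose level, ...)."""
    def __init__(self, ch, n_params):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.mlp = nn.Sequential(nn.Linear(n_params, 32), nn.ReLU(),
                                 nn.Linear(32, 2 * ch))  # -> (scale, shift)

    def forward(self, x, params):
        scale, shift = self.mlp(params).chunk(2, dim=1)
        scale = scale[:, :, None, None]                  # broadcast over H, W
        shift = shift[:, :, None, None]
        return torch.relu(self.conv(x) * (1 + scale) + shift)

layer = ParamModulatedConv(ch=16, n_params=3)
feats = torch.randn(2, 16, 64, 64)
params = torch.tensor([[736., 0.10, 500.],   # e.g. detector bins, dose, views
                       [640., 0.25, 800.]])  # values are illustrative only
out = layer(feats, params)
```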
220
Tao X, Wang Y, Lin L, Hong Z, Ma J. Learning to Reconstruct CT Images From the VVBP-Tensor. IEEE Transactions on Medical Imaging 2021; 40:3030-3041. [PMID: 34138703 DOI: 10.1109/tmi.2021.3090257]
Abstract
Deep learning (DL) is driving major changes in the field of computed tomography (CT) imaging. In general, DL for CT imaging can be applied by processing the projection or image data with trained deep neural networks (DNNs), by unrolling the iterative reconstruction as a DNN for training, or by training a well-designed DNN to directly reconstruct the image from the projection. In all of these applications, the DNNs work, in whole or in part, in the projection or image domain, alone or in combination. In this study, instead of focusing on the projection or image, we train DNNs to reconstruct CT images from the view-by-view backprojection tensor (VVBP-Tensor). The VVBP-Tensor is the 3D data before summation in backprojection; after a sorting operation, it contains the structures of the scanned object. Unlike the image or projection, which provide compressed information due to the integration/summation step in forward or back projection, the VVBP-Tensor provides lossless information for processing, allowing the trained DNNs to preserve fine details of the image. We develop a learning strategy that inputs slices of the VVBP-Tensor as feature maps and outputs the image. Such a strategy can be viewed as a generalization of the summation step in conventional filtered backprojection reconstruction. Numerous experiments reveal that the proposed VVBP-Tensor domain learning framework achieves significant improvements over the image, projection, and hybrid projection-image domain learning frameworks. We hope the VVBP-Tensor domain learning framework can inspire algorithm development for DL-based CT imaging.
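The "generalized summation" idea above can be made concrete with a toy sketch (my own, with stand-in shapes and layers): the per-view backprojections enter as channels, a plain sum over the view axis recovers conventional backprojection, and a small network learns a richer combination.

```python
import torch
import torch.nn as nn

n_views, h, w = 60, 128, 128
vvbp = torch.randn(1, n_views, h, w)       # one backprojection per view, sorted

learned_sum = nn.Sequential(
    nn.Conv2d(n_views, 64, 1), nn.ReLU(),  # learned cross-view combination
    nn.Conv2d(64, 1, 3, padding=1),
)
recon = learned_sum(vvbp)                  # network output image
baseline = vvbp.sum(dim=1, keepdim=True)   # ordinary backprojection summation
```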
221
Huang Y, Preuhs A, Manhart M, Lauritsch G, Maier A. Data Extrapolation From Learned Prior Images for Truncation Correction in Computed Tomography. IEEE Transactions on Medical Imaging 2021; 40:3042-3053. [PMID: 33844627 DOI: 10.1109/tmi.2021.3072568]
Abstract
Data truncation is a common problem in computed tomography (CT). Truncation causes cupping artifacts inside the field-of-view (FOV) and leaves anatomical structures missing outside the FOV. Deep learning has achieved impressive results in CT reconstruction from limited data, but its robustness is still a concern for clinical applications. Although the image quality of learning-based compensation schemes may be inadequate for clinical diagnosis, they can provide prior information for more accurate extrapolation than conventional heuristic extrapolation methods. With the extrapolated projections, a conventional image reconstruction algorithm can then be applied to obtain the final reconstruction. In this work, a general plug-and-play (PnP) method for truncation correction is proposed based on this idea, into which various deep learning methods and conventional reconstruction algorithms can be plugged. Such a PnP method integrates data consistency for the measured data with learned prior image information for the truncated data, and proves to have better robustness and interpretability than deep learning alone. To demonstrate the efficacy of the proposed PnP method, two state-of-the-art deep learning methods, FBPConvNet and Pix2pixGAN, are investigated for truncation correction in cone-beam CT in noise-free and noisy cases. Their robustness is evaluated by showing false negative and false positive lesion cases. With our proposed PnP method, false lesion structures are corrected for both deep learning methods. For FBPConvNet, the root-mean-square error (RMSE) inside the FOV can be improved from 92 HU to around 30 HU by PnP in the noisy case. On its own, Pix2pixGAN generally achieves better image quality than FBPConvNet alone for truncation correction. PnP further improves the RMSE inside the FOV from 42 HU to around 27 HU for Pix2pixGAN. The efficacy of PnP is also demonstrated on real clinical head data.
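The data-consistency idea at the heart of the PnP scheme can be sketched in a few lines (my own illustration, not the paper's code): measured detector bins are kept verbatim, and the learned extrapolation fills only the truncated region before a conventional reconstruction is applied.

```python
import numpy as np

def fuse_projections(measured, extrapolated, fov_mask):
    """fov_mask is True where detector data were actually acquired."""
    return np.where(fov_mask, measured, extrapolated)

n_views, n_bins = 360, 512
measured = np.zeros((n_views, n_bins))
measured[:, 128:384] = np.random.rand(n_views, 256)   # truncated acquisition
fov_mask = np.zeros_like(measured, dtype=bool)
fov_mask[:, 128:384] = True
extrapolated = np.random.rand(n_views, n_bins)        # e.g. a CNN/GAN output
sino = fuse_projections(measured, extrapolated, fov_mask)
# sino now feeds a conventional reconstruction such as FBP/FDK.
```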
222
Keshavamurthy KN, Eickhoff C, Juluru K. Weakly supervised pneumonia localization in chest X-rays using generative adversarial networks. Med Phys 2021; 48:7154-7171. [PMID: 34459001 PMCID: PMC10997001 DOI: 10.1002/mp.15185]
Abstract
PURPOSE Automatic localization of pneumonia on chest X-rays (CXRs) is highly desirable, both as an interpretive aid to the radiologist and for timely diagnosis of the disease. However, pneumonia's amorphous appearance on CXRs and the complexity of normal anatomy in the chest present key challenges that hinder accurate localization. Existing studies in this area are either not optimized to preserve spatial information of abnormality or depend on expensive expert-annotated bounding boxes. We present a novel generative adversarial network (GAN)-based machine learning approach for this problem, which is weakly supervised (does not require any location annotations), was trained to retain spatial information, and produces pixel-wise abnormality maps highlighting regions of abnormality (as opposed to bounding boxes around abnormality). METHODS Our method is based on the Wasserstein GAN framework and is, to the best of our knowledge, the first application of GANs to this problem. Specifically, from an abnormal CXR as input, we generated the corresponding pseudo normal CXR image as output. The pseudo normal CXR is the hypothetical normal appearance of the same CXR had it contained no abnormalities. We surmise that the difference between the pseudo normal and the abnormal CXR highlights the pixels suspected to have pneumonia and hence serves as our output abnormality map. We trained our algorithm on an unpaired dataset of abnormal and normal CXRs and did not require any location annotations such as bounding boxes or segmentations of abnormal regions. Furthermore, we incorporated additional prior knowledge/constraints into the model and showed that they help improve localization performance. We validated the model on a dataset consisting of 14 184 CXRs from the Radiological Society of North America pneumonia detection challenge. RESULTS We evaluated our methods by comparing the generated abnormality maps with radiologist-annotated bounding boxes using receiver operating characteristic (ROC) analysis, image similarity metrics such as normalized cross-correlation/mutual information, and abnormality detection rate. We also present visual examples of the abnormality maps, covering various scenarios of abnormality occurrence. Results demonstrate the ability to highlight regions of abnormality, with the best method achieving an ROC area under the curve (AUC) of 0.77 and a detection rate of 85%. The GAN tended to perform better as prior knowledge/constraints were incorporated into the model. CONCLUSIONS We presented a novel GAN-based approach for localizing pneumonia on CXRs that (1) does not require expensive hand-annotated location ground truth; and (2) was trained to produce abnormality maps at the pixel level as opposed to bounding boxes. We demonstrated the efficacy of our methods via quantitative and qualitative results.
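The core inference step described above is compact enough to sketch (my own toy version with a stand-in generator): the trained generator produces the pseudo normal image, and the pixelwise difference serves as the abnormality map.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(            # stand-in for the trained WGAN generator
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

abnormal_cxr = torch.rand(1, 1, 256, 256)
with torch.no_grad():
    pseudo_normal = generator(abnormal_cxr)
abnormality_map = (abnormal_cxr - pseudo_normal).abs()   # pixelwise map
```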
Affiliation(s)
- Krishna Nand Keshavamurthy
- Brown University, Providence, RI 02912, USA
- Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
- Krishna Juluru
- Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
223
Kusters KC, Zavala-Mondragon LA, Bescos JO, Rongen P, de With PHN, van der Sommen F. Conditional Generative Adversarial Networks for low-dose CT image denoising aiming at preservation of critical image content. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:2682-2687. [PMID: 34891804 DOI: 10.1109/embc46164.2021.9629600]
Abstract
X-ray computed tomography (CT) is an imaging modality in which patients are exposed to potentially harmful ionizing radiation. To limit patient risk, reduced-dose protocols are desirable, but these inherently lead to an increased noise level in the reconstructed CT scans. Consequently, noise reduction algorithms are indispensable in the reconstruction processing chain. In this paper, we propose to leverage a conditional generative adversarial network (cGAN) model to translate CT images from low to routine dose. However, when aiming to produce realistic images, such generative models may alter critical image content. Therefore, we propose to apply a frequency-based separation of the input prior to the cGAN model, in order to limit the cGAN to the high-frequency bands while leaving the low-frequency bands untouched. The results of the proposed method are compared to a state-of-the-art model, both within the cGAN setting and in a single-network setting. The proposed method generates visually superior results compared to the single-network model and the cGAN model in terms of texture quality and preservation of fine structural details. It also appeared that the PSNR, SSIM, and TV metrics are less informative than a careful visual evaluation of the results. The obtained results demonstrate the relevance of defining and separating the input image into desired and undesired content, rather than blindly denoising entire images. This study shows promising results for further investigation of generative models toward finding a reliable deep learning-based noise reduction algorithm for low-dose CT acquisition.
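The frequency-based separation described above can be sketched as follows (my own toy version with a Gaussian low-pass and a stand-in denoiser in place of the cGAN generator): only the high-frequency band passes through the network, and the untouched low-frequency band is added back.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_lowpass(img, k=9, sigma=2.0):
    """Separable Gaussian blur used as the low-pass filter."""
    ax = torch.arange(k) - k // 2
    g = torch.exp(-ax.float() ** 2 / (2 * sigma ** 2))
    kernel = (g[:, None] * g[None, :] / (g.sum() ** 2)).view(1, 1, k, k)
    return F.conv2d(img, kernel, padding=k // 2)

denoiser = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for the cGAN generator

ldct = torch.rand(1, 1, 128, 128)
low = gaussian_lowpass(ldct)               # low-frequency band, kept as-is
high = ldct - low                          # band handed to the denoiser
denoised = low + denoiser(high)            # recombined output
```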
224
Ye S, Li Z, McCann MT, Long Y, Ravishankar S. Unified Supervised-Unsupervised (SUPER) Learning for X-Ray CT Image Reconstruction. IEEE Transactions on Medical Imaging 2021; 40:2986-3001. [PMID: 34232871 DOI: 10.1109/tmi.2021.3095310]
Abstract
Traditional model-based image reconstruction (MBIR) methods combine forward and noise models with simple object priors. Recent machine learning methods for image reconstruction typically involve supervised learning or unsupervised learning, both of which have their advantages and disadvantages. In this work, we propose a unified supervised-unsupervised (SUPER) learning framework for X-ray computed tomography (CT) image reconstruction. The proposed learning formulation combines both unsupervised learning-based priors (or even simple analytical priors) together with (supervised) deep network-based priors in a unified MBIR framework based on a fixed point iteration analysis. The proposed training algorithm is also an approximate scheme for a bilevel supervised training optimization problem, wherein the network-based regularizer in the lower-level MBIR problem is optimized using an upper-level reconstruction loss. The training problem is optimized by alternating between updating the network weights and iteratively updating the reconstructions based on those weights. We demonstrate the learned SUPER models' efficacy for low-dose CT image reconstruction, for which we use the NIH AAPM Mayo Clinic Low Dose CT Grand Challenge dataset for training and testing. In our experiments, we studied different combinations of supervised deep network priors and unsupervised learning-based or analytical priors. Both numerical and visual results show the superiority of the proposed unified SUPER methods over standalone supervised learning-based methods, iterative MBIR methods, and variations of SUPER obtained via ablation studies. We also show that the proposed algorithm converges rapidly in practice.
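The alternation at the heart of SUPER can be sketched with a toy linear forward model (my own simplification with stand-in operators, not the authors' implementation): each outer layer applies a supervised network pass and then a few MBIR-style iterations that balance data fidelity against closeness to the network output.

```python
import torch

def mbir_step(x, A, y, x_net, mu=0.5, lr=1e-3, iters=10):
    """Gradient descent on ||Ax - y||^2 + mu * ||x - x_net||^2."""
    x = x.clone().requires_grad_(True)
    for _ in range(iters):
        loss = ((A @ x - y) ** 2).sum() + mu * ((x - x_net) ** 2).sum()
        g, = torch.autograd.grad(loss, x)
        x = (x - lr * g).detach().requires_grad_(True)
    return x.detach()

n_pix, n_meas = 64, 96
A = torch.randn(n_meas, n_pix) / n_meas   # toy forward model
y = torch.randn(n_meas)                   # toy measurements
net = torch.nn.Linear(n_pix, n_pix)       # stand-in supervised prior network

x = torch.zeros(n_pix)
for _ in range(3):                        # outer SUPER layers
    x_net = net(x).detach()               # supervised network pass
    x = mbir_step(x, A, y, x_net)         # MBIR pass anchored to x_net
```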
Collapse
|
225
|
Miao Z, Yang H, Liu B, Li W. Correlation analysis of epicardial adipose tissue volume quantified by computed tomography images and coronary heart disease under optimized reconstruction algorithm. Pak J Med Sci 2021; 37:1677-1681. [PMID: 34712305 PMCID: PMC8520373 DOI: 10.12669/pjms.37.6-wit.4882] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Revised: 06/12/2021] [Accepted: 07/05/2021] [Indexed: 11/17/2022] Open
Abstract
Objectives: This paper aimed to explore the value of low-dose computed tomography (CT) imaging based on an optimized ordered subset expectation maximization (OSEM) reconstruction algorithm in analyzing the correlation between epicardial adipose tissue (EAT) volume and coronary heart disease (CHD). Methods: A total of 110 patients with CHD were selected for CT angiography (CTA) and coronary arteriography (CAG) examinations from October 2017 to October 2019. The predictive value of EAT for CHD was analyzed via receiver operating characteristic (ROC) curves. Results: The iteration time and error of the improved OSEM reconstruction algorithm were better than those of the MLEM algorithm under the same number of iterations. Age, smoking, hypertension, diabetes, and EAT in the control group were significantly lower than those in the CHD group (P<0.05). EAT in the control group was (124.50±26.72) mL, and EAT in the CHD group was (159.41±38.51) mL. Multiple linear regression analysis suggested that EAT (B=0.023, P=0.003) was an independent risk factor for CHD. Moreover, EAT was a risk factor for CHD and was positively correlated with the degree and NSCV. Conclusion: The optimized OSEM algorithm improved the reconstruction quality of low-dose CT images and was used for quantitative measurement of epicardial fat volume. The results showed that EAT was an independent risk factor for CHD and was positively correlated with the number of coronary lesions and the Gensini score. It is of great value for the prediction of CHD.
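For readers unfamiliar with the baseline algorithm the paper optimizes, a generic ordered-subset expectation maximization (OSEM) update can be sketched as follows; the toy system matrix and subset scheme are assumptions for illustration, not the authors' optimized variant.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=10, eps=1e-8):
    """Basic OSEM: multiplicative update over ordered subsets of projections."""
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            ratio = y[idx] / (As @ x + eps)            # measured / estimated
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps)  # normalized back-projection
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.random((40, 16))                           # toy system matrix
    x_true = rng.random(16)
    y = rng.poisson(A @ x_true * 50) / 50.0            # noisy "low-dose" projections
    print(np.round(osem(A, y), 2))
```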
Collapse
Affiliation(s)
- Zhenwei Miao
- Zhenwei Miao, Master of Medicine. Department of Radiology, Tianjin Baodi Hospital, Tianjin City 301800, China
| | - Hongyan Yang
- Hongyan Yang, Bachelor's Degrees. Department of Nursing, Tianjin Baodi Hospital, Tianjin City 301800, China
| | - Bofen Liu
- Bofen Liu, Bachelor's Degrees. Department of Nursing, Tianjin Baodi Hospital, Tianjin City 301800, China
| | - Wengui Li
- Wengui Li, Bachelor's Degrees. Department of Radiology, Tianjin Baodi Hospital, Tianjin City 301800, China
| |
Collapse
|
226
|
Ketola JHJ, Heino H, Juntunen MAK, Nieminen MT, Siltanen S, Inkinen SI. Generative adversarial networks improve interior computed tomography angiography reconstruction. Biomed Phys Eng Express 2021; 7. [PMID: 34673559 DOI: 10.1088/2057-1976/ac31cb] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Accepted: 10/21/2021] [Indexed: 11/12/2022]
Abstract
In interior computed tomography (CT), the x-ray beam is collimated to a limited field-of-view (FOV) (e.g. the volume of the heart) to decrease exposure to adjacent organs, but the resulting image has a severe truncation artifact when reconstructed with traditional filtered back-projection (FBP) type algorithms. In some examinations, such as cardiac or dentomaxillofacial imaging, interior CT could be used to achieve further dose reductions. In this work, we describe a deep learning (DL) method to obtain artifact-free images from interior CT angiography. Our method employs the Pix2Pix generative adversarial network (GAN) in a two-stage process: (1) an extended sinogram is computed from a truncated sinogram with one GAN model, and (2) the FBP reconstruction obtained from that extended sinogram is used as an input to another GAN model that improves the quality of the interior reconstruction. Our double GAN (DGAN) model was trained with 10 000 truncated sinograms simulated from real computed tomography angiography slice images. Truncated sinograms (input) were used with original slice images (target) in training to yield an improved reconstruction (output). DGAN performance was compared with the adaptive de-truncation method, total variation regularization, and two reference DL methods: FBPConvNet and U-Net-based sinogram extension (ES-UNet). Our DGAN method and ES-UNet yielded the best root-mean-squared error (RMSE) (0.03 ± 0.01) and structural similarity index (SSIM) (0.92 ± 0.02) values, and the reference DL methods also yielded good results. Furthermore, we performed an extended FOV analysis by increasing the reconstruction area by 10% and 20%. In both cases, the DGAN approach yielded the best results in RMSE (0.03 ± 0.01 and 0.04 ± 0.01 for the 10% and 20% cases, respectively), peak signal-to-noise ratio (PSNR) (30.5 ± 2.6 dB and 28.6 ± 2.6 dB), and SSIM (0.90 ± 0.02 and 0.87 ± 0.02). In conclusion, our method was able not only to reconstruct the interior region with improved image quality, but also to extend the reconstructed FOV by 20%.
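A hedged sketch of the two-stage data flow (sinogram extension, then FBP, then image refinement) follows. Both GANs are replaced by trivial placeholders (`extend_sinogram` pads, `refine_image` smooths), so only the pipeline structure mirrors the paper.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize
from scipy.ndimage import gaussian_filter

def extend_sinogram(trunc_sino, full_width):
    """Placeholder for GAN #1: naive edge-padding to the full detector width."""
    pad = (full_width - trunc_sino.shape[0]) // 2
    return np.pad(trunc_sino, ((pad, pad), (0, 0)), mode="edge")

def refine_image(img):
    """Placeholder for GAN #2: mild smoothing instead of learned refinement."""
    return gaussian_filter(img, sigma=1.0)

phantom = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 120, endpoint=False)
sino = radon(phantom, theta=angles)

# Simulate interior CT: keep only the central detector rows (limited FOV)
center, half_fov = sino.shape[0] // 2, 30
trunc = sino[center - half_fov:center + half_fov, :]

recon = refine_image(iradon(extend_sinogram(trunc, sino.shape[0]), theta=angles))
print(recon.shape)
```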
Collapse
Affiliation(s)
- Juuso H J Ketola
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland.,The South Savo Social and Health Care Authority, Mikkeli Central Hospital, FI-50100, Finland
| | - Helinä Heino
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
| | - Mikael A K Juntunen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland.,Department of Diagnostic Radiology, Oulu University Hospital, FI-90029, Finland
| | - Miika T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland.,Department of Diagnostic Radiology, Oulu University Hospital, FI-90029, Finland.,Medical Research Center Oulu, University of Oulu and Oulu University Hospital, FI-90014, Finland
| | - Samuli Siltanen
- Department of Mathematics and Statistics, University of Helsinki, Helsinki, FI-00014, Finland
| | - Satu I Inkinen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, FI-90014, Finland
| |
Collapse
|
227
|
Hariharan SG, Kaethner C, Strobel N, Kowarschik M, Fahrig R, Navab N. Robust learning-based X-ray image denoising - potential pitfalls, their analysis and solutions. Biomed Phys Eng Express 2021; 8. [PMID: 34714256 DOI: 10.1088/2057-1976/ac3489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Accepted: 10/27/2021] [Indexed: 11/12/2022]
Abstract
PURPOSE Since guidance based on X-ray imaging is an integral part of interventional procedures, continuous efforts are taken towards reducing the exposure of patients and clinical staff to ionizing radiation. Even though a reduction in the X-ray dose may lower associated radiation risks, it is likely to impair the quality of the acquired images, potentially making it more difficult for physicians to carry out their procedures. METHOD We present a robust learning-based denoising strategy involving model-based simulations of low-dose X-ray images during the training phase. The method also utilizes a data-driven normalization step - based on an X-ray imaging model - to stabilize the mixed signal-dependent noise associated with X-ray images. We thoroughly analyze the method's sensitivity to a mismatch in the dose levels used for training and application. We also study the impact of differing noise models used when training for low and very low-dose X-ray images on the denoising results. RESULTS A quantitative and qualitative analysis based on acquired phantom and clinical data has shown that the proposed learning-based strategy is stable across different dose levels and yields excellent denoising results, if an accurate noise model is applied. We also found that there can be severe artifacts when the noise characteristics of the training images are significantly different from those in the actual images to be processed. This problem can be especially acute at very low dose levels. During a thorough analysis of our experimental results, we further discovered that viewing the results from the perspective of denoising via thresholding of sub-band coefficients can be very beneficial for a better understanding of the proposed learning-based denoising strategy. CONCLUSION The proposed learning-based denoising strategy provides scope for significant X-ray dose reduction without the loss of important image information, if the noise characteristics are accurately accounted for during the training phase.
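The normalization of mixed signal-dependent noise can be illustrated with a variance-stabilizing transform. Below is a minimal sketch using the generalized Anscombe transform, a common choice for Poisson-Gaussian noise; the authors' exact imaging-model-based normalization is not reproduced here.

```python
import numpy as np

def gat_forward(x, gain=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe transform: maps Poisson-Gaussian noise to ~unit variance."""
    arg = gain * x + 0.375 * gain**2 + sigma**2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

def gat_inverse(y, gain=1.0, sigma=0.0, mu=0.0):
    """Simple algebraic inverse (an unbiased inverse is used in practice)."""
    return ((gain * y / 2.0) ** 2 - 0.375 * gain**2 - sigma**2 + gain * mu) / gain

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    signal = np.linspace(5, 200, 1000)
    noisy = rng.poisson(signal) + rng.normal(0, 2.0, signal.shape)
    stabilized = gat_forward(noisy, gain=1.0, sigma=2.0)
    print(stabilized.std())   # roughly constant variance across intensities
```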
Collapse
Affiliation(s)
- Sai Gokul Hariharan
- Technische Universität München, Fakultät für Informatik, Boltzmannstr. 3, Garching, 85748, GERMANY
| | - Christian Kaethner
- Siemens Healthineers AG, Siemensstraße 1, Forchheim, Bayern, 91301, GERMANY
| | - Norbert Strobel
- Electrical Engineering, University of Applied Sciences Würzburg-Schweinfurt - Campus Schweinfurt, Campus Schweinfurt, Schweinfurt, 97421, GERMANY
| | - Markus Kowarschik
- Siemens Healthineers AG, Siemensstraße 1, Forchheim, Bayern, 91301, GERMANY
| | - Rebecca Fahrig
- Advanced Therapies, Siemens Healthineers, Siemensstraße 1, Forchheim, 91301, GERMANY
| | - Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Arcisstraße 21, München, Bayern, 80333, GERMANY
| |
Collapse
|
228
|
Zhang Y, Liu M, Hu S, Shen Y, Lan J, Jiang B, de Bock GH, Vliegenthart R, Chen X, Xie X. Development and multicenter validation of chest X-ray radiography interpretations based on natural language processing. COMMUNICATIONS MEDICINE 2021; 1:43. [PMID: 35602222 PMCID: PMC9053275 DOI: 10.1038/s43856-021-00043-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 09/23/2021] [Indexed: 01/01/2023] Open
Abstract
Background: Artificial intelligence can assist in interpreting chest X-ray radiography (CXR) data, but large datasets require efficient image annotation. The purpose of this study is to extract CXR labels from diagnostic reports based on natural language processing, train convolutional neural networks (CNNs), and evaluate the classification performance of the CNNs using CXR data from multiple centers. Methods: We collected the CXR images and corresponding radiology reports of 74,082 subjects as the training dataset. The linguistic entities and relationships were extracted from the unstructured radiology reports by the bidirectional encoder representations from transformers (BERT) model, and a knowledge graph was constructed to represent the associations between image labels of abnormal signs and the report text of CXR. Then, a 25-label classification system was built to train and test the CNN models with weakly supervised labeling. Results: In three external test cohorts of 5,996 symptomatic patients, 2,130 screening examinees, and 1,804 community clinic patients, the mean AUC for identifying 25 abnormal signs by the CNN reaches 0.866 ± 0.110, 0.891 ± 0.147, and 0.796 ± 0.157, respectively. In symptomatic patients, the CNN shows no significant difference from local radiologists in identifying 21 signs (p > 0.05) but is poorer for 4 signs (p < 0.05). In screening examinees, the CNN shows no significant difference for 17 signs (p > 0.05) but is poorer at classifying nodules (p = 0.013). In community clinic patients, the CNN shows no significant difference for 12 signs (p > 0.05) but performs better for 6 signs (p < 0.001). Conclusion: We construct and validate an effective CXR interpretation system based on natural language processing. Chest X-rays are accompanied by a report from the radiologist, which contains valuable diagnostic information in text format. Extracting and interpreting information from these reports, such as keywords, is time-consuming, but artificial intelligence (AI) can help with this. Here, we use a type of AI known as natural language processing to extract information about abnormal signs seen on chest X-rays from the corresponding report. We develop and test natural language processing models using data from multiple hospitals and clinics, and show that our models achieve similar performance to interpretation by the radiologists themselves. Our findings suggest that AI might help radiologists to speed up the interpretation of chest X-ray reports, which could be useful not only in patient triage and diagnosis but also in cataloguing and searching of radiology datasets. Zhang et al. develop a natural language processing approach, based on the BERT model, to extract linguistic information from chest X-ray radiography reports. The authors establish a 25-label classification system for abnormal findings described in the reports and validate their model using data from multiple sites.
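As a drastically simplified stand-in for the BERT-based label extraction, the sketch below shows rule-based weak labeling of reports with naive negation handling; the label names and patterns are illustrative assumptions, not the paper's 25-sign ontology.

```python
import re

LABELS = {"nodule": r"\bnodule", "effusion": r"\b(pleural )?effusion",
          "cardiomegaly": r"\bcardiomegal"}
NEGATION = re.compile(r"\b(no|without|free of|negative for)\b[^.]*$", re.I)

def weak_labels(report: str) -> dict:
    """Assign 0/1 per finding, skipping mentions inside a negated clause."""
    labels = {}
    for name, pattern in LABELS.items():
        hit = 0
        for sentence in report.split("."):
            m = re.search(pattern, sentence, re.I)
            if m and not NEGATION.search(sentence[:m.start()]):
                hit = 1
        labels[name] = hit
    return labels

print(weak_labels("Small pleural effusion. No pulmonary nodule is seen."))
# -> {'nodule': 0, 'effusion': 1, 'cardiomegaly': 0}
```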
Collapse
|
229
|
Abstract
We present an overview of current clinical musculoskeletal imaging applications for artificial intelligence, as well as potential future applications and techniques.
Collapse
|
230
|
Dashtbani Moghari M, Young N, Moore K, Fulton RR, Evans A, Kyme AZ. Head movement during cerebral CT perfusion imaging of acute ischaemic stroke: Characterisation and correlation with patient baseline features. Eur J Radiol 2021; 144:109979. [PMID: 34678666 DOI: 10.1016/j.ejrad.2021.109979] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2021] [Revised: 09/21/2021] [Accepted: 09/23/2021] [Indexed: 10/20/2022]
Abstract
PURPOSE To quantitatively characterise head motion prevalence and severity, and to identify patient-based risk factors for motion, during cerebral CT perfusion (CTP) imaging of acute ischaemic stroke. METHODS The head motion of 80 stroke patients undergoing CTP imaging was classified retrospectively into four categories of severity. Each motion category was then characterised quantitatively based on the average head movement with respect to the first frame for all studies. Statistical testing and principal component analysis (PCA) were then used to identify and analyse the relationship between motion severity and patient baseline features. RESULTS 46/80 (58%) of patients showed negligible motion, 19/80 (24%) mild-to-moderate motion, and 15/80 (19%) considerable-to-extreme motion sufficient to affect diagnostic/therapeutic accuracy even with correction. The most prevalent movement was "nodding", with maximal translation/rotation in the sagittal/axial planes. There was a tendency for motion to worsen as the scan proceeded and for faster motion to occur in the first 15 s. Statistical analyses showed that greater stroke severity (National Institutes of Health Stroke Scale (NIHSS)), older patient age, and shorter time from stroke onset were predictive of increased head movement (p < 0.05, Kruskal-Wallis). Using PCA, the combination of NIHSS and patient age was found to be highly predictive of head movement (p < 0.001). CONCLUSIONS Quantitative methods were developed to characterise CTP studies impacted by motion and to anticipate patients at risk of motion. NIHSS, age, and time from stroke onset function as good predictors of motion likelihood and could potentially be used pre-emptively in CTP scanning of acute stroke.
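The PCA-based risk analysis can be sketched as follows: standardize the baseline features (NIHSS, age, onset-to-scan time), project onto principal components, and correlate the first component with motion severity. All data below are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 80
nihss = rng.integers(0, 25, n)                 # stroke severity score
age = rng.normal(70, 10, n)
onset_h = rng.uniform(0.5, 9.0, n)             # hours from stroke onset
# synthetic motion severity loosely following the reported trends
motion = 0.05 * nihss + 0.02 * (age - 60) - 0.1 * onset_h + rng.normal(0, 0.3, n)

X = StandardScaler().fit_transform(np.column_stack([nihss, age, onset_h]))
pc1 = PCA(n_components=1).fit_transform(X).ravel()
rho, p = spearmanr(pc1, motion)
print(f"PC1 vs motion severity: rho={rho:.2f}, p={p:.3g}")
```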
Collapse
Affiliation(s)
- Mahdieh Dashtbani Moghari
- School of Biomedical Engineering, Faculty of Engineering and Computer Science, University of Sydney, Sydney, Australia.
| | - Noel Young
- Department of Radiology, Westmead Hospital, Sydney, Australia; Medical imaging group, School of Medicine, Western Sydney University, Sydney, Australia
| | - Krystal Moore
- Department of Radiology, Westmead Hospital, Sydney, Australia
| | - Roger R Fulton
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia; Department of Medical Physics, Westmead Hospital, Sydney, Australia; The Brain & Mind Centre, University of Sydney, Sydney, Australia
| | - Andrew Evans
- Department of Aged Care & Stroke, Westmead Hospital and University of Sydney, Sydney, Australia
| | - Andre Z Kyme
- School of Biomedical Engineering, Faculty of Engineering and Computer Science, University of Sydney, Sydney, Australia; The Brain & Mind Centre, University of Sydney, Sydney, Australia
| |
Collapse
|
231
|
Computed Tomography Image Feature under Intelligent Algorithms in Diagnosing the Effect of Humanized Nursing on Neuroendocrine Hormones in Patients with Primary Liver Cancer. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:4563100. [PMID: 34659687 PMCID: PMC8514893 DOI: 10.1155/2021/4563100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 09/12/2021] [Accepted: 09/16/2021] [Indexed: 11/18/2022]
Abstract
This study aimed to explore the application value of computed tomography (CT) images denoised by an intelligent algorithm in evaluating the effect of humanized nursing on postoperative neuroendocrine hormone changes in patients with primary liver cancer (PLC). In this study, a simply structured recursive residual coding and decoding (RRCD) algorithm was constructed on the basis of a residual network, which can effectively remove artifacts and noise in CT images while restoring image details and lesion features well. In addition, 60 postoperative patients with primary liver cancer were collected and divided into a routine nursing control group (30 cases) and a humanized nursing experimental group (30 cases). After a period of nursing, outcomes were evaluated using the algorithm-processed CT images together with measured hormone levels. The results showed that the focal necrosis rate (FNR) of the experimental group was 6%. The adrenocorticotropic hormone (ACTH) levels on days 6 and 15 after admission (T3 and T4) were 41.25 ± 3.81 pg/mL and 19.55 ± 1.72 pg/mL, respectively. The cortisol levels on days 6, 15, and 30 after admission (T3, T4, and T5) were 424.86 ± 16.82 nmol/L, 277.98 ± 14.36 nmol/L, and 241.53 ± 13.27 nmol/L, respectively. Estradiol levels were 53.48 ± 11.19 pg/mL, 41.64 ± 9.28 pg/mL, and 30.59 ± 8.16 pg/mL, respectively. Testosterone levels were 2.18 ± 1.14 ng/mL, 1.78 ± 1.03 ng/mL, and 1.42 ± 0.69 ng/mL, respectively. Self-Rating Anxiety Scale (SAS) scores were 40.24 ± 5.81 points, 36.55 ± 5.02 points, and 32.53 ± 4.8 points, respectively. In the experimental group, 24, 27, 23, and 21 patients adhered to abstaining from smoking and drinking, taking medication on time, diet control, and self-monitoring, respectively. The scores of physical function, self-cognition, emotional function, and social function were 62.59 ± 6.82 points, 69.26 ± 8.14 points, 73.89 ± 6.35 points, and 66.88 ± 7.04 points, all better than those of the control group (P < 0.05). In short, humanized nursing enhanced postoperative patient compliance, improved quality of life, and reduced anxiety and depression, showing a positive effect on the neuroendocrine hormones and prognosis of the patients.
Collapse
|
232
|
Peng Z, Ni M, Shan H, Lu Y, Li Y, Zhang Y, Pei X, Chen Z, Xie Q, Wang S, Xu XG. Feasibility evaluation of PET scan-time reduction for diagnosing amyloid-β levels in Alzheimer's disease patients using a deep-learning-based denoising algorithm. Comput Biol Med 2021; 138:104919. [PMID: 34655898 DOI: 10.1016/j.compbiomed.2021.104919] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2021] [Revised: 09/29/2021] [Accepted: 09/29/2021] [Indexed: 11/27/2022]
Abstract
PURPOSE To shorten positron emission tomography (PET) scanning time in diagnosing amyloid-β levels, thus increasing the workflow in centers involving Alzheimer's Disease (AD) patients. METHODS PET datasets were collected for 25 patients injected with the 18F-AV45 radiopharmaceutical. To generate the necessary training data, PET images from both the normal scanning time (20 min) and so-called "shortened" scanning times (1 min, 2 min, 5 min, and 10 min) were reconstructed for each patient. Building on our earlier work on MCDNet (Monte Carlo Denoising Net) and a new Wasserstein-GAN algorithm, we developed a new denoising model called MCDNet-2 to predict normal-scanning-time PET images from a series of shortened-scanning-time PET images. The quality of the predicted PET images was quantitatively evaluated using objective metrics including the normalized root-mean-square error (NRMSE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR). Furthermore, two radiologists performed subjective evaluations including a qualitative evaluation and a five-point grading evaluation. The denoising performance of the proposed MCDNet-2 was finally compared with that of U-Net, MCDNet, and a traditional denoising method, Gaussian filtering. RESULTS The proposed MCDNet-2 yielded good denoising performance on 5-min PET images. In the comparison of denoising methods, MCDNet-2 yielded the best performance in the subjective evaluation, although it was comparable with MCDNet in the objective comparison (NRMSE, PSNR, and SSIM). In the qualitative evaluation of amyloid-β positive or negative results, MCDNet-2 achieved a classification accuracy of 100%. CONCLUSIONS The proposed denoising method was found to reduce the PET scan time from the normal 20 min to 5 min while maintaining acceptable image quality for correctly diagnosing amyloid-β levels. These results strongly suggest that deep learning-based methods such as ours can be an attractive solution to the clinical need to improve PET imaging workflow.
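The objective metrics used in the study (NRMSE, PSNR, SSIM) can be computed with a generic helper such as the sketch below (scikit-image based; not the authors' evaluation code).

```python
import numpy as np
from skimage.metrics import (normalized_root_mse, peak_signal_noise_ratio,
                             structural_similarity)

def score(reference: np.ndarray, predicted: np.ndarray) -> dict:
    """Compute the three objective metrics used in the study."""
    data_range = reference.max() - reference.min()
    return {
        "NRMSE": normalized_root_mse(reference, predicted),
        "PSNR": peak_signal_noise_ratio(reference, predicted, data_range=data_range),
        "SSIM": structural_similarity(reference, predicted, data_range=data_range),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    ref = rng.random((128, 128))
    noisy = ref + rng.normal(0, 0.05, ref.shape)
    print(score(ref, noisy))
```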
Collapse
Affiliation(s)
- Zhao Peng
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China
| | - Ming Ni
- Department of Nuclear Medicine, The First Affiliated Hospital of USTC, Division of Life Science and Medicine, University of Science and Technology of China, Hefei, 230001, China
| | - Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai, 200433, China; Shanghai Center for Brain Science and Brain-inspired Technology, Shanghai, 201210, China
| | - Yu Lu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China
| | - Yongzhe Li
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China
| | - Yifan Zhang
- Department of Nuclear Medicine, The First Affiliated Hospital of USTC, Division of Life Science and Medicine, University of Science and Technology of China, Hefei, 230001, China
| | - Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, 230026, China
| | - Zhi Chen
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, 230026, China
| | - Qiang Xie
- Department of Nuclear Medicine, The First Affiliated Hospital of USTC, Division of Life Science and Medicine, University of Science and Technology of China, Hefei, 230001, China
| | - Shicun Wang
- Department of Nuclear Medicine, The First Affiliated Hospital of USTC, Division of Life Science and Medicine, University of Science and Technology of China, Hefei, 230001, China; Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, 230026, China
| | - X George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, 230026, China; Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, China.
| |
Collapse
|
233
|
Hu L, Zhou DW, Zha YF, Li L, He H, Xu WH, Qian L, Zhang YK, Fu CX, Hu H, Zhao JG. Synthesizing High-b-Value Diffusion-weighted Imaging of the Prostate Using Generative Adversarial Networks. Radiol Artif Intell 2021; 3:e200237. [PMID: 34617025 DOI: 10.1148/ryai.2021200237] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 04/11/2021] [Accepted: 05/18/2021] [Indexed: 11/11/2022]
Abstract
Purpose To develop and evaluate a diffusion-weighted imaging (DWI) deep learning framework based on the generative adversarial network (GAN) to generate synthetic high-b-value (b = 1500 sec/mm2) DWI (SYNb1500) sets from acquired standard-b-value (b = 800 sec/mm2) DWI (ACQb800) and acquired standard-b-value (b = 1000 sec/mm2) DWI (ACQb1000) sets. Materials and Methods This retrospective multicenter study included 395 patients who underwent prostate multiparametric MRI. This cohort was split into internal training (96 patients) and external testing (299 patients) datasets. To create SYNb1500 sets from ACQb800 and ACQb1000 sets, a GAN-based deep learning model (M0) was developed by using the internal dataset. M0 was trained and compared with a conventional model based on the cycle GAN (Mcyc). M0 was further optimized by using denoising and edge-enhancement techniques, yielding an optimized version of the M0 (Opt-M0). SYNb1500 sets were then synthesized by applying the M0 and the Opt-M0 to the ACQb800 and ACQb1000 sets from the external testing dataset. For comparison, traditional calculated (b = 1500 sec/mm2) DWI (CALb1500) sets were also obtained. Reader ratings for image quality and prostate cancer detection were performed on the acquired high-b-value (b = 1500 sec/mm2) DWI (ACQb1500), CALb1500, and SYNb1500 sets and the SYNb1500 set generated by the Opt-M0 (Opt-SYNb1500). Wilcoxon signed rank tests were used to compare the readers' scores. A multiple-reader multiple-case receiver operating characteristic curve was used to compare the diagnostic utility of each DWI set. Results When compared with the Mcyc, the M0 yielded a lower mean squared difference and higher mean scores for the peak signal-to-noise ratio, structural similarity, and feature similarity (P < .001 for all). Opt-SYNb1500 resulted in significantly better image quality (P ≤ .001 for all) and a higher mean area under the curve than ACQb1500 and CALb1500 (P ≤ .042 for all). Conclusion A deep learning framework based on GAN is a promising method to synthesize realistic high-b-value DWI sets with good image quality and accuracy in prostate cancer detection. Keywords: Prostate Cancer, Abdomen/GI, Diffusion-weighted Imaging, Deep Learning Framework, High b Value, Generative Adversarial Networks. © RSNA, 2021. Supplemental material is available for this article.
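The CALb1500 baseline is a standard computed-DWI operation: fit the mono-exponential decay S(b) = S0 * exp(-b * ADC) to the two acquired b-values and extrapolate to b = 1500 sec/mm2. A minimal voxel-wise sketch (generic method, not the authors' code):

```python
import numpy as np

def calculated_dwi(s_b1, s_b2, b1=800.0, b2=1000.0, b_target=1500.0, eps=1e-6):
    """Extrapolate a high-b-value image from two lower-b acquisitions."""
    s_b1 = np.maximum(s_b1, eps)
    s_b2 = np.maximum(s_b2, eps)
    adc = np.log(s_b1 / s_b2) / (b2 - b1)        # per-voxel ADC in mm^2/s
    return s_b2 * np.exp(-(b_target - b2) * adc)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    s800 = rng.uniform(50, 200, (64, 64))
    true_adc = rng.uniform(0.5e-3, 2.0e-3, (64, 64))
    s1000 = s800 * np.exp(-200 * true_adc)       # consistent synthetic b=1000 image
    print(calculated_dwi(s800, s1000).mean())
```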
Collapse
Affiliation(s)
- Lei Hu
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Da-Wei Zhou
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Yun-Fei Zha
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Liang Li
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Huan He
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Wen-Hao Xu
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Li Qian
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Yi-Kun Zhang
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Cai-Xia Fu
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Hui Hu
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| | - Jun-Gong Zhao
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China (L.H., W.H.X., J.G.Z.); State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China (D.W.Z.); Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China (Y.F.Z., L.L., H. He, L.Q., Y.K.Z.); MR Application Development, Siemens Shenzhen MR, Shenzhen, China (C.X.F.); and Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China (H. Hu)
| |
Collapse
|
234
|
Lu M, Liu X, Liu C, Li B, Gu W, Jiang J, Ta D. Artifact removal in photoacoustic tomography with an unsupervised method. BIOMEDICAL OPTICS EXPRESS 2021; 12:6284-6299. [PMID: 34745737 PMCID: PMC8548009 DOI: 10.1364/boe.434172] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 08/13/2021] [Accepted: 09/07/2021] [Indexed: 05/02/2023]
Abstract
Photoacoustic tomography (PAT) is an emerging biomedical imaging technology that can realize high-contrast imaging with the penetration depth of acoustic waves. Recently, deep learning (DL) methods have also been successfully applied to PAT to improve image reconstruction quality. However, current DL-based PAT methods are implemented with a supervised learning strategy, and their imaging performance depends on the availability of ground-truth data. To overcome this limitation, this work introduces a new image-domain transformation method based on a cyclic generative adversarial network (CycleGAN), termed PA-GAN, which removes artifacts in PAT images caused by the use of limited-view measurement data in an unsupervised learning way. A series of data from phantom and in vivo experiments are used to evaluate the performance of the proposed PA-GAN. The experimental results show that PA-GAN performs well in removing artifacts from photoacoustic tomographic images. In particular, when dealing with extremely sparse measurement data (e.g., 8 projections in circle phantom experiments), higher imaging performance is achieved by the proposed unsupervised PA-GAN, with an improvement of ∼14% in structural similarity (SSIM) and ∼66% in peak signal to noise ratio (PSNR), compared with the supervised-learning U-Net method. With an increasing number of projections (e.g., 128 projections), U-Net, especially FD U-Net, shows a slight improvement in artifact removal capability in terms of SSIM and PSNR. Furthermore, the computational time of PA-GAN and U-Net is similar (∼60 ms/frame) once the network is trained. More importantly, PA-GAN is more flexible than U-Net in that it allows the model to be effectively trained with unpaired data. As a result, PA-GAN makes it possible to implement PAT with higher flexibility without compromising imaging performance.
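The unpaired training that PA-GAN relies on rests on CycleGAN's cycle-consistency objective. A hedged loss-level sketch in PyTorch follows; the generators and discriminators are assumed to be user-supplied modules, and this is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cyclegan_generator_losses(G_ab, G_ba, D_a, D_b, real_a, real_b, lam=10.0):
    """Adversarial + cycle-consistency terms for unpaired a<->b translation."""
    fake_b = G_ab(real_a)                     # artifact domain -> clean domain
    fake_a = G_ba(real_b)
    # least-squares adversarial losses for the two generators
    adv = F.mse_loss(D_b(fake_b), torch.ones_like(D_b(fake_b))) + \
          F.mse_loss(D_a(fake_a), torch.ones_like(D_a(fake_a)))
    # cycle consistency: a -> b -> a and b -> a -> b must reconstruct the input,
    # which is what removes the need for paired training images
    cyc = F.l1_loss(G_ba(fake_b), real_a) + F.l1_loss(G_ab(fake_a), real_b)
    return adv + lam * cyc
```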
Collapse
Affiliation(s)
- Mengyang Lu
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
| | - Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- State Key Laboratory of Medical Neurobiology, Institutes of Brain Science, Fudan University, Shanghai 200433, China
| | - Chengcheng Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
| | - Boyi Li
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
| | - Wenting Gu
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
| | - Jiehui Jiang
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
| | - Dean Ta
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, China
| |
Collapse
|
235
|
Hu L, Zhou DW, Fu CX, Benkert T, Xiao YF, Wei LM, Zhao JG. Calculation of Apparent Diffusion Coefficients in Prostate Cancer Using Deep Learning Algorithms: A Pilot Study. Front Oncol 2021; 11:697721. [PMID: 34568027 PMCID: PMC8458902 DOI: 10.3389/fonc.2021.697721] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 08/11/2021] [Indexed: 11/29/2022] Open
Abstract
Background Apparent diffusion coefficients (ADCs) obtained with diffusion-weighted imaging (DWI) are highly valuable for the detection and staging of prostate cancer and for assessing the response to treatment. However, DWI suffers from significant anatomic distortions and susceptibility artifacts, resulting in reduced accuracy and reproducibility of the ADC calculations. The current methods for improving the DWI quality are heavily dependent on software, hardware, and additional scan time. Therefore, their clinical application is limited. An accelerated ADC generation method that maintains calculation accuracy and repeatability without heavy dependence on magnetic resonance imaging scanners is of great clinical value. Objectives We aimed to establish and evaluate a supervised learning framework for synthesizing ADC images using generative adversarial networks. Methods This prospective study included 200 patients with suspected prostate cancer (training set: 150 patients; test set #1: 50 patients) and 10 healthy volunteers (test set #2) who underwent both full field-of-view (FOV) diffusion-weighted imaging (f-DWI) and zoomed-FOV DWI (z-DWI) with b-values of 50, 1,000, and 1,500 s/mm2. ADC values based on f-DWI and z-DWI (f-ADC and z-ADC) were calculated. Herein we propose an ADC synthesis method based on generative adversarial networks that uses f-DWI with a single b-value to generate synthesized ADC (s-ADC) values using z-ADC as a reference. The image quality of the s-ADC sets was evaluated using the peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), structural similarity (SSIM), and feature similarity (FSIM). The distortions of each ADC set were evaluated using the T2-weighted image reference. The calculation reproducibility of the different ADC sets was compared using the intraclass correlation coefficient. The tumor detection and classification abilities of each ADC set were evaluated using a receiver operating characteristic curve analysis and a Spearman correlation coefficient. Results The s-ADCb1000 had a significantly lower RMSE score and higher PSNR, SSIM, and FSIM scores than the s-ADCb50 and s-ADCb1500 (all P < 0.001). Both z-ADC and s-ADCb1000 had less distortion and better quantitative ADC value reproducibility for all the evaluated tissues, and they demonstrated better tumor detection and classification performance than f-ADC. Conclusion The deep learning algorithm might be a feasible method for generating ADC maps, as an alternative to z-ADC maps, without depending on hardware systems and additional scan time requirements.
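The f-ADC and z-ADC maps follow from a mono-exponential fit over the acquired b-values. A generic voxel-wise log-linear least-squares fit for b = 50/1000/1500 s/mm2 is sketched below (standard method, not the authors' code).

```python
import numpy as np

def fit_adc(signals: np.ndarray, bvals=(50.0, 1000.0, 1500.0), eps=1e-6):
    """signals: array of shape (n_b, H, W); returns the ADC map of shape (H, W)."""
    b = np.asarray(bvals)
    logs = np.log(np.maximum(signals, eps)).reshape(len(b), -1)
    # least-squares slope of log S versus b; ADC is the negative slope
    slope = np.polyfit(b, logs, deg=1)[0]
    return (-slope).reshape(signals.shape[1:])

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    adc_true = rng.uniform(0.5e-3, 2.5e-3, (32, 32))
    s0 = rng.uniform(100, 300, (32, 32))
    sig = np.stack([s0 * np.exp(-b * adc_true) for b in (50, 1000, 1500)])
    print(np.abs(fit_adc(sig) - adc_true).max())   # ~0 on noiseless data
```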
Collapse
Affiliation(s)
- Lei Hu
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
| | - Da Wei Zhou
- State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China
| | - Cai Xia Fu
- Magnetic Resonance (MR) Application Development, Siemens Shenzhen Magnetic Resonance Ltd., Shenzhen, China
| | - Thomas Benkert
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
| | - Yun Feng Xiao
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
| | - Li Ming Wei
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
| | - Jun Gong Zhao
- Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
| |
Collapse
|
236
|
Park SB. Advances in deep learning for computed tomography denoising. World J Clin Cases 2021; 9:7614-7619. [PMID: 34621813 PMCID: PMC8462260 DOI: 10.12998/wjcc.v9.i26.7614] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/12/2021] [Accepted: 08/17/2021] [Indexed: 02/06/2023] Open
Abstract
Computed tomography (CT) has seen a rapid increase in use in recent years, and radiation from CT accounts for a significant proportion of total medical radiation. Given the known harmful impact of radiation exposure on the human body, the excessive use of CT in medical environments raises concerns, and this concern over increasing CT use and its associated radiation burden has prompted efforts to reduce the radiation dose during the procedure. Low-dose CT has therefore attracted major attention in radiology, since CT-associated x-ray radiation carries health risks for patients. Reducing the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Therefore, several denoising methods have been developed and applied in image processing with the goal of reducing image noise. Recently, deep learning applications that improve image quality by reducing noise and artifacts have become commercially available for diagnostic imaging. Deep learning image reconstruction shows great potential as an advanced reconstruction method to improve the quality of clinical CT images. These improvements can provide significant benefit to patients regardless of their disease, and further advances are expected in the near future.
Collapse
Affiliation(s)
- Sung Bin Park
- Department of Radiology, Chung-Ang University Hospital, Seoul 06973, South Korea
| |
Collapse
|
237
|
Kulathilake KASH, Abdullah NA, Bandara AMRR, Lai KW. InNetGAN: Inception Network-Based Generative Adversarial Network for Denoising Low-Dose Computed Tomography. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:9975762. [PMID: 34552709 PMCID: PMC8452440 DOI: 10.1155/2021/9975762] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 08/18/2021] [Accepted: 08/27/2021] [Indexed: 12/24/2022]
Abstract
Low-dose Computed Tomography (LDCT) has gained a great deal of attention in clinical procedures due to its ability to reduce the patient's exposure to X-ray radiation. However, reducing the X-ray dose increases the quantum noise and artifacts in the acquired LDCT images. As a result, it produces visually low-quality LDCT images that adversely affect disease diagnosis and treatment planning in clinical procedures. Deep Learning (DL) has recently become the cutting-edge technology for LDCT denoising due to its high performance and data-driven execution compared to conventional denoising approaches. Although DL-based models perform fairly well in LDCT noise reduction, some noise components are still retained in denoised LDCT images. One reason for this noise retention is the direct transmission of feature maps through the skip connections of contraction- and extraction-path-based DL models. Therefore, in this study, we propose a Generative Adversarial Network with Inception network modules (InNetGAN) as a solution for filtering the noise transmitted through skip connections and preserving the texture and fine structure of LDCT images. The proposed generator is modeled on the U-net architecture. The skip connections in the U-net architecture are modified with three different Inception network modules to filter out the noise in the feature maps passing over them. The quantitative and qualitative experimental results show the performance of the InNetGAN model in reducing noise and preserving the subtle structures and texture details in LDCT images compared to the other state-of-the-art denoising algorithms.
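The idea of filtering skip connections with Inception-style modules can be sketched in PyTorch as below; the parallel 1x1/3x3/5x5 branches and channel counts are illustrative assumptions rather than the exact InNetGAN configuration.

```python
import torch
import torch.nn as nn

class InceptionSkip(nn.Module):
    """Inception-style filter applied to a U-Net skip-connection feature map."""
    def __init__(self, channels: int):
        super().__init__()
        c = channels // 4
        self.b1 = nn.Conv2d(channels, c, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(channels, c, 1),
                                nn.Conv2d(c, c, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(channels, c, 1),
                                nn.Conv2d(c, c, 5, padding=2))
        self.pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(channels, c, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        # filter the skip features through parallel receptive fields before
        # they are concatenated into the decoder
        out = torch.cat([self.b1(skip), self.b3(skip),
                         self.b5(skip), self.pool(skip)], dim=1)
        return self.act(out)   # filtered features, same channel count

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(InceptionSkip(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```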
Collapse
Affiliation(s)
- K. A. Saneera Hemantha Kulathilake
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Department of Computing, Faculty of Applied Sciences, Rajarata University of Sri Lanka, Mihintale, Sri Lanka
| | - Nor Aniza Abdullah
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| | | | - Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| |
Collapse
|
238
|
Low-Dose CT Image Denoising with Improving WGAN and Hybrid Loss Function. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:2973108. [PMID: 34484414 PMCID: PMC8416402 DOI: 10.1155/2021/2973108] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 07/12/2021] [Accepted: 08/12/2021] [Indexed: 11/17/2022]
Abstract
The X-ray radiation from computed tomography (CT) poses a potential health risk. Simply decreasing the dose makes the CT images noisy and compromises diagnostic performance. Here, we develop a novel method for denoising low-dose CT images. Our framework is based on an improved generative adversarial network coupled with a hybrid loss function that includes adversarial loss, perceptual loss, sharpness loss, and structural similarity loss. Among the loss function terms, the perceptual loss and structural similarity loss are used to preserve textural details, the sharpness loss makes the reconstructed images clear, and the adversarial loss sharpens the boundary regions. The experimental results show that the proposed method can remove noise and artifacts more effectively than the state-of-the-art methods in terms of visual effect, quantitative measurements, and texture details.
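A hedged sketch of such a hybrid loss is shown below: adversarial, perceptual, sharpness, and similarity terms combined with tunable weights. The perceptual feature extractor is assumed to be supplied by the caller, and the SSIM term is replaced by an MSE stand-in, so this is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
    """Finite-difference image gradients, used for a simple sharpness term."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx.abs().mean() + dy.abs().mean()

def hybrid_loss(pred, target, critic_score, features, w=(1e-3, 0.1, 0.05, 1.0)):
    """critic_score: discriminator output on pred; features: perceptual network."""
    w_adv, w_perc, w_sharp, w_sim = w
    adv = -critic_score.mean()                           # WGAN-style generator term
    perc = F.l1_loss(features(pred), features(target))   # perceptual distance
    sharp = (gradient_magnitude(pred) - gradient_magnitude(target)).abs()
    sim = F.mse_loss(pred, target)    # stand-in; use a proper SSIM loss in practice
    return w_adv * adv + w_perc * perc + w_sharp * sharp + w_sim * sim
```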
Collapse
|
239
|
Wu M, Chen W, Chen Q, Park H. Noise Reduction for SD-OCT Using a Structure-Preserving Domain Transfer Approach. IEEE J Biomed Health Inform 2021; 25:3460-3472. [PMID: 33822730 DOI: 10.1109/jbhi.2021.3071421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Spectral-domain optical coherence tomography (SD-OCT) images inevitably suffer from multiplicative speckle noise caused by random interference. This study proposes an unsupervised domain adaptation approach for noise reduction by translating the SD-OCT to the corresponding high-quality enhanced depth imaging (EDI)-OCT. We propose a structure-preserving cycle-consistent generative adversarial network for unpaired image-to-image translation, which can be applied to imbalanced unpaired data and can effectively preserve retinal details based on a structure-specific cross-domain description. It also imposes smoothness by penalizing the intensity variation of the low-reflective region between consecutive slices. Our approach was tested on a local dataset that consisted of 268 SD-OCT volumes and two public independent validation datasets including 20 SD-OCT volumes and 17 B-scans, respectively. Experimental results show that our method can effectively suppress noise and maintain the retinal structure, compared with other traditional approaches and deep learning methods, in terms of qualitative and quantitative assessments. Our proposed method shows good performance for speckle noise reduction and can assist downstream tasks of OCT analysis.
Collapse
|
240
|
Huang Z, Liu X, Wang R, Chen Z, Yang Y, Liu X, Zheng H, Liang D, Hu Z. Learning a Deep CNN Denoising Approach Using Anatomical Prior Information Implemented With Attention Mechanism for Low-Dose CT Imaging on Clinical Patient Data From Multiple Anatomical Sites. IEEE J Biomed Health Inform 2021; 25:3416-3427. [PMID: 33625991 DOI: 10.1109/jbhi.2021.3061758] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Dose reduction in computed tomography (CT) has gained considerable attention in clinical applications because it decreases radiation risks. However, a lower dose generates noise in low-dose computed tomography (LDCT) images. Previous deep learning (DL)-based works have investigated ways to improve diagnostic performance to address this ill-posed problem. However, most of them disregard the anatomical differences among different human body sites in constructing the mapping function between LDCT images and their high-resolution normal-dose CT (NDCT) counterparts. In this article, we propose a novel deep convolutional neural network (CNN) denoising approach that introduces anatomical prior information. Instead of designing multiple networks for each independent human body anatomical site, a unified network framework is employed to process the anatomical information. The anatomical prior is represented as a pattern of weights on the features extracted from the corresponding LDCT image in an anatomical prior fusion module. To promote diversity in the contextual information, a spatial attention fusion mechanism is introduced to capture many local regions of interest in the attention fusion module. Although many network parameters are saved, the experimental results demonstrate that our method, which incorporates anatomical prior information, is effective in denoising LDCT images. Furthermore, the anatomical prior fusion module can be conveniently integrated into other DL-based methods and enables performance improvements on data from multiple anatomical sites.
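One plausible reading of the anatomical prior fusion is sketched below in PyTorch: a one-hot anatomical-site code is broadcast to a per-pixel map, mixed with the image features, and converted into a spatial attention weight. Shapes and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AnatomicalAttentionFusion(nn.Module):
    def __init__(self, feat_ch: int, n_sites: int):
        super().__init__()
        self.mix = nn.Conv2d(feat_ch + n_sites, feat_ch, kernel_size=3, padding=1)
        self.attn = nn.Sequential(nn.Conv2d(feat_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feats: torch.Tensor, site_onehot: torch.Tensor):
        # broadcast the site code to a per-pixel prior map, then concatenate
        b, _, h, w = feats.shape
        prior = site_onehot[:, :, None, None].expand(b, -1, h, w)
        mixed = torch.relu(self.mix(torch.cat([feats, prior], dim=1)))
        return feats * self.attn(mixed)     # spatially re-weighted features

if __name__ == "__main__":
    f = torch.randn(2, 32, 64, 64)
    site = torch.eye(4)[torch.tensor([0, 2])]   # two samples, 4 anatomical sites
    print(AnatomicalAttentionFusion(32, 4)(f, site).shape)
```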
Collapse
|
241
|
Wang Q, Zhang X, Zhang W, Gao M, Huang S, Wang J, Zhang J, Yang D, Liu C. Realistic Lung Nodule Synthesis With Multi-Target Co-Guided Adversarial Mechanism. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2343-2353. [PMID: 33939610 DOI: 10.1109/tmi.2021.3077089] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The important cues for realistic lung nodule synthesis include diversity in shape and background, controllability of semantic feature levels, and overall CT image quality. To incorporate these cues as multiple learning targets, we introduce the Multi-Target Co-Guided Adversarial Mechanism, which utilizes foreground and background masks to guide the nodule shape and lung tissues, and takes advantage of the CT lung and mediastinal windows as the guidance for spiculation and texture control, respectively. Further, we propose a Multi-Target Co-Guided Synthesizing Network with a joint loss function to realize the co-guidance of image generation and semantic feature learning. The proposed network contains a Mask-Guided Generative Adversarial Sub-Network (MGGAN) and a Window-Guided Semantic Learning Sub-Network (WGSLN). The MGGAN generates the initial synthesis from the combined foreground and background masks, guiding the generation of the nodule shape and background tissues. Meanwhile, the WGSLN controls the semantic features and refines the synthesis quality by transforming the initial synthesis into the CT lung and mediastinal windows and performing spiculation and texture learning simultaneously. We validated our method using a quantitative analysis of authenticity under the Fréchet Inception Score, and the results show its state-of-the-art performance. We also evaluated our method as a data augmentation method for predicting malignancy level on the LIDC-IDRI database, and the results show that the accuracy of VGG-16 is improved by 5.6%. The experimental results confirm the effectiveness of the proposed method.
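The lung and mediastinal window guidance builds on the standard CT display-window transform: clip Hounsfield units to [center - width/2, center + width/2] and rescale to [0, 1]. The window settings below are common defaults used for illustration, not necessarily the paper's values.

```python
import numpy as np

def apply_window(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Clip HU values to the display window and rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

lung_window = lambda hu: apply_window(hu, center=-600.0, width=1500.0)
mediastinal_window = lambda hu: apply_window(hu, center=40.0, width=400.0)

if __name__ == "__main__":
    hu = np.array([-1000.0, -600.0, 0.0, 40.0, 400.0])
    print(lung_window(hu), mediastinal_window(hu))
```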
Collapse
|
242
|
Wang R, Liu H, Toyonaga T, Shi L, Wu J, Onofrey JA, Tsai YJ, Naganawa M, Ma T, Liu Y, Chen MK, Mecca AP, O’Dell RS, van Dyck CH, Carson RE, Liu C. Generation of synthetic PET images of synaptic density and amyloid from 18F-FDG images using deep learning. Med Phys 2021; 48:5115-5129. [PMID: 34224153 PMCID: PMC8455448 DOI: 10.1002/mp.15073] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 06/11/2021] [Accepted: 06/12/2021] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Positron emission tomography (PET) imaging with various tracers is increasingly used in Alzheimer's disease (AD) studies. However, access to PET scans using new or less-available tracers with sophisticated synthesis and short half-life isotopes may be very limited. Therefore, it is of great significance and interest in AD research to assess the feasibility of generating synthetic PET images of less-available tracers from the PET image of another common tracer, in particular 18F-FDG. METHODS We implemented advanced deep learning methods using the U-Net model to predict 11C-UCB-J PET images of synaptic vesicle protein 2A (SV2A), a surrogate of synaptic density, from 18F-FDG PET data. Dynamic 18F-FDG and 11C-UCB-J scans were performed in 21 participants with normal cognition (CN) and 33 participants with Alzheimer's disease (AD). The cerebellum was used as the reference region for both tracers. For 11C-UCB-J image prediction, four network models were trained and tested, which included 1) 18F-FDG SUV ratio (SUVR) to 11C-UCB-J SUVR, 2) 18F-FDG Ki ratio to 11C-UCB-J SUVR, 3) 18F-FDG SUVR to 11C-UCB-J distribution volume ratio (DVR), and 4) 18F-FDG Ki ratio to 11C-UCB-J DVR. The normalized root mean square error (NRMSE), structure similarity index (SSIM), and Pearson's correlation coefficient were calculated for evaluating the overall image prediction accuracy. Mean bias of various ROIs in the brain and correlation plots between predicted images and true images were calculated for ROI-based prediction accuracy. Following a similar training and evaluation strategy, an 18F-FDG SUVR to 11C-PiB SUVR network was also trained and tested for 11C-PiB static image prediction. RESULTS The results showed that all four network models obtained satisfactory 11C-UCB-J static and parametric images. For 11C-UCB-J SUVR prediction, the mean ROI bias was -0.3% ± 7.4% for the AD group and -0.5% ± 7.3% for the CN group with 18F-FDG SUVR as the input, and -0.7% ± 8.1% for the AD group and -1.3% ± 7.0% for the CN group with 18F-FDG Ki ratio as the input. For 11C-UCB-J DVR prediction, the mean ROI bias was -1.3% ± 7.5% for the AD group and -2.0% ± 6.9% for the CN group with 18F-FDG SUVR as the input, and -0.7% ± 9.0% for the AD group and -1.7% ± 7.8% for the CN group with 18F-FDG Ki ratio as the input. For 11C-PiB SUVR image prediction, which appears to be a more challenging task, the incorporation of additional diagnostic information into the network is needed to control the bias below 5% for most ROIs. CONCLUSIONS It is feasible to use 3D U-Net-based methods to generate synthetic 11C-UCB-J PET images from 18F-FDG images with reasonable prediction accuracy. It is also possible to predict 11C-PiB SUVR images from 18F-FDG images, though the incorporation of additional non-imaging information is needed.
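Both tracers are normalized to a cerebellar reference region; a minimal SUVR computation from a static uptake image and a reference mask is sketched below. Real pipelines add registration, partial-volume handling, and kinetic modeling for DVR/Ki, none of which is shown here.

```python
import numpy as np

def suvr(uptake: np.ndarray, ref_mask: np.ndarray) -> np.ndarray:
    """Divide the uptake image by the mean uptake inside the reference region."""
    ref_mean = uptake[ref_mask > 0].mean()
    return uptake / ref_mean

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    img = rng.uniform(0.5, 2.0, (32, 32, 32))     # toy static uptake volume
    cerebellum = np.zeros_like(img, dtype=bool)
    cerebellum[4:10, 4:10, 4:10] = True           # toy reference-region mask
    print(suvr(img, cerebellum).mean())
```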
Affiliation(s)
- Rui Wang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle and Radiation Imaging, Ministry of Education, Tsinghua University, Beijing, China
| | - Hui Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle and Radiation Imaging, Ministry of Education, Tsinghua University, Beijing, China
| | - Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Luyao Shi
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Jing Wu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - John Aaron Onofrey
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Yu-Jung Tsai
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Mika Naganawa
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Tianyu Ma
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle and Radiation Imaging, Ministry of Education, Tsinghua University, Beijing, China
| | - Yaqiang Liu
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle and Radiation Imaging, Ministry of Education, Tsinghua University, Beijing, China
| | - Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Adam P. Mecca
- Department of Psychiatry, Yale University, New Haven, CT, USA
- Alzheimer’s Disease Research Unit, Yale University School of Medicine, New Haven, CT, USA
| | - Ryan S. O’Dell
- Department of Psychiatry, Yale University, New Haven, CT, USA
- Alzheimer’s Disease Research Unit, Yale University School of Medicine, New Haven, CT, USA
| | - Christopher H. van Dyck
- Department of Psychiatry, Yale University, New Haven, CT, USA
- Alzheimer’s Disease Research Unit, Yale University School of Medicine, New Haven, CT, USA
| | - Richard E. Carson
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| |
|
243
|
Wang S, Liu X, Zhao J, Liu Y, Liu S, Liu Y, Zhao J. Computer auxiliary diagnosis technique of detecting cholangiocarcinoma based on medical imaging: A review. Comput Methods Programs Biomed 2021; 208:106265. [PMID: 34311415 DOI: 10.1016/j.cmpb.2021.106265]
Abstract
BACKGROUND AND OBJECTIVES Cholangiocarcinoma (CCA) is one of the most aggressive human malignant tumors and is becoming a major cause of death and disability globally. Roughly 60% to 70% of CCA patients are diagnosed with local invasion or distant metastasis and have lost the chance of radical surgery; the overall median survival time is less than 12 months. As a non-invasive diagnostic technology, medical imaging, comprising computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US), is the most effective and commonly used method to detect CCA. Computer-aided diagnosis (CAD) systems based on medical imaging support rapid diagnosis and provide a credible "second opinion" for specialists. The purpose of this review is to categorize and review CAD techniques for detecting CCA from medical imaging. METHODS This work applied a four-level screening process to choose suitable publications. A total of 125 research papers from several academic databases were selected and analyzed according to specific criteria. Following the five steps of CAD combined with artificial intelligence algorithms (medical image acquisition, processing, analysis, understanding, and verification), we summarize the most advanced insights related to CCA detection. RESULTS This work provides a comprehensive analysis and comparison of current CAD systems for detecting CCA. The main detection approaches are traditional machine learning and deep learning. Among traditional approaches, the most common pipeline combines semi-automatic segmentation with a support vector machine (SVM) classifier, which achieves good detection performance; an example of this pattern is sketched below. The end-to-end training paradigm is making deep learning increasingly popular in CAD systems; however, owing to limited medical training data, the accuracy of deep learning methods remains unsatisfactory. CONCLUSIONS Based on this analysis of artificial intelligence methods applied to CCA, such systems are expected to reach clinical practice and improve the diagnosis and treatment of the disease. The review concludes with a prediction of future trends, which should be of value to researchers working on medical imaging of CCA and artificial intelligence.
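To make the "segmentation features + SVM" pipeline in the RESULTS concrete, here is a minimal, hypothetical sketch using scikit-learn. The feature matrix, labels, and hyperparameters are placeholders for illustration, not anything reported in the review; in practice the features would be handcrafted descriptors extracted from semi-automatically segmented lesions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one row of handcrafted features (e.g., intensity statistics, texture
# descriptors) per segmented lesion; y: 1 = malignant, 0 = benign. Both are
# random placeholders standing in for a real extracted dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))
y = rng.integers(0, 2, size=120)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```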
Affiliation(s)
- Shiyu Wang
- School of Electronic and Electric Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
| | - Xiang Liu
- School of Electronic and Electric Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
| | - Jingwen Zhao
- School of Electronic and Electric Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
| | - Yiwen Liu
- School of Electronic and Electric Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
| | - Shuhong Liu
- Department of Pathology and Hepatology, The Fifth Medical Centre of Chinese PLA General Hospital, Beijing 100039, China
| | - Yisi Liu
- Department of Pathology and Hepatology, The Fifth Medical Centre of Chinese PLA General Hospital, Beijing 100039, China
| | - Jingmin Zhao
- Department of Pathology and Hepatology, The Fifth Medical Centre of Chinese PLA General Hospital, Beijing 100039, China.
| |
|
244
|
Fuchs P, Kröger T, Garbe CS. Defect detection in CT scans of cast aluminum parts: A machine vision perspective. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.04.094]
|
245
|
Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TY, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol 2021; 32:459-467. [PMID: 34324454 PMCID: PMC10276657 DOI: 10.1097/icu.0000000000000794]
Abstract
PURPOSE OF REVIEW The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information, and the low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper introduces how GANs can be deployed for image synthesis in ophthalmology and discusses the potential applications of GAN-produced images. RECENT FINDINGS Image synthesis is the function of GANs most relevant to the medical field, and it has been widely used for generating "new" medical images of various modalities. In ophthalmology, GANs have mainly been utilized to augment classification and predictive tasks by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data-intensive, and there is a lack of consensus on how best to evaluate GAN outputs. SUMMARY Although artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation whose relevance for ophthalmology is not yet clear.
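As background, the adversarial setup the review surveys can be summarized in a few lines of PyTorch. This is a generic DCGAN-style sketch with made-up layer sizes for 64x64 grayscale images (fundus- or OCT-like), not any specific model from the review.

```python
import torch
import torch.nn as nn

latent_dim = 100

G = nn.Sequential(                       # latent vector -> 64x64 image
    nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.ReLU(),   # 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),           # 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),            # 16x16
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),            # 32x32
    nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),             # 64x64
)
D = nn.Sequential(                       # image -> real/fake logit
    nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 8, 1, 0),          # 8x8 feature map -> single logit
    nn.Flatten(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor):
    """One adversarial update on a batch of real images scaled to [-1, 1]."""
    b = real.size(0)
    fake = G(torch.randn(b, latent_dim, 1, 1))
    # Discriminator: push real toward 1, generated toward 0.
    loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to make the discriminator label fakes as real.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```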
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore
| | - Gilbert Lim
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| | - Wei Yan Ng
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| | - Pearse A. Keane
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
| | - J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
| | - Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| | - Leopold Schmetterer
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE)
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Department of Clinical Pharmacology
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
| | - Tien Yin Wong
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| | - Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
| | - Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
| |
|
246
|
Zhang C, Li Y, Chen GH. Accurate and robust sparse-view angle CT image reconstruction using deep learning and prior image constrained compressed sensing (DL-PICCS). Med Phys 2021; 48:5765-5781. [PMID: 34458996 DOI: 10.1002/mp.15183]
Abstract
BACKGROUND Sparse-view CT image reconstruction problems encountered in dynamic CT acquisitions are technically challenging. Recently, many deep learning strategies have been proposed to reconstruct CT images from sparse-view acquisitions with promising results. However, two fundamental problems with these deep learning reconstruction methods remain to be addressed: (1) limited reconstruction accuracy for individual patients and (2) limited generalizability across patient cohorts. PURPOSE The purpose of this work is to address these challenges in current deep learning methods. METHODS A method that combines a deep learning strategy with prior image constrained compressed sensing (PICCS) was developed. In this method, the sparse-view CT data are first reconstructed by conventional filtered backprojection (FBP) and then processed by a trained deep neural network to eliminate streaking artifacts. The output of the deep learning architecture is then used as the prior image in PICCS to reconstruct the final image. If the noise level of the PICCS reconstruction is not satisfactory, a light-duty deep neural network can be applied to reduce it. Extensive numerical simulation data and human subject data were used to quantitatively and qualitatively assess the performance of the proposed DL-PICCS method in terms of reconstruction accuracy and generalizability. RESULTS The evaluation studies demonstrated that: (1) the quantitative reconstruction accuracy of DL-PICCS for individual patients is improved compared with deep learning methods and CS-based methods; (2) the false-positive lesion-like structures and false-negative missing anatomical structures seen in deep learning approaches are effectively eliminated in DL-PICCS reconstructions; and (3) DL-PICCS allows a deep learning scheme to relax its working conditions and thereby enhances its generalizability. CONCLUSIONS DL-PICCS offers a promising opportunity to achieve personalized reconstruction with improved accuracy and enhanced generalizability.
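For context, the PICCS step solves a prior-regularized compressed sensing problem. A standard formulation from the general PICCS literature (not necessarily the exact variant used in this paper) is:

```latex
% x_P: prior image, here supplied by the destreaking network;
% A: forward projection operator; y: measured sparse-view data;
% \Psi: a sparsifying transform (e.g., gradient); 0 \le \alpha \le 1.
\min_{x}\;
  \alpha \,\bigl\| \Psi\,(x - x_{P}) \bigr\|_{1}
  + (1-\alpha)\,\bigl\| \Psi\, x \bigr\|_{1}
\quad \text{s.t.}\quad A x = y
```

The first term pulls the solution toward the network's prior image, while the second term and the data-consistency constraint keep the result faithful to the measured projections, which is how DL-PICCS suppresses the hallucinated or missing structures of a purely network-based reconstruction.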
Affiliation(s)
- Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Yinsheng Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA.,Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| |
|
247
|
Abu-Srhan A, Almallahi I, Abushariah MAM, Mahafza W, Al-Kadi OS. Paired-unpaired unsupervised attention guided GAN with transfer learning for bidirectional brain MR-CT synthesis. Comput Biol Med 2021; 136:104763. [PMID: 34449305 DOI: 10.1016/j.compbiomed.2021.104763]
Abstract
Medical image acquisition plays a significant role in the diagnosis and management of diseases. Magnetic resonance (MR) imaging and computed tomography (CT) are two of the most popular medical imaging modalities. Considerations such as cost and radiation dose may limit the acquisition of certain modalities, so medical image synthesis can be used to generate required images without an actual acquisition. In this paper, we propose a paired-unpaired unsupervised attention-guided generative adversarial network (uagGAN) model to translate MR images to CT images and vice versa. The uagGAN model is pre-trained on a paired dataset for initialization and then retrained on an unpaired dataset using a cascading process. In the paired pre-training stage, we enhance the loss function of our model by combining the Wasserstein GAN adversarial loss with a new combination of non-adversarial losses (content loss and L1) to generate fine-structure images. This ensures global consistency and better captures the high- and low-frequency details of the generated images. The uagGAN model generates more accurate and sharper images through the production of attention masks. Knowledge from a non-medical pre-trained model is also transferred to uagGAN for improved learning and better image translation performance. Quantitative evaluation and qualitative perceptual analysis by radiologists indicate that employing transfer learning with the proposed paired-unpaired uagGAN model achieves better performance than rival image-to-image translation models.
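The combined generator objective described in the pre-training stage (Wasserstein adversarial term plus non-adversarial content and L1 terms) has the general shape sketched below. The weights and the feature-space definition of the "content" loss are illustrative assumptions, not the paper's exact settings; `critic` and `feat_extractor` are hypothetical callables standing in for the trained critic and a fixed feature network.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def generator_loss(critic, fake, target, feat_extractor,
                   w_adv=1.0, w_content=1.0, w_l1=10.0):
    """Weighted sum of adversarial, content, and pixel terms (weights assumed)."""
    adv = -critic(fake).mean()                                   # WGAN generator term
    content = l1(feat_extractor(fake), feat_extractor(target))   # feature-space (content) loss
    pixel = l1(fake, target)                                     # paired L1 term
    return w_adv * adv + w_content * content + w_l1 * pixel
```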
Affiliation(s)
- Alaa Abu-Srhan
- Department of Basic Science, The Hashemite University, Zarqa, Jordan
| | - Israa Almallahi
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
| | - Mohammad A M Abushariah
- King Abdullah II School of Information Technology, The University of Jordan, Amman, 11942, Jordan
| | - Waleed Mahafza
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
| | - Omar S Al-Kadi
- King Abdullah II School of Information Technology, The University of Jordan, Amman, 11942, Jordan.
| |
|
248
|
Mali SA, Ibrahim A, Woodruff HC, Andrearczyk V, Müller H, Primakov S, Salahuddin Z, Chatterjee A, Lambin P. Making radiomics more reproducible across scanner and imaging protocol variations: A review of harmonization methods. J Pers Med 2021; 11:842. [PMID: 34575619 PMCID: PMC8472571 DOI: 10.3390/jpm11090842]
Abstract
Radiomics converts medical images into mineable data via high-throughput extraction of quantitative features used for clinical decision support. However, these radiomic features are susceptible to variation across scanners, acquisition protocols, and reconstruction settings. Various investigations have assessed the reproducibility and validity of radiomic features across these discrepancies. In this narrative review, we combine systematic keyword searches with prior domain knowledge to discuss harmonization solutions that make radiomic features more reproducible across scanners and protocol settings. The harmonization solutions are divided into two main categories: image domain and feature domain. The image-domain category comprises methods such as standardization of image acquisition, post-processing of raw sensor-level image data, data augmentation, and style transfer. The feature-domain category consists of methods such as the identification of reproducible features and normalization techniques, including statistical normalization, intensity harmonization, ComBat and its derivatives, and normalization using deep learning. We also reflect upon the importance of deep learning solutions for addressing variability across multi-centric radiomic studies, especially those using generative adversarial networks (GANs), neural style transfer (NST) techniques, or a combination of both. We cover a broader range of methods, especially GAN and NST methods, in more detail than previous reviews.
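The simplest feature-domain technique named above, statistical normalization, can be illustrated as a per-scanner z-scoring of each radiomic feature. This is a toy sketch with hypothetical column names; methods such as ComBat go further by empirically modeling and removing site effects while preserving biological covariates.

```python
import numpy as np
import pandas as pd

def zscore_per_scanner(features: pd.DataFrame, scanner: pd.Series) -> pd.DataFrame:
    """Z-score each radiomic feature within each scanner/protocol group,
    so that per-scanner shifts and scale differences cancel out."""
    return features.groupby(scanner).transform(lambda g: (g - g.mean()) / g.std(ddof=0))

# Hypothetical usage: rows = patients, columns = radiomic features.
df = pd.DataFrame(np.random.rand(6, 3),
                  columns=["glcm_contrast", "firstorder_mean", "shape_volume"])
site = pd.Series(["A", "A", "A", "B", "B", "B"])
harmonized = zscore_per_scanner(df, site)
```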
Affiliation(s)
- Shruti Atul Mali
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands; (A.I.); (H.C.W.); (S.P.); (Z.S.); (A.C.); (P.L.)
| | - Abdalla Ibrahim
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands; (A.I.); (H.C.W.); (S.P.); (Z.S.); (A.C.); (P.L.)
- Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
- Department of Medical Physics, Division of Nuclear Medicine and Oncological Imaging, Hospital Center Universitaire de Liege, 4000 Liege, Belgium
- Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen University, 52074 Aachen, Germany
| | - Henry C. Woodruff
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands; (A.I.); (H.C.W.); (S.P.); (Z.S.); (A.C.); (P.L.)
- Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
| | - Vincent Andrearczyk
- Institute of Information Systems, University of Applied Sciences and Arts Western Switzerland (HES-SO), rue du Technopole 3, 3960 Sierre, Switzerland; (V.A.); (H.M.)
| | - Henning Müller
- Institute of Information Systems, University of Applied Sciences and Arts Western Switzerland (HES-SO), rue du Technopole 3, 3960 Sierre, Switzerland; (V.A.); (H.M.)
| | - Sergey Primakov
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands; (A.I.); (H.C.W.); (S.P.); (Z.S.); (A.C.); (P.L.)
| | - Zohaib Salahuddin
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands; (A.I.); (H.C.W.); (S.P.); (Z.S.); (A.C.); (P.L.)
| | - Avishek Chatterjee
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands; (A.I.); (H.C.W.); (S.P.); (Z.S.); (A.C.); (P.L.)
| | - Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Maastricht, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands; (A.I.); (H.C.W.); (S.P.); (Z.S.); (A.C.); (P.L.)
- Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
| |
|
249
|
Yu L, Zhang Z, Li X, Ren H, Zhao W, Xing L. Metal artifact reduction in 2D CT images with self-supervised cross-domain learning. Phys Med Biol 2021; 66. [PMID: 34330119 DOI: 10.1088/1361-6560/ac195c]
Abstract
The presence of metallic implants often introduces severe metal artifacts in X-ray computed tomography (CT) images, which can adversely influence clinical diagnosis or dose calculation in radiation therapy. In this work, we present a novel deep-learning-based approach to metal artifact reduction (MAR). To alleviate the need for anatomically identical CT image pairs (i.e., metal-artifact-corrupted and metal-artifact-free CT images) for network learning, we propose a self-supervised cross-domain learning framework. Specifically, we train a neural network to restore the metal-trace region values in a given metal-free sinogram, where the metal trace is identified by the forward projection of metal masks. We then design a novel filtered backprojection (FBP) reconstruction loss to encourage the network to generate more complete restorations, and a residual-learning-based image refinement module to reduce secondary artifacts in the reconstructed CT images. To preserve fine structural details and the fidelity of the final MAR image, instead of directly adopting the convolutional neural network (CNN)-refined image as output, we incorporate metal trace replacement into our framework: the metal-affected projections of the original sinogram are replaced with the prior sinogram generated by forward-projecting the CNN output, and FBP is then used for the final MAR image reconstruction. We conduct an extensive evaluation on simulated and real artifact data to show the effectiveness of our design. Our method produces superior MAR results and outperforms other compelling methods. We also demonstrate the potential of our framework for other organ sites.
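The metal trace replacement step lends itself to a compact sketch. The version below uses scikit-image's parallel-beam `radon`/`iradon` operators; the array conventions and the simple thresholding of the projected metal mask are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage.transform import radon, iradon

def metal_trace_replacement(sino_orig, ct_prior, metal_mask, theta):
    """Keep measured projections outside the metal trace; substitute the
    prior (CNN-derived) sinogram inside it; reconstruct with FBP."""
    trace = radon(metal_mask.astype(float), theta=theta) > 0   # metal trace in sinogram domain
    sino_prior = radon(ct_prior, theta=theta)                  # forward-project the CNN output
    sino_blend = np.where(trace, sino_prior, sino_orig)        # replace only the trace region
    return iradon(sino_blend, theta=theta, filter_name="ramp") # final FBP reconstruction
```

Blending in the sinogram domain is what preserves the fidelity of the unaffected projections, since the network output is trusted only where the measured data are corrupted by metal.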
Affiliation(s)
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China, and also with the Department of Radiation Oncology, Stanford University, United States of America
| | - Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, United States of America
| | - Xiaomeng Li
- Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, China, and also with the Department of Radiation Oncology, Stanford University, United States of America
| | - Hongyi Ren
- Department of Radiation Oncology, Stanford University, United States of America
| | - Wei Zhao
- Department of Radiation Oncology, Stanford University, United States of America
| | - Lei Xing
- Department of Radiation Oncology, Stanford University, United States of America
| |
|
250
|
Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer. J Digit Imaging 2021; 33:504-515. [PMID: 31515756 DOI: 10.1007/s10278-019-00274-4]
Abstract
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolutions, which helps capture more contextual information in fewer layers. We also employ residual learning by creating shortcut connections that transmit image information from early layers to later ones. To further improve the performance of the network, we introduce a non-trainable edge detection layer that extracts edges in the horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network with a combination of mean-squared-error loss and perceptual loss preserves many structural details in the CT image; this objective function suffers neither from the over-smoothing and blurring caused by per-pixel loss nor from the grid-like artifacts resulting from perceptual loss. Experiments show that each modification improves the outcome while changing the complexity of the network only minimally.
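A non-trainable edge detection layer of the kind described can be built from fixed directional kernels registered as buffers so they are never updated by the optimizer. The Sobel-style horizontal/vertical/diagonal kernels below are a common choice and an assumption here; the paper's exact filters may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeDetectionLayer(nn.Module):
    """Fixed (non-trainable) edge extraction in four directions via 3x3 kernels."""
    def __init__(self):
        super().__init__()
        k = torch.tensor([
            [[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],   # vertical edges
            [[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]],   # horizontal edges
            [[0., 1., 2.], [-1., 0., 1.], [-2., -1., 0.]],   # diagonal edges (one direction)
            [[-2., -1., 0.], [-1., 0., 1.], [0., 1., 2.]],   # diagonal edges (other direction)
        ]).unsqueeze(1)                                      # shape (4, 1, 3, 3)
        self.register_buffer("kernels", k)                   # buffer: saved but not trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 1, H, W) CT slice; returns four directional edge maps per slice.
        return F.conv2d(x, self.kernels, padding=1)
```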
|