251
Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer. J Digit Imaging 2021; 33:504-515. [PMID: 31515756 DOI: 10.1007/s10278-019-00274-4]
Abstract
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolutions, which helps capture more contextual information in fewer layers. We also employ residual learning, creating shortcut connections that transmit image information from the early layers to later ones. To further improve the performance of the network, we introduce a non-trainable edge detection layer that extracts edges in the horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network with a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss, nor from the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while changing the complexity of the network only minimally.
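Illustrative sketch (not taken from the cited paper): a combined per-pixel and perceptual objective of the kind described above, written for PyTorch. The use of a fixed, pretrained VGG-16 as the perceptual feature extractor and the weight `alpha` are assumptions for illustration; the paper's exact feature layers and weighting may differ.

```python
import torch.nn as nn
from torchvision.models import vgg16

class MSEPerceptualLoss(nn.Module):
    """Per-pixel MSE plus a perceptual term on fixed VGG-16 features."""
    def __init__(self, alpha=0.1):
        super().__init__()
        self.alpha = alpha                       # weight of the perceptual term (assumed)
        # Fixed feature extractor (downloads ImageNet weights on first use)
        vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.mse = nn.MSELoss()

    def forward(self, denoised, target):
        # Per-pixel fidelity term
        pixel_loss = self.mse(denoised, target)
        # Perceptual term: VGG expects 3 channels, so the CT slice is repeated
        feat_d = self.vgg(denoised.repeat(1, 3, 1, 1))
        feat_t = self.vgg(target.repeat(1, 3, 1, 1))
        return pixel_loss + self.alpha * self.mse(feat_d, feat_t)
```

Such a loss would be minimized over pairs of denoised and normal-dose slices during training.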
252
Adu K, Yu Y, Cai J, Dela Tattrah V, Adu Ansere J, Tashi N. S-CCCapsule: Pneumonia detection in chest X-ray images using skip-connected convolutions and capsule neural network. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-202638]
Abstract
The squash function used in the dynamic routing of capsule networks (CapsNets) is limited in its ability to discriminate non-informative capsules, which leads to an abnormal distribution of capsule activation values. In this paper, we propose vertical squash (VSquash), which improves the original squash by shrinking the activation values of non-informative capsules in the primary capsule layer, promoting discriminative capsules, and avoiding high information sensitivity. Furthermore, new CapsNet-based neural networks that apply VSquash in the dynamic routing are presented: (i) the skip-connected convolutional capsule network (S-CCCapsule), (ii) integrated skip-connected convolutional capsules (ISCC), and (iii) ensemble skip-connected convolutional capsules (ESCC). To achieve a uniform distribution of the coupling-coefficient probabilities between capsules, we use the sigmoid function rather than the softmax function. Experiments on the Guangzhou Women and Children’s Medical Center (GWCMC), Radiological Society of North America (RSNA), and Mendeley CXR pneumonia datasets were performed to validate the effectiveness of the proposed methods. We found that our methods produce better accuracy than other methods, based on model evaluation metrics such as the confusion matrix, sensitivity, specificity, and area under the curve (AUC). Our method for pneumonia detection performs better than practicing radiologists; it minimizes human error and reduces diagnosis time.
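Illustrative sketch (not from the cited paper): the original CapsNet squash non-linearity and a sigmoid-based coupling coefficient, the two ingredients contrasted above, in NumPy. The VSquash variant itself is not reproduced, and the toy tensor shapes are assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Original CapsNet squash: scales each capsule vector's length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def sigmoid_coupling(logits):
    """Coupling coefficients via a sigmoid instead of softmax,
    spreading routing probabilities more uniformly across capsules."""
    return 1.0 / (1.0 + np.exp(-logits))

# Toy routing step: 4 primary capsules -> 2 output capsules of dimension 8
b = np.zeros((4, 2))                        # routing logits
c = sigmoid_coupling(b)                     # coupling coefficients (all 0.5 initially)
u_hat = np.random.randn(4, 2, 8)            # predicted output-capsule vectors
s = np.sum(c[..., None] * u_hat, axis=0)    # weighted sum per output capsule
v = squash(s)                               # squashed output capsules, shape (2, 8)
```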
Affiliation(s)
- Kwabena Adu
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yongbin Yu
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jingye Cai
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- James Adu Ansere
- College of Internet of Things Engineering, Hohai University, China
- Nyima Tashi
- School of Information Science and Technology, Tibet University, Lhasa, China
253
Javaid U, Souris K, Huang S, Lee JA. Denoising proton therapy Monte Carlo dose distributions in multiple tumor sites: A comparative neural networks architecture study. Phys Med 2021; 89:93-103. [PMID: 34358755 DOI: 10.1016/j.ejmp.2021.07.022]
Abstract
INTRODUCTION Monte Carlo (MC) algorithms provide accurate dose calculations by simulating the delivery and interaction of many particles through the patient geometry. Fast MC simulations using large numbers of particles are desirable, as they can lead to reliable clinical decisions. In this work, we assume that faster simulations with fewer particles can approximate slower ones by denoising them with deep learning. MATERIALS AND METHODS We use the mean squared error (MSE) as the loss function to train two networks (sNet and dUNet), with 2.5D and 3D setups considering volumes of 7 and 24 slices. Our models are trained on proton therapy MC dose distributions of six different tumor sites acquired from 50 patients. We provide the networks with input MC dose distributions simulated using 1 × 10⁶ particles, while keeping 1 × 10⁹ particles as the reference. RESULTS On average over 10 new patients with different tumor sites, in 2.5D and 3D, our models recover a relative residual error on the target volume, ΔD95(TV), of 0.67 ± 0.43% and 1.32 ± 0.87% for sNet vs 0.83 ± 0.53% and 1.66 ± 0.98% for dUNet, compared with 12.40 ± 4.06% for the noisy input. Moreover, the denoising time for a dose distribution is <9 s and <1 s for sNet vs <16 s and <1.5 s for dUNet in 2.5D and 3D, compared with about 100 min for an MC simulation using 1 × 10⁹ particles. CONCLUSION We propose a fast framework that can successfully denoise MC dose distributions. Starting from MC doses with only 1 × 10⁶ particles, the networks provide results comparable to MC doses with 1 × 10⁹ particles, reducing simulation time significantly.
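Illustrative sketch (not from the cited paper) of the ΔD95 figure of merit quoted above, assuming the usual percentile-based definition of D95 (the dose received by at least 95% of the target-volume voxels); the toy dose values are placeholders.

```python
import numpy as np

def d95(dose_in_target):
    """Dose received by at least 95% of the target voxels,
    i.e. the 5th percentile of the in-target dose distribution."""
    return np.percentile(dose_in_target, 5)

def relative_d95_error(denoised, reference):
    """Relative residual error on D95, in percent."""
    return 100.0 * abs(d95(denoised) - d95(reference)) / d95(reference)

# Toy example: reference (high-statistics) dose vs a lightly perturbed version
rng = np.random.default_rng(0)
reference = rng.normal(60.0, 2.0, size=10_000)
denoised = reference + rng.normal(0.0, 0.3, size=10_000)
print(f"dD95 = {relative_d95_error(denoised, reference):.2f}%")
```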
Affiliation(s)
- Umair Javaid
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium; Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium.
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Sheng Huang
- Department of Med. Phys., Memorial Sloan Kettering Cancer Center, New York, United States
- John A Lee
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium; Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
254
A Survey of Soft Computing Approaches in Biomedical Imaging. Journal of Healthcare Engineering 2021; 2021:1563844. [PMID: 34394885 PMCID: PMC8356006 DOI: 10.1155/2021/1563844]
Abstract
Medical imaging is an essential technique for the diagnosis and treatment of diseases in modern clinics, and soft computing plays a major role in its recent advances: it handles uncertainties and improves image quality. To date, various soft computing approaches have been proposed for medical applications. This paper discusses the main medical imaging modalities and presents a short review of soft computing approaches such as fuzzy logic, artificial neural networks, genetic algorithms, machine learning, and deep learning. We also study and compare the approaches applied to each imaging modality, based on the parameters used for system evaluation. Finally, from this comparative analysis, possible research strategies for further development are proposed. To the best of our knowledge, no previous work has examined this issue.
255
Han S, Zhao Y, Li F, Ji D, Li Y, Zheng M, Lv W, Xin X, Zhao X, Qi B, Hu C. Dual-path deep learning reconstruction framework for propagation-based X-ray phase-contrast computed tomography with sparse-view projections. Opt Lett 2021; 46:3552-3555. [PMID: 34329222 DOI: 10.1364/ol.427547]
Abstract
Propagation-based X-ray phase-contrast computed tomography (PB-PCCT) can serve as an effective tool for studying organ function and pathologies. However, it usually suffers from a high radiation dose due to the long scan time. To alleviate this problem, we propose a deep learning reconstruction framework for PB-PCCT with sparse-view projections. The framework consists of dual-path deep neural networks, where the edge detection, edge guidance, and artifact removal models are incorporated into two subnetworks. It is worth noting that the framework has the ability to achieve excellent performance by exploiting the data-based knowledge of the sample material characteristics and the model-based knowledge of PB-PCCT. To evaluate the effectiveness and capability of the proposed framework, simulations and real experiments were performed. The results demonstrated that the proposed framework could significantly suppress streaking artifacts and produce high-contrast and high-resolution computed tomography images.
256
Rawat S, Rana K, Kumar V. A novel complex-valued convolutional neural network for medical image denoising. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102859]
257
Lu K, Ren L, Yin FF. A geometry-guided deep learning technique for CBCT reconstruction. Phys Med Biol 2021; 66. [PMID: 34261057 DOI: 10.1088/1361-6560/ac145b]
Abstract
Purpose. Although deep learning (DL) techniques have been successfully used for computed tomography (CT) reconstruction, their implementation for cone-beam CT (CBCT) reconstruction is extremely challenging due to memory limitations. In this study, a novel DL technique is developed to resolve the memory issue, and its feasibility is demonstrated for CBCT reconstruction from sparsely sampled projection data. Methods. The novel geometry-guided deep learning (GDL) technique is composed of a GDL reconstruction module and a post-processing module. The GDL reconstruction module learns and performs the projection-to-image domain transformation by replacing the traditional single fully connected layer with an array of small fully connected layers in the network architecture, based on the projection geometry. The DL post-processing module further improves image quality after reconstruction. We demonstrated the feasibility and advantage of the model by comparing ground-truth CBCT with CBCT images reconstructed using (1) the GDL reconstruction module only, (2) the GDL reconstruction module with the DL post-processing module, (3) Feldkamp-Davis-Kress (FDK) only, (4) FDK with the DL post-processing module, (5) ray tracing only, and (6) ray tracing with the DL post-processing module. The differences are quantified by the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-square error (RMSE). Results. CBCT images reconstructed with GDL show improvements in the quantitative scores of PSNR, SSIM, and RMSE. Reconstruction time per image is comparable for all reconstruction methods. Compared with current DL methods using large fully connected layers, the estimated memory requirement of GDL is four orders of magnitude less, making DL CBCT reconstruction feasible. Conclusion. With a much lower memory requirement than other existing networks, the GDL technique is demonstrated to be the first DL technique that can rapidly and accurately reconstruct CBCT images from sparsely sampled data.
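Illustrative sketch (not from the cited paper) of the memory argument above: an array of small fully connected layers, each fed only by a geometry-selected subset of projection values, in place of one large projection-to-image dense layer. The per-row grouping and toy sizes below are assumptions, not the paper's actual geometry mapping, and the toy saving is far smaller than the four-orders-of-magnitude figure reported for realistic CBCT sizes.

```python
import torch
import torch.nn as nn

class GeometryGuidedFC(nn.Module):
    """Array of small fully connected layers, one per image row (assumed grouping)."""
    def __init__(self, n_rows, inputs_per_row, pixels_per_row):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Linear(inputs_per_row, pixels_per_row) for _ in range(n_rows)]
        )

    def forward(self, proj):
        # proj: (batch, n_rows, inputs_per_row), pre-grouped by projection geometry
        rows = [blk(proj[:, i]) for i, blk in enumerate(self.blocks)]
        return torch.stack(rows, dim=1)           # (batch, n_rows, pixels_per_row)

# Toy sizes: 128 image rows, 512 geometry-selected inputs per row, 128 pixels per row
model = GeometryGuidedFC(n_rows=128, inputs_per_row=512, pixels_per_row=128)
out = model(torch.randn(2, 128, 512))             # -> (2, 128, 128)
small = sum(p.numel() for p in model.parameters())        # ~8.4 million weights
full = (128 * 512) * (128 * 128)                          # ~1.07e9 for one dense layer
print(out.shape, small, full)
```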
Affiliation(s)
- Ke Lu
- Medical Physics Graduate Program, Duke University, Durham, NC, United States of America; Department of Radiation Oncology, Duke University, Durham, NC, United States of America
- Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, NC, United States of America; Department of Radiation Oncology, Duke University, Durham, NC, United States of America
- Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, NC, United States of America; Department of Radiation Oncology, Duke University, Durham, NC, United States of America; Medical Physics Graduate Program, Duke Kunshan University, Kunshan, People's Republic of China
258
Yang DH. Application of Artificial Intelligence to Cardiovascular Computed Tomography. Korean J Radiol 2021; 22:1597-1608. [PMID: 34402240 PMCID: PMC8484158 DOI: 10.3348/kjr.2020.1314]
Abstract
Cardiovascular computed tomography (CT) is among the most active fields with ongoing technical innovation related to image acquisition and analysis. Artificial intelligence can be incorporated into various clinical applications of cardiovascular CT, including imaging of the heart valves and coronary arteries, as well as imaging to evaluate myocardial function and congenital heart disease. This review summarizes the latest research on the application of deep learning to cardiovascular CT. The areas covered range from image quality improvement to automatic analysis of CT images, including methods such as calcium scoring, image segmentation, and coronary artery evaluation.
Affiliation(s)
- Dong Hyun Yang
- Department of Radiology and Research Institute of Radiology, Cardiac Imaging Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
259
Usui K, Ogawa K, Goto M, Sakano Y, Kyougoku S, Daida H. Quantitative evaluation of deep convolutional neural network-based image denoising for low-dose computed tomography. Vis Comput Ind Biomed Art 2021; 4:21. [PMID: 34304321 PMCID: PMC8310822 DOI: 10.1186/s42492-021-00087-9]
Abstract
To minimize radiation risk, dose reduction is important in the diagnostic and therapeutic applications of computed tomography (CT). However, reducing the X-ray dose increases image noise, which degrades image quality and may unacceptably reduce diagnostic performance. Deep learning approaches with convolutional neural networks (CNNs) have been proposed for natural image denoising; however, these approaches can introduce image blurring or loss of the original gradients. The aim of this study was to compare the dose-dependent properties of a CNN-based denoising method for low-dose CT with those of other noise-reduction methods on unique CT noise-simulation images. To simulate a low-dose CT image, a Poisson noise distribution was introduced into normal-dose images while convolving with the CT unit-specific modulation transfer function. An abdominal CT dataset of 100 images obtained from a public database was adopted, and simulated dose-reduction images were created from the original dose at 10 equal dose-reduction intervals down to a final dose of 1/100. These images were denoised using the denoising convolutional neural network (DnCNN) as the general CNN model and with transfer learning. To evaluate image quality, image similarities determined by the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were calculated for the denoised images. Significantly better denoising, in terms of SSIM and PSNR, was achieved by the DnCNN than by other image-denoising methods, especially at the ultra-low-dose levels used to generate the 10% and 5% dose-equivalent images. Moreover, the developed CNN model can eliminate noise and maintain image sharpness at these dose levels, improving the SSIM by approximately 10% over the original method. In contrast, under small dose-reduction conditions, the model also led to excessive smoothing of the images. In quantitative evaluations, the CNN denoising method improved low-dose CT images and, with appropriate tailoring of the CNN model, prevented over-smoothing.
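Illustrative sketch (not from the cited paper): a minimal DnCNN-style residual denoiser of the kind referenced above, in PyTorch. The depth and width are reduced for brevity; the original DnCNN uses 17-20 convolutional layers.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Residual denoiser: the network predicts the noise, which is subtracted."""
    def __init__(self, channels=1, features=64, depth=7):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: denoised image = noisy input minus predicted noise
        return x - self.body(x)

denoised = DnCNN()(torch.randn(1, 1, 64, 64))   # toy forward pass
```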
Affiliation(s)
- Keisuke Usui
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, 113-8421, Japan; Department of Radiation Oncology, Faculty of Medicine, Juntendo University, Tokyo, 113-8421, Japan
- Koichi Ogawa
- Faculty of Science and Engineering, Hosei University, Tokyo, 184-8584, Japan
- Masami Goto
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, 113-8421, Japan
- Yasuaki Sakano
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, 113-8421, Japan
- Shinsuke Kyougoku
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, 113-8421, Japan
- Hiroyuki Daida
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, 113-8421, Japan
260
Zhang Z, Liang X, Zhao W, Xing L. Noise2Context: Context-assisted learning 3D thin-layer for low-dose CT. Med Phys 2021; 48:5794-5803. [PMID: 34287948 DOI: 10.1002/mp.15119]
Abstract
PURPOSE Computed tomography (CT) plays a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concerns about increased x-ray radiation exposure are attracting more and more attention. To lower the x-ray radiation, low-dose CT (LDCT) has been widely adopted in certain scenarios, although it degrades CT image quality. In this paper, we propose a deep learning-based method that can train denoising neural networks without any clean data. METHODS For 3D thin-slice LDCT scanning, we first derive an unsupervised loss function that is equivalent to a supervised loss function with paired noisy and clean samples when the noise in different slices from a single scan is uncorrelated and zero-mean. We then train the denoising neural network to map each noisy LDCT slice to its two adjacent slices in a single 3D thin-layer LDCT scan, simultaneously. In essence, under some latent assumptions, we propose an unsupervised loss function that exploits the similarity between adjacent CT slices in 3D thin-layer LDCT to train the denoising network in an unsupervised manner. RESULTS Experiments on the Mayo LDCT dataset and a realistic pig head were carried out. On the Mayo LDCT dataset, our unsupervised method obtains performance comparable to that of the supervised baseline. On the realistic pig head, our method achieves the best performance at different noise levels among all compared methods, demonstrating the superiority and robustness of the proposed Noise2Context. CONCLUSIONS We present a generalizable LDCT image denoising method that requires no clean data. As a result, our method dispenses with both complex hand-crafted image priors and large amounts of paired high-quality training data.
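Illustrative sketch (not from the cited paper): an unsupervised training signal in the spirit of the description above, mapping each slice toward its two neighbours from the same thin-slice scan under the stated assumption of uncorrelated, zero-mean noise across slices. The equal weighting of the two neighbour terms is an assumption.

```python
import torch
import torch.nn.functional as F

def noise2context_loss(model, volume):
    """Context-assisted unsupervised loss.

    volume: (batch, slices, H, W) noisy thin-slice LDCT volume.
    model:  a 2D denoiser taking (N, 1, H, W) and returning (N, 1, H, W).
    """
    middle = volume[:, 1:-1]                    # slices that have two neighbours
    prev, nxt = volume[:, :-2], volume[:, 2:]
    b, m, h, w = middle.shape
    pred = model(middle.reshape(b * m, 1, h, w)).reshape(b, m, h, w)
    # Pull each denoised slice toward both of its noisy neighbours
    return 0.5 * (F.mse_loss(pred, prev) + F.mse_loss(pred, nxt))
```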
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Xiaokun Liang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
261
Wang G, Hu X. Low-dose CT denoising using a Progressive Wasserstein generative adversarial network. Comput Biol Med 2021; 135:104625. [PMID: 34246157 DOI: 10.1016/j.compbiomed.2021.104625]
Abstract
Low-dose computed tomography (LDCT) imaging can greatly reduce the radiation dose imposed on the patient. However, image noise and visual artifacts are inevitable when the radiation dose is low, which seriously affects clinical diagnosis. Hence, it is important to address the problem of LDCT denoising. Image denoising based on Generative Adversarial Networks (GANs) has shown promising results for LDCT. Unfortunately, the network structures and the corresponding learning algorithms are becoming increasingly complex and diverse, making it difficult to analyze the contributions of individual network modules when developing new networks. In this paper, we propose a progressive Wasserstein generative adversarial network to remove the noise in LDCT images, providing a more feasible and effective approach to CT denoising. Specifically, a recursive computation is designed to reduce the number of network parameters. Moreover, we introduce a novel hybrid loss function to achieve improved results; it aims to reduce artifacts while better retaining details in the denoised images. Therefore, we propose a novel LDCT denoising model, a progressive Wasserstein generative adversarial network with a weighted structurally-sensitive hybrid loss function (PWGAN-WSHL), which provides a better and simpler baseline in terms of both network architecture and loss function. Extensive experiments on a publicly available database show that our proposal achieves better performance than state-of-the-art methods.
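Illustrative sketch (not from the cited paper): the standard Wasserstein critic objective with gradient penalty that underlies WGAN-based denoisers such as the one above; the paper's weighted structurally-sensitive hybrid loss is not reproduced here.

```python
import torch

def wgan_gp_critic_loss(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP critic loss. `fake` should be detached from the generator graph."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    # Gradient penalty: push the critic's gradient norm toward 1 on interpolates
    grad = torch.autograd.grad(critic(mixed).sum(), mixed, create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    # Wasserstein estimate: critic scores fakes low and reals high
    return critic(fake).mean() - critic(real).mean() + lambda_gp * penalty
```

The generator would in turn minimize `-critic(fake).mean()` plus any image-fidelity terms.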
Affiliation(s)
- Guan Wang
- School of Mathematics, Tianjin University, NO. 135, Yaguan Road, Jinnan District, Tianjin City, 300354, China.
- Xueli Hu
- School of Mathematics, Tianjin University, NO. 135, Yaguan Road, Jinnan District, Tianjin City, 300354, China
262
Keller G, Götz S, Kraus MS, Grünwald L, Springer F, Afat S. Radiation Dose Reduction in CT Torsion Measurement of the Lower Limb: Introduction of a New Ultra-Low Dose Protocol. Diagnostics (Basel) 2021; 11:1209. [PMID: 34359292 PMCID: PMC8304839 DOI: 10.3390/diagnostics11071209]
Abstract
This study analyzed the radiation exposure of a new ultra-low dose (ULD) protocol compared to a high-quality (HQ) protocol for CT-torsion measurement of the lower limb. The analyzed patients (n = 60) were examined in the period March to October 2019. In total, 30 consecutive patients were examined with the HQ and 30 consecutive patients with the new ULD protocol comprising automatic tube voltage selection, automatic exposure control, and iterative image reconstruction algorithms. Radiation dose parameters as well as the contrast-to-noise ratio (CNR) and diagnostic confidence (DC; rated by two radiologists) were analyzed and potential predictor variables, such as body mass index and body volume, were assessed. The new ULD protocol resulted in significantly lower radiation dose parameters, with a reduction of the median total dose equivalent to 0.17 mSv in the ULD protocol compared to 4.37 mSv in the HQ protocol (p < 0.001). Both groups showed no significant differences in regard to other parameters (p = 0.344–0.923). CNR was 12.2% lower using the new ULD protocol (p = 0.033). DC was rated best by both readers in every HQ CT and in every ULD CT. The new ULD protocol for CT-torsion measurement of the lower limb resulted in a 96% decrease of radiation exposure down to the level of a single pelvic radiograph while maintaining good image quality.
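As a quick check of the reduction figure quoted above (illustrative only):

```python
# Median total dose equivalents reported above, in mSv
hq, uld = 4.37, 0.17
print(f"dose reduction: {100 * (hq - uld) / hq:.0f}%")   # prints ~96%
```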
Affiliation(s)
- Gabriel Keller
- Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, 72076 Tübingen, Germany; (S.G.); (M.S.K.); (S.A.)
- Correspondence: (G.K.); (F.S.)
- Simon Götz
- Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, 72076 Tübingen, Germany; (S.G.); (M.S.K.); (S.A.)
- Mareen Sarah Kraus
- Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, 72076 Tübingen, Germany; (S.G.); (M.S.K.); (S.A.)
- Leonard Grünwald
- Department of Traumatology and Reconstructive Surgery, BG Trauma Center Tübingen, Eberhard Karls University Tübingen, 72076 Tübingen, Germany;
- Fabian Springer
- Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, 72076 Tübingen, Germany; (S.G.); (M.S.K.); (S.A.)
- Department of Diagnostic Radiology, BG Trauma Center Tübingen, Eberhard Karls University Tübingen, 72076 Tübingen, Germany
- Correspondence: (G.K.); (F.S.)
- Saif Afat
- Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, 72076 Tübingen, Germany; (S.G.); (M.S.K.); (S.A.)
263
Gao M, Fessler JA, Chan HP. Deep Convolutional Neural Network With Adversarial Training for Denoising Digital Breast Tomosynthesis Images. IEEE Trans Med Imaging 2021; 40:1805-1816. [PMID: 33729933 PMCID: PMC8274391 DOI: 10.1109/tmi.2021.3066896]
Abstract
Digital breast tomosynthesis (DBT) is a quasi-three-dimensional imaging modality that can reduce false negatives and false positives in mass lesion detection caused by overlapping breast tissue in conventional two-dimensional (2D) mammography. The patient dose of a DBT scan is similar to that of a single 2D mammogram, while acquisition of each projection view adds detector readout noise. The noise is propagated to the reconstructed DBT volume, possibly obscuring subtle signs of breast cancer such as microcalcifications (MCs). This study developed a deep convolutional neural network (DCNN) framework for denoising DBT images with a focus on improving the conspicuity of MCs as well as preserving the ill-defined margins of spiculated masses and normal tissue textures. We trained the DCNN using a weighted combination of mean squared error (MSE) loss and adversarial loss. We configured a dedicated x-ray imaging simulator in combination with digital breast phantoms to generate realistic in silico DBT data for training. We compared the DCNN training between using digital phantoms and using real physical phantoms. The proposed denoising method improved the contrast-to-noise ratio (CNR) and detectability index (d') of the simulated MCs in the validation phantom DBTs. These performance measures improved with increasing training target dose and training sample size. Promising denoising results were observed on the transferability of the digital-phantom-trained denoiser to DBT reconstructed with different techniques and on a small independent test set of human subject DBT images.
264
Rath A, Mishra D, Panda G, Satapathy SC. Heart disease detection using deep learning methods from imbalanced ECG samples. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102820]
265
Abdelmotaal H, Abdou AA, Omar AF, El-Sebaity DM, Abdelazeem K. Pix2pix Conditional Generative Adversarial Networks for Scheimpflug Camera Color-Coded Corneal Tomography Image Generation. Transl Vis Sci Technol 2021; 10:21. [PMID: 34132759 PMCID: PMC8242686 DOI: 10.1167/tvst.10.7.21]
Abstract
Purpose To assess the ability of the pix2pix conditional generative adversarial network (pix2pix cGAN) to create plausible synthesized Scheimpflug camera color-coded corneal tomography images from a modest-sized original dataset, for use in image augmentation when training a deep convolutional neural network (DCNN) to classify keratoconus and normal corneal images. Methods Original images of 1778 eyes of 923 nonconsecutive patients with or without keratoconus were retrospectively analyzed. Images were labeled and preprocessed for use in training the proposed pix2pix cGAN. The best-quality synthesized images were selected based on the Fréchet inception distance score, and their quality was studied by calculating the mean squared error, structural similarity index, and peak signal-to-noise ratio. We used original images, traditionally augmented original images, and synthesized images to train a DCNN for image classification and compared the classification performance metrics. Results The pix2pix cGAN synthesized images of plausible subjectively and objectively assessed quality. Training the DCNN with a combination of real and synthesized images allowed better classification performance than training with original images only or with traditional augmentation. Conclusions Using the pix2pix cGAN to synthesize corneal tomography images can overcome issues related to small datasets and class imbalance when training computer-aided diagnostic models. Translational Relevance The pix2pix cGAN can provide an unlimited supply of plausible synthetic Scheimpflug camera color-coded corneal tomography images at levels useful for experimental and clinical applications.
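Illustrative sketch (not from the cited paper): computing the three image-similarity metrics named above with scikit-image; the arrays are random placeholders standing in for a real and a synthesized image.

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

# Placeholder "real" and "synthesized" images in [0, 1]
real = np.random.rand(256, 256).astype(np.float32)
synthetic = np.clip(real + 0.05 * np.random.randn(256, 256).astype(np.float32), 0, 1)

print("MSE :", mean_squared_error(real, synthetic))
print("SSIM:", structural_similarity(real, synthetic, data_range=1.0))
print("PSNR:", peak_signal_noise_ratio(real, synthetic, data_range=1.0))
```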
Affiliation(s)
- Hazem Abdelmotaal
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Ahmed A Abdou
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Ahmed F Omar
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Khaled Abdelazeem
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
266
Seah J, Brady Z, Ewert K, Law M. Artificial intelligence in medical imaging: implications for patient radiation safety. Br J Radiol 2021; 94:20210406. [PMID: 33989035 DOI: 10.1259/bjr.20210406]
Abstract
Artificial intelligence, including deep learning, is currently revolutionising the field of medical imaging, with far-reaching implications for almost every facet of diagnostic imaging, including patient radiation safety. This paper introduces basic concepts in deep learning, briefly describes earlier tomographic reconstruction techniques, and provides an overview of deep learning's recent history and its application to tomographic reconstruction and to other areas of medical imaging aimed at reducing patient radiation dose. The review also describes the deep learning techniques commonly applied to tomographic reconstruction and draws parallels to current reconstruction techniques. Finally, it reviews some of the dose reductions in CT and positron emission tomography estimated in the recent literature and enabled by deep learning, as well as some of the potential problems that may be encountered, such as the obscuration of pathology, and highlights the need for additional clinical reader studies from the imaging community.
Affiliation(s)
- Jarrel Seah
- Department of Radiology, Alfred Health, Melbourne, Australia; Department of Neuroscience, Monash University, Melbourne, Australia; Annalise.AI, Sydney, Australia
- Zoe Brady
- Department of Radiology, Alfred Health, Melbourne, Australia; Department of Neuroscience, Monash University, Melbourne, Australia
- Kyle Ewert
- Department of Radiology, Alfred Health, Melbourne, Australia
- Meng Law
- Department of Radiology, Alfred Health, Melbourne, Australia; Department of Neuroscience, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
267
Kulathilake KASH, Abdullah NA, Sabri AQM, Lai KW. A review on Deep Learning approaches for low-dose Computed Tomography restoration. Complex Intell Syst 2021; 9:2713-2745. [PMID: 34777967 PMCID: PMC8164834 DOI: 10.1007/s40747-021-00405-x]
Abstract
Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans with minimal X-ray flux to prevent patients from being exposed to high radiation. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images because of noise and artifacts across the image space. Thus, various restoration methods have been published over the past three decades to produce high-quality CT images from LDCT images. More recently, Deep Learning (DL)-based LDCT restoration approaches have become common, in contrast to conventional methods, because they are data-driven, high-performing, and fast to execute. This study therefore aims to elaborate on the role of DL techniques in LDCT restoration and to critically review the applications of DL-based approaches to LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights the existing limitations and future directions of DL-based LDCT restoration. To the best of our knowledge, there have been no previous reviews that specifically address this topic.
Affiliation(s)
- K. A. Saneera Hemantha Kulathilake
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Nor Aniza Abdullah
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Aznul Qalid Md Sabri
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
268
Karbhari Y, Basu A, Geem ZW, Han GT, Sarkar R. Generation of Synthetic Chest X-ray Images and Detection of COVID-19: A Deep Learning Based Approach. Diagnostics (Basel) 2021; 11:895. [PMID: 34069841 PMCID: PMC8157360 DOI: 10.3390/diagnostics11050895]
Abstract
COVID-19 is a disease caused by the SARS-CoV-2 virus. The virus spreads when a person comes into contact with an affected individual, mainly through droplets of saliva or nasal discharge. Most affected people have mild symptoms, while some develop acute respiratory distress syndrome (ARDS), which damages organs such as the lungs and heart. Chest X-rays (CXRs) have been widely used to identify abnormalities that help in detecting the COVID-19 virus, and they have also been used as an initial screening procedure for individuals highly suspected of being infected. However, the availability of radiographic CXRs is still limited, which can constrain the performance of deep learning (DL)-based approaches for COVID-19 detection. To overcome these limitations, in this work we developed an Auxiliary Classifier Generative Adversarial Network (ACGAN) to generate CXRs. Each generated X-ray belongs to one of two classes: COVID-19 positive or normal. To verify the quality of the synthetic images, we performed experiments on the generated images using the latest Convolutional Neural Networks (CNNs) to detect COVID-19 in the CXRs. We fine-tuned the models and achieved more than 98% accuracy. We then performed feature selection using the Harmony Search (HS) algorithm, which reduces the number of features while retaining classification accuracy. Finally, we release a GAN-generated dataset consisting of 500 COVID-19 radiographic images.
Affiliation(s)
- Yash Karbhari
- Department of Information Technology, Pune Vidyarthi Griha’s College of Engineering and Technology, Pune 411009, India;
- Arpan Basu
- Department of Computer Science and Engineering, Jadavpur University, Kolkata 700032, India; (A.B.); (R.S.)
- Zong Woo Geem
- College of IT Convergence, Gachon University, 1342 Seongnam Daero, Seongnam 13120, Korea;
- Gi-Tae Han
- College of IT Convergence, Gachon University, 1342 Seongnam Daero, Seongnam 13120, Korea;
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata 700032, India; (A.B.); (R.S.)
269
Du T, Xie L, Zhang H, Liu X, Wang X, Chen D, Xu Y, Sun Z, Zhou W, Song L, Guan C, Lansky AJ, Xu B. Training and validation of a deep learning architecture for the automatic analysis of coronary angiography. EuroIntervention 2021; 17:32-40. [PMID: 32830647 PMCID: PMC9753915 DOI: 10.4244/eij-d-20-00570]
Abstract
BACKGROUND In recent years, the use of deep learning has become more commonplace in the biomedical field, and its development will greatly assist the interpretation of clinical and imaging data. Most existing machine learning methods for coronary angiography analysis are limited to a single aspect. AIMS We aimed to achieve automatic and multimodal analysis to recognise and quantify coronary angiography, integrating multiple aspects, including the identification of coronary artery segments and the recognition of lesion morphology. METHODS A data set of 20,612 angiograms was retrospectively collected, of which 13,373 angiograms were labelled with coronary artery segments and 7,239 were labelled with special lesion morphology. Trained and optimised on these labelled data, one network recognised 20 different segments of the coronary arteries, while the other detected lesion morphology, including measures of lesion diameter stenosis as well as the detection of calcification, thrombosis, total occlusion, and dissection in an input angiogram. RESULTS For segment prediction, the recognition accuracy was 98.4% and the recognition sensitivity was 85.2%. For detecting lesion morphologies, including stenotic lesions, total occlusion, calcification, thrombosis, and dissection, the F1 scores were 0.829, 0.810, 0.802, 0.823, and 0.854, respectively. Only two seconds were needed for the automatic recognition. CONCLUSIONS Our deep learning architecture automatically provides a coronary diagnostic map by integrating multiple aspects. This helps cardiologists to flag and diagnose lesion severity and morphology during the intervention.
Affiliation(s)
- Tianming Du
- Beijing University of Posts and Telecommunications, Beijing, China
- Lihua Xie
- Fu Wai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Honggang Zhang
- Beijing University of Posts and Telecommunications, Beijing, China
- Xuqing Liu
- Beijing University of Posts and Telecommunications, Beijing, China
- Xiaofei Wang
- Beijing Redcdn Technology Co., Ltd, Beijing, China
- Donghao Chen
- Beijing Redcdn Technology Co., Ltd, Beijing, China
- Yang Xu
- Beijing Redcdn Technology Co., Ltd, Beijing, China
- Zhongwei Sun
- Fu Wai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Wenhui Zhou
- Beijing Redcdn Technology Co., Ltd, Beijing, China
- Lei Song
- Fu Wai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Changdong Guan
- Fu Wai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Bo Xu
- Fu Wai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences, A 167, Beilishi Road, Xicheng District, Beijing, 100037, China
270
Shi Z, Li H, Cao Q, Wang Z, Cheng M. A material decomposition method for dual-energy CT via dual interactive Wasserstein generative adversarial networks. Med Phys 2021; 48:2891-2905. [PMID: 33704786 DOI: 10.1002/mp.14828]
Abstract
PURPOSE Dual-energy computed tomography (DECT) is highly promising for material characterization and identification, but the reconstructed material-specific images are affected by magnified noise and beam-hardening artifacts. Although various DECT material decomposition methods have been proposed to solve this problem, the quality of the decomposed images is still unsatisfactory, particularly at image edges. In this study, a data-driven approach using dual interactive Wasserstein generative adversarial networks (DIWGAN) is developed to improve DECT decomposition accuracy and produce edge-preserving images. METHODS In the proposed DIWGAN, two interactive generators are used to synthesize decomposed images of two basis materials by modeling the spatial and spectral correlations of the input DECT reconstructed images, and the corresponding discriminators are employed to distinguish the generated images from the labels. The DECT images reconstructed from the high- and low-energy bins are sent to the two generators separately, and each generator synthesizes one material-specific image, thereby ensuring the specificity of the network modeling. In addition, the information from the different energy bins is exploited through feature sharing between the two generators. During training of the decomposition model, a hybrid loss function including an L1 loss, an edge loss, and an adversarial loss is incorporated to preserve the texture and edges in the generated images. Additionally, a selector is employed to define which generator should be trained in each iteration, which ensures the modeling ability of the two different generators and improves the material decomposition accuracy. The performance of the proposed method is evaluated using a digital phantom, the XCAT phantom, and real data from a mouse. RESULTS On the digital phantom, the regions of bone and soft tissue are strictly and accurately separated using the trained decomposition model. The material densities in different bone and soft-tissue regions are close to the ground truth, and the error in material densities is below 3 mg/ml. The results from the XCAT phantom show that the material-specific images generated by direct matrix inversion and iterative decomposition methods have severe noise and artifacts. Regarding the learning-based methods, the decomposed images of the fully convolutional network (FCN) and butterfly network (Butterfly-Net) still contain varying degrees of artifacts, while the proposed DIWGAN yields high-quality images. Compared with Butterfly-Net, the root-mean-square error (RMSE) of the soft-tissue images generated by DIWGAN decreased by 0.01 g/ml, whereas the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the soft-tissue images reached 31.43 dB and 0.9987, respectively. The mass densities of the decomposed materials are nearest to the ground truth when using the DIWGAN method. The noise standard deviation of the decomposition images was reduced by 69%, 60%, 33%, and 21% compared with direct matrix inversion, iterative decomposition, FCN, and Butterfly-Net, respectively. Furthermore, the performance on the mouse data indicates the potential of the proposed material decomposition method on real scanned data. CONCLUSIONS A DECT material decomposition method based on deep learning is proposed, and the relationship between reconstructed and material-specific images is mapped by training the DIWGAN model. Results from both simulation phantoms and real data demonstrate the advantages of this method in suppressing noise and beam-hardening artifacts.
Affiliation(s)
- Zaifeng Shi
- School of Microelectronics, Tianjin University, Tianjin, 300072, China; Tianjin Key Laboratory of Imaging and Sensing Microelectronic Technology, Tianjin, 300072, China
- Huilong Li
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Qingjie Cao
- School of Mathematical Sciences, Tianjin Normal University, Tianjin, 300072, China
- Zhongqi Wang
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Ming Cheng
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
271
Koshino K, Werner RA, Pomper MG, Bundschuh RA, Toriumi F, Higuchi T, Rowe SP. Narrative review of generative adversarial networks in medical and molecular imaging. Ann Transl Med 2021; 9:821. [PMID: 34268434 PMCID: PMC8246192 DOI: 10.21037/atm-20-6325]
Abstract
Recent years have witnessed rapidly expanding use of artificial intelligence and machine learning in medical imaging. Generative adversarial networks (GANs) are techniques to synthesize images based on artificial neural networks and deep learning. In addition to the flexibility and versatility inherent in the deep learning on which GANs are based, the potential problem-solving ability of GANs has attracted attention and is being vigorously studied in the medical and molecular imaging fields. This narrative review provides a comprehensive overview of GANs and discusses their usefulness in medical and molecular imaging on the following topics: (I) data augmentation to increase training data for AI-based computer-aided diagnosis, as a solution for the data-hungry nature of such training sets; (II) modality conversion to complement the shortcomings of a single modality that reflects certain physical measurement principles, for example from magnetic resonance (MR) to computed tomography (CT) images or vice versa; (III) denoising to enable lower injected activity and/or radiation dose in nuclear medicine and CT; (IV) image reconstruction to shorten MR acquisition time while maintaining high image quality; (V) super-resolution to produce a high-resolution image from a low-resolution one; (VI) domain adaptation, which transfers knowledge such as supervised labels and annotations from a source domain to a target domain with no or insufficient knowledge; and (VII) image generation with disease severity and radiogenomics. GANs are promising tools for medical and molecular imaging, and progress in model architectures and their applications should remain noteworthy.
Affiliation(s)
- Kazuhiro Koshino
- Department of Systems and Informatics, Hokkaido Information University, Ebetsu, Japan
- Rudolf A. Werner
- The Russell H. Morgan Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Martin G. Pomper
- The Russell H. Morgan Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Fujio Toriumi
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Takahiro Higuchi
- Department of Nuclear Medicine, University Hospital, University of Würzburg, Würzburg, Germany
- Comprehensive Heart Failure Center, University Hospital, University of Würzburg, Würzburg, Germany
- Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Steven P. Rowe
- The Russell H. Morgan Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins School of Medicine, Baltimore, MD, USA
272
Slart RHJA, Williams MC, Juarez-Orozco LE, Rischpler C, Dweck MR, Glaudemans AWJM, Gimelli A, Georgoulias P, Gheysens O, Gaemperli O, Habib G, Hustinx R, Cosyns B, Verberne HJ, Hyafil F, Erba PA, Lubberink M, Slomka P, Išgum I, Visvikis D, Kolossváry M, Saraste A. Position paper of the EACVI and EANM on artificial intelligence applications in multimodality cardiovascular imaging using SPECT/CT, PET/CT, and cardiac CT. Eur J Nucl Med Mol Imaging 2021; 48:1399-1413. [PMID: 33864509 PMCID: PMC8113178 DOI: 10.1007/s00259-021-05341-z]
Abstract
In daily clinical practice, clinicians integrate available data to ascertain the diagnostic and prognostic probability of a disease or clinical outcome for their patients. For patients with suspected or known cardiovascular disease, several anatomical and functional imaging techniques are commonly performed to aid this endeavor, including coronary computed tomography angiography (CCTA) and nuclear cardiology imaging. Continuous improvement in positron emission tomography (PET), single-photon emission computed tomography (SPECT), and CT hardware and software has resulted in improved diagnostic performance and wide implementation of these imaging techniques in daily clinical practice. However, the human ability to interpret, quantify, and integrate these data sets is limited. The identification of novel markers and the application of machine learning (ML) algorithms, including deep learning (DL), to cardiovascular imaging techniques will further improve diagnosis and prognostication for patients with cardiovascular diseases. The goal of this position paper of the European Association of Nuclear Medicine (EANM) and the European Association of Cardiovascular Imaging (EACVI) is to provide an overview of the general concepts behind modern machine learning-based artificial intelligence, to highlight currently preferred methods, practices, and computational models, and to propose new strategies to support the clinical application of ML in the field of cardiovascular imaging using nuclear cardiology (hybrid) and CT techniques.
Affiliation(s)
- Riemer H J A Slart
- Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands.
- Faculty of Science and Technology Biomedical, Photonic Imaging, University of Twente, Enschede, The Netherlands.
- Michelle C Williams
- British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Edinburgh Imaging facility QMRI, Edinburgh, UK
- Luis Eduardo Juarez-Orozco
- Department of Cardiology, Division Heart & Lungs, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Marc R Dweck
- British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Edinburgh Imaging facility QMRI, Edinburgh, UK
- Andor W J M Glaudemans
- Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands
- Panagiotis Georgoulias
- Department of Nuclear Medicine, Faculty of Medicine, University of Thessaly, University Hospital of Larissa, Larissa, Greece
- Olivier Gheysens
- Department of Nuclear Medicine, Cliniques Universitaires Saint-Luc and Institute of Clinical and Experimental Research (IREC), Université catholique de Louvain (UCLouvain), Brussels, Belgium
- Gilbert Habib
- APHM, Cardiology Department, La Timone Hospital, Marseille, France
- IRD, APHM, MEPHI, IHU-Méditerranée Infection, Aix Marseille Université, Marseille, France
- Roland Hustinx
- Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, ULiège, Liège, Belgium
- Bernard Cosyns
- Department of Cardiology, Centrum voor Hart en Vaatziekten, Universitair Ziekenhuis Brussel, 101 Laarbeeklaan, 1090, Brussels, Belgium
- Hein J Verberne
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands
- Fabien Hyafil
- Department of Nuclear Medicine, DMU IMAGINA, Georges-Pompidou European Hospital, Assistance Publique - Hôpitaux de Paris, F-75015, Paris, France
- University of Paris, PARCC, INSERM, F-75006, Paris, France
- Paola A Erba
- Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands
- Department of Nuclear Medicine (P.A.E.), University of Pisa, Pisa, Italy
- Department of Translational Research and New Technology in Medicine (P.A.E.), University of Pisa, Pisa, Italy
- Mark Lubberink
- Department of Surgical Sciences/Radiology, Uppsala University, Uppsala, Sweden
- Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Piotr Slomka
- Department of Imaging, Medicine, and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Ivana Išgum
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands
- Department of Biomedical Engineering and Physics, Amsterdam UMC - location AMC, University of Amsterdam, 1105, Amsterdam, AZ, Netherlands
- Márton Kolossváry
- MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68 Városmajor Street, Budapest, Hungary
| | - Antti Saraste
- Turku PET Centre, Turku University Hospital, University of Turku, Turku, Finland
- Heart Center, Turku University Hospital, Turku, Finland
| |
Collapse
|
273
|
Lyu Q, Guo M, Ma M. Boosting attention fusion generative adversarial network for image denoising. Neural Comput Appl 2021. [DOI: 10.1007/s00521-020-05284-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
274
|
Wolterink JM, Mukhopadhyay A, Leiner T, Vogl TJ, Bucher AM, Išgum I. Generative Adversarial Networks: A Primer for Radiologists. Radiographics 2021; 41:840-857. [PMID: 33891522 DOI: 10.1148/rg.2021200151] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Artificial intelligence techniques involving the use of artificial neural networks (that is, deep learning techniques) are expected to have a major effect on radiology. Some of the most exciting applications of deep learning in radiology make use of generative adversarial networks (GANs). GANs consist of two artificial neural networks that are jointly optimized but with opposing goals. One neural network, the generator, aims to synthesize images that cannot be distinguished from real images. The second neural network, the discriminator, aims to distinguish these synthetic images from real images. These deep learning models allow, among other applications, the synthesis of new images, acceleration of image acquisitions, reduction of imaging artifacts, efficient and accurate conversion between medical images acquired with different modalities, and identification of abnormalities depicted on images. The authors provide an introduction to GANs and adversarial deep learning methods. In addition, the different ways in which GANs can be used for image synthesis and image-to-image translation tasks, as well as the principles underlying conditional GANs and cycle-consistent GANs, are described. Illustrated examples of GAN applications in radiologic image analysis for different imaging modalities and different tasks are provided. The clinical potential of GANs, future clinical GAN applications, and potential pitfalls and caveats that radiologists should be aware of also are discussed in this review. The online slide presentation from the RSNA Annual Meeting is available for this article. ©RSNA, 2021.
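The generator-versus-discriminator interplay described in this primer can be summarized in a few lines of code. The following is a minimal, illustrative sketch (PyTorch, fully connected layers, flattened 64 × 64 images); all layer sizes, names, and the training loop are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of the adversarial set-up described above (generator vs. discriminator).
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),          # synthesizes a flattened 64x64 "image"
)
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                            # real-vs-fake logit
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    # real_images is assumed to be a (batch, 64*64) float tensor in [-1, 1].
    batch = real_images.size(0)
    z = torch.randn(batch, 100)

    # Discriminator: push real images toward label 1, synthetic images toward label 0.
    fake = generator(z).detach()
    loss_d = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into labelling synthetic images as real.
    fake = generator(z)
    loss_g = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```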
Collapse
Affiliation(s)
- Jelmer M Wolterink
- From the Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, Technical Medical Centre, University of Twente, Zilverling, PO Box 217, 7500 AE Enschede, the Netherlands (J.M.W.); Department of Biomedical Engineering and Physics (J.M.W., I.I.) and Department of Radiology and Nuclear Medicine (I.I.), Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Informatics, Technische Universität Darmstadt, Darmstadt, Germany (A.M.); Department of Radiology, Utrecht University Medical Center, Utrecht, the Netherlands (T.L.); and Institute of Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt, Germany (T.J.V., A.M.B.)
| | - Anirban Mukhopadhyay
- From the Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, Technical Medical Centre, University of Twente, Zilverling, PO Box 217, 7500 AE Enschede, the Netherlands (J.M.W.); Department of Biomedical Engineering and Physics (J.M.W., I.I.) and Department of Radiology and Nuclear Medicine (I.I.), Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Informatics, Technische Universität Darmstadt, Darmstadt, Germany (A.M.); Department of Radiology, Utrecht University Medical Center, Utrecht, the Netherlands (T.L.); and Institute of Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt, Germany (T.J.V., A.M.B.)
| | - Tim Leiner
- From the Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, Technical Medical Centre, University of Twente, Zilverling, PO Box 217, 7500 AE Enschede, the Netherlands (J.M.W.); Department of Biomedical Engineering and Physics (J.M.W., I.I.) and Department of Radiology and Nuclear Medicine (I.I.), Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Informatics, Technische Universität Darmstadt, Darmstadt, Germany (A.M.); Department of Radiology, Utrecht University Medical Center, Utrecht, the Netherlands (T.L.); and Institute of Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt, Germany (T.J.V., A.M.B.)
| | - Thomas J Vogl
- From the Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, Technical Medical Centre, University of Twente, Zilverling, PO Box 217, 7500 AE Enschede, the Netherlands (J.M.W.); Department of Biomedical Engineering and Physics (J.M.W., I.I.) and Department of Radiology and Nuclear Medicine (I.I.), Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Informatics, Technische Universität Darmstadt, Darmstadt, Germany (A.M.); Department of Radiology, Utrecht University Medical Center, Utrecht, the Netherlands (T.L.); and Institute of Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt, Germany (T.J.V., A.M.B.)
| | - Andreas M Bucher
- From the Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, Technical Medical Centre, University of Twente, Zilverling, PO Box 217, 7500 AE Enschede, the Netherlands (J.M.W.); Department of Biomedical Engineering and Physics (J.M.W., I.I.) and Department of Radiology and Nuclear Medicine (I.I.), Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Informatics, Technische Universität Darmstadt, Darmstadt, Germany (A.M.); Department of Radiology, Utrecht University Medical Center, Utrecht, the Netherlands (T.L.); and Institute of Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt, Germany (T.J.V., A.M.B.)
| | - Ivana Išgum
- From the Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, Technical Medical Centre, University of Twente, Zilverling, PO Box 217, 7500 AE Enschede, the Netherlands (J.M.W.); Department of Biomedical Engineering and Physics (J.M.W., I.I.) and Department of Radiology and Nuclear Medicine (I.I.), Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Informatics, Technische Universität Darmstadt, Darmstadt, Germany (A.M.); Department of Radiology, Utrecht University Medical Center, Utrecht, the Netherlands (T.L.); and Institute of Diagnostic and Interventional Radiology, Universitätsklinikum Frankfurt, Frankfurt, Germany (T.J.V., A.M.B.)
| |
Collapse
|
275
|
Wu D, Ren H, Li Q. Self-Supervised Dynamic CT Perfusion Image Denoising With Deep Neural Networks. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.2996566] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
276
|
Wang X, Zheng F, Xiao R, Liu Z, Li Y, Li J, Zhang X, Hao X, Zhang X, Guo J, Zhang Y, Xue H, Jin Z. Comparison of image quality and lesion diagnosis in abdominopelvic unenhanced CT between reduced-dose CT using deep learning post-processing and standard-dose CT using iterative reconstruction: A prospective study. Eur J Radiol 2021; 139:109735. [PMID: 33932717 DOI: 10.1016/j.ejrad.2021.109735] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Revised: 04/06/2021] [Accepted: 04/19/2021] [Indexed: 11/24/2022]
Abstract
PURPOSE To compare image quality and lesion diagnosis between reduced-dose abdominopelvic unenhanced computed tomography (CT) using deep learning (DL) post-processing and standard-dose CT using iterative reconstruction (IR). METHOD In total, 251 patients underwent two consecutive abdominopelvic unenhanced CT scans of the same range, including standard and reduced doses, respectively. In group A, standard-dose data were reconstructed by (blend 30 %) IR. In group B, reduced-dose data were reconstructed by filtered back projection reconstruction to obtain group B1 images, and post-processed using the DL algorithm (NeuAI denoising, Neusoft Medical, Shenyang, China) with 50 % and 100 % weights to obtain group B2 and B3 images, respectively. Then, CT values of the liver, the second lumbar vertebral centrum, the erector spinae and abdominal subcutaneous fat were measured. CT values, noise levels, signal-to-noise ratios (SNRs), contrast-to-noise ratios (CNRs), radiation doses and subjective scores of image quality were compared. Subjective evaluations of low-density liver lesions were compared against diagnostic results from enhanced CT or Magnetic Resonance Imaging. RESULTS Groups B3 and B1 showed the lowest and highest noise levels, respectively (P < 0.001). The SNR and CNR in group B3 were highest (P < 0.001). The radiation dose in group B was reduced by 71.5 % on average compared to group A. Subjective scores in groups A and B2 were highest (P < 0.001). Diagnostic sensitivity and confidence for liver metastases in groups A and B2 were highest (P < 0.001). CONCLUSIONS Reduced-dose abdominopelvic unenhanced CT combined with DL post-processing could ensure image quality and satisfy diagnostic needs.
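For readers unfamiliar with the objective image-quality measures compared above, a small sketch of how noise, SNR, and CNR are typically computed from ROI measurements follows; the ROI values, tissue names, and the choice of noise region are illustrative assumptions, not the study's measurement protocol.

```python
# Illustrative noise / SNR / CNR computation from ROI pixel arrays in Hounsfield units.
import numpy as np

def roi_stats(roi_hu):
    return float(np.mean(roi_hu)), float(np.std(roi_hu))

def snr_cnr(tissue_roi, background_roi):
    mean_t, _sd_t = roi_stats(tissue_roi)
    mean_b, sd_b = roi_stats(background_roi)
    noise = sd_b                      # noise commonly taken as the SD of a homogeneous region
    snr = mean_t / noise
    cnr = abs(mean_t - mean_b) / noise
    return snr, cnr

# Example with synthetic liver / fat ROIs (values are made up for illustration).
liver = np.random.normal(55, 12, size=1000)    # mean ~55 HU, noise SD ~12 HU
fat = np.random.normal(-100, 12, size=1000)
print(snr_cnr(liver, fat))
```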
Collapse
Affiliation(s)
- Xiao Wang
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Fuling Zheng
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Ran Xiao
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Zhuoheng Liu
- From CT Business Unit, Neusoft Medical System Company, Shenyang, China
| | - Yutong Li
- From CT Business Unit, Neusoft Medical System Company, Shenyang, China
| | - Juan Li
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Xi Zhang
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Xuemin Hao
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Xinhu Zhang
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Jiawu Guo
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Yan Zhang
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Huadan Xue
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China.
| | - Zhengyu Jin
- From the Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China.
| |
Collapse
|
277
|
Li M, Du Q, Duan L, Yang X, Zheng J, Jiang H, Li M. Incorporation of residual attention modules into two neural networks for low-dose CT denoising. Med Phys 2021; 48:2973-2990. [PMID: 33890681 DOI: 10.1002/mp.14856] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 01/06/2021] [Accepted: 03/08/2021] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Low-dose computed tomography (CT) imaging can reduce the damage caused by x-ray radiation to the human body. However, low-dose CT images have a different degree of artifacts than conventional CT images, and their resolution is lower than that of conventional CT images, which can affect disease diagnosis by clinicians. Therefore, methods for noise-level reduction and resolution improvement in low-dose CT images have inevitably become a research hotspot in the field of low-dose CT imaging. METHODS In this paper, residual attention modules (RAMs) are incorporated into the residual encoder-decoder convolutional neural network (RED-CNN) and generative adversarial network with Wasserstein distance (WGAN) to learn features that are beneficial to improving the performances of denoising networks, and the developed models are denoted as RED-CNN-RAM and WGAN-RAM, respectively. In detail, RAM is composed of a multi-scale convolution module and an attention module built on the residual network architecture, where the attention module consists of a channel attention module and a spatial attention module. The residual network architecture solves the problem of network degradation with increased network depth. The function of the attention module is to learn which features are beneficial for reducing the noise level of low-dose CT images and for reducing the loss of detail in the final denoised images, which is also the key point of the proposed algorithms. RESULTS To develop a robust network for low-dose CT image denoising, multidose-level torso phantom images provided by a cooperating equipment vendor are used to train the network, which can improve the network's adaptability to clinical application. In addition, a clinical dataset is used to test the network's migration capabilities and clinical applicability. The experimental results demonstrate that these proposed networks can effectively remove noise and artifacts from multidose CT scans. Subjective and objective analyses of multiple groups of comparison experiments show that the proposed networks achieve good noise suppression performance while preserving the image texture details. CONCLUSION In this study, two deep learning network models are developed using multidose-level CT images acquired from a commercial spiral CT scanner. The two network models can reduce and even remove streaking artifacts and noise from low-dose CT images, confirming the effectiveness of the proposed algorithms.
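A rough sketch of a residual attention module of the kind described above (channel attention plus spatial attention on top of a residual connection) is given below in PyTorch; the layer sizes, kernel sizes, and arrangement are assumptions, not the published RED-CNN-RAM or WGAN-RAM design.

```python
# Sketch of a residual attention module: channel and spatial attention over a residual branch.
import torch
import torch.nn as nn

class ResidualAttentionModule(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: squeeze spatial dimensions, learn per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        # Spatial attention: a single-channel map that re-weights each pixel.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.conv(x)
        feat = feat * self.channel_att(feat)   # emphasize informative channels
        feat = feat * self.spatial_att(feat)   # emphasize informative locations
        return x + feat                        # residual connection eases optimization
```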
Collapse
Affiliation(s)
- Mei Li
- Changchun University of Science and Technology, Changchun, China.,Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
| | - Qiang Du
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
| | - Luwen Duan
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
| | - Xiaodong Yang
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
| | - Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
| | - Haochuan Jiang
- Minfound Medical Systems Co. Ltd., Yuecheng District, Shaoxing, Zhejiang, China
| | - Ming Li
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
| |
Collapse
|
278
|
Weakly-supervised progressive denoising with unpaired CT images. Med Image Anal 2021; 71:102065. [PMID: 33915472 DOI: 10.1016/j.media.2021.102065] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/16/2021] [Accepted: 03/30/2021] [Indexed: 12/12/2022]
Abstract
Although low-dose CT imaging has attracted great interest due to its reduced radiation risk to patients, it suffers from severe and complex noise. Recent fully-supervised methods have shown impressive performance on the CT denoising task. However, they require a huge amount of paired normal-dose and low-dose CT images, which is generally unavailable in real clinical practice. To address this problem, we propose a weakly-supervised denoising framework that generates paired original and noisier CT images from unpaired CT images using a physics-based noise model. Our denoising framework also includes a progressive denoising module that bypasses the challenge of mapping directly from low-dose to normal-dose CT images by progressively compensating for the small noise gap. To quantitatively evaluate diagnostic image quality, we present the noise power spectrum and signal detection accuracy, which correlate well with visual inspection. The experimental results demonstrate that our method achieves remarkable performance, even superior to fully-supervised CT denoising with respect to signal detectability. Moreover, our framework increases the flexibility in data collection, allowing us to utilize any unpaired data at any dose level.
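The core idea of creating a paired "original and noisier" image from a single unpaired CT image can be illustrated with a simple physics-motivated Poisson model in the projection domain. The sketch below assumes precomputed line integrals and a single incident photon count; it is an illustration of the general concept, not the paper's noise model.

```python
# Sketch: sample extra Poisson noise in the projection domain to obtain a noisier image pair.
import numpy as np

def add_poisson_noise(projections, incident_photons=1e4):
    # Convert line integrals to expected detector counts, sample Poisson noise,
    # and convert back to (noisier) line integrals.
    counts = incident_photons * np.exp(-projections)
    noisy_counts = np.random.poisson(counts).clip(min=1)
    return -np.log(noisy_counts / incident_photons)
```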
Collapse
|
279
|
An ECG Denoising Method Based on the Generative Adversarial Residual Network. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021. [DOI: 10.1155/2021/5527904] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
High-quality and high-fidelity removal of noise from the electrocardiogram (ECG) signal is of great significance for the auxiliary diagnosis of ECG diseases. Because traditional denoising methods serve a single function and poorly preserve signal details after denoising, a new ECG denoising method combining a Generative Adversarial Network (GAN) and a Residual Network is proposed. The method is based on the GAN structure and restructures the generator and discriminator. In the generator network, residual blocks and skip connections are used to deepen the network and better capture the in-depth information in the ECG signal. In the discriminator network, the ResNet framework is used. To optimize the noise-reduction process and address the lack of local relevance when the ECG is considered globally, a differential term and an overall term based on the maximum local difference are added to the loss function. The experimental results show that the method performs better than the established S-Transform (S-T), Wavelet Transform (WT), Stacked Denoising Autoencoder (S-DAE), and Improved Denoising Autoencoder (I-DAE) algorithms. On the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH) noise stress test database, the Root Mean Square Error (RMSE) of the method is 0.0102 and the Signal-to-Noise Ratio (SNR) is 40.8526 dB; compared with the most advanced methods, ours improves the SNR by 88.57% on average. In addition to the three noise intensities used for the comparison experiments, further noise-reduction experiments are performed under four noise intensities. The results confirm the soundness of the model: the method effectively retains the important information conveyed by the original signal.
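The RMSE and SNR figures reported above are standard signal-level metrics; a brief sketch of how they are computed for a denoised ECG follows, assuming 1-D NumPy arrays for the clean and denoised signals.

```python
# Sketch of the RMSE and SNR (in dB) metrics for a denoised ECG signal.
import numpy as np

def rmse(clean, denoised):
    return float(np.sqrt(np.mean((clean - denoised) ** 2)))

def snr_db(clean, denoised):
    signal_power = np.sum(clean ** 2)
    noise_power = np.sum((clean - denoised) ** 2)
    return float(10.0 * np.log10(signal_power / noise_power))
```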
Collapse
|
280
|
Shen T, Hao K, Gou C, Wang FY. Mass Image Synthesis in Mammogram with Contextual Information Based on GANs. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 202:106019. [PMID: 33640650 DOI: 10.1016/j.cmpb.2021.106019] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 02/16/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE In medical imaging, the scarcity of labeled lesion data has hindered the application of many deep learning algorithms. To overcome this problem, the simulation of diverse lesions in medical images is proposed. However, synthesizing labeled mass images in mammograms is still challenging due to the lack of consistent patterns in shape, margin, and contextual information. Therefore, we aim to generate various labeled medical images based on contextual information in mammograms. METHODS In this paper, we propose a novel approach based on GANs to generate various mass images and then perform contextual infilling by inserting the synthetic lesions into healthy screening mammograms. Through incorporating features of both realistic mass images and corresponding masks into the adversarial learning scheme, the generator can not only learn the distribution of the real mass images but also capture the matching shape, margin and context information. RESULTS To demonstrate the effectiveness of our proposed method, we conduct experiments on publicly available mammogram database of DDSM and a private database provided by Nanfang Hospital in China. Qualitative and quantitative evaluations validate the effectiveness of our approach. Additionally, through the data augmentation by image generation of the proposed method, an improvement of 5.03% in detection rate can be achieved over the same model trained on original real lesion images. CONCLUSIONS The results show that the data augmentation based on our method increases the diversity of dataset. Our method can be viewed as one of the first steps toward generating labeled breast mass images for precise detection and can be extended in other medical imaging domains to solve similar problems.
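The "contextual infilling" step, inserting a synthetic mass into a healthy mammogram, can be illustrated with simple mask-based blending. The sketch below is an assumption-laden illustration (array names, placement, and the blending scheme are invented), not the authors' infilling procedure.

```python
# Sketch: paste a synthetic mass patch into a healthy mammogram using its mask.
import numpy as np

def insert_lesion(mammogram, mass_patch, mass_mask, top_left):
    # mammogram: 2-D float array; mass_patch, mass_mask: 2-D arrays of equal shape,
    # with mass_mask in [0, 1]; top_left: (row, col) placement. Modifies mammogram in place.
    y, x = top_left
    h, w = mass_patch.shape
    region = mammogram[y:y + h, x:x + w]
    mammogram[y:y + h, x:x + w] = mass_mask * mass_patch + (1 - mass_mask) * region
    return mammogram
```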
Collapse
Affiliation(s)
- Tianyu Shen
- Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Kunkun Hao
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
| | - Chao Gou
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China.
| | - Fei-Yue Wang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China; Qingdao Academy of Intelligent Industries, Qingdao, China; Institute of Systems Engineering, Macau University of Science and Technology, Macau, China
| |
Collapse
|
281
|
Ren G, Lam SK, Zhang J, Xiao H, Cheung ALY, Ho WY, Qin J, Cai J. Investigation of a Novel Deep Learning-Based Computed Tomography Perfusion Mapping Framework for Functional Lung Avoidance Radiotherapy. Front Oncol 2021; 11:644703. [PMID: 33842356 PMCID: PMC8024641 DOI: 10.3389/fonc.2021.644703] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 02/02/2021] [Indexed: 11/25/2022] Open
Abstract
Functional lung avoidance radiation therapy aims to minimize dose delivery to the normal lung tissue while favoring dose deposition in the defective lung tissue based on the regional function information. However, the clinical acquisition of pulmonary functional images is resource-demanding, inconvenient, and technically challenging. This study investigates deep learning-based synthesis of lung functional images from the CT domain. Forty-two pulmonary macro-aggregated albumin SPECT/CT perfusion scans were retrospectively collected from the hospital. A deep learning-based framework (comprising image preparation, image processing, and a proposed convolutional neural network) was adopted to extract features from 3D CT images and synthesize perfusion as estimations of regional lung function. Ablation experiments were performed to assess the effects of each framework component by removing each element of the framework and analyzing the testing performances. Major results showed that the removal of the CT contrast enhancement component in the image processing resulted in the largest drop in framework performance, compared to the optimal performance (~12%). In the CNN part, all three components (residual module, ROI attention, and skip attention) were approximately equally important to the framework performance; removing one of them resulted in a 3–5% decline in performance. The proposed CNN improved overall performance by ~4% and computational efficiency by ~350% compared to the U-Net model. The deep convolutional neural network, in conjunction with image processing for feature enhancement, is capable of feature extraction from CT images for pulmonary perfusion synthesis. In the proposed framework, image processing, especially CT contrast enhancement, plays a crucial role in the perfusion synthesis. This CT perfusion mapping (CTPM) framework provides insights for relevant research studies in the future and enables other researchers to leverage it for the development of optimized CNN models for functional lung avoidance radiation therapy.
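Because the ablation study found CT contrast enhancement to be the most influential pre-processing step, a minimal sketch of a window/level contrast-enhancement operation is shown below; the lung window settings are illustrative assumptions, not the paper's parameters.

```python
# Sketch of a simple CT window/level contrast-enhancement step on a volume in Hounsfield units.
import numpy as np

def window_ct(volume_hu, level=-600.0, width=1500.0):
    lo, hi = level - width / 2.0, level + width / 2.0
    windowed = np.clip(volume_hu, lo, hi)
    return (windowed - lo) / (hi - lo)   # rescale to [0, 1] for the CNN input
```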
Collapse
Affiliation(s)
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
| | - Sai-Kit Lam
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
| | - Jiang Zhang
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
| | - Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
| | - Andy Lai-Yin Cheung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
| | - Wai-Yin Ho
- Department of Nuclear Medicine, Queen Mary Hospital, Hong Kong, Hong Kong
| | - Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
| | - Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
| |
Collapse
|
282
|
Li X, Wang J. Network detection of malicious domain name based on adversary model. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-189823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
With the rapid development of the Internet, threats to network security are emerging one after another. Driven by economic interests, attackers use malicious domain names to promote the development of botnets and phishing sites, which leads to serious information leakage from victims and devices, the proliferation of DDoS attacks and the rapid spread of viruses. Against this background, the purpose of this paper is to study network detection of malicious domain names based on an adversary model. Firstly, this paper studies the generation mechanism of DGA domain names based on the PCFG model and the characteristics of the domain names such a DGA generates. The research shows that domain names generated by the PCFG model are usually derived from legitimate domain names, so their character statistics resemble those of legitimate domain names. Moreover, the same PCFG model can often generate multiple types of domain names, so it is difficult to extract appropriate features manually. The experimental results show that the accuracy and recall of the classifier are over 95%. Using an open domain name dataset and comparing against a linear edit-distance method under different thresholds, it is shown that the proposed method improves the detection speed for misplanted domain names at a similar accuracy.
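The edit-distance comparison mentioned above can be illustrated with a short sketch that scores a candidate domain against a list of legitimate domains; the Levenshtein implementation, threshold, and domain lists are illustrative assumptions, not the paper's method.

```python
# Sketch: flag candidate domains that sit within a small edit distance of a legitimate domain.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_suspicious(candidate, legitimate_domains, threshold=2):
    # Domains within a small edit distance of a known brand are typical squatting targets.
    return any(0 < levenshtein(candidate, legit) <= threshold for legit in legitimate_domains)

print(looks_suspicious("examp1e.com", ["example.com", "wikipedia.org"]))
```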
Collapse
Affiliation(s)
- Xingguo Li
- National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu, Sichuan, China
| | - Junfeng Wang
- College of Computer Science and School of Aeronautics & Astronautics, Sichuan University, Chengdu, Sichuan, China
| |
Collapse
|
283
|
Yan C, Lin J, Li H, Xu J, Zhang T, Chen H, Woodruff HC, Wu G, Zhang S, Xu Y, Lambin P. Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis. Korean J Radiol 2021; 22:983-993. [PMID: 33739634 PMCID: PMC8154783 DOI: 10.3348/kjr.2020.0988] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Revised: 11/22/2020] [Accepted: 12/21/2020] [Indexed: 01/15/2023] Open
Abstract
Objective To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality for optimal visibility of anatomic structures and pathological findings, with a lower level of image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield unit [HU]) than that of the hybrid (66.3 ± 10.5 HU, p < 0.001) and a similar noise level to model-based iterative reconstruction (19.6 ± 2.6 HU, p > 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by the model-based and hybrid iterative reconstruction. The mean effective radiation dose of ULDCT was 0.12 mSv with a mean 93.9% reduction compared to standard-dose CT. Conclusion The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
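At the heart of the CycleGAN framework used here is a cycle-consistency constraint: an ultralow-dose image translated to "standard dose" and back should reproduce the input. A minimal sketch follows; the generator names and the weighting are assumptions, not the paper's implementation.

```python
# Sketch of the cycle-consistency loss for an ULD <-> SD image translation pair.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(uld_batch, sd_batch, G_ld2sd, G_sd2ld, lambda_cyc=10.0):
    recon_uld = G_sd2ld(G_ld2sd(uld_batch))   # ULD -> SD -> ULD
    recon_sd = G_ld2sd(G_sd2ld(sd_batch))     # SD -> ULD -> SD
    return lambda_cyc * (F.l1_loss(recon_uld, uld_batch) + F.l1_loss(recon_sd, sd_batch))
```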
Collapse
Affiliation(s)
- Chenggong Yan
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China.,The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
| | - Jie Lin
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Haixia Li
- Clinical and Technical Solution, Philips Healthcare, Guangzhou, China
| | - Jun Xu
- Department of Hematology, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Tianjing Zhang
- Clinical and Technical Solution, Philips Healthcare, Guangzhou, China
| | - Hao Chen
- Jiangsu JITRI Sioux Technologies Co., Ltd., Suzhou, China
| | - Henry C Woodruff
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands.,Department of Radiology and Nuclear Imaging, GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
| | - Guangyao Wu
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
| | - Siqi Zhang
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Yikai Xu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China.
| | - Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands.,Department of Radiology and Nuclear Imaging, GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
| |
Collapse
|
284
|
Hegazy MAA, Cho MH, Lee SY. Half-scan artifact correction using generative adversarial network for dental CT. Comput Biol Med 2021; 132:104313. [PMID: 33705996 DOI: 10.1016/j.compbiomed.2021.104313] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Revised: 03/02/2021] [Accepted: 03/03/2021] [Indexed: 10/22/2022]
Abstract
Half-scan image reconstruction with Parker weighting can correct motion artifacts in dental CT images taken with a slow scan-based dental CT. Since the residual half-scan artifacts in the dental CT images appear much stronger than those in medical CT images, the artifacts often persist to the extent that they compromise the surface-rendered bone and tooth images computed from the dental CT images. We used a variation of the generative adversarial network (GAN), the so-called U-WGAN, to correct half-scan artifacts in dental CT images. For the generative network of the GAN, we used a U-net structure of five stages to take advantage of its high computational efficiency. We trained the network using the Wasserstein loss function on the dental CT images of 40 patients. We tested the network by comparing its output images with half-scan images corrected using other methods: Parker weighting and two other popular GANs, SRGAN and m-WGAN. For the quantitative comparison, we used image quality metrics measuring the similarity of the corrected images to the full-scan images (reference images) and the noise level on the corrected images. We also compared the visual quality of the surface-rendered bone and tooth images. We observed that the proposed network outperformed Parker weighting and the other GANs in all image quality metrics. The computation time for the proposed network to process 336×336×336 3D images on a GPU-equipped personal computer was about 3 s, which was much shorter than those of SRGAN and m-WGAN, 50 s and 54 s, respectively.
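The Wasserstein objective referred to above can be sketched in a few lines: a critic scores full-scan (reference) images higher than corrected half-scan images, and the generator tries to close that gap. The sketch below omits the U-Net generator and any weight clipping or gradient penalty; the names are illustrative, not the U-WGAN implementation.

```python
# Sketch of the Wasserstein critic and generator objectives for artifact correction.
import torch

def critic_loss(critic, full_scan, corrected):
    # Critic maximizes the score gap between real (full-scan) and generated images,
    # i.e. minimizes the negative gap.
    return critic(corrected).mean() - critic(full_scan).mean()

def generator_loss(critic, corrected):
    # Generator tries to raise the critic's score of its corrected images.
    return -critic(corrected).mean()
```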
Collapse
Affiliation(s)
| | - Myung Hye Cho
- R&D Center, Ray, Seongnam, South Korea; Department of Biomedical Engineering, Kyung Hee University, Yongin, South Korea
| | - Soo Yeol Lee
- Department of Biomedical Engineering, Kyung Hee University, Yongin, South Korea.
| |
Collapse
|
285
|
Ghosh SK, Biswas B, Ghosh A. A novel stacked sparse denoising autoencoder for mammography restoration to visual interpretation of breast lesion. EVOLUTIONARY INTELLIGENCE 2021. [DOI: 10.1007/s12065-019-00344-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
286
|
Considering anatomical prior information for low-dose CT image enhancement using attribute-augmented Wasserstein generative adversarial networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.077] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
287
|
Castiglioni I, Rundo L, Codari M, Di Leo G, Salvatore C, Interlenghi M, Gallivanone F, Cozzi A, D'Amico NC, Sardanelli F. AI applications to medical images: From machine learning to deep learning. Phys Med 2021; 83:9-24. [PMID: 33662856 DOI: 10.1016/j.ejmp.2021.02.006] [Citation(s) in RCA: 209] [Impact Index Per Article: 52.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 02/09/2021] [Accepted: 02/13/2021] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenging points that need to be clarified regarding how to develop AI applications as clinical decision support systems in a real-world context. METHODS A narrative review was performed, including a critical assessment of articles published between 1989 and 2021 that informed the sections on these challenges. RESULTS We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks, allowing us to directly process images. The data curation section includes technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (enabling compensation for differences in imaging protocols that typically generate noise in non-AI imaging studies) and federated learning. Thereafter, we dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; procedures for data augmentation to work with limited and unbalanced datasets; and the interpretability of AI models (the so-called black box issue). Pros and cons for choosing ML versus DL to implement AI applications to medical imaging are finally presented in a synoptic way. CONCLUSIONS Biomedicine and healthcare are among the most important fields for AI applications, and medical imaging is probably the most suitable and promising domain. Clarification of specific challenging points facilitates the development of such systems and their translation to clinical practice.
Collapse
Affiliation(s)
- Isabella Castiglioni
- Department of Physics, Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy; Institute of Biomedical Imaging and Physiology, National Research Council, Via Fratelli Cervi 93, 20090 Segrate, Italy.
| | - Leonardo Rundo
- Department of Radiology, Box 218, Cambridge Biomedical Campus, Cambridge CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge Li Ka Shing Centre, Robinson Way, Cambridge CB2 0RE, United Kingdom.
| | - Marina Codari
- Department of Radiology, Stanford University School of Medicine, Stanford University, 300 Pasteur Drive, Stanford, CA, USA.
| | - Giovanni Di Leo
- Unit of Radiology, IRCCS Policlinico San Donato, Via Rodolfo Morandi 30, 20097 San Donato Milanese, Italy.
| | - Christian Salvatore
- Scuola Universitaria Superiore IUSS Pavia, Piazza della Vittoria 15, 27100 Pavia, Italy; DeepTrace Technologies S.r.l., Via Conservatorio 17, 20122 Milano, Italy.
| | - Matteo Interlenghi
- DeepTrace Technologies S.r.l., Via Conservatorio 17, 20122 Milano, Italy.
| | - Francesca Gallivanone
- Institute of Biomedical Imaging and Physiology, National Research Council, Via Fratelli Cervi 93, 20090 Segrate, Italy.
| | - Andrea Cozzi
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Luigi Mangiagalli 31, 20133 Milano, Italy.
| | - Natascha Claudia D'Amico
- Department of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano S.p.A., Via Saint Bon 20, 20147 Milano, Italy; Unit of Computer Systems and Bioinformatics, Department of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo 21, 00128 Roma, Italy.
| | - Francesco Sardanelli
- Unit of Radiology, IRCCS Policlinico San Donato, Via Rodolfo Morandi 30, 20097 San Donato Milanese, Italy; Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Luigi Mangiagalli 31, 20133 Milano, Italy.
| |
Collapse
|
288
|
Hasan AM, Mohebbian MR, Wahid KA, Babyn P. Hybrid-Collaborative Noise2Noise Denoiser for Low-Dose CT Images. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3002178] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
289
|
Funama Y, Oda S, Kidoh M, Nagayama Y, Goto M, Sakabe D, Nakaura T. Conditional generative adversarial networks to generate pseudo low monoenergetic CT image from a single-tube voltage CT scanner. Phys Med 2021; 83:46-51. [DOI: 10.1016/j.ejmp.2021.02.015] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 02/11/2021] [Accepted: 02/21/2021] [Indexed: 01/29/2023] Open
|
290
|
Jeong YJ, Park HS, Jeong JE, Yoon HJ, Jeon K, Cho K, Kang DY. Restoration of amyloid PET images obtained with short-time data using a generative adversarial networks framework. Sci Rep 2021; 11:4825. [PMID: 33649403 PMCID: PMC7921674 DOI: 10.1038/s41598-021-84358-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Accepted: 02/15/2021] [Indexed: 11/15/2022] Open
Abstract
Our purpose in this study is to evaluate the clinical feasibility of deep-learning techniques for F-18 florbetaben (FBB) positron emission tomography (PET) image reconstruction using data acquired in a short time. We reconstructed raw FBB PET data of 294 patients acquired for 20 and 2 min into standard-time scanning PET (PET20m) and short-time scanning PET (PET2m) images. We generated a standard-time scanning PET-like image (sPET20m) from a PET2m image using a deep-learning network. We performed qualitative and quantitative analyses to assess whether the sPET20m images were suitable for clinical applications. In our internal validation, sPET20m images showed substantial improvement on all quality metrics compared with the PET2m images. There was a small mean difference between the standardized uptake value ratios of sPET20m and PET20m images. A Turing test showed that the physician could not reliably distinguish generated PET images from real PET images. Three nuclear medicine physicians could interpret the generated PET images and showed high accuracy and agreement. We obtained similar quantitative results by means of temporal and external validations. Using deep-learning techniques, we can generate interpretable PET images from the low-quality images that result from short scanning times. Although more clinical validation is needed, we confirmed the possibility that short-scanning protocols with a deep-learning technique can be used for clinical applications.
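The standardized uptake value ratio (SUVR) comparison mentioned above reduces to a ratio of mean uptakes; a minimal sketch follows, assuming voxel arrays for a target cortical region and a reference region (e.g., cerebellum), both of which are illustrative assumptions.

```python
# Sketch of an SUVR computation from target and reference region voxel values.
import numpy as np

def suvr(target_voxels, reference_voxels):
    # Ratio of mean uptake in the target region to mean uptake in the reference region.
    return float(np.mean(target_voxels) / np.mean(reference_voxels))
```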
Collapse
Affiliation(s)
- Young Jin Jeong
- Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea.,Institute of Convergence Bio-Health, Dong-A University, Busan, Republic of Korea
| | - Hyoung Suk Park
- National Institute for Mathematical Science, Daejeon, Republic of Korea
| | - Ji Eun Jeong
- Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea
| | - Hyun Jin Yoon
- Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea
| | - Kiwan Jeon
- National Institute for Mathematical Science, Daejeon, Republic of Korea
| | - Kook Cho
- College of General Education, Dong-A University, Busan, Republic of Korea
| | - Do-Young Kang
- Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea. .,Institute of Convergence Bio-Health, Dong-A University, Busan, Republic of Korea. .,Department of Translational Biomedical Sciences, Dong-A University, Busan, Republic of Korea.
| |
Collapse
|
291
|
Haubold J, Hosch R, Umutlu L, Wetter A, Haubold P, Radbruch A, Forsting M, Nensa F, Koitka S. Contrast agent dose reduction in computed tomography with deep learning using a conditional generative adversarial network. Eur Radiol 2021; 31:6087-6095. [PMID: 33630160 PMCID: PMC8270814 DOI: 10.1007/s00330-021-07714-2] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 12/13/2020] [Accepted: 01/21/2021] [Indexed: 01/02/2023]
Abstract
OBJECTIVES To reduce the dose of intravenous iodine-based contrast media (ICM) in CT through virtual contrast-enhanced images using generative adversarial networks. METHODS Dual-energy CTs in the arterial phase of 85 patients were randomly split into an 80/20 train/test collective. Four different generative adversarial networks (GANs) based on image pairs, which comprised one image with virtually reduced ICM and the original full ICM CT slice, were trained, testing two input formats (2D and 2.5D) and two reduced ICM dose levels (-50% and -80%). The amount of intravenous ICM was reduced by creating virtual non-contrast series using dual-energy data and adding the corresponding percentage of the iodine map. The evaluation was based on different scores (L1 loss, SSIM, PSNR, FID), which evaluate the image quality and similarity. Additionally, a visual Turing test (VTT) with three radiologists was used to assess the similarity and pathological consistency. RESULTS The -80% models reach an SSIM of > 98%, PSNR of > 48, L1 of between 7.5 and 8, and an FID of between 1.6 and 1.7. In comparison, the -50% models reach an SSIM of > 99%, PSNR of > 51, L1 of between 6.0 and 6.1, and an FID between 0.8 and 0.95. For the crucial question of pathological consistency, only the 50% ICM reduction networks achieved 100% consistency, which is required for clinical use. CONCLUSIONS The required amount of ICM for CT can be reduced by 50% while maintaining image quality and diagnostic accuracy using GANs. Further phantom studies and animal experiments are required to confirm these initial results. KEY POINTS • The amount of contrast media required for CT can be reduced by 50% using generative adversarial networks. • Not only the image quality but especially the pathological consistency must be evaluated to assess safety. • Too pronounced a contrast media reduction (80% in our collective) could compromise pathological consistency.
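The similarity scores used in this evaluation (L1, SSIM, PSNR) are standard full-reference metrics; a short sketch of their computation with scikit-image follows, assuming float arrays normalized to [0, 1]. This is an illustration of the metrics, not the authors' evaluation code.

```python
# Sketch of L1 / PSNR / SSIM between a reference slice and a virtually enhanced slice.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def similarity_scores(reference, generated):
    l1 = float(np.mean(np.abs(reference - generated)))
    psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
    ssim = structural_similarity(reference, generated, data_range=1.0)
    return l1, psnr, ssim
```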
Collapse
Affiliation(s)
- Johannes Haubold
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.
| | - René Hosch
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
| | - Lale Umutlu
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Axel Wetter
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Patrizia Haubold
- Department of Diagnostic and Interventional Radiology, Kliniken Essen-Mitte, Essen, Germany
| | | | - Michael Forsting
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Felix Nensa
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
| | - Sven Koitka
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.,Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
| |
Collapse
|
292
|
Zhang G, Mao Y, Li M, Peng L, Ling Y, Zhou X. The Optimal Tetralogy of Fallot Repair Using Generative Adversarial Networks. Front Physiol 2021; 12:613330. [PMID: 33708135 PMCID: PMC7942511 DOI: 10.3389/fphys.2021.613330] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 01/28/2021] [Indexed: 02/05/2023] Open
Abstract
Background Tetralogy of Fallot (TOF) is a type of congenital cardiac disease with pulmonary artery (PA) stenosis being the most common defect. Repair surgery needs an appropriate patch to enlarge the narrowed artery from the right ventricle (RV) to the PA. Methods In this work, we propose a generative adversarial network (GAN)-based method to optimize the patch size, shape, and location. Firstly, we built 3D models of each patient's PA by segmentation of cardiac computed tomography angiography. After that, normal and stenotic areas of each PA were detected and labeled into two sub-image groups. Then a GAN was trained based on these sub-images. Finally, an optimal prediction model was utilized to repair the PA with patch augmentation in a new patient. Results Fivefold cross-validation (CV) was performed for GAN-based optimal patch prediction in TOF repair, and the CV accuracy was 93.33%, followed by assessment of the clinical outcome. This showed that the GAN model has a significant advantage in finding the best balance point of patch optimization. Conclusion This approach has the potential to reduce the intraoperative misjudgment rate, thereby providing a detailed surgical plan in patients with TOF.
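The fivefold cross-validation protocol reported above can be sketched with scikit-learn's KFold; the model object, data arrays, and scoring interface are stand-ins for illustration, not the authors' GAN-based predictor.

```python
# Sketch of fivefold cross-validation over per-patient samples (NumPy arrays assumed).
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(model_factory, images, labels, folds=5):
    accs = []
    for train_idx, test_idx in KFold(n_splits=folds, shuffle=True, random_state=0).split(images):
        model = model_factory()                       # fresh model per fold
        model.fit(images[train_idx], labels[train_idx])
        accs.append(model.score(images[test_idx], labels[test_idx]))
    return float(np.mean(accs))
```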
Collapse
Affiliation(s)
- Guangming Zhang
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Yujie Mao
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Mingliang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Li Peng
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Yunfei Ling
- Department of Cardiovascular Surgery, West China Hospital, Sichuan University, Chengdu, China
| | - Xiaobo Zhou
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
| |
Collapse
|
293
|
Dashtbani Moghari M, Zhou L, Yu B, Young N, Moore K, Evans A, Fulton RR, Kyme AZ. Efficient radiation dose reduction in whole-brain CT perfusion imaging using a 3D GAN: Performance and clinical feasibility. Phys Med Biol 2021; 66. [PMID: 33621965 DOI: 10.1088/1361-6560/abe917] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 02/23/2021] [Indexed: 02/08/2023]
Abstract
Dose reduction in cerebral CT perfusion (CTP) imaging is desirable but is accompanied by an increase in noise that can compromise the image quality and the accuracy of image-based haemodynamic modelling used for clinical decision support in acute ischaemic stroke. The few reported methods aimed at denoising low-dose CTP images lack practicality by considering only small sections of the brain or being computationally expensive. Moreover, the prediction of infarct and penumbra size and location - the chief means of decision support for treatment options - from denoised data has not been explored using these approaches. In this work, we present the first application of a 3D generative adversarial network (3D GAN) for predicting normal-dose CTP data from low-dose CTP data. Feasibility of the approach was tested using real data from 30 acute ischaemic stroke patients in conjunction with low-dose simulation. The 3D GAN model was applied to 64³ voxel patches extracted from two different configurations of the CTP data: frame-based and stacked. The method led to whole-brain denoised data being generated for haemodynamic modelling within 90 seconds. Accuracy of the method was evaluated using standard image quality metrics and the extent to which the clinical content and lesion characteristics of the denoised CTP data were preserved. Results showed an average improvement of 5.15-5.32 dB PSNR and 0.025-0.033 SSIM for CTP images and 2.66-3.95 dB PSNR and 0.036-0.067 SSIM for functional maps at 50% and 25% of normal dose using the GAN model in conjunction with a stacked data regime for image synthesis. Consequently, the average lesion volumetric error was reduced significantly (p < 0.05) by 18-29% and the Dice coefficient improved significantly by 15-22%. We conclude that GAN-based denoising is a promising practical approach for reducing radiation dose in CTP studies and improving lesion characterisation.
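The 64³ patch extraction step can be illustrated as follows, assuming the CTP data have already been arranged as a single 3-D volume (for example, with time frames stacked along one axis); the stride and stacking convention are assumptions, not the paper's exact configuration.

```python
# Sketch: extract overlapping 64x64x64 patches from a 3-D volume for GAN training.
import numpy as np

def extract_patches(volume, patch=64, stride=32):
    patches = []
    z, y, x = volume.shape
    for i in range(0, z - patch + 1, stride):
        for j in range(0, y - patch + 1, stride):
            for k in range(0, x - patch + 1, stride):
                patches.append(volume[i:i + patch, j:j + patch, k:k + patch])
    return np.stack(patches)
```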
Collapse
Affiliation(s)
- Mahdieh Dashtbani Moghari
- Biomedical Engineering, Faculty of Engineering and Computer Science, Darlington Campus, The University of Sydney, NSW, 2006, AUSTRALIA
| | - Luping Zhou
- The University of Sydney, Sydney, 2006, AUSTRALIA
| | - Biting Yu
- University of Wollongong, Wollongong, New South Wales, AUSTRALIA
| | - Noel Young
- Radiology, Westmead Hospital, Sydney, New South Wales, AUSTRALIA
| | - Krystal Moore
- Westmead Hospital, Sydney, New South Wales, AUSTRALIA
| | - Andrew Evans
- Aged Care & Stroke, Westmead Hospital, Sydney, New South Wales, AUSTRALIA
| | - Roger R Fulton
- Faculty of Health Sciences, University of Sydney, 94 Mallett Street, Camperdown, Sydney, New South Wales, 2050, AUSTRALIA
| | - Andre Z Kyme
- Brain & Mind Research Institute, University of Sydney, Sydney, NSW 2006, Sydney, New South Wales, AUSTRALIA
| |
Collapse
|
294
|
Botwe BO, Akudjedu TN, Antwi WK, Rockson P, Mkoloma SS, Balogun EO, Elshami W, Bwambale J, Barare C, Mdletshe S, Yao B, Arkoh S. The integration of artificial intelligence in medical imaging practice: Perspectives of African radiographers. Radiography (Lond) 2021; 27:861-866. [PMID: 33622574 DOI: 10.1016/j.radi.2021.01.008] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Revised: 01/29/2021] [Accepted: 01/31/2021] [Indexed: 02/07/2023]
Abstract
INTRODUCTION The current technological developments in medical imaging are centred largely on the increasing integration of artificial intelligence (AI) into all equipment modalities. This survey assessed the perspectives of African radiographers on the integration of AI in medical imaging in order to offer unique recommendations to support the training of the radiography workforce. METHODS An exploratory cross-sectional online survey of radiographers working within Africa was conducted from March to August 2020. The survey obtained data about their demographics and perspectives on AI implementation and usage. Data obtained were analysed using both descriptive and inferential statistics. RESULTS A total of 1020 valid responses were obtained. The majority of respondents (n = 883, 86.6%) were working in general X-ray departments. Of the respondents, 84.9% (n = 866) indicated that AI technology would improve radiography practice and quality assurance for efficient diagnosis and improved clinical care. Fear of job losses following the implementation of AI was a key concern of most radiographers (n = 625, 61.3%). CONCLUSION Generally, radiographers were delighted about the integration of AI into medical imaging; however, there were concerns about job security and lack of knowledge. There is an urgent need for stakeholders in medical imaging infrastructure development and practices in Africa to start empowering radiographers through training programmes, funding, and motivational support, and to create clear roadmaps to guide the adoption and integration of AI in medical imaging in Africa. IMPLICATION FOR PRACTICE The current study offers unique suggestions and recommendations to support the training of the African radiography workforce and others in similar resource-limited settings to provide quality care using AI-integrated imaging modalities.
Collapse
Affiliation(s)
- B O Botwe
- Department of Radiography, School of Biomedical and Allied Health Sciences, College of Health Sciences, University of Ghana, Box KB143, Korle Bu, Accra, Ghana.
| | - T N Akudjedu
- Institute of Medical Imaging & Visualisation, Department of Medical Science & Public Health, Faculty of Health & Social Sciences, Bournemouth University, Bournemouth, UK.
| | - W K Antwi
- Department of Radiography, School of Biomedical and Allied Health Sciences, College of Health Sciences, University of Ghana, Box KB143, Korle Bu, Accra, Ghana.
| | - P Rockson
- Department of Medical Imaging, University of Health and Allied Sciences, Ho, Ghana.
| | | | - E O Balogun
- National Orthopaedic Hospital, Igbobi, Lagos, Nigeria.
| | - W Elshami
- Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, United Arab Emirates.
| | - J Bwambale
- Society of Radiography of Uganda, Uganda.
| | - C Barare
- Kenyatta National Hospital, Kenya.
| | - S Mdletshe
- University of Auckland, Faculty of Medical and Health Sciences, Department of Anatomy and Medical Imaging, Auckland, New Zealand.
| | - B Yao
- National Institute for Health Technologists' Training (INFAS) Côte d'Ivoire, Department of Medical Imaging and Radiotherapy, Côte d'Ivoire.
| | - S Arkoh
- Department of Radiography, School of Biomedical and Allied Health Sciences, College of Health Sciences, University of Ghana, Box KB143, Korle Bu, Accra, Ghana.
| |
Collapse
|
295
|
Machine Learning and Deep Neural Networks: Applications in Patient and Scan Preparation, Contrast Medium, and Radiation Dose Optimization. J Thorac Imaging 2021; 35 Suppl 1:S17-S20. [PMID: 32079904 DOI: 10.1097/rti.0000000000000482] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Artificial intelligence (AI) algorithms depend on large amounts of robust data and the application of appropriate computational power and software. AI offers the potential for major changes in cardiothoracic imaging. Beyond image processing, machine learning and deep learning have the potential to support the image acquisition process. AI applications may improve patient care through superior image quality, may lower radiation dose with AI-driven reconstruction algorithms, and may help avoid overscanning. This review summarizes recent promising applications of AI in patient and scan preparation as well as in contrast medium and radiation dose optimization.
Collapse
|
296
|
Beyond the Artificial Intelligence Hype: What Lies Behind the Algorithms and What We Can Achieve. J Thorac Imaging 2021; 35 Suppl 1:S3-S10. [PMID: 32073539 DOI: 10.1097/rti.0000000000000485] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
The field of artificial intelligence (AI) is currently experiencing a period of extensive growth in a wide variety of fields, medicine not being the exception. The base of AI is mathematics and computer science, and the current prominence of AI in industry and research stands on three pillars: big data, high-performance computing infrastructure, and algorithms. In the current digital era, increased storage capabilities and data collection systems lead to a massive influx of data for AI algorithms. The size and quality of data are two major factors influencing the performance of AI applications, although performance also depends heavily on the type of task at hand and the algorithm chosen to perform it. AI may potentially automate several tedious tasks in radiology, particularly in cardiothoracic imaging, by providing pre-readings for the detection of abnormalities and accurate quantification (for example, oncologic lesion volume tracking and cardiac volumes) and by optimizing images. Although AI-based applications offer a great opportunity to improve radiology workflow, several challenges need to be addressed, ranging from image standardization to sophisticated algorithm development and large-scale evaluation. Integration of AI into the clinical workflow also needs to address legal barriers related to the security and protection of patient-sensitive data, as well as liability, before AI will reach its full potential in cardiothoracic imaging.
Collapse
|
297
|
Xue H, Zhang Q, Zou S, Zhang W, Zhou C, Tie C, Wan Q, Teng Y, Li Y, Liang D, Liu X, Yang Y, Zheng H, Zhu X, Hu Z. LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks. Quant Imaging Med Surg 2021; 11:749-762. [PMID: 33532274 PMCID: PMC7779905 DOI: 10.21037/qims-20-66] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2020] [Accepted: 09/25/2020] [Indexed: 11/06/2022]
Abstract
BACKGROUND Reducing the radiation tracer dose and scanning time during positron emission tomography (PET) imaging can reduce the cost of the tracer, reduce motion artifacts, and increase the efficiency of the scanner. However, the reconstructed images tend to be noisy. It is therefore very important to reconstruct high-quality images from low-count (LC) data. We propose a deep learning method called LCPR-Net, which directly reconstructs full-count (FC) PET images from the corresponding LC sinogram data. METHODS Based on the framework of a generative adversarial network (GAN), we enforce a cyclic consistency constraint on the least-squares loss to establish a nonlinear end-to-end mapping from LC sinograms to FC images. In this process, we merge a convolutional neural network (CNN) and a residual network for feature extraction and image reconstruction. In addition, a domain transform (DT) operation supplies a priori information to the cycle-consistent GAN (CycleGAN) network, avoiding the need for a large amount of computational resources to learn this transformation. RESULTS The main advantages of this method are as follows. First, the network can take LC sinogram data as input and directly reconstruct an FC PET image, with a reconstruction speed faster than that of model-based iterative reconstruction. Second, reconstruction based on the CycleGAN framework improves the quality of the reconstructed image. CONCLUSIONS Compared with other state-of-the-art methods, the quantitative and qualitative evaluation results show that the proposed method is accurate and effective for FC PET image reconstruction.
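The combination described in the abstract, a least-squares adversarial objective with a cycle-consistency constraint between the low-count and full-count domains, can be sketched as below. The generator/discriminator interfaces and the weight lambda_cyc are placeholders for illustration, not the published LCPR-Net code.

# Hedged sketch of a least-squares GAN loss plus forward/backward cycle-consistency
# terms, as in a CycleGAN-style LC -> FC mapping.
import torch
import torch.nn.functional as F

def lsgan_loss(d_out: torch.Tensor, is_real: bool) -> torch.Tensor:
    """Least-squares GAN loss: push discriminator outputs toward 1 (real) or 0 (fake)."""
    target = torch.ones_like(d_out) if is_real else torch.zeros_like(d_out)
    return F.mse_loss(d_out, target)

def cycle_consistent_generator_loss(G_lc2fc, G_fc2lc, D_fc, D_lc,
                                    lc_img, fc_img, lambda_cyc: float = 10.0):
    """Generator objective: adversarial terms plus both cycle-consistency terms."""
    fake_fc = G_lc2fc(lc_img)          # LC -> FC
    fake_lc = G_fc2lc(fc_img)          # FC -> LC
    adv = lsgan_loss(D_fc(fake_fc), True) + lsgan_loss(D_lc(fake_lc), True)
    cyc = F.l1_loss(G_fc2lc(fake_fc), lc_img) + F.l1_loss(G_lc2fc(fake_lc), fc_img)
    return adv + lambda_cyc * cyc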
Collapse
Affiliation(s)
- Hengzhi Xue
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
| | - Sijuan Zou
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Weiguang Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Changjun Tie
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Yongchang Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Xiaohua Zhu
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
298
|
Huang Z, Chen Z, Chen J, Lu P, Quan G, Du Y, Li C, Gu Z, Yang Y, Liu X, Zheng H, Liang D, Hu Z. DaNet: dose-aware network embedded with dose-level estimation for low-dose CT imaging. Phys Med Biol 2021; 66:015005. [PMID: 33120378 DOI: 10.1088/1361-6560/abc5cc] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Many deep learning (DL)-based image restoration methods for low-dose CT (LDCT) directly employ end-to-end networks on low-dose training data without considering dose differences. However, the radiation dose difference has a great impact on the ultimate results, and lower doses increase the difficulty of restoration. Moreover, there is increasing demand to design and estimate acceptable scanning doses for patients in clinical practice, necessitating dose-aware networks with embedded adaptive dose estimation. In this paper, we consider these dose differences of the input LDCT images and propose an adaptive dose-aware network. First, considering a large dose distribution range and for simulation convenience, we coarsely define five dose levels in advance: lowest, lower, mild, higher and highest radiation dose. Instead of directly building an end-to-end mapping function between LDCT images and their high-dose CT counterparts, the dose level is estimated in a first stage. In the second stage, the adaptively learned dose level guides the image restoration process as prior information through a channel feature transform. We conduct experiments on a simulated dataset based on the original high-dose portion of the American Association of Physicists in Medicine challenge dataset from the Mayo Clinic. Ablation studies validate the effectiveness of the dose-level estimation, and the experimental results show that our method is superior to several other DL-based methods. Specifically, our method provides clearly better performance in terms of peak signal-to-noise ratio and visual quality as reflected in subjective scores. Owing to the dual-stage process, our method may be limited by its larger number of parameters and coarse dose-level definitions; further improvements for clinical applications with different CT equipment vendors are planned in future work.
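A minimal sketch of the two-stage idea described above, assuming a FiLM-style channel modulation for the "channel feature transform": a small classifier predicts one of the five coarse dose levels, and the soft estimate scales and shifts the restoration features per channel. Layer sizes and the exact modulation form are assumptions rather than the published DaNet architecture.

# Dose-level estimation (stage 1) and dose-conditioned feature modulation (stage 2).
import torch
import torch.nn as nn

NUM_DOSE_LEVELS = 5  # lowest, lower, mild, higher, highest (per the abstract)

class DoseLevelEstimator(nn.Module):
    """Stage 1: predict a coarse dose level from the LDCT image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, NUM_DOSE_LEVELS)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))  # dose-level logits

class DoseAwareModulation(nn.Module):
    """Stage 2: scale/shift restoration features per channel, conditioned on dose level."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.to_scale = nn.Linear(NUM_DOSE_LEVELS, channels)
        self.to_shift = nn.Linear(NUM_DOSE_LEVELS, channels)

    def forward(self, feat, dose_logits):
        p = dose_logits.softmax(dim=1)                    # soft dose-level estimate
        scale = self.to_scale(p).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(p).unsqueeze(-1).unsqueeze(-1)
        return feat * (1 + scale) + shift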
Collapse
Affiliation(s)
- Zhenxing Huang
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan 430074, People's Republic of China. School of Computer Science & Technology, Huazhong University of Science & Technology, Wuhan 430074, People's Republic of China. Key Laboratory of Information Storage System, Engineering Research Center of Data Storage Systems and Technology, Ministry of Education of China, Wuhan 430074, People's Republic of China. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
Collapse
|
299
|
Zeng L, Xu X, Zeng W, Peng W, Zhang J, Sixian H, Liu K, Xia C, Li Z. Deep learning trained algorithm maintains the quality of half-dose contrast-enhanced liver computed tomography images: Comparison with hybrid iterative reconstruction: Study for the application of deep learning noise reduction technology in low dose. Eur J Radiol 2021; 135:109487. [PMID: 33418383 DOI: 10.1016/j.ejrad.2020.109487] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 12/14/2020] [Accepted: 12/17/2020] [Indexed: 02/05/2023]
Abstract
PURPOSE This study compares the image and diagnostic qualities of a DEep Learning Trained Algorithm (DELTA) for half-dose contrast-enhanced liver computed tomography (CT) with those of a commercial hybrid iterative reconstruction (HIR) method used for standard-dose CT (SDCT). METHODS This study enrolled 207 adults, who were divided into two groups: SDCT and low-dose CT (LDCT). SDCT was reconstructed using the HIR method (SDCTHIR), and LDCT was reconstructed using both the HIR method (LDCTHIR) and DELTA (LDCTDL). Noise, Hounsfield unit (HU) values, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared among the three image series. Two radiologists assessed the noise, artefacts, overall image quality, visualisation of critical anatomical structures, and lesion detection, characterisation and visualisation. RESULTS The mean effective doses were 5.64 ± 1.96 mSv for SDCT and 2.87 ± 0.87 mSv for LDCT. The noise of LDCTDL was significantly lower than that of SDCTHIR and LDCTHIR. The SNR and CNR of LDCTDL were significantly higher than those of the other two groups. The overall image quality, visualisation of anatomical structures and lesion visualisation did not differ significantly between LDCTDL and SDCTHIR. For lesion detection, the sensitivities and specificities of SDCTHIR vs. LDCTDL were 81.9 % vs. 83.7 % and 89.1 % vs. 86.3 %, respectively, on a per-patient basis. SDCTHIR showed 75.4 % sensitivity and 82.6 % specificity for lesion characterisation on a per-patient basis, whereas LDCTDL showed 73.5 % sensitivity and 82.4 % specificity. CONCLUSIONS LDCT with DELTA achieved an approximately 49 % dose reduction compared with SDCT with HIR while maintaining image quality on contrast-enhanced liver CT.
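As a reference for the objective metrics compared in this study, the snippet below computes ROI-based noise, SNR and CNR in the usual way (noise taken as the standard deviation within a homogeneous ROI). The authors' exact ROI placement and reference tissue are not specified here, so this is only a generic sketch.

# Generic ROI-based noise/SNR/CNR computation for CT images (HU values).
import numpy as np

def roi_stats(image: np.ndarray, mask: np.ndarray):
    """Mean HU and standard deviation inside a boolean ROI mask."""
    vals = image[mask]
    return float(vals.mean()), float(vals.std())

def snr(image: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Signal-to-noise ratio: mean HU of the tissue ROI divided by its noise."""
    mean_hu, sd = roi_stats(image, tissue_mask)
    return mean_hu / sd

def cnr(image: np.ndarray, lesion_mask: np.ndarray, background_mask: np.ndarray) -> float:
    """Contrast-to-noise ratio: HU difference between lesion and background over background noise."""
    lesion_mean, _ = roi_stats(image, lesion_mask)
    bg_mean, bg_sd = roi_stats(image, background_mask)
    return abs(lesion_mean - bg_mean) / bg_sd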
Collapse
Affiliation(s)
- Lingming Zeng
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Xu Xu
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Wen Zeng
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Wanlin Peng
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Jinge Zhang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Hu Sixian
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Keling Liu
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Chunchao Xia
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Zhenlin Li
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China.
| |
Collapse
|
300
|
Hu D, Liu J, Lv T, Zhao Q, Zhang Y, Quan G, Feng J, Chen Y, Luo L. Hybrid-Domain Neural Network Processing for Sparse-View CT Reconstruction. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3011413] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|