1
Wei S, Si L, Huang T, Du S, Yao Y, Dong Y, Ma H. Deep-learning-based cross-modality translation from Stokes image to bright-field contrast. J Biomed Opt 2023; 28:102911. [PMID: 37867633] [PMCID: PMC10587695] [DOI: 10.1117/1.jbo.28.10.102911] [Received: 06/14/2023] [Revised: 08/25/2023] [Accepted: 09/25/2023]
Abstract
Significance: Mueller matrix (MM) microscopy has proven to be a powerful tool for probing the microstructural characteristics of biological samples down to the subwavelength scale. In clinical practice, however, doctors usually rely on bright-field microscopy images of stained tissue slides to identify the characteristic features of specific diseases and make an accurate diagnosis. Cross-modality translation based on polarization imaging helps pathologists analyze sample properties across modalities more efficiently and consistently.
Aim: In this work, we propose a computational image translation technique based on deep learning that produces bright-field microscopy contrast from snapshot Stokes images of stained pathological tissue slides. Taking Stokes images as input instead of MM images makes the translated bright-field images insensitive to variations in the light source and samples.
Approach: We adopted CycleGAN as the translation model to avoid the requirement for co-registered image pairs during training. The method can generate images equivalent to bright-field images with different staining styles of the same region.
Results: Pathological slices of liver and breast tissues with hematoxylin and eosin staining, and lung tissues with two types of immunohistochemistry staining, i.e., thyroid transcription factor-1 and Ki-67, were used to demonstrate the effectiveness of our method. The outputs were evaluated with four image quality assessment methods.
Conclusions: By comparing the cross-modality translation performance against MM images, we found that Stokes images, which offer faster acquisition and independence from light intensity and image registration, can be translated well into bright-field images.
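The abstract does not spell out how the snapshot Stokes input is formed, but as background, a Stokes vector can be assembled from intensity measurements taken behind polarization analyzers. A minimal NumPy sketch, for illustration only; the function name and the six-analyzer scheme are conventional assumptions, not the paper's acquisition setup:

```python
import numpy as np

def stokes_from_intensities(I0, I45, I90, I135, IR, IL):
    """Assemble a full Stokes vector (S0..S3) per pixel from six
    intensity images taken behind ideal polarization analyzers:
    linear at 0/45/90/135 degrees, and right/left circular."""
    S0 = I0 + I90     # total intensity
    S1 = I0 - I90     # horizontal vs. vertical linear polarization
    S2 = I45 - I135   # +45 vs. -45 degree linear polarization
    S3 = IR - IL      # right vs. left circular polarization
    return np.stack([S0, S1, S2, S3], axis=0)

# Fully horizontally polarized light of unit intensity:
I0, I45, I90, I135 = 1.0, 0.5, 0.0, 0.5
IR, IL = 0.5, 0.5
S = stokes_from_intensities(*map(np.asarray, (I0, I45, I90, I135, IR, IL)))
# S = [1, 1, 0, 0]: all intensity in the horizontal linear component
```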
Affiliation(s)
- Shilong Wei
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Lu Si
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tongyu Huang
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Shan Du
- University of Chinese Academy of Sciences, Shenzhen Hospital, Department of Pathology, Shenzhen, China
- Yue Yao
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Yang Dong
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Hui Ma
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Tsinghua University, Department of Physics, Beijing, China
2
Xu Y, Zhang H, He F, Guo J, Wang Z. Enhanced CycleGAN Network with Adaptive Dark Channel Prior for Unpaired Single-Image Dehazing. Entropy (Basel) 2023; 25:856. [PMID: 37372201] [DOI: 10.3390/e25060856] [Received: 04/12/2023] [Revised: 05/22/2023] [Accepted: 05/23/2023]
Abstract
Unpaired single-image dehazing has become a challenging research hotspot owing to its wide applicability in modern transportation, remote sensing, and intelligent surveillance. Recently, CycleGAN-based approaches have been widely adopted for single-image dehazing as the foundation of unpaired, unsupervised training. However, these approaches still have deficiencies, such as obvious artificial recovery traces and distortion in the processed images. This paper proposes a novel enhanced CycleGAN network with an adaptive dark channel prior for unpaired single-image dehazing. First, a Wave-ViT semantic segmentation model is used to adapt the dark channel prior (DCP) so that the transmittance and atmospheric light are recovered accurately. Then, the scattering coefficient, derived from both physical calculation and random sampling, is used to optimize the rehazing process. Bridged by the atmospheric scattering model, the dehazing/rehazing cycle branches are combined to form an enhanced CycleGAN framework. Finally, experiments were conducted on reference and no-reference datasets. The proposed model achieved an SSIM of 94.9% and a PSNR of 26.95 on the SOTS-outdoor dataset, and an SSIM of 84.71% and a PSNR of 22.72 on the O-HAZE dataset. It significantly outperforms typical existing algorithms in both objective quantitative evaluation and subjective visual quality.
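The atmospheric scattering model and DCP transmission estimate that this abstract leans on can be sketched in NumPy. This is a generic illustration of the classical prior, not the paper's adaptive, segmentation-guided version; the function names and the patch/omega parameters are conventional choices, not taken from the paper:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB channels and a local patch
    (the 'dark channel'). img: H x W x 3, values in [0, 1]."""
    mins = img.min(axis=2)
    H, W = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(hazy, A, omega=0.95, patch=15):
    """DCP transmission estimate: t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(hazy / A, patch)

def rehaze(J, t, A):
    """Atmospheric scattering model: I(x) = J(x) t(x) + A (1 - t(x))."""
    return J * t[..., None] + A * (1.0 - t[..., None])

# A black, haze-free scene rehazed with uniform transmission 0.6
# and white airlight becomes uniformly gray (0.4 everywhere):
J = np.zeros((8, 8, 3))
t = np.full((8, 8), 0.6)
A = np.array([1.0, 1.0, 1.0])
I = rehaze(J, t, A)
```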
Affiliation(s)
- Yijun Xu
- Westa College, Southwest University, Chongqing 400715, China
- Hanzhi Zhang
- College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
- Fuliang He
- College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China
- Jiachi Guo
- Westa College, Southwest University, Chongqing 400715, China
- Zichen Wang
- Westa College, Southwest University, Chongqing 400715, China
3
Deng F, Wan Q, Zeng Y, Shi Y, Wu H, Wu Y, Xu W, Mok GSP, Zhang X, Hu Z. Image restoration of motion artifacts in cardiac arteries and vessels based on a generative adversarial network. Quant Imaging Med Surg 2022; 12:2755-2766. [PMID: 35502383] [PMCID: PMC9014156] [DOI: 10.21037/qims-20-1400] [Received: 12/27/2020] [Accepted: 01/14/2022]
Abstract
BACKGROUND: When a patient's heart rate exceeds the physical limits of the scanning device, even retrospective electrocardiography (ECG) gating cannot correct motion artifacts. The purpose of this study was to use deep learning to correct motion artifacts in coronary computed tomography angiography (CCTA) images acquired with retrospective ECG gating.
METHODS: We used a cycle Wasserstein generative adversarial network with a gradient penalty (cycle WGAN-GP) to synthesize CCTA images without motion artifacts, and evaluated the images with objective image metrics and clinical quantitative scores. The objective metrics were peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and normalized mean square error (NMSE). For clinical scoring, we randomly selected 50 image sets from the test data set and invited 2 radiologists from Zhongnan Hospital of Wuhan University to score the synthesized images.
RESULTS: On the test images, the PSNR, SSIM, NMSE, and clinical quantitative score were 24.96±1.54, 0.769±0.055, 0.031±0.023, and 4.12±0.61, respectively. The images synthesized by cycle WGAN-GP scored better on both objective metrics and clinical scores than those synthesized by a cycle least squares generative adversarial network (LSGAN), UNet, WGAN, and cycle WGAN.
CONCLUSIONS: The proposed method effectively corrects coronary artery motion artifacts in CCTA images and outperforms the other methods. Judging by the clinical scores, correction by this method does not affect clinical diagnosis.
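Two of the objective metrics named above, PSNR and NMSE, have standard closed-form definitions (SSIM requires a structural model and is omitted here). A minimal NumPy sketch assuming an 8-bit data range; this is illustrative, not the authors' evaluation code:

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nmse(ref, test):
    """Normalized mean square error: ||ref - test||^2 / ||ref||^2."""
    ref = ref.astype(float)
    test = test.astype(float)
    return np.sum((ref - test) ** 2) / np.sum(ref ** 2)

# A constant error of 10 gray levels against a flat reference of 100:
ref = np.full((64, 64), 100.0)
test = ref + 10.0
# psnr(ref, test) = 20 * log10(255 / 10), about 28.13 dB
# nmse(ref, test) = 10^2 / 100^2 = 0.01
```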
Affiliation(s)
- Fuquan Deng
- Computer Department of North China Electric Power University, Baoding, China
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, China
- Yingting Zeng
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Yanbin Shi
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Huiying Wu
- Department of Radiology, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou, China
- Yu Wu
- Department of Radiology, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou, China
- Weifeng Xu
- Computer Department of North China Electric Power University, Baoding, China
- Greta S. P. Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Taipa, Macau, China
- Xiaochun Zhang
- Department of Radiology, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
4
Zhou H, Liu X, Wang H, Chen Q, Wang R, Pang ZF, Zhang Y, Hu Z. The synthesis of high-energy CT images from low-energy CT images using an improved cycle generative adversarial network. Quant Imaging Med Surg 2022; 12:28-42. [PMID: 34993058] [DOI: 10.21037/qims-21-182] [Received: 02/13/2021] [Accepted: 07/02/2021]
Abstract
Background: The radiation dose a patient receives during dual-energy computed tomography (CT) is a significant concern in the medical community, and balancing the level of radiation against CT image quality is challenging. This paper proposes a method for synthesizing high-energy CT (HECT) images from low-energy CT (LECT) images with a neural network, offering an alternative to HECT scanning that requires only an LECT scan and thereby greatly reduces the patient's radiation dose.
Methods: In the training phase, the proposed structure cyclically generates HECT and LECT images to improve the accuracy of extracted edge and texture features. Specifically, we combine multiple connection methods with channel attention (CA) and pixel attention (PA) mechanisms to improve the network's ability to map image features. In the prediction phase, we use only the network component that synthesizes HECT images from LECT images.
Results: The proposed method was evaluated on clinical hip CT image data sets from Guizhou Provincial People's Hospital. Compared with other available methods [a generative adversarial network (GAN), a residual encoder-to-decoder network with a visual geometry group (VGG) pretrained model (RED-VGG), a Wasserstein GAN (WGAN), and CycleGAN] in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), normalized mean square error (NMSE), and a visual evaluation, the proposed method performed best on every criterion. Relative to CycleGAN, it improved the PSNR by 2.44%, the SSIM by 1.71%, and the NMSE by 15.2%, and the differences in these statistical indicators are statistically significant.
Conclusions: The proposed method synthesizes high-energy CT images from low-energy CT images, significantly reducing both the cost of treatment and the radiation dose received by patients. On both image quality metrics and visual comparison, its results are superior to those of the other methods.
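The channel attention (CA) and pixel attention (PA) mechanisms mentioned in the Methods follow a common pattern: pool or project the features, pass the result through a small gate, and rescale. A generic NumPy sketch of both gates; the shapes, weight names, and ReLU bottleneck are conventional choices, not the paper's actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation style channel attention.
    feat: C x H x W feature map; W1: (C/r, C); W2: (C, C/r).
    Globally average-pool each channel, pass through a two-layer
    ReLU bottleneck with a sigmoid gate, and rescale the channels."""
    squeeze = feat.mean(axis=(1, 2))                     # (C,)
    gate = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0.0))   # (C,)
    return feat * gate[:, None, None]

def pixel_attention(feat, w):
    """Pixel attention: a 1x1 projection (here a per-channel weight
    vector w of shape (C,)) plus a sigmoid yields one gate per
    spatial location, shared across channels."""
    gate = sigmoid(np.tensordot(w, feat, axes=([0], [0])))  # (H, W)
    return feat * gate[None, :, :]

C, H, W = 4, 8, 8
feat = np.ones((C, H, W))
# With zero weights every gate is sigmoid(0) = 0.5, so both
# attention modules simply halve the features:
out = channel_attention(feat, np.zeros((2, C)), np.zeros((C, 2)))
out2 = pixel_attention(feat, np.zeros(C))
```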
Affiliation(s)
- Haojie Zhou
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- College of Software, Henan University, Kaifeng, China
- Xinfeng Liu
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Haiyan Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qihang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Rongpin Wang
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Zhi-Feng Pang
- College of Mathematics and Statistics, Henan University, Kaifeng, China
- Yong Zhang
- Department of Orthopaedics, Shenzhen University General Hospital, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
5
Imae T, Kaji S, Kida S, Matsuda K, Takenaka S, Aoki A, Nakamoto T, Ozaki S, Nawa K, Yamashita H, Nakagawa K, Abe O. [Improvement in Image Quality of CBCT during Treatment by Cycle Generative Adversarial Network]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:1173-1184. [PMID: 33229847] [DOI: 10.6009/jjrt.2020_jsrt_76.11.1173]
Abstract
PURPOSE: Volumetric modulated arc therapy (VMAT) acquires projection images during rotational irradiation, from which cone-beam computed tomography (CBCT) images during VMAT delivery can be reconstructed. The poor quality of these CBCT images prevents accurate recognition of organ positions during treatment. The purpose of this study was to improve the image quality of intra-treatment CBCT with a cycle generative adversarial network (CycleGAN).
METHODS: Twenty patients with clinically localized prostate cancer were treated with VMAT, and projection images for intra-treatment CBCT (iCBCT) were acquired. Synthesizing a PCT-like image (SynPCT) with improved quality by CycleGAN requires only unpaired, unaligned iCBCT and planning CT (PCT) images for training. We performed visual and quantitative evaluations comparing iCBCT, SynPCT, and PCT deformable image registration (DIR) to confirm the clinical usefulness.
RESULTS: We identified suitable CycleGAN networks and hyperparameters for SynPCT. The image quality of SynPCT improved both visually and quantitatively while preserving the anatomical structures of the original iCBCT. Undesirable deformation of the PCT was reduced when SynPCT, rather than iCBCT, was used as the DIR reference.
CONCLUSION: We performed image synthesis that preserves organ position by applying CycleGAN to iCBCT and confirmed its clinical usefulness.
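CycleGAN's tolerance of unpaired, unaligned training data, which this study relies on, comes from its cycle-consistency term: translating iCBCT to PCT style and back should reproduce the input. A toy NumPy sketch of that loss term; the generators here are stand-in functions rather than networks, and lam is the conventional weighting, not the authors' value:

```python
import numpy as np

def cycle_consistency_loss(real_a, real_b, G, F, lam=10.0):
    """CycleGAN cycle-consistency term. G maps domain A to B and
    F maps B to A; translating A->B->A (and B->A->B) should
    reproduce the input. Returns
    lam * (mean|F(G(a)) - a| + mean|G(F(b)) - b|)."""
    loss_a = np.mean(np.abs(F(G(real_a)) - real_a))
    loss_b = np.mean(np.abs(G(F(real_b)) - real_b))
    return lam * (loss_a + loss_b)

# Toy generators: G doubles intensities, F halves them, so each is
# the other's exact inverse and the cycle loss vanishes.
G = lambda x: 2.0 * x
F = lambda x: 0.5 * x
a = np.random.rand(16, 16)
b = np.random.rand(16, 16)
loss = cycle_consistency_loss(a, b, G, F)   # exactly 0 for inverse maps
```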
Affiliation(s)
- Shizuo Kaji
- Department of Radiology, University of Tokyo Hospital
- Institute of Mathematics for Industry, Kyushu University
- Satoshi Kida
- Department of Radiology, University of Tokyo Hospital
- Atsushi Aoki
- Department of Radiology, University of Tokyo Hospital
- Sho Ozaki
- Department of Radiology, University of Tokyo Hospital
- Kanabu Nawa
- Department of Radiology, University of Tokyo Hospital
- Osamu Abe
- Department of Radiology, University of Tokyo Hospital