1
Shen K, Vivone G, Yang X, Lolli S, Schmitt M. A benchmarking protocol for SAR colorization: From regression to deep learning approaches. Neural Netw 2024; 169:698-712. [PMID: 37976594] [DOI: 10.1016/j.neunet.2023.10.058] [Received: 07/06/2023; Revised: 10/02/2023; Accepted: 10/31/2023]
Abstract
Synthetic aperture radar (SAR) images are widely used in remote sensing. Interpreting SAR images can be challenging due to their intrinsic speckle noise and grayscale nature. To address this issue, SAR colorization has emerged as a research direction that colorizes grayscale SAR images while preserving the original spatial and radiometric information. However, this research field is still in its early stages and has many limitations. In this paper, we propose a full research line for supervised learning-based approaches to SAR colorization. Our approach includes a protocol for generating synthetic color SAR images, several baselines, and an effective method based on the conditional generative adversarial network (cGAN) for SAR colorization. We also propose numerical assessment metrics for the problem at hand. To our knowledge, this is the first attempt to propose a research line for SAR colorization that includes a protocol, a benchmark, and a complete performance evaluation. Our extensive tests demonstrate the effectiveness of the proposed cGAN-based network for SAR colorization. The code is available at https://github.com/shenkqtx/SAR-Colorization-Benchmarking-Protocol.
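The protocol's quantitative evaluation compares a colorized result against a reference color image. The abstract does not list the exact metric suite, so as a minimal illustration of such full-reference assessment, a PSNR computation might look like:

```python
import numpy as np

def psnr(reference: np.ndarray, colorized: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference color image and a
    colorized result; higher is better, identical images give infinity."""
    mse = np.mean((reference.astype(np.float64) - colorized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# An identical pair scores infinite PSNR; a single perturbed channel value
# already yields a finite score (here roughly 51 dB on an 8x8 patch).
ref = np.full((8, 8, 3), 128, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0, 0] = 138
print(psnr(ref, ref))    # inf
print(psnr(ref, noisy))
```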
Affiliation(s)
- Kangqing Shen
- School of Mathematical Sciences, Beihang University, Beijing, 102206, China
- Gemine Vivone
- Institute of Methodologies for Environmental Analysis, CNR-IMAA, Tito Scalo, 85050, Italy; National Biodiversity Future Center, NBFC, Palermo, 90133, Italy
- Xiaoyuan Yang
- School of Mathematical Sciences, Beihang University, Beijing, 102206, China; Key Laboratory of Mathematics, Information and Behavior, Ministry of Education, Beihang University, Beijing, 102206, China
- Simone Lolli
- Institute of Methodologies for Environmental Analysis, CNR-IMAA, Tito Scalo, 85050, Italy; CommSensLab, Department of Signal Theory and Communications, Polytechnic University of Catalonia, Barcelona, 08034, Spain
2
Jeong J, Kim KD, Nam Y, Cho CE, Go H, Kim N. Stain normalization using score-based diffusion model through stain separation and overlapped moving window patch strategies. Comput Biol Med 2023; 152:106335. [PMID: 36473344] [DOI: 10.1016/j.compbiomed.2022.106335] [Received: 10/06/2022; Revised: 11/10/2022; Accepted: 11/15/2022]
Abstract
Hematoxylin and eosin (H&E) staining is the gold standard modality for diagnosis in medicine. However, the dosage ratio of hematoxylin to eosin in H&E staining has not yet been standardized. Additionally, H&E stains fade out at various speeds. Therefore, staining quality can differ from image to image, and stain normalization is a critical preprocessing step for training deep learning (DL) models, especially in long-term and/or multicenter digital pathology studies. However, conventional methods for stain normalization have significant drawbacks, such as collapse of the structure and/or texture of tissue. In addition, conventional methods require a reference patch or slide. Meanwhile, DL-based methods carry a risk of overfitting and/or grid artifacts. We developed a score-based diffusion model of colorization for stain normalization. However, mistransfer, in which the model confuses hematoxylin with eosin, can occur with a score-based diffusion model due to its highly diverse generative nature. To overcome this mistransfer, we propose a stain separation method using sparse non-negative matrix factorization (SNMF), which decomposes a pathology slide into its hematoxylin and eosin components so that each stain can be normalized separately. Furthermore, inpainting with overlapped moving window patches is used to prevent grid artifacts in whole slide image normalization. Our method can normalize whole slide pathology images through this stain normalization pipeline with decent performance.
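As a rough sketch of the stain-separation idea, the Beer-Lambert optical density of a patch can be factorized into non-negative stain colors and per-pixel concentrations. Note that this uses scikit-learn's plain NMF rather than the sparse NMF (SNMF) the paper employs, and all names here are illustrative:

```python
import numpy as np
from sklearn.decomposition import NMF

def separate_stains(rgb, n_stains=2):
    """Decompose an H&E patch into per-stain concentration maps by factorizing
    the Beer-Lambert optical density (absorbance is linear in concentration).
    Simplification: plain NMF stands in for the paper's sparse NMF."""
    h, w, _ = rgb.shape
    # Optical density; the +1 / 256 scaling keeps every value non-negative.
    od = -np.log10((rgb.reshape(-1, 3).astype(np.float64) + 1.0) / 256.0)
    model = NMF(n_components=n_stains, init="nndsvda", max_iter=500, random_state=0)
    concentrations = model.fit_transform(od)   # (pixels, stains)
    stain_colors = model.components_           # (stains, 3) OD color per stain
    return concentrations.reshape(h, w, n_stains), stain_colors

# Each stain channel can then be normalized independently and the patch
# rebuilt from the product of concentrations and stain colors.
```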
Affiliation(s)
- Jiheon Jeong
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea; Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Yujin Nam
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea; Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Cristina Eunbee Cho
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Heounjeong Go
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea; Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
3
Wang Y, Yan WQ. Colorizing grayscale CT images of human lungs using deep learning methods. Multimed Tools Appl 2022; 81:37805-37819. [PMID: 35475169] [PMCID: PMC9027015] [DOI: 10.1007/s11042-022-13062-0] [Received: 03/30/2021; Revised: 07/20/2021; Accepted: 04/04/2022]
Abstract
Image colorization refers to computer-aided rendering technology that transfers colors from a reference color image to grayscale images or video frames. Deep learning has advanced the field of image colorization notably in recent years. In this paper, we formulate image colorization methods relying on exemplar-based colorization and automatic colorization, respectively. For hybrid colorization, we select appropriate reference images to colorize the grayscale CT images. The colors of meat resemble those of human lungs, so images of fresh pork, lamb, beef, and even rotten meat were collected as our dataset for model training. Three sets of training data consisting of meat images are analysed to extract pixel-level features for colorizing lung CT images using an automatic approach. Regarding the results, we consider several criteria (i.e., loss functions, visual analysis, PSNR, and SSIM) to evaluate the proposed deep learning models. Moreover, compared with other methods of colorizing lung CT images, the results rendered by the deep learning methods are notably realistic and promising. The image similarity metrics SSIM and PSNR show satisfactory performance, reaching up to 0.55 and 28.0, respectively. Additionally, these methods may provide novel ideas for rendering grayscale X-ray images in airports, ferries, and railway stations.
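The reported SSIM (0.55) and PSNR (28.0) are full-reference similarity metrics. As a simplified illustration, a single-window SSIM can be sketched as follows; the standard metric instead averages local 11x11 Gaussian-weighted windows, so this global variant is only an approximation:

```python
import numpy as np

def global_ssim(x, y, max_val=255.0):
    """Structural similarity computed over the whole image in one window
    (simplification of the windowed SSIM used in practice)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0; structurally different images score lower.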
Affiliation(s)
- Yuewei Wang
- Auckland University of Technology, Auckland, 1010, New Zealand
- Wei Qi Yan
- Auckland University of Technology, Auckland, 1010, New Zealand
4
Zhang Z, Jiang H, Liu J, Shi T. Improving the fidelity of CT image colorization based on pseudo-intensity model and tumor metabolism enhancement. Comput Biol Med 2021; 138:104885. [PMID: 34626914] [DOI: 10.1016/j.compbiomed.2021.104885] [Received: 07/06/2021; Revised: 09/17/2021; Accepted: 09/18/2021]
Abstract
BACKGROUND: Owing to the imaging principle, most medical images are grayscale. Human eyes are more sensitive to color images than to grayscale images. State-of-the-art medical image colorization results are unnatural and unrealistic, especially in some organs, such as the lung field. METHOD: We propose a CT image colorization network that consists of a pseudo-intensity model, tumor metabolic enhancement, and a MemoPainter-cGAN colorization network. First, the distributions of both the density of CT images and the intensity of anatomical images are analyzed with the aim of building a pseudo-intensity model. Then, PET images, which are sensitive to tumor metabolism, are used to highlight the tumor regions. Finally, the MemoPainter-cGAN is used to generate colorized anatomical images. RESULTS: Our experiments verified that the mean structural similarity between the colorized images and the original color images is 0.995, which indicates that the colorized images largely preserve the features of the originals. The average image information entropy is 6.62, which is 13.4% higher than that of the images before metabolism enhancement and colorization, indicating that image fidelity is significantly improved. CONCLUSIONS: Our method can generate vivid anatomical images based on prior knowledge of tissue or organ intensity. The colorized PET/CT images, with abundant anatomical knowledge and high sensitivity to metabolic information, provide radiologists with a new modality that offers additional reference information.
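The image information entropy quoted in the results (6.62 bits) is the Shannon entropy of the intensity histogram; a minimal version of that measure:

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit image's intensity histogram --
    the information-content measure reported in the abstract."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # empty bins contribute 0 * log2(0) = 0
    return float(-(p * np.log2(p)).sum())

# A constant image carries no information; a perfectly uniform histogram
# over 256 gray levels maximizes entropy at 8 bits.
flat = np.zeros((16, 16), dtype=np.uint8)
full = np.arange(256, dtype=np.uint8).reshape(16, 16)
assert image_entropy(flat) == 0.0
assert image_entropy(full) == 8.0
```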
Affiliation(s)
- Zexu Zhang
- Software College, Northeastern University, No.195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
- Huiyan Jiang
- Software College, Northeastern University, No.195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China; Key Laboratory of Intelligent Computing in Biomedical Image, Ministry of Education, Northeastern University, No.195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
- Jiaji Liu
- Software College, Northeastern University, No.195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
- Tianyu Shi
- Software College, Northeastern University, No.195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
5
Liu Y, Zhang C, Li C, Cheng J, Zhang Y, Xu H, Song T, Zhao L, Chen X. A practical PET/CT data visualization method with dual-threshold PET colorization and image fusion. Comput Biol Med 2020; 126:104050. [PMID: 33096422] [DOI: 10.1016/j.compbiomed.2020.104050] [Received: 07/23/2020; Revised: 09/09/2020; Accepted: 10/06/2020]
Abstract
Multi-modal medical imaging has emerged as a general trend in clinical diagnosis and treatment planning. In recent years, great efforts have been made to investigate and develop dual-modality scanners, among which PET/CT is the most widespread in clinical practice. In this paper, we propose a simple yet effective PET/CT data visualization method that integrates these two modalities into composite data for better observation. The proposed method consists of three main steps. First, a PET data colorization approach is presented based on a dual-threshold scheme, which applies a pair of high and low thresholds to colorize the PET image. Then, to extract functional information from the PET image more adequately, unlike the traditional blending approach that directly uses the CT image as underlay, we merge the CT and PET images with a Laplacian pyramid (LP)-based image fusion approach to generate the underlay. Finally, the visualization result is obtained by blending the fused image and the colorized PET image. Experiments are conducted on 5 sets of PET/CT scans containing 200 paired slices in total. The ClearCanvas software and a variant of the presented PET colorization approach that uses the CT image directly as underlay are adopted for comparison. Experimental results demonstrate that the proposed method achieves more promising performance in terms of both visual perception and quantitative assessment. The code of the proposed method has been made available online at https://github.com/yuliu316316/Visualization.
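The dual-threshold colorization step can be sketched as follows. The abstract does not specify the actual color ramp, so a "hot"-style black-to-red-to-yellow-to-white map is assumed here purely for illustration:

```python
import numpy as np

def dual_threshold_colorize(pet, low, high):
    """Pseudo-color a PET slice with a (low, high) threshold pair: uptake
    below `low` is suppressed, values in [low, high] ramp through a hot-style
    colormap, and values above `high` saturate to white. The ramp is an
    assumption; the paper's exact mapping is not given in the abstract."""
    t = np.clip((pet.astype(np.float64) - low) / (high - low), 0.0, 1.0)
    r = np.clip(3.0 * t, 0.0, 1.0)
    g = np.clip(3.0 * t - 1.0, 0.0, 1.0)
    b = np.clip(3.0 * t - 2.0, 0.0, 1.0)
    rgb = np.stack([r, g, b], axis=-1)
    rgb[pet < low] = 0.0  # low threshold: hide background/noise uptake
    return (rgb * 255.0).astype(np.uint8)

# The colorized overlay would then be alpha-blended onto the (fused) underlay,
# e.g. view = (0.6 * underlay_rgb + 0.4 * colorized).astype(np.uint8)
```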
Affiliation(s)
- Yu Liu
- Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Chao Zhang
- Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Chang Li
- Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Juan Cheng
- Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Yadong Zhang
- The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
- Huiqin Xu
- The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
- Tao Song
- SenseTime Research, Shanghai, 200233, China
- Liang Zhao
- SenseTime Research, Shanghai, 200233, China
- Xun Chen
- Department of Neurosurgery, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, China; Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China
6
Nascimento R, Queiroz F, Rocha A, Ren TI, Mello V, Peixoto A. Computer-assisted coloring and illuminating based on a region-tree structure. Springerplus 2012; 1:1. [PMID: 23984219] [PMCID: PMC3581111] [DOI: 10.1186/2193-1801-1-1] [Received: 02/07/2012; Accepted: 03/06/2012]
Abstract
Colorization and illumination are key processes for creating animated cartoons. Computer-assisted methods have been incorporated into animation/illustration systems to reduce the artists' workload. This paper presents a new method for illumination and colorization of 2D drawings based on a region-tree representation. Starting from a hand-drawn cartoon, the proposed method extracts geometric and topological information and builds a tree structure, ensuring independence among parts of the drawing, such as curves and regions. Based on this structure and its attributes, a colorization method that propagates through consecutive frames of animation is proposed, combined with an interpolation method that accurately computes a normal mapping for the illumination process. Different operators for curve and region attributes can be applied independently, yielding different rendering effects.
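A flat stand-in for the region-based coloring step might label the closed regions of a binary line drawing and fill each from a palette; the paper's region tree additionally records curve/region topology for per-part operators, frame-to-frame propagation, and normal-map interpolation, none of which this sketch attempts:

```python
import numpy as np

def label_regions(paper):
    """4-connected labeling of the blank (paper) cells of a line drawing.
    Labels are assigned in raster-scan order, starting at 1."""
    h, w = paper.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for i in range(h):
        for j in range(w):
            if paper[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                stack = [(i, j)]
                while stack:  # iterative flood fill of one region
                    a, b = stack.pop()
                    for na, nb in ((a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)):
                        if 0 <= na < h and 0 <= nb < w and paper[na, nb] and labels[na, nb] == 0:
                            labels[na, nb] = count
                            stack.append((na, nb))
    return labels, count

def fill_regions(lineart, palette):
    """Flat-fill each region of a binary line drawing (1 = ink) from a
    {region_label: (r, g, b)} palette; ink pixels stay black."""
    labels, n = label_regions(lineart == 0)
    out = np.zeros(lineart.shape + (3,), dtype=np.uint8)
    for region_id, color in palette.items():
        out[labels == region_id] = color
    return out, n

# A 7x7 canvas with one closed square: region 1 is the exterior, region 2 the interior.
lineart = np.zeros((7, 7), dtype=np.uint8)
lineart[1, 1:6] = lineart[5, 1:6] = lineart[1:6, 1] = lineart[1:6, 5] = 1
colored, n = fill_regions(lineart, {1: (200, 200, 200), 2: (255, 0, 0)})
```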