1. Cai Y, Liu X, Li H, Lu F, Gu X, Qin K. Research on Unsupervised Low-Light Railway Fastener Image Enhancement Method Based on Contrastive Learning GAN. Sensors (Basel) 2024; 24:3794. PMID: 38931578; PMCID: PMC11207936; DOI: 10.3390/s24123794.
Abstract
The railway fastener, a crucial component of railway tracks, directly influences the safety and stability of a railway system. In practical operation, however, fasteners are often in low-light conditions, such as at nighttime or within tunnels, which poses significant challenges to defect detection equipment and limits its effectiveness in real-world scenarios. To address this issue, this study proposes an unsupervised low-light image enhancement algorithm, CES-GAN, which improves generalization and adaptability under different environmental conditions. The CES-GAN architecture adopts a U-Net generator with five downsampling and upsampling stages and incorporates both global and local discriminators, helping the generator preserve image details and textures during reconstruction and thus enhancing the realism and intricacy of the enhanced images. The combination of feature-consistency, contrastive learning, and illumination loss functions on the generator side, together with the discriminator loss, jointly promotes the clarity, realism, and illumination consistency of the images, improving the quality and usability of low-light images. Through the CES-GAN algorithm, this study provides reliable visual support for railway construction sites and ensures the stable and accurate operation of fastener identification equipment in complex environments.
Affiliation(s)
- Yijie Cai: China Railway Wuhan Bureau Group Co., Ltd., Wuhan 430061, China; School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China; Gemac Engineering Machinery Co., Ltd., Xiangyang 441000, China; Key Laboratory of Modern Manufacturing Quality Engineering in Hubei Province, Wuhan 430068, China
- Xuehai Liu: Gemac Engineering Machinery Co., Ltd., Xiangyang 441000, China
- Huoxing Li: School of Mechanical Engineering, Hubei University of Technology, Wuhan 430068, China; Key Laboratory of Modern Manufacturing Quality Engineering in Hubei Province, Wuhan 430068, China
- Fei Lu: Gemac Engineering Machinery Co., Ltd., Xiangyang 441000, China
- Xinghua Gu: Gemac Engineering Machinery Co., Ltd., Xiangyang 441000, China
- Kang Qin: Gemac Engineering Machinery Co., Ltd., Xiangyang 441000, China
2. You S, Lin S, Feng Y, Fan J, Yan Z, Liu S, Ji Y. ISLS: An Illumination-Aware Sauce-Packet Leakage Segmentation Method. Sensors (Basel) 2024; 24:3216. PMID: 38794069; PMCID: PMC11126124; DOI: 10.3390/s24103216.
Abstract
The segmentation of abnormal regions is vital in smart manufacturing. The blurring sauce-packet leakage segmentation task (BSLST) is designed to distinguish the sauce packet and the leakage's foreground and background at the pixel level. However, the existing segmentation system for detecting sauce-packet leakage on intelligent sensors suffers from imaging blur caused by uneven illumination. This degrades segmentation performance, hindering measurement of the leakage area and impeding automated sauce-packet production. To alleviate this issue, we propose the two-stage illumination-aware sauce-packet leakage segmentation (ISLS) method for intelligent sensors. ISLS comprises two main stages: illumination-aware region enhancement and leakage region segmentation. In the first stage, YOLO-Fastestv2 is employed to capture the region of interest (ROI), reducing redundant computation. Additionally, we propose an image enhancement step to relieve the impact of uneven illumination and enhance the texture details of the ROI. In the second stage, we propose a novel feature extraction network. Specifically, we propose the multi-scale feature fusion module (MFFM) and the sequential self-attention mechanism (SSAM) to capture discriminative representations of leakage. The multi-level features are fused by the MFFM with a small number of parameters, capturing leakage semantics at different scales. The SSAM enhances valid features and suppresses invalid ones through adaptive weighting over the spatial and channel dimensions. Furthermore, we build a dataset of sauce packets comprising 606 images with various leakage areas. Comprehensive experiments demonstrate that ISLS outperforms several state-of-the-art methods, and additional performance analyses on intelligent sensors affirm the effectiveness of the proposed method.
Affiliation(s)
- Shuai You: School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Shijun Lin: School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Yujian Feng: School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Jianhua Fan: The 63rd Research Institute, National University of Defense Technology, Nanjing 210007, China
- Zhenzheng Yan: Northern Information Control Research Academy Group Co., Ltd., Nanjing 211153, China
- Shangdong Liu: School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Yimu Ji: School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3. Liu T, Li S, Xu M, Yang L, Wang X. Assessing Face Image Quality: A Large-Scale Database and a Transformer Method. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024; 46:3981-4000. PMID: 38190692; DOI: 10.1109/tpami.2024.3350049.
Abstract
The number of face images has increased explosively over the last decade, and various distortions inevitably arise when face images are transmitted or stored. These distortions cause visible and undesirable degradation, affecting the quality of experience (QoE). To address this issue, this paper proposes a novel Transformer-based method for quality assessment of face images (TransFQA). Specifically, we first establish a large-scale face image quality assessment (FIQA) database, which includes 42,125 face images with diverse content and distortion types. Through an extensive crowdsourcing study, we obtain 712,808 subjective scores, which to the best of our knowledge constitute the largest database for assessing the quality of face images. By investigating the established database, we comprehensively analyze the impacts of distortion types and facial components (FCs) on overall image quality. Accordingly, we propose the TransFQA method, in which an FC-guided Transformer network (FT-Net) integrates global context, face region, and FC detail features via a new progressive attention mechanism. A distortion-specific prediction network (DP-Net) then weights different distortions and accurately predicts final quality scores. Finally, experiments comprehensively verify that TransFQA significantly outperforms other state-of-the-art methods for quality assessment of face images.
4. Zhang F, Liu X, Gao C, Sang N. Color and Luminance Separated Enhancement for Low-Light Images with Brightness Guidance. Sensors (Basel) 2024; 24:2711. PMID: 38732817; PMCID: PMC11086088; DOI: 10.3390/s24092711.
Abstract
Existing Retinex-based low-light image enhancement strategies focus heavily on crafting complex networks for Retinex decomposition but often produce imprecise estimations. To overcome the limitations of previous methods, we introduce a straightforward yet effective strategy for Retinex decomposition, dividing images into colormaps and graymaps as new estimations for the reflectance and illumination maps. These maps are then enhanced separately using a diffusion model for improved restoration. Furthermore, we address the dual challenge of perturbation removal and brightness adjustment in illumination maps by incorporating brightness guidance, which aids in precisely adjusting brightness while eliminating disturbances, ensuring a more effective enhancement process. Extensive quantitative and qualitative experimental analyses demonstrate that our proposed method improves performance by approximately 4.4% on the LOL dataset compared to other state-of-the-art diffusion-based methods, while also validating the model's generalizability across multiple real-world datasets.
Affiliation(s)
- Changxin Gao: Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
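The colormap/graymap split described in entry 4's abstract can be illustrated with a minimal sketch. This is one plausible reading of the decomposition (graymap as the per-pixel channel mean, i.e., an illumination estimate; colormap as the image normalized by that mean, i.e., a chromatic/reflectance estimate), not the authors' implementation; the function names are mine.

```python
import numpy as np

def decompose(img, eps=1e-6):
    """Split an RGB image (floats in [0, 1]) into a graymap and a colormap."""
    gray = img.mean(axis=2)                          # graymap: illumination estimate
    color = img / np.maximum(gray, eps)[..., None]   # colormap: chromatic residual
    return gray, color

def recompose(gray, color):
    """Invert the split: pixel-wise product reconstructs the image."""
    return color * gray[..., None]

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
gray, color = decompose(img)
assert np.allclose(recompose(gray, color), img)
```

In the paper each map is then enhanced by its own diffusion model before recomposition; the sketch only shows that the split is lossless.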
5. Liang X, Chen X, Ren K, Miao X, Chen Z, Jin Y. Low-light image enhancement via adaptive frequency decomposition network. Sci Rep 2023; 13:14107. PMID: 37644042; PMCID: PMC10465598; DOI: 10.1038/s41598-023-40899-8.
Abstract
Images captured in low-light conditions suffer from low visibility, blurred details, and strong noise, resulting in unpleasant visual appearance and poor performance on high-level visual tasks. To address these problems, existing approaches have attempted to enhance the visibility of low-light images using convolutional neural networks (CNNs). However, because they insufficiently consider the characteristics of the different frequency layers of an image, most of them yield blurry details and amplified noise. In this work, to fully extract and utilize this information, we propose a novel Adaptive Frequency Decomposition Network (AFDNet) for low-light image enhancement. An Adaptive Frequency Decomposition (AFD) module is designed to adaptively extract low- and high-frequency information at different granularities. Specifically, the low-frequency information is employed for contrast enhancement and noise suppression in low-scale space, while the high-frequency information is used for detail restoration in high-scale space. Meanwhile, a new frequency loss function is proposed to guarantee AFDNet's ability to recover different frequency information. Extensive experiments on various publicly available datasets show that AFDNet outperforms existing state-of-the-art methods both quantitatively and visually. In addition, our results show that face detection performance can be effectively improved by using AFDNet for pre-processing.
Affiliation(s)
- Xiwen Liang, Xiaoyan Chen, Keying Ren, Xia Miao, Zhihui Chen, Yutao Jin: School of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin 300222, China
6. Lang YZ, Wang YL, Qian YS, Kong XY, Cao Y. Effective method for low-light image enhancement based on the JND and OCTM models. Optics Express 2023; 31:14008-14026. PMID: 37157274; DOI: 10.1364/oe.485672.
Abstract
Low-light images always suffer from dim overall brightness, low contrast, and low dynamic range, resulting in image degradation. In this paper, we propose an effective method for low-light image enhancement based on the just-noticeable-difference (JND) and optimal contrast-tone mapping (OCTM) models. First, a guided filter decomposes the original images into base and detail images. After this filtering, the detail images are processed with a visual masking model to enhance details effectively, while the brightness of the base images is adjusted using the JND and OCTM models. Finally, we propose a new method to generate a sequence of artificial images to adjust the brightness of the output, which preserves image detail better than other single-input algorithms. Experiments demonstrate that the proposed method not only achieves low-light image enhancement but also outperforms state-of-the-art methods qualitatively and quantitatively.
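The base/detail split used by entries 6 and 9 can be sketched as follows. This is a generic self-guided filter (guide = input, after He et al.'s guided filter) plus a simple gamma-and-boost recombination standing in for the papers' JND/OCTM and kernel-based adjustments; all function names and parameter values are illustrative.

```python
import numpy as np

def box(x, r):
    """Separable box filter with edge padding (window size 2r+1)."""
    k = 2 * r + 1
    for ax in (0, 1):
        pad = [(0, 0)] * x.ndim
        pad[ax] = (r, r)
        xp = np.pad(x, pad, mode='edge')
        x = np.apply_along_axis(
            lambda v: np.convolve(v, np.ones(k) / k, mode='valid'), ax, xp)
    return x

def guided_self(I, r=2, eps=1e-3):
    """Self-guided filter: edge-preserving base layer; detail = I - base."""
    mI = box(I, r)
    vI = box(I * I, r) - mI * mI          # local variance
    a = vI / (vI + eps)                   # near 1 at edges, near 0 in flat areas
    b = (1 - a) * mI
    return box(a, r) * I + box(b, r)

img = np.linspace(0, 1, 64).reshape(8, 8)
base = guided_self(img)
detail = img - base
enhanced = np.clip(base ** 0.8 + 1.5 * detail, 0, 1)  # brighten base, boost detail
```

The decomposition is exact by construction (base + detail == input), so all enhancement happens in the recombination step.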
7. Leng H, Fang B, Zhou M, Wu B, Mao Q. Low-Light Image Enhancement with Contrast Increase and Illumination Smooth. Int J Pattern Recogn 2023; 37. DOI: 10.1142/s0218001423540034.
Abstract
In image enhancement, maintaining texture while attenuating noise is worth discussing. To address these problems, we propose a low-light image enhancement method with contrast increase and illumination smoothing. First, we compute the maximum and minimum maps of the RGB channels, set the maximum map as the initial illumination estimate, and introduce the minimum map to smooth the illumination. Second, we use the histogram-equalized version of the input image to construct a weight for the illumination map. Third, we formulate an optimization problem to obtain the smooth illumination and refined reflectance. Experimental results show that our method achieves better performance than state-of-the-art methods.
Affiliation(s)
- Hongyue Leng: College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Bin Fang: College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Mingliang Zhou: College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Bin Wu: Aerospace Science and Technology Industry, Microelectronics System Institute Co., Ltd., No. 269, North Section of Hupan Road, Chengdu, Sichuan 610213, P. R. China
- Qin Mao: School of Computer and Information, Qiannan Normal College for Nationalities, Doupengshan Road, Duyun, Guizhou 558000, P. R. China; Key Laboratory of Complex Systems and Intelligent Optimization of Guizhou Province, Duyun, Guizhou 558000, P. R. China
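Entry 7's max/min-map initialization can be sketched in a few lines. This is only the initialization step: the illumination starts at the per-pixel channel maximum and is nudged toward the channel minimum as a crude stand-in for the paper's weighted smoothing; the actual method solves an optimization problem with a histogram-equalization weight. Names and the blend factor are illustrative.

```python
import numpy as np

def retinex_init(img, alpha=0.5, eps=1e-6):
    """Initial Retinex split: illumination from the RGB max map, blended
    toward the min map; reflectance is the per-channel quotient."""
    max_map = img.max(axis=2)                 # initial illumination estimate
    min_map = img.min(axis=2)                 # used to temper/smooth it
    L = np.maximum(alpha * max_map + (1 - alpha) * min_map, eps)
    R = img / L[..., None]                    # reflectance estimate
    return L, R

rng = np.random.default_rng(1)
img = rng.random((4, 4, 3))
L, R = retinex_init(img)
assert np.allclose(R * L[..., None], img)     # the split reconstructs the input
```

Because L is a blend of the channel extremes, R can exceed 1 for the brightest channel; the optimization stage is what refines both maps.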
8. Han R, Tang C, Xu M, Liang B, Wu T, Lei Z. Enhancement method with naturalness preservation and artifact suppression based on an improved Retinex variational model for color retinal images. J Opt Soc Am A 2023; 40:155-164. PMID: 36607085; DOI: 10.1364/josaa.474020.
Abstract
Retinal images are widely used for the diagnosis of various diseases. However, low-quality retinal images with uneven illumination, low contrast, or blurring may seriously interfere with diagnosis by ophthalmologists. This study proposes an enhancement method for low-quality color retinal images. An improved variational Retinex model for color retinal images is first proposed and applied to each channel of the RGB color space to obtain the illumination and reflectance layers. Subsequently, the Naka-Rushton equation is introduced to correct the illumination layer, and an enhancement operator is constructed to improve the clarity of the reflectance layer. Finally, the corrected illumination and enhanced reflectance are recombined, and contrast-limited adaptive histogram equalization is introduced to further improve clarity and contrast. To demonstrate its effectiveness, the method is tested on 527 images from four publicly available datasets and 40 local clinical images from Tianjin Eye Hospital (China). Experimental results show that the proposed method outperforms four other enhancement methods and has clear advantages in naturalness preservation and artifact suppression.
9. Lang YZ, Qian YS, Kong XY, Zhang JZ, Wang YL, Cao Y. Effective enhancement method of low-light-level images based on the guided filter and multi-scale fusion. J Opt Soc Am A 2023; 40:1-9. PMID: 36607069; DOI: 10.1364/josaa.468876.
Abstract
To address the problems of low-light-level (LLL) images, namely dim overall brightness, uneven gray-level distribution, and low contrast, we propose an effective LLL image enhancement method based on the guided filter and multi-scale fusion for contrast enhancement and detail preservation. First, a base image and detail image(s) are obtained using the guided filter. The base image is then processed with a maximum-entropy-based gamma correction to stretch the gray-level distribution. Unlike existing methods, we enhance the detail image(s) based on the guided filter kernel, which reflects local image-area information. Finally, a new method is proposed to generate a sequence of artificial images to adjust the brightness of the output, which preserves image detail better than other single-input algorithms. Experiments show that the proposed method enhances contrast, preserves details, and maintains a natural appearance better than the state of the art.
10. Learning deep texture-structure decomposition for low-light image restoration and enhancement. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.12.043.
11. Feng Y, Deng S, Yan X, Yang X, Wei M, Liu L. Easy2Hard: Learning to Solve the Intractables From a Synthetic Dataset for Structure-Preserving Image Smoothing. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7223-7236. PMID: 34111004; DOI: 10.1109/tnnls.2021.3084473.
Abstract
Image smoothing is a prerequisite for many computer vision and graphics applications. In this article, we raise an intriguing question: can a dataset that semantically describes meaningful structures and unimportant details help a deep learning model smooth complex natural images? To answer it, we generate ground-truth labels from easy samples by candidate generation and a screening test, and synthesize hard samples for structure-preserving smoothing by blending intricate and multifarious details with the labels. To take full advantage of this dataset, we present a joint edge detection and structure-preserving image smoothing neural network (JESS-Net). Moreover, we propose a distinctive total variation loss as prior knowledge to narrow the gap between synthetic and real data. Experiments on different datasets and real images show clear improvements of our method over the state of the art in terms of both image cleanness and structure preservation. Code and dataset are available at https://github.com/YidFeng/Easy2Hard.
12. Lin YH, Lu YC. Low-Light Enhancement Using a Plug-and-Play Retinex Model With Shrinkage Mapping for Illumination Estimation. IEEE Transactions on Image Processing 2022; 31:4897-4908. PMID: 35839183; DOI: 10.1109/tip.2022.3189805.
Abstract
Low-light photography conditions degrade image quality. This study proposes a novel Retinex-based low-light enhancement method to correctly decompose an input image into reflectance and illumination. Subsequently, we can improve the viewing experience by adjusting the illumination using intensity and contrast enhancement. Because image decomposition is a highly ill-posed problem, constraints must be properly imposed on the optimization framework. To meet the criteria of ideal Retinex decomposition, we design a nonconvex Lp norm and apply shrinkage mapping to the illumination layer. In addition, edge-preserving filters are introduced using the plug-and-play technique to improve illumination. Pixel-wise weights based on variance and image gradients are adopted to suppress noise and preserve details in the reflectance layer. We choose the alternating direction method of multipliers (ADMM) to solve the problem efficiently. Experimental results on several challenging low-light datasets show that our proposed method can more effectively enhance image brightness as compared with state-of-the-art methods. In addition to subjective observations, the proposed method also achieved competitive performance in objective image quality assessments.
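The shrinkage mapping mentioned in entry 12 belongs to the family of proximal operators used inside ADMM splittings. The sketch below shows the classic L1 case (soft-thresholding), which ADMM-style Retinex solvers apply to illumination gradients at each iteration; the paper uses a nonconvex Lp variant, so treat this as the standard building block, not the paper's exact operator.

```python
import numpy as np

def soft_shrink(x, lam):
    """Soft-thresholding: prox of lam * ||.||_1.
    Gradients with magnitude below lam are zeroed (flattening the
    illumination); larger ones are shrunk by lam (preserving edges)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

g = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])   # sample illumination gradients
shrunk = soft_shrink(g, 0.5)                # small gradients vanish, big ones shrink
```

Swapping this map for a nonconvex Lp prox (as the paper does) changes how aggressively small gradients are suppressed relative to large ones, but the ADMM structure stays the same.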
13. Joint-Prior-Based Uneven Illumination Image Enhancement for Surface Defect Detection. Symmetry (Basel) 2022. DOI: 10.3390/sym14071473.
Abstract
Images in real surface defect detection scenes often suffer from uneven illumination. Retinex-based image enhancement methods can effectively eliminate the interference caused by uneven illumination and improve the visual quality of such images. However, these methods suffer from the loss of defect-discriminative information and a high computational burden. To address the above issues, we propose a joint-prior-based uneven illumination enhancement (JPUIE) method. Specifically, a semi-coupled retinex model is first constructed to accurately and effectively eliminate uneven illumination. Furthermore, a multiscale Gaussian-difference-based background prior is proposed to reweight the data consistency term, thereby avoiding the loss of defect information in the enhanced image. Last, by using the powerful nonlinear fitting ability of deep neural networks, a deep denoised prior is proposed to replace existing physics priors, effectively reducing the time consumption. Various experiments are carried out on public and private datasets, which are used to compare the defect images and enhanced results in a symmetric way. The experimental results demonstrate that our method is more conducive to downstream visual inspection tasks than other methods.
14.
Abstract
It is quite challenging to stitch images with continuous depth changes and complex textures. To solve this problem, we propose an optimized seam-driven image stitching method considering depth, color, and texture information of the scene. Specifically, we design a new energy function to reduce the structural distortion near the seam and improve the invisibility of the seam. By additionally introducing depth information into the smoothing term of energy function, the seam is guided to pass through the continuous regions of the image with high similarity. The experimental results show that benefiting from the new defined energy function, the proposed method can find the seam that adapts to the depth of the scene, and effectively avoid the seam from passing through the salient objects, so that high-quality stitching results can be achieved. The comparison with the representative image stitching methods proves the effectiveness and generalization of the proposed method.
15. N2PN: Non-reference two-pathway network for low-light image enhancement. Appl Intell 2022. DOI: 10.1007/s10489-021-02627-5.
16. Low-Light Image Enhancement Under Mixed Noise Model with Tensor Representation. Artif Intell 2022. DOI: 10.1007/978-3-031-20497-5_48.
17. Joint Dedusting and Enhancement of Top-Coal Caving Face via Single-Channel Retinex-Based Method with Frequency Domain Prior Information. Symmetry (Basel) 2021. DOI: 10.3390/sym13112097.
Abstract
Affected by uneven coal-dust concentration and low illumination, most images captured at the top-coal caving face have low definition, heavy haze, and serious noise. To improve the visual quality of underground images captured at the top-coal caving face, a novel single-channel Retinex dedusting algorithm with frequency-domain prior information is proposed, addressing the problem that Retinex defogging algorithms cannot simultaneously defog and denoise while preserving image details. Our work is inspired by the simple and intuitive observation that the low-frequency component of a dust-free image is amplified in the symmetrical spectrum after dust is added. A single-channel multiscale Retinex algorithm with color restoration (MSRCR) in YIQ space is proposed to restore the foggy approximation component in the wavelet domain. Multiscale convolution enhancement and a fast non-local means (FNLM) filter then minimize noise in the detail components while retaining sufficient detail. Finally, the dust-free image is reconstructed in the spatial domain and the color is restored by white balance. Compared with state-of-the-art image dedusting and defogging algorithms, the experimental results show that the proposed algorithm achieves higher contrast and visibility in both subjective and objective analyses while retaining sufficient detail.
18. Khan R, Yang Y, Liu Q, Shen J, Li B. Deep image enhancement for ill light imaging. J Opt Soc Am A 2021; 38:827-839. PMID: 34143152; DOI: 10.1364/josaa.410316.
Abstract
Imaging natural scenes under ill lighting conditions (e.g., low light, back-lit, over-exposed front-lit, and any combination of these) suffers from both over- and under-exposure at the same time, and processing such images often results in over- and under-enhancement. A single small image sensor with ordinary optical lenses can hardly provide satisfactory quality under ill lighting conditions. Challenges arise in maintaining visual smoothness between these regions while preserving color and contrast. The problem has been approached by various methods, including multiple sensors and handcrafted parameters, but existing models are limited to specific scenes (i.e., lighting conditions). Motivated by these challenges, we propose a deep image enhancement method for color images captured under ill lighting conditions. Input images are first decomposed into reflection and illumination maps with the proposed layer distribution loss net, through which the illumination blindness and structure degradation problems are addressed. The hidden degradation in reflection and illumination is tuned with a knowledge-based adaptive enhancement constraint designed for ill-illuminated images. The model maintains a balance of smoothness and helps solve the problems of noise as well as over- and under-enhancement. Local consistency in illumination is achieved via a repairing operation performed by the proposed Repair-Net: the total variation operator is optimized to acquire local consistency, and the image gradient is guided by the proposed enhancement constraint. Finally, the product of the updated reflection and illumination maps reconstructs the enhanced image. Experiments are conducted under both very-low-exposure and ill-illumination conditions, for which a new dataset is also proposed. Results of both experiments show that our method preserves structural and textural details better than other state-of-the-art methods, suggesting that it is more practical for future visual applications.
19. Baslamisli AS, Gevers T. Invariant descriptors for intrinsic reflectance optimization. J Opt Soc Am A 2021; 38:887-896. PMID: 34143158; DOI: 10.1364/josaa.414682.
Abstract
Intrinsic image decomposition aims to factorize an image into albedo (reflectance) and shading (illumination) components. Ill-posed and under-constrained, it is a very challenging computer vision problem: infinitely many pairs of reflectance and shading images can reconstruct the same input. To address the problem, Intrinsic Images in the Wild by Bell et al. provides an optimization framework based on a dense conditional random field (CRF) formulation that considers long-range material relations. We improve upon their model by introducing illumination-invariant image descriptors: color ratios. The color ratios and the intrinsic reflectance are both invariant to illumination and thus highly correlated. Through detailed experiments, we provide ways to inject the color ratios into the dense CRF optimization. Our approach is physics-based and learning-free, and leads to more accurate and robust reflectance decompositions.
|
20
|
Baslamisli AS, Das P, Le HA, Karaoglu S, Gevers T. ShadingNet: Image Intrinsics by Fine-Grained Shading Decomposition. Int J Comput Vis 2021. [DOI: 10.1007/s11263-021-01477-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Indexed: 10/21/2022]
Abstract
In general, intrinsic image decomposition algorithms interpret shading as one unified component including all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct shading (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects in order to analyze its disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms that use unified shading on the NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS, and SRD datasets.
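The fine-grained image formation model described above can be sketched directly: the unified shading term splits into direct and indirect parts, and the image is albedo times their sum (the ranges and names here are illustrative assumptions, not ShadingNet itself):

```python
import numpy as np

def compose(albedo, direct, indirect):
    """Fine-grained intrinsic model: unified shading is the sum of direct
    illumination and indirect effects (ambient light, shadows), and the
    image is the albedo times that total shading."""
    shading = direct + indirect
    image = np.clip(albedo * shading, 0.0, 1.0)
    return image, shading

rng = np.random.default_rng(4)
albedo = rng.uniform(0.2, 0.8, size=(8, 8))      # reflectance
direct = rng.uniform(0.2, 0.6, size=(8, 8))      # direct illumination
indirect = rng.uniform(0.0, 0.2, size=(8, 8))    # ambient + shadow term
image, shading = compose(albedo, direct, indirect)
```

Estimating the two shading terms separately is what lets the network tell a hard shadow edge (indirect shading) apart from a genuine albedo change.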
|
21
|
Veluchamy M, Bhandari AK, Subramani B. Optimized Bezier Curve Based Intensity Mapping Scheme for Low Light Image Enhancement. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2021. [DOI: 10.1109/tetci.2021.3053253] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Indexed: 11/09/2022]
|
22
|
Xu J, Huang Y, Cheng MM, Liu L, Zhu F, Xu Z, Shao L. Noisy-As-Clean: Learning Self-supervised Denoising from Corrupted Image. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; PP:9316-9329. [PMID: 32997627 DOI: 10.1109/tip.2020.3026622] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Indexed: 06/11/2023]
Abstract
Supervised deep networks have achieved promising performance on image denoising by learning image priors and noise statistics from plenty of pairs of noisy and clean images. Unsupervised denoising networks are trained with only noisy images. However, for an unseen corrupted image, both supervised and unsupervised networks ignore either its particular image prior, its noise statistics, or both. That is, networks learned from external images inherently suffer from a domain gap: the image priors and noise statistics differ between the training and test images. This problem becomes clearer when dealing with signal-dependent realistic noise. To circumvent it, we propose a novel "Noisy-As-Clean" (NAC) strategy for training self-supervised denoising networks. Specifically, the corrupted test image is directly taken as the "clean" target, while the inputs are synthetic images consisting of this corrupted image and a second, similar corruption. A simple but useful observation about NAC is that, as long as the noise is weak, it is feasible to learn a self-supervised network from the corrupted image alone that approximates the optimal parameters of a supervised network learned from pairs of noisy and clean images. Experiments on synthetic and realistic noise removal demonstrate that DnCNN and ResNet networks trained with our self-supervised NAC strategy achieve comparable or better performance than the original ones and previous supervised/unsupervised/self-supervised networks. The code is publicly available at https://github.com/csjunxu/Noisy-As-Clean.
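The pair-construction step at the heart of the NAC strategy is simple enough to sketch: the observed noisy image becomes the training target, and the input is that same image with a second, similarly distributed corruption added (Gaussian noise and all names here are illustrative; the paper's experiments also cover realistic noise):

```python
import numpy as np

def nac_pair(noisy_image, sigma, rng):
    """Noisy-As-Clean training pair: the corrupted observation serves as
    the 'clean' target, and the input is the same observation with a
    second, similar corruption added on top."""
    second_noise = rng.normal(0.0, sigma, size=noisy_image.shape)
    doubly_noisy = noisy_image + second_noise
    return doubly_noisy, noisy_image   # (network input, training target)

rng = np.random.default_rng(2)
clean = np.full((16, 16), 0.5)                     # unobserved ground truth
observed = clean + rng.normal(0.0, 0.05, size=clean.shape)
x, y = nac_pair(observed, sigma=0.05, rng=rng)
```

A denoiser trained on many such (x, y) pairs learns to remove one layer of noise, which, for weak noise, approximates mapping the observation back toward the true clean image.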
|
23
|
Low-Light Image Enhancement Based on Quasi-Symmetric Correction Functions by Fusion. Symmetry (Basel) 2020. [DOI: 10.3390/sym12091561] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Indexed: 11/17/2022]
Abstract
It is sometimes very difficult to obtain high-quality images because of the limitations of image-capturing devices and the environment. Gamma correction (GC) is widely used for image enhancement, but traditional GC may fail to preserve image details and can even reduce local contrast within high-illuminance regions. We therefore first define two pairs of quasi-symmetric correction functions (QCFs) to address these problems. We further propose a novel low-light image enhancement method based on the proposed QCFs by fusion, which combines an image globally enhanced by the QCFs with an image locally enhanced by contrast-limited adaptive histogram equalization (CLAHE). Extensive experimental results show that our method significantly enhances detail and improves the contrast of low-light images, and that it outperforms other state-of-the-art methods in both subjective and objective assessments.
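A dependency-free sketch of the global-plus-local fusion idea follows. Plain gamma correction stands in for the paper's QCFs and global histogram equalization stands in for CLAHE, so every function here is an illustrative assumption rather than the authors' method:

```python
import numpy as np

def gamma_correct(image, gamma):
    """Classic gamma correction on a [0, 1] image: out = in ** gamma.
    gamma < 1 brightens shadows, gamma > 1 darkens highlights."""
    return np.clip(image, 0.0, 1.0) ** gamma

def hist_equalize(image, bins=256):
    """Plain global histogram equalization, used here as a simple
    stand-in for CLAHE's local contrast enhancement."""
    hist, edges = np.histogram(image.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / image.size           # normalized CDF in [0, 1]
    return np.interp(image.ravel(), edges[:-1], cdf).reshape(image.shape)

def fuse(global_enh, local_enh, w=0.5):
    """Naive pixel-wise blend of the globally and locally enhanced images."""
    return np.clip(w * global_enh + (1.0 - w) * local_enh, 0.0, 1.0)

rng = np.random.default_rng(3)
low_light = rng.uniform(0.0, 0.3, size=(32, 32))
fused = fuse(gamma_correct(low_light, 0.4), hist_equalize(low_light))
```

Fusing the two branches is what lets such methods brighten globally while still recovering local contrast that a single tone curve would flatten.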
|
24
|
Du Y, Xu J, Zhen X, Cheng MM, Shao L. Conditional Variational Image Deraining. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:6288-6301. [PMID: 32365032 DOI: 10.1109/tip.2020.2990606] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Indexed: 06/11/2023]
Abstract
Image deraining is an important yet challenging image processing task. Although deterministic image deraining methods have achieved encouraging performance, they cannot learn flexible representations for probabilistic inference and diverse predictions. Moreover, rain intensity varies both across spatial locations and across color channels, making the task more difficult. In this paper, we propose a Conditional Variational Image Deraining (CVID) network for better deraining performance, leveraging the generative ability of the Conditional Variational Auto-Encoder (CVAE) to provide diverse predictions for a rainy image. To perform spatially adaptive deraining, we propose a spatial density estimation (SDE) module that estimates a rain density map for each image. Since rain density varies across color channels, we also propose a channel-wise (CW) deraining scheme. Experiments on synthesized and real-world datasets show that the proposed CVID network achieves much better deraining performance than previous deterministic methods. Extensive ablation studies validate the effectiveness of the proposed SDE module and CW scheme. The code is available at https://github.com/Yingjun-Du/VID.
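The density-map-plus-channel-wise idea can be illustrated with a toy residual model. Here the "SDE module" is replaced by a trivial residual against a reference image and the rain is synthetic, so everything below is an assumption for illustration, not the CVID network:

```python
import numpy as np

def estimate_density(rainy, reference):
    """Toy stand-in for a density estimator: the per-channel rain density
    map is taken as the positive residual against a reference image."""
    return np.clip(rainy - reference, 0.0, None)

def channel_wise_derain(rainy, density):
    """Channel-wise (CW) scheme: each color channel is corrected with its
    own density map, since rain intensity differs across channels."""
    return np.clip(rainy - density, 0.0, 1.0)

rng = np.random.default_rng(5)
clean = rng.uniform(0.1, 0.7, size=(16, 16, 3))
# synthetic rain, deliberately weighted differently per color channel
rain = rng.uniform(0.0, 0.1, size=(16, 16, 3)) * np.array([0.5, 1.0, 0.2])
rainy = clean + rain
# with a perfect reference the residual recovers the rain exactly (toy case)
density = estimate_density(rainy, clean)
derained = channel_wise_derain(rainy, density)
```

In the actual network the density map is predicted from the rainy input alone, and the CVAE's latent sampling yields multiple plausible derained outputs instead of a single deterministic one.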
|